Course Review

We opened this course with a question: if thermodynamics tells us whether a reaction can occur, what tells us how fast it proceeds, and how it happens at the molecular level? The answer, developed over the past twelve lectures, is that two kinds of macroscopic measurement provide complementary windows into mechanism: how concentrations change with time reveals the rate law, which constrains the mechanisms consistent with the data, and how rate constants change with temperature reveals the energy barriers along the reaction coordinate.

From Measurement to Mechanism

We began with the practical problem of turning laboratory data into rate laws: integrated rate laws connect the differential equations we write down to the concentration–time curves we actually measure, and methods such as the integral method, isolation, and initial rates provide systematic ways to extract reaction orders and rate constants from those curves. This is not routine data processing. Because different mechanisms predict different rate laws, extracting a rate law from experiment is the first step towards understanding what molecules are doing.
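As a concrete sketch of the integral method, the snippet below generates noise-free first-order data, \([\mathrm{A}](t) = [\mathrm{A}]_0\mathrm{e}^{-kt}\), and recovers \(k\) from the slope of \(\ln[\mathrm{A}]\) against \(t\). All numerical values are made up for illustration.

```python
import math

# Synthetic first-order data with a made-up rate constant k = 0.30 /s.
k_true, A0 = 0.30, 1.0
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
conc = [A0 * math.exp(-k_true * t) for t in times]

# Integral method for first order: ln[A] vs t should be linear, slope = -k.
y = [math.log(c) for c in conc]
n = len(times)
t_mean = sum(times) / n
y_mean = sum(y) / n
slope = (sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
         / sum((t - t_mean) ** 2 for t in times))
k_fit = -slope
print(f"fitted k = {k_fit:.3f} /s")  # recovers 0.300 /s for noise-free data
```

With real data one would test several candidate orders the same way and keep the one whose linearised plot is actually straight.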

Most reactions do not consist of a single elementary step, and the rate equations for multi-step mechanisms are coupled differential equations that generally have no analytical solution. The pre-equilibrium approximation and the steady-state approximation can make these problems tractable. Both replace an exact differential equation with an algebraic relationship, but for different reasons: in the pre-equilibrium case, a fast reversible step has had time to reach equilibrium; in the steady-state case, the intermediate is consumed as fast as it is formed, so its concentration stops changing.

A Toolkit with Recurring Structure

Several of the systems we studied produce rate laws with the same general form: a rate proportional to some combination of concentrations in the numerator, divided by a sum of terms in the denominator that represent competing processes. This form appears in the SSA treatment of the general two-step mechanism, in Michaelis–Menten enzyme kinetics, and in the Lindemann mechanism for unimolecular reactions. The SSA rate law for \(\mathrm{A + B} \rightleftharpoons \mathrm{C} \rightarrow \mathrm{D}\),

\[\nu = \frac{k_1 k_2 [\mathrm{A}][\mathrm{B}]}{k_{-1} + k_2}\]

is one example. In each case, the denominator arises because an intermediate can be removed by more than one process, and the relative rates of these competing processes determine the overall kinetics.
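As a reminder of where that denominator comes from, applying the SSA to the intermediate C gives

\[\frac{\mathrm{d}[\mathrm{C}]}{\mathrm{d}t} = k_1[\mathrm{A}][\mathrm{B}] - (k_{-1} + k_2)[\mathrm{C}] \approx 0 \quad\Rightarrow\quad [\mathrm{C}] = \frac{k_1[\mathrm{A}][\mathrm{B}]}{k_{-1} + k_2},\]

and the observed rate is \(\nu = k_2[\mathrm{C}]\), which reproduces the rate law above. The two terms in the denominator are exactly the two routes, \(k_{-1}\) back to reactants and \(k_2\) on to product, by which C is removed.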

These general rate laws also share a common analytical strategy: we extract physical insight by asking what happens when one term in the denominator dominates the other. In the SSA result above, the limit \(k_{-1} \gg k_2\) recovers the pre-equilibrium result, while \(k_2 \gg k_{-1}\) gives \(\nu = k_1[\mathrm{A}][\mathrm{B}]\): the first step is rate-determining and effectively irreversible. For the Lindemann mechanism, the high- and low-pressure limits recover first-order and second-order kinetics. For Michaelis–Menten, low and high substrate concentration give first-order and zero-order behaviour. In each system the limiting cases correspond to physically distinct regimes, and identifying which limit applies is often more informative than the full expression.

Enzyme kinetics and surface adsorption share a separate connection. The Michaelis–Menten equation and the Langmuir isotherm share the same functional form because both describe reversible binding to a finite number of sites. Once every site is occupied, adding more substrate or increasing the gas pressure has no further effect: the system is saturated. This behaviour follows from finite capacity alone, regardless of whether the sites are enzyme active sites or surface adsorption sites.
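The shared functional form can be checked in a few lines. The sketch below uses hypothetical values of \(V_\mathrm{max}\) and \(K_\mathrm{M}\); the same function describes Langmuir coverage with pressure in place of \([\mathrm{S}]\) and \(1/K\) in place of \(K_\mathrm{M}\).

```python
# Minimal numerical sketch (made-up parameters) of the saturating form
# shared by Michaelis-Menten kinetics and the Langmuir isotherm.
def saturating(x, cap, half):
    """cap * x / (half + x): rate if x = [S], coverage if x = pressure."""
    return cap * x / (half + x)

Vmax, Km = 1.0, 0.5                  # hypothetical enzyme parameters
# Low [S] << Km: approximately first order, v ~ (Vmax/Km)[S]
print(saturating(0.001, Vmax, Km))   # close to 0.002 = (Vmax/Km) * 0.001
# High [S] >> Km: approximately zero order, v ~ Vmax (saturation)
print(saturating(500.0, Vmax, Km))   # close to 1.0 = Vmax
```

The two printed values illustrate the two limiting regimes discussed above: linear response far below the half-saturation point, and a plateau set by the finite number of sites far above it.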

A third thread runs through the Lindemann mechanism, enzyme kinetics, and the \(\mathrm{H_2 + Br_2}\) chain reaction: all involve reactive intermediates that are created and destroyed by competing processes, producing rate laws where concentrations appear in both numerator and denominator. The feedback equation, \(\mathrm{d}[\mathrm{I}]/\mathrm{d}t = \nu_\mathrm{f} + \phi[\mathrm{I}]\), provides a framework for understanding this competition. Negative feedback (\(\phi < 0\)) shows why the SSA works: consumption terms that grow with \([\mathrm{I}]\) pull the intermediate towards a steady state. Positive feedback (\(\phi > 0\)) shows one way it can fail: when branching outpaces termination, the intermediate concentration grows exponentially rather than settling, producing the explosive behaviour that defines chain-branching systems.
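For constant \(\nu_\mathrm{f}\) and \([\mathrm{I}](0) = 0\), the feedback equation integrates to \([\mathrm{I}](t) = (\nu_\mathrm{f}/\phi)(\mathrm{e}^{\phi t} - 1)\). The sketch below evaluates this solution for made-up parameter values to show the two regimes side by side.

```python
import math

# Analytic solution of d[I]/dt = nu_f + phi*[I] with constant nu_f, [I](0) = 0.
def intermediate(t, nu_f, phi):
    """[I](t) = (nu_f/phi) * (exp(phi*t) - 1); parameters are illustrative."""
    return (nu_f / phi) * (math.exp(phi * t) - 1.0)

nu_f = 1.0
# Negative feedback: [I] settles to the steady state -nu_f/phi = 0.5
print(intermediate(10.0, nu_f, phi=-2.0))   # close to 0.5: the SSA regime
# Positive feedback: [I] grows exponentially instead of settling
print(intermediate(10.0, nu_f, phi=+2.0))   # enormous: chain-branching growth
```

With \(\phi < 0\) the exponential dies away and the intermediate sits at \(-\nu_\mathrm{f}/\phi\), exactly the steady state the SSA assumes; with \(\phi > 0\) the same expression grows without bound.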

Temperature Completes the Picture

Throughout most of the course, rate constants appeared as fixed numbers whose values we extracted from data. Measuring how \(k\) varies with temperature provides a second experimental probe of mechanism, independent of the concentration dependence. The Arrhenius equation, \(k = A\mathrm{e}^{-E_\mathrm{a}/RT}\), is the standard empirical equation for this temperature dependence. For a single elementary step, \(E_\mathrm{a}\) corresponds directly to the height of the energy barrier along the reaction coordinate. For a multi-step mechanism, the apparent \(E_\mathrm{a}\) is a composite that reflects the energetics of several contributing steps, and its interpretation is less straightforward. A negative apparent activation energy, or a curved Arrhenius plot, is a signature that the mechanism is more complex than it first appears, just as a non-integer reaction order signals a multi-step mechanism when we analyse concentration data.
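In the simplest case, measuring \(k\) at just two temperatures pins down \(E_\mathrm{a}\), since the Arrhenius equation gives \(\ln(k_2/k_1) = -(E_\mathrm{a}/R)(1/T_2 - 1/T_1)\). The sketch below applies this two-point analysis to hypothetical data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical measurements: rate constant at two temperatures.
T1, k1 = 300.0, 1.0e-3
T2, k2 = 320.0, 8.0e-3

# Two-point Arrhenius analysis: ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1)
Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
A = k1 * math.exp(Ea / (R * T1))    # pre-exponential factor from either point
print(f"Ea = {Ea/1000:.0f} kJ/mol")  # about 83 kJ/mol for these numbers
```

In practice one measures \(k\) at several temperatures and fits \(\ln k\) against \(1/T\); systematic curvature in that plot is the mechanistic warning sign described above.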

So: how do we extract molecular-level understanding from macroscopic measurements? We measure concentrations as a function of time, extract rate laws, and compare them with the predictions of proposed mechanisms. We vary temperature and use the Arrhenius equation to probe energy barriers. When a rate law has concentrations in both numerator and denominator, or a non-integer order, or a negative apparent activation energy, the data are telling us that the mechanism involves more than one step. The tools for reading that message are the ones developed in this course: integrated rate laws, the pre-equilibrium and steady-state approximations, limiting-case analysis, and the Arrhenius equation.