8 Molecular dynamics: observables and neutron connections

8.1 From trajectories to observables

A molecular dynamics trajectory is a sequence of atomic configurations over time — positions and velocities at each timestep. This raw data must be processed to extract physically meaningful quantities. Some quantities involve simple averages: where are atoms on average, and how are they distributed in space? Others probe dynamics: how do properties change over time, and how quickly does the system lose memory of its initial state?

8.2 Ensemble-averaged structure

Neutron diffraction measures a time-averaged structure. Atoms vibrate; the measured positions are averages over that motion. At higher temperatures, the amplitude of vibration increases, and for some systems atoms may sample multiple local configurations.

MD naturally provides this ensemble average. The time-averaged atomic positions from a trajectory correspond to what diffraction measures — both average over the thermal motion of atoms. For an ordered crystal, this gives thermally-expanded lattice parameters and anisotropic displacement parameters that can be compared directly with Rietveld refinement.

For disordered systems, the comparison is more subtle. Bragg diffraction reports the average structure, which may not correspond to any single instantaneous configuration. If atoms locally displace from high-symmetry positions but do so differently in different unit cells, diffraction sees only the average. The MD trajectory contains the full distribution of local configurations — information that goes beyond what Bragg diffraction can access.

8.3 Pair distribution functions

The pair distribution function \(g(r)\) measures the probability of finding an atom at distance \(r\) from another atom, relative to a uniform distribution. Peaks in \(g(r)\) appear at characteristic neighbour distances: the first peak corresponds to nearest neighbours, the second to next-nearest neighbours, and so on. The function encodes spatial correlations — how the presence of one atom affects where others are likely to be found.

Experimentally, \(g(r)\) comes from total scattering measurements. The Fourier transform of the structure factor \(S(Q)\) gives the PDF directly. This is sensitive to local structure in ways that Bragg diffraction is not — disorder, distortions, and short-range correlations all appear in the PDF.

From an MD trajectory, \(g(r)\) is computed by histogramming interatomic distances across all configurations:

\[g(r) = \frac{V}{4\pi r^2 N^2} \left\langle \sum_{i \neq j} \delta(r - |\mathbf{r}_i - \mathbf{r}_j|) \right\rangle\]

The average is over trajectory frames. In practice, distances are binned into shells of width \(\delta r\) and normalised appropriately.
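As an illustration, a minimal NumPy sketch of this histogramming (the function name and array layout are assumptions, and a cubic box with the minimum-image convention is assumed for simplicity):

```python
import numpy as np

def radial_distribution(frames, box_length, r_max, n_bins=200):
    """Estimate g(r) by histogramming minimum-image pair distances
    over trajectory frames. frames: (n_frames, N, 3); cubic box."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    n_atoms = frames.shape[1]
    for pos in frames:
        diff = pos[:, None, :] - pos[None, :, :]
        diff -= box_length * np.round(diff / box_length)   # minimum image
        dist = np.linalg.norm(diff, axis=-1)
        dist = dist[np.triu_indices(n_atoms, k=1)]         # each pair once
        counts += np.histogram(dist, bins=edges)[0]
    counts /= len(frames)
    # normalise by the ideal-gas expectation in each spherical shell
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho = n_atoms / box_length**3
    ideal = shell_vol * rho * n_atoms / 2.0   # /2: each pair counted once
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, counts / ideal
```

For uniformly distributed atoms this returns \(g(r) \approx 1\) at all \(r\), which provides a quick sanity check of the normalisation.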

This provides a direct comparison between simulation and experiment. If the simulated \(g(r)\) matches the measured PDF, the simulation captures the local structure correctly. Disagreement points to problems — perhaps with the \(E(\mathbf{r})\) method, perhaps with the structural model, perhaps with the sample itself.

For systems with multiple atom types, partial pair distribution functions \(g_{\alpha\beta}(r)\) separate contributions from different pair types: Li–Li, Li–O, O–O, and so on. Simulation provides these automatically. Experimentally, isotope substitution or anomalous X-ray scattering can separate them, though this is more challenging. The partials reveal which correlations are responsible for features in the total \(g(r)\) — information that aids interpretation of complex structures.
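A partial \(g_{\alpha\beta}(r)\) follows the same recipe, restricted to one pair of species. A sketch for a single frame (the function name, species labelling, and cubic box are assumptions):

```python
import numpy as np

def partial_gr(pos, species, a, b, box_length, r_max, n_bins=100):
    """Partial g_ab(r) for one frame: minimum-image distances from
    species-a atoms to species-b atoms, normalised by the mean b density."""
    ia = np.flatnonzero(species == a)
    ib = np.flatnonzero(species == b)
    diff = pos[ia, None, :] - pos[None, ib, :]
    diff -= box_length * np.round(diff / box_length)   # minimum image
    dist = np.linalg.norm(diff, axis=-1).ravel()
    dist = dist[dist > 1e-9]                  # drop self pairs when a == b
    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho_b = len(ib) / box_length**3
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, counts / (shell_vol * rho_b * len(ia))
```

Averaging the result over frames, and summing the partials with the appropriate scattering-length weights, reproduces the total measured PDF.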

8.4 Dynamics and the dynamic structure factor

The quantities above are static — averages over the trajectory with no reference to how the system evolves in time. But MD also captures dynamics: how properties change, how quickly correlations decay, how the system moves from one configuration to another.

For dynamics, the key experimental observable is the dynamic structure factor \(S(\mathbf{Q}, \omega)\), measured by inelastic neutron scattering. At higher energy transfers, \(S(\mathbf{Q}, \omega)\) captures vibrational excitations — phonons. Near the elastic line, small energy transfers reveal diffusive motion and relaxation processes; this is the quasielastic regime. Several routes connect MD trajectories to \(S(\mathbf{Q}, \omega)\): velocity autocorrelation functions for vibrations, mean squared displacements for diffusion coefficients, and Van Hove correlation functions for the full space-time picture.

8.5 Time correlation functions

Many dynamical quantities take the form of time correlation functions. The idea is simple: how does some property at time \(t\) relate to the same property at time 0? If a property fluctuates randomly, it will quickly become uncorrelated with its initial value. If it changes slowly or oscillates, correlations persist.

The general form is:

\[C_{AB}(t) = \langle A(0) B(t) \rangle\]

where \(A\) and \(B\) are dynamical variables and the angle brackets denote an average over time origins. In practice, this is computed by averaging over many starting points along the trajectory:

\[C_{AB}(t) = \frac{1}{T-t} \int_0^{T-t} A(t') B(t' + t) \, \mathrm{d}t'\]

where \(T\) is the total trajectory length. Different choices of \(A\) and \(B\) give different physical information.
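In discrete form, the average over time origins is a short loop. A sketch for scalar time series sampled at equal intervals (the function name is an assumption):

```python
import numpy as np

def time_correlation(a, b, max_lag):
    """C_AB(t) = <A(0) B(t)>, averaged over all time origins t'
    for which both A(t') and B(t' + t) lie inside the trajectory."""
    n = len(a)
    return np.array([np.mean(a[: n - t] * b[t:]) for t in range(max_lag)])
```

For \(A = B = \cos(\omega t)\), this recovers \(C(t) \approx \tfrac{1}{2}\cos(\omega t)\): averaging \(\cos(\omega t')\cos(\omega(t' + t))\) over origins \(t'\) leaves half the cosine.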

8.6 Vibrational density of states

For harmonic systems, the vibrational density of states comes from diagonalising the dynamical matrix, as described in Chapter 6. MD provides an alternative that does not assume harmonicity.

The velocity autocorrelation function measures how atomic velocities correlate with themselves at later times:

\[C_{vv}(t) = \langle \mathbf{v}(0) \cdot \mathbf{v}(t) \rangle\]

At \(t = 0\), this equals \(\langle v^2 \rangle\), proportional to the temperature. As time increases, velocities lose correlation with their initial values — collisions and interactions randomise them. The rate at which this happens, and any oscillations along the way, encode information about the vibrational motion.

The Fourier transform of the velocity autocorrelation function gives the vibrational density of states:

\[g(\omega) = \int_{-\infty}^{\infty} C_{vv}(t) \, \mathrm{e}^{-\mathrm{i}\omega t} \, \mathrm{d}t\]

This works because oscillatory motion at frequency \(\omega\) contributes oscillations at that frequency to the velocity autocorrelation function, which appear as a peak at \(\omega\) in the Fourier transform.
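A sketch of this route (names are assumptions; a Hann window damps the ripple from truncating the correlation function, and the returned axis is ordinary frequency \(f = \omega/2\pi\)):

```python
import numpy as np

def vibrational_dos(velocities, dt):
    """Density of states from the Fourier transform of the velocity
    autocorrelation function. velocities: (n_frames, n_atoms, 3)."""
    n_frames = velocities.shape[0]
    v = velocities.reshape(n_frames, -1)     # flatten atoms x components
    max_lag = n_frames // 2
    # VACF averaged over time origins and all degrees of freedom
    cvv = np.array([
        np.mean(np.sum(v[: n_frames - t] * v[t:], axis=1))
        for t in range(max_lag)
    ])
    cvv /= cvv[0]                            # normalise so C(0) = 1
    spectrum = np.abs(np.fft.rfft(cvv * np.hanning(max_lag)))
    freq = np.fft.rfftfreq(max_lag, d=dt)    # ordinary frequency f
    return freq, spectrum
```

For velocities oscillating at a single frequency, the spectrum peaks at that frequency; restricting `velocities` to one species gives a partial density of states.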

The result includes all anharmonic effects — the atoms move on the full potential energy surface, not a quadratic approximation to it. For strongly anharmonic systems, the MD-derived density of states may differ substantially from harmonic phonon calculations. Comparison with the high-energy-transfer region of \(S(\mathbf{Q}, \omega)\) from INS then tests whether the simulation captures the real anharmonicity.

For systems with multiple atom types, partial densities of states can be computed by restricting the velocity autocorrelation to specific species — lithium vibrations, oxygen vibrations, and so on. Simulation provides these decompositions directly. Experimentally, separating contributions is harder, though isotope substitution can help: replacing hydrogen with deuterium shifts the frequencies associated with those atoms, allowing their contribution to be identified. The partials reveal which species are responsible for features in the total spectrum — a peak at a particular frequency might come from light, fast-moving lithium ions or from heavier framework atoms, and the decomposition tells you which.

8.7 Diffusion and mean squared displacement

Many energy materials involve ionic transport — lithium in batteries, protons in fuel cells, oxygen in solid oxide cells. MD simulates this transport directly.

The mean squared displacement (MSD) measures how far atoms move from their initial positions:

\[\text{MSD}(t) = \langle |\mathbf{r}(t) - \mathbf{r}(0)|^2 \rangle\]

where the average is over all atoms of the species of interest and over time origins.

For atoms vibrating around fixed sites, the MSD oscillates and remains bounded — atoms jiggle but do not go anywhere on average. For diffusing species, the MSD grows with time. At long times, diffusive motion gives:

\[\text{MSD}(t) \rightarrow 6D^*t\]

where \(D^*\) is the self-diffusion coefficient and the factor of 6 is \(2d\) for dimensionality \(d = 3\): each dimension contributes a factor of 2. Plotting MSD against time and fitting the slope of the linear regime gives \(D^*\) directly.
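A sketch of both steps (function names are assumptions; positions must be unwrapped, i.e. not folded back into the periodic box, or the MSD saturates artificially):

```python
import numpy as np

def msd(positions, max_lag):
    """Mean squared displacement averaged over atoms and time origins.
    positions: (n_frames, n_atoms, 3), unwrapped coordinates."""
    n = positions.shape[0]
    return np.array([
        np.mean(np.sum((positions[t:] - positions[: n - t]) ** 2, axis=-1))
        for t in range(max_lag)
    ])

def diffusion_coefficient(msd_values, dt, fit_start):
    """Fit MSD = 6 D* t over the long-time linear regime, skipping
    the short-time ballistic/vibrational part."""
    t = np.arange(len(msd_values)) * dt
    slope = np.polyfit(t[fit_start:], msd_values[fit_start:], 1)[0]
    return slope / 6.0
```

For a simple random walk with step variance \(s^2\) per Cartesian component per timestep \(\Delta t\), the recovered \(D^*\) is \(s^2/2\Delta t\), which makes a useful check of the implementation.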

This can be decomposed by species. In a lithium-ion conductor, lithium motion (which should show diffusion) can be tracked separately from the framework atoms (which should show only vibration). The simulation reveals not just that diffusion occurs, but which atoms are mobile and which are not.

8.8 Mechanistic insight from trajectories

The trajectory contains more than just a diffusion coefficient. Every atomic position at every timestep is recorded — in principle, the complete mechanistic picture is there. For simple diffusion mechanisms, this can be extracted directly by tracking individual atoms through the trajectory. Does a lithium ion hop between discrete sites, or does it follow a continuous path? Does it wait at one site for a long time then jump quickly, or does it move gradually? Are jumps correlated — does one ion moving trigger another to move? These questions can be answered by visualising trajectories and by statistical analysis of jump events.

For more complex mechanisms, the analysis becomes harder. If many atoms move cooperatively, or if the motion involves subtle rearrangements of the surrounding structure, identifying what is happening requires more sophisticated analysis. But the information is present in the trajectory — the challenge is extracting it.

This mechanistic insight can also inform experimental analysis. Quasielastic scattering data are typically fitted to models — jump diffusion, continuous diffusion, confined motion — but the choice of model is often unclear from the scattering data alone. If trajectory analysis shows that lithium ions hop between discrete sites with a well-defined residence time, this justifies using a jump diffusion model. Computation provides the mechanistic picture; experiment tests whether that picture is consistent with the measured scattering.

8.9 The Van Hove correlation function

The quantities discussed so far either capture spatial information (\(g(r)\)) or temporal information (velocity autocorrelation, MSD). The Van Hove correlation function combines both, measuring spatial correlations as a function of time — and its Fourier transform connects directly to the \(S(\mathbf{Q}, \omega)\) that neutron scattering measures.

The self part \(G_s(\mathbf{r}, t)\) asks: given that an atom was at some position at time 0, what is the probability of finding that same atom at displacement \(\mathbf{r}\) from its initial position after time \(t\)?

\[G_s(\mathbf{r}, t) = \frac{1}{N} \left\langle \sum_i \delta(\mathbf{r} - [\mathbf{r}_i(t) - \mathbf{r}_i(0)]) \right\rangle\]

At \(t = 0\), this is a delta function at the origin — each atom is where it started. As time evolves, \(G_s(\mathbf{r}, t)\) broadens as atoms move. For vibrating atoms, it broadens but remains centred near the origin. For diffusing atoms, the distribution spreads and the width grows with time.
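A sketch of the radially averaged self part at a fixed time lag (names are assumptions; unwrapped coordinates again):

```python
import numpy as np

def van_hove_self(positions, lag, r_max, n_bins=100):
    """Radially averaged G_s(r, t): the distribution of single-particle
    displacement magnitudes after `lag` frames, averaged over atoms
    and time origins. positions: (n_frames, n_atoms, 3), unwrapped."""
    disp = positions[lag:] - positions[:-lag]
    r = np.linalg.norm(disp, axis=-1).ravel()
    counts, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    centres = 0.5 * (edges[:-1] + edges[1:])
    # probability density in r; integrates to the fraction of
    # displacements below r_max (close to 1 if r_max is large enough)
    density = counts / (len(r) * np.diff(edges))
    return centres, density
```

Repeating this at several lags shows the broadening directly: for diffusing atoms the distribution keeps spreading, while for vibrating atoms it stops growing.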

The distinct part \(G_d(\mathbf{r}, t)\) asks a different question: given that atom \(i\) was at some position at time 0, what is the probability of finding a different atom \(j\) at distance \(\mathbf{r}\) from that initial position after time \(t\)?

\[G_d(\mathbf{r}, t) = \frac{1}{N} \left\langle \sum_{i \neq j} \delta(\mathbf{r} - [\mathbf{r}_j(t) - \mathbf{r}_i(0)]) \right\rangle\]

At \(t = 0\), with the normalisation above, this reduces to \(\rho\, g(r)\), the pair distribution function scaled by the number density: peaks at the neighbour distances, reflecting the local structure around each atom. As time evolves, \(G_d(\mathbf{r}, t)\) reveals how this local structure changes. In a solid, the peaks persist: neighbours vibrate but remain in place. In a liquid, the peaks decay as neighbours diffuse away and are replaced by other atoms. The timescale over which correlations decay characterises structural relaxation.

For a diffusing species in a solid framework, the behaviour is intermediate. The mobile ions lose their initial correlations as they hop between sites, while the framework atoms maintain theirs. The distinct part thus probes collective behaviour: not just where one atom goes, but how the arrangement of atoms evolves.

8.10 Connection to experiment

The Van Hove functions connect to \(S(\mathbf{Q}, \omega)\) through a space-time Fourier transform:

\[S(\mathbf{Q}, \omega) = \int \int G(\mathbf{r}, t) \, \mathrm{e}^{\mathrm{i}(\mathbf{Q} \cdot \mathbf{r} - \omega t)} \, \mathrm{d}\mathbf{r} \, \mathrm{d}t\]

The self and distinct parts contribute differently to the measured scattering. For nuclei with large incoherent scattering cross-sections — hydrogen being the prime example — the scattering is dominated by the self part, reflecting single-particle motion. For coherent scatterers, the distinct part contributes, and the scattering reflects collective dynamics and structural correlations.

MD provides \(G(\mathbf{r}, t)\) directly — a real-space, real-time picture. The Fourier transform to \(S(\mathbf{Q}, \omega)\) then allows comparison with experiment: does the simulation reproduce the measured scattering? If it does, the mechanistic picture extracted from trajectory analysis is consistent with what the neutrons see.
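In practice, the incoherent (self) part is often computed via the self intermediate scattering function \(F_s(\mathbf{Q}, t) = \langle \mathrm{e}^{\mathrm{i}\mathbf{Q}\cdot[\mathbf{r}(t) - \mathbf{r}(0)]} \rangle\), the spatial Fourier transform of \(G_s\), followed by a Fourier transform in time. A sketch for a single \(\mathbf{Q}\) vector (names are assumptions; a Hann taper damps truncation ripple):

```python
import numpy as np

def incoherent_sqw(positions, q_vector, dt, max_lag):
    """Self intermediate scattering function F_s(Q, t), averaged over
    atoms and time origins, and its time Fourier transform as an
    estimate of the incoherent S(Q, w). positions: (n_frames, N, 3),
    unwrapped; q_vector: (3,)."""
    n = positions.shape[0]
    phase = positions @ q_vector                      # Q . r_i(t), (n, N)
    fs = np.array([
        np.mean(np.cos(phase[t:] - phase[: n - t]))   # Re <e^{iQ.dr}>
        for t in range(max_lag)
    ])
    window = np.hanning(2 * max_lag)[max_lag:]        # one-sided taper
    sqw = np.abs(np.fft.rfft(fs * window))
    omega = 2 * np.pi * np.fft.rfftfreq(max_lag, d=dt)
    return fs, omega, sqw
```

For free diffusion, \(F_s\) decays as \(\mathrm{e}^{-Q^2 D^* t}\), giving a Lorentzian quasielastic line of half-width \(Q^2 D^*\); deviations from this form are one signature of jump diffusion.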

8.11 The computational–experimental loop

The connection between MD and neutron scattering runs both ways.

Computation provides microscopic detail. From trajectories, we see how atoms move — mechanisms are visible directly. Calculating \(S(\mathbf{Q}, \omega)\) from the simulation then allows comparison with experiment. Agreement means the microscopic picture from simulation is consistent with the measured dynamics.

Disagreement raises questions. The \(E(\mathbf{r})\) method may be inadequate — the potential may not capture the relevant physics. But simulations typically model idealised structures: perfect stoichiometry, no impurities, no grain boundaries, infinite crystals via periodic boundary conditions. Real samples have defects, secondary phases, surface effects, non-stoichiometry. Disagreement might mean the simulation is wrong, or it might mean the sample differs from its idealised description. Computation can reveal what ideal behaviour should look like, highlighting where real samples deviate.

Experiment provides constraints and raises questions. Unexpected quasielastic broadening, a lineshape that does not fit standard models, dynamics that differ from predictions — these prompt computational investigation. What mechanism would produce this scattering? The most productive work often involves iteration: measurement raises a question, simulation proposes an answer, comparison tests the proposal.