Testing transition state theory on a simple Hamiltonian model

Abstract

Transition state theory relies on several assumptions in order to make predictions about metastable behavior in complex dynamical systems. We test those assumptions and verify its predictions on a simple Hamiltonian model of three particles moving in the plane, interacting via a pairwise Lennard-Jones potential. We find that the dynamics of the system can accurately be represented as a Markov chain on the metastable regions, in which transition periods are statistically independent and exponentially distributed, and, in addition, that the manner in which the mean transition frequency scales with system energy can be accurately predicted by transition state theory.

1. Introduction

Complex dynamical systems often display metastable behavior, in the sense that trajectories remain confined for long periods of time in separated regions of phase space and only switch from one region to another very occasionally. Due to the separation of timescales between the dynamics within the metastable regions and the frequency of transitions between them, it is tempting to approximate the dynamics by a Markov chain over the state space of the metastable regions with appropriate rate constants. One of the earliest attempts to determine these rate constants is transition state theory (TST). Unfortunately, both the Markov chain representation and TST rely on many assumptions which are very difficult to assess, such as the ergodicity of the system, the statistical independence of successive transitions between the metastable regions with waiting times Poisson (i.e. exponentially) distributed, the existence of a dividing surface in phase space with a low recrossing rate, and so on.

In this paper we test these assumptions numerically and verify the predictions of TST on a simple but nontrivial test model consisting of a Hamiltonian system of three particles moving in the plane and interacting via a pairwise Lennard-Jones potential. We find that waiting times between transitions are statistically independent and exponentially distributed, and that there exists a planar dividing surface in phase space separating two metastable regions, containing a saddle point of minimal potential energy near which transitions are most likely to occur. In addition, we derive and evaluate the mean transition frequency between metastable regions, with which we successfully predict the number of transitions as a function of the difference between the total system energy and the minimum saddle potential.

2. The System

The test system we consider consists of three point particles, with coordinates and momenta $(q_i, p_i)$, moving in the plane in the absence of an external field but under a pairwise Lennard-Jones potential

\pi(r) = \frac{1}{r^{12}} - \frac{1}{r^{6}},    (1)

where $r$ is the distance between two particles. This potential mimics the electrostatic and gravitational forces between physical particles by causing attraction at large distances and repulsion at small distances. The total system potential defined on the particle configuration space is then

V(q_i) = \left(\frac{1}{r_{12}^{12}} - \frac{1}{r_{12}^{6}}\right) + \left(\frac{1}{r_{13}^{12}} - \frac{1}{r_{13}^{6}}\right) + \left(\frac{1}{r_{23}^{12}} - \frac{1}{r_{23}^{6}}\right),    (2)

with $r_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$ the distance between particles $i$ and $j$.
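For concreteness, a minimal sketch of these two potential functions might look as follows. The function names and the use of Python/NumPy are illustrative assumptions, not code from the paper; the equilateral configuration used in the check is the minimizer derived in the next paragraph.

import numpy as np

def pair_potential(r):
    """Lennard-Jones pair potential pi(r) = 1/r^12 - 1/r^6 of equation (1)."""
    return 1.0 / r**12 - 1.0 / r**6

def total_potential(q):
    """Total potential V of equation (2) for q of shape (3, 2):
    the sum of pi(r_ij) over the three particle pairs."""
    V = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            r = np.linalg.norm(q[i] - q[j])
            V += pair_potential(r)
    return V

# Evaluate V at an equilateral triangle with side 2**(1/6),
# which the next paragraph shows to be the minimizing configuration.
r_min = 2.0 ** (1.0 / 6.0)
q_min = np.array([[0.0, 0.0],
                  [r_min, 0.0],
                  [0.5 * r_min, 0.5 * np.sqrt(3.0) * r_min]])
print(total_potential(q_min))   # approximately -0.75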
Stable equilibrium is achieved when the particles form an equilateral triangle whose sides minimize the pairwise potential $\pi(r)$. Setting $d\pi/dr = 0$, we find that this occurs when $r_{12} = r_{13} = r_{23} = r_{min} = 2^{1/6}$, producing a minimum system potential of

V_{min} \equiv 3\left(\frac{1}{r_{min}^{12}} - \frac{1}{r_{min}^{6}}\right) = -.75 .

Naturally, when the particles are indistinguishable there is only one such equilateral triangle configuration, where we invoke the homogeneity of space and ignore translation and rotation. Labeling the particles, however, leads to two: one in which the cross product $\sigma = (q_2 - q_1)\times(q_3 - q_1)$ is positive and one in which it is negative (we use vector notation to emphasize the geometric interpretation of the coordinates). We may think of these as corresponding to right- and left-handed coordinate systems, respectively.

As these two configurations are the only stable equilibria in configuration space and are equivalent by symmetry, the closures of their basins of attraction must partition the space evenly, with their common boundary B forming a planar submanifold of codimension 1. This boundary consists of unstable configurations where $\sigma = 0$ – that is, where $q_1$, $q_2$ and $q_3$ are collinear. Due to its symmetric situation between the two basins of attraction, the spectrum of the Hessian of V evaluated on B contains positive eigenvalues corresponding to the eigenvectors spanning B, and a single negative eigenvalue corresponding to the eigenvector perpendicular to B. For this reason, we regard all $q \in B$ as saddle points of the potential. Moreover, because $V \uparrow 0$ as $|q| \to \infty$ and the system energy is always negative, configuration space is inaccessible for large $|q|$, effectively bounding the accessible region (which is, in addition, closed). Uniform continuity of V therefore implies that there must exist a saddle point of minimum potential. By symmetry this can only occur when the inner particle is equidistant from the outer two, and so we can solve for this common distance $r_{sad}$ by setting $dV_{sad}/dr = 0$, where $V_{sad}(r)$ denotes the potential of the collinear configuration with common separation $r$.

The total system energy is specified through the parameter $\varepsilon$, the excess energy (the amount necessary to achieve a transition) relative to the saddle potential. $\varepsilon$ is like an order parameter for the system, in that the mean transition frequency is zero for $\varepsilon < 0$ and becomes strictly positive at $\varepsilon = 0$. As we will see, the frequency scales with $\varepsilon$ from this point on like

\left(1 + \frac{V_{barrier}}{\varepsilon V_{sad}}\right)^{-2},

where $\varepsilon V_{sad}$ is the unnormalized excess kinetic energy and $V_{barrier} = V_{sad} - V_{min}$ is the barrier height, until the large-$\varepsilon$ limit, when metastability breaks down and the system roams phase space freely.

Four hundred simulation runs were performed by varying $\varepsilon$ from .0001 to .0400 in increments of .0001, each for a duration of $10^8$ time steps. The time step was set to .001 seconds, resulting in a "real time" simulation duration of $10^5$ seconds. These numbers were chosen heuristically to produce meaningful statistics without unnecessarily long run times. During each run, the sequence of transition times was recorded as well as, for some runs, the system kinetic and potential energies at each time step. It is the statistics of these transition times that we wish to predict using transition state theory.
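As an illustration of the kind of loop just described, the following sketch integrates the three-body system with a basic position-Verlet step (the paper's equation (5), reproduced in the error discussion below) and records a transition whenever $\sigma$ changes sign. This is hypothetical helper code under simplifying assumptions (unit masses, NumPy arrays), not the authors' implementation; the initial conditions (6)-(8) are not specified in this preview and are left as inputs.

import numpy as np

def forces(q):
    """Forces -grad V on each particle for the pairwise potential of equation (1)."""
    F = np.zeros_like(q)
    for i in range(3):
        for j in range(i + 1, 3):
            d = q[i] - q[j]
            r = np.linalg.norm(d)
            dpi_dr = -12.0 / r**13 + 6.0 / r**7   # derivative of the pair potential
            fij = -dpi_dr * d / r                 # force on particle i due to particle j
            F[i] += fij
            F[j] -= fij
    return F

def orientation(q):
    """z-component of sigma = (q2 - q1) x (q3 - q1); its sign labels the metastable region."""
    a, b = q[1] - q[0], q[2] - q[0]
    return a[0] * b[1] - a[1] * b[0]

def count_transitions(q, q_prev, dt=1e-3, n_steps=100_000):
    """Position-Verlet loop with unit masses; q and q_prev are (3, 2) float arrays
    encoding the initial conditions, which are not given in this preview."""
    transition_times, sign = [], np.sign(orientation(q))
    for step in range(1, n_steps + 1):
        q_next = 2.0 * q - q_prev + dt**2 * forces(q)   # Verlet step, cf. equation (5)
        q_prev, q = q, q_next
        s = np.sign(orientation(q))
        if s != 0 and s != sign:                        # sigma changed sign: a transition
            transition_times.append(step * dt)
            sign = s
    return transition_times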
4. Theoretical Predictions

4.1 Setup

In this section we make use of the topology of configuration space, with respect to the potential V, as well as the initial conditions outlined above, to derive the mean transition frequency between metastable states, the quantity on which a Markov model would be based. From there it is a simple matter to derive the Marcus formula, a predictor for the expected waiting times between metastable state transitions.

To derive these quantities, we begin by invoking one of the fundamental assumptions of TST, that the flow $\varphi_t$ representing the dynamics of the system is ergodic with respect to an invariant probability measure $\mu$ on the manifold $M \equiv \mathbb{R}^{12}$, which represents the phase space accessible to the system before initial conditions are imposed. As $\varphi_t$ is induced by a vector field F on phase space, which is, in turn, imposed by the gradient of V, ergodicity here is essentially a statement about the form of the potential. We make the further assumption that $\mu$ has a corresponding density $f$ such that $\mu(dx) = f(x)\,dx$, with $dx$ the Lebesgue measure on M. Denoting by $\chi_A$ the characteristic function of a set A, ergodicity gives us

\mu(A) = \lim_{T\to\infty}\frac{1}{T}\int_0^T \chi_A(\varphi_t(x))\, dt    (9)

for almost all $x \in M$ and for any measurable subset A of M. In what follows, the variable A will stand for either of our two open sets $A^+$ and $A^-$.

Let $t_{A,j}(x)$ be the amount of time the trajectory $\varphi_t(x)$ spends in region A on its jth visit, and $N(A,T,x)$ the number of visits up until time T. We define the mean visit frequency of A to be the function

\omega_A(x) = \lim_{T\to\infty}\frac{N(A,T,x)}{T}    (10)

and the mean visiting time in A (equivalent to the mean waiting time between transitions) to be the function

\tau_A(x) = \lim_{T\to\infty}\frac{1}{N(A,T,x)}\sum_{j=1}^{N(A,T,x)} t_{A,j}(x).    (11)

$\omega_A$ is what we are ultimately after. Equation (9) gives us $\sum_{j=1}^{N(A,T,x)} t_{A,j}(x) = \mu(A)\cdot T$, and so we arrive at the relation $\omega_A(x)\,\tau_A(x) = \mu(A)$. However, as the right side of the equality does not depend on x (due to ergodicity), neither does the left, and it becomes

\omega_A\,\tau_A = \mu(A).    (12)

Rigorous justification for this result follows from the following theorem, proved in (1), which we will need in order to explicitly calculate $\omega_{A^+}$ and $\omega_{A^-}$. Let A be open with $C^1$ border $\partial A$. Then for almost all $x \in M$,

\omega_A(x) \equiv \omega_A = \frac{1}{Z}\int_{\partial A}\bigl(F(x)\cdot\hat n(x)\bigr)^+ f(x)\, d\sigma(x),    (13)

where $\hat n(x)$ is the unit vector pointing outward from the surface element $d\sigma(x)$ and $(a)^+ = \max(a, 0)$.

This formula can be interpreted as follows. $f(x)\,d\sigma(x)$ is a differential unit of the probability measure $\mu$ restricted to the surface $\partial A$. The dot product $F\cdot\hat n(x)$ represents the component of the flow pointing outward from $\partial A$ at x, with the enclosing operator $(\cdot)^+$ ensuring that we only include locations where the flow indeed points out. The product of these terms represents the amount of measure "flowing out of A" at the location x. As $\mu$ is an invariant measure, the integral of this quantity over the entire closed surface must equal the total amount of measure flowing into A. The key point is that while the flow of measure into A and out of A must be equal, their common magnitude is variable, and it is independent of $\mu(A)$.
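Relations (10)-(12) also suggest a purely empirical estimate of $\omega_A$ and $\tau_A$ from the transition times recorded during a single long run. The sketch below is hypothetical bookkeeping code (assuming a list of transition times such as the one produced by the loop above), not part of the paper.

import numpy as np

def empirical_rates(transition_times, T):
    """Estimate the mean visit frequency omega_A of (10) and the mean visiting
    time tau_A of (11) from the recorded transition times of one run of length T.
    Each region is visited on every other transition, hence the factor 1/2."""
    transition_times = np.asarray(transition_times, dtype=float)
    n_visits = len(transition_times) / 2.0              # visits to one region A
    omega_A = n_visits / T                              # equation (10)
    waits = np.diff(transition_times)                   # successive visiting times,
    tau_A = waits.mean() if len(waits) else np.inf      # alternating between A+ and A-,
    return omega_A, tau_A                               # identical by symmetry

# Consistency check with (12): omega_A * tau_A should approach mu(A) = 1/2.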
4.2 The partition function

On the phase space manifold M, the probability measure $\mu$ has the following differential form:

d\mu(q_i, p_i) = \frac{1}{Z}\,\delta\bigl(E - H(q_i, p_i)\bigr)\,\delta\bigl(M_c(q_i)\bigr)\,\delta\bigl(P_c(p_i)\bigr)\,\delta\bigl(J(q_i, p_i)\bigr)\, d^6p\, d^6q,    (14)

where the delta functions ensure that positive measure is restricted to the submanifold $M' \subset M$ where the conserved quantities – energy, center of mass and momentum, and angular momentum – take on their appropriate values. It is on this submanifold that the dynamics of our system occur. The differential $d^6p\, d^6q$ represents a volume element of M, $E = \varepsilon V_{sad} + V_{sad}$ is the total system energy, and the partition function Z acts as a normalization factor. It is the calculation of the partition function to which we now turn.

Explicit integration over the manifold M is clearly infeasible. Instead, we introduce generalized coordinates whose number reflects the actual number of degrees of freedom possessed by the system and integrate over the lower-dimensional manifold $M'$ that they parameterize. Before initial conditions are introduced, phase space is 12-dimensional, representing all of M, due to the three position and three momentum variables, each with two components. Excluding the energy constraint, the initial conditions imposed are vector equations, implying that each one subtracts two degrees of freedom from the system. Specifically, (6) reduces the dimensionality of configuration space by two, (7) reduces the dimensionality of momentum space by two, and (8) reduces that of both spaces by one. We are therefore left with $12 - 2\cdot 3 = 6$ independent generalized coordinates $\tilde q_i, \tilde p_i$, $i = 1, 2, 3$, which together parameterize the 6-dimensional manifold $M'$. After our change of variables, the measure $\mu$ restricted to $M'$ becomes

d\mu(\tilde q_i, \tilde p_i) = \frac{1}{Z}\,\delta\bigl(H(\tilde q_i, \tilde p_i) - E\bigr)\, d^3\tilde q\, d^3\tilde p.    (15)

An explicit (and integrable) form for the Hamiltonian is now obtained by taking a harmonic approximation of the potential $V(q_i)$ at a fixed point in configuration space, where the first-order terms in the Taylor expansion vanish. As will be clear in a moment, it is advantageous for all eigenvalues of the Hessian $H_V$ of the potential at this point to be positive, ruling out the saddle points. Hence we are left with centering our approximation at either of the potential minima, where $H_V$ is identical by symmetry and positive definite. Denoting $q^m \equiv (q_1^m, q_2^m, q_3^m)$ as our chosen minimum configuration, where particle i is located in the plane at $q_i^m$, the potential (2) takes the form

V(q_i) = V(q^m) + \frac{1}{2}\sum_{i=1}^{3}\lambda_i^m\,\tilde q_i^{\,2} + O(\tilde q_i^{\,3}),    (16)

where $\lambda_i^m$ is the ith eigenvalue of $H_V$ evaluated at $q^m$ and we have selected the $\tilde q_i$ to be coordinates along the directions parallel to the eigenvectors of $H_V$, which, by self-adjointness of $H_V$, we take to form an orthonormal basis for the 3-dimensional generalized configuration space. We may thus put the Hamiltonian into an explicitly integrable quadratic form and evaluate Z.
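The eigenvalues $\lambda_i^m$ entering this quadratic form (and the analogous $\lambda_2^s$, $\lambda_3^s$ at the saddle, which appear below) can be obtained numerically. The following is a rough sketch using a central finite-difference Hessian of the total potential; it is a generic numerical recipe offered for illustration, not the authors' procedure.

import numpy as np

def total_potential(q):
    """Total potential V of equation (2); q has shape (3, 2)."""
    V = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            r = np.linalg.norm(q[i] - q[j])
            V += 1.0 / r**12 - 1.0 / r**6
    return V

def numerical_hessian(V, q, h=1e-4):
    """Central finite-difference Hessian of V at configuration q,
    treating q as a flat 6-dimensional coordinate vector."""
    x0 = q.ravel().astype(float)
    n = x0.size
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            def shifted(da, db):
                x = x0.copy()
                x[a] += da
                x[b] += db
                return V(x.reshape(q.shape))
            H[a, b] = (shifted(h, h) - shifted(h, -h)
                       - shifted(-h, h) + shifted(-h, -h)) / (4.0 * h * h)
    return H

# At the equilateral minimum the six eigenvalues split into three near-zero modes
# (two translations and one rotation, removed by the generalized coordinates)
# and three positive ones, which play the role of lambda_i^m.
r_min = 2.0 ** (1.0 / 6.0)
q_min = np.array([[0.0, 0.0],
                  [r_min, 0.0],
                  [0.5 * r_min, 0.5 * np.sqrt(3.0) * r_min]])
print(np.round(np.linalg.eigvalsh(numerical_hessian(total_potential, q_min)), 3))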
Note that the argument to the δ-function has not changed – it has simply been rearranged. Also, we have imposed positivity on the quantity $E - V_{sad} - \frac{1}{2}\tilde p_1^{\,2}$, to ensure that the physically necessary condition $E - V_{sad} \ge \frac{1}{2}\tilde p_1^{\,2}$ holds. This condition expresses the important requirement that as the system crosses the dividing plane $\partial A$, the kinetic energy stored in the motion perpendicular to the plane cannot exceed the excess kinetic energy allotted to the system, $E - V_{sad} = \varepsilon V_{sad}$.

The next few steps are analogous to those in the derivation of Z. We first employ the change of variables $x_i = \sqrt{\lambda_i^s}\,\tilde q_i$, $d^2\tilde q = d^2x/\sqrt{\lambda_2^s\lambda_3^s}$, to produce

\omega_A = \frac{1}{Z\sqrt{\lambda_2^s\lambda_3^s}}\int_0^\infty\!\!\int_{\mathbb{R}^4} \tilde p_1\, \delta\!\left(\frac{1}{2}\sum_{i=2,3}\bigl(x_i^2 + \tilde p_i^{\,2}\bigr) - \Bigl(E - V_{sad} - \frac{1}{2}\tilde p_1^{\,2}\Bigr)\right) d^2x\, d^2\tilde p\, d\tilde p_1 .

Then, using the co-area formula, we let $r^2 = x_2^2 + x_3^2 + \tilde p_2^{\,2} + \tilde p_3^{\,2}$ and put

\omega_A = \frac{1}{Z\sqrt{\lambda_2^s\lambda_3^s}}\int_0^\infty\!\!\int_0^\infty \tilde p_1\, \delta\!\left(\frac{1}{2}r^2 - \Bigl(E - V_{sad} - \frac{1}{2}\tilde p_1^{\,2}\Bigr)\right) S_4\, r^3\, dr\, d\tilde p_1 .

Finally, taking $t = \frac{1}{2}r^2$, we have

\omega_A = \frac{S_4}{Z\sqrt{\lambda_2^s\lambda_3^s}}\int_0^\infty\!\!\int_0^\infty \tilde p_1\, 2t\, \delta\!\left(t - \Bigl(E - V_{sad} - \frac{1}{2}\tilde p_1^{\,2}\Bigr)\right) dt\, d\tilde p_1
       = \frac{2 S_4}{Z\sqrt{\lambda_2^s\lambda_3^s}}\int_0^\infty \tilde p_1\left(E - V_{sad} - \frac{1}{2}\tilde p_1^{\,2}\right)^{\!+} d\tilde p_1 .    (22)

The elementary integral (22) can now be solved to obtain

\omega_A = \frac{2 S_4}{Z\sqrt{\lambda_2^s\lambda_3^s}}\int_0^\infty \bigl(E - V_{sad} - u\bigr)^+\, du
       = \frac{2 S_4}{Z\sqrt{\lambda_2^s\lambda_3^s}}\cdot\frac{1}{2}\bigl(E - V_{sad}\bigr)^2
       = \frac{S_4\,\bigl(E - V_{sad}\bigr)^2}{Z\sqrt{\lambda_2^s\lambda_3^s}} .    (23)

Inserting (19) into (23), and using the ratio $S_4/S_6 = 2\pi^2/\pi^3 = 2/\pi$, we arrive at a closed-form expression for $\omega_A$:

\omega_A = \frac{S_4\,\sqrt{\prod_{1\le i\le 3}\lambda_i^m}\,\bigl(E - V_{sad}\bigr)^2}{4\, S_6\,\sqrt{\lambda_2^s\lambda_3^s}\,\bigl(E - V_{min}\bigr)^2}
       = \frac{1}{2\pi}\,\frac{\sqrt{\prod_{1\le i\le 3}\lambda_i^m}}{\sqrt{\lambda_2^s\lambda_3^s}}\,\frac{\bigl(E - V_{sad}\bigr)^2}{\bigl(E - V_{min}\bigr)^2} .    (24)

It is a simple matter now to derive the Marcus formula for the mean visiting time $\tau_A$. We recall the relation (12), which relates $\tau_A$ with $\omega_A$ through the probability measure $\mu(A)$: $\omega_A\,\tau_A = \mu(A)$. By symmetry, as the sets $A^+$ and $A^-$ partition $M\setminus\partial A$ evenly, we deduce $\mu(A^+) = \mu(A^-) = \frac{1}{2}$. Hence,

\tau_A = \frac{1}{2\,\omega_A} = \pi\,\frac{\sqrt{\lambda_2^s\lambda_3^s}}{\sqrt{\prod_{1\le i\le 3}\lambda_i^m}}\,\frac{\bigl(E - V_{min}\bigr)^2}{\bigl(E - V_{sad}\bigr)^2} .    (25)

5. Results

Before checking the accuracy of our predictions for $\omega_A$ and $\tau_A$, we show that the statistics of our 3-body system conform to the assumptions of TST: namely, that visiting times between transitions are exponentially distributed, suggesting a memoryless Markov process, and that successive visiting times are statistically independent. The assumption regarding the existence of an optimal dividing surface in phase space with low recrossing rate (our $\partial A$) holds trivially due to the symmetry of the metastable regions. While the ergodicity of the flow $\varphi_t$ is not verified explicitly, it is presumed to follow from the accuracy of our predictions, which rely upon it heavily.

In Figure 1 we show the successive waiting time correlation function

\rho_\varepsilon = \frac{\overline{\tau_i\,\tau_{i+1}}}{\overline{\tau}^{\,2}}    (26)

as a function of $\varepsilon$, where $\tau_i$ is the waiting time between the ith and (i+1)th transitions (i.e., the (i/2)th visiting time inside either $A^+$ or $A^-$, modulo 1). As $\varepsilon$ reaches values at which a statistically meaningful number of transitions occur, we see that $\rho$ converges to 1.0, indicating the statistical independence of the waiting times. The mean value of all correlations with respect to $\varepsilon$, ignoring zero values for which no transitions occurred, is 0.981.

Statistical independence is further suggested by the so-called parking lot plot for $\varepsilon = .0400$ in Figure 2. Here, successive waiting times $(\tau_i, \tau_{i+1})$ are plotted, with the expectation that no patterns will emerge suggesting statistical dependence. In fact, two discernible patterns can be seen. The first is the decay in density of points as one moves away from the origin, resulting from the fact that for larger values of $\varepsilon$, shorter waiting times become increasingly probable, concentrating the points near the origin.
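A hedged sketch of the two statistical checks described above – the successive-waiting-time correlation of equation (26) and a crude test of exponentiality – is given below. It is written against a generic array of waiting times; the synthetic exponential sample stands in for the recorded data, which are not reproduced in this preview.

import numpy as np

def waiting_time_checks(waits):
    """Given successive waiting times tau_i of one run, return the correlation
    rho of equation (26) and the coefficient of variation of the waits."""
    waits = np.asarray(waits, dtype=float)
    rho = np.mean(waits[:-1] * waits[1:]) / np.mean(waits) ** 2   # equation (26)
    # For an exponential distribution the mean equals the standard deviation,
    # so this ratio should also be close to 1.
    cv = waits.std() / waits.mean()
    return rho, cv

# Example with synthetic exponential data standing in for the recorded waits.
rng = np.random.default_rng(0)
fake_waits = rng.exponential(scale=250.0, size=4000)
print(waiting_time_checks(fake_waits))   # both values near 1.0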
Having successfully matched prediction to data, however, we must still address the impact of the error introduced by the Verlet algorithm on our transition data. It must be remembered that our experiment depends sensitively on the excess kinetic energy possessed by the system, which can amount to as little as one ten-thousandth of the total system energy, and the discretization error acts as a form of noise on top of this quantity. Moreover, in the neighborhood of a saddle point, translational error can enable a particle to "tunnel through" a potential barrier, much as a physical particle would quantum mechanically, producing an artificial transition. These concerns may be allayed with a simple realization, however.

While particle velocity does not appear explicitly in (5), reproduced here,

q(t + \Delta t) = 2\,q(t) - q(t - \Delta t) - \frac{\Delta t^2}{m}\bigl(\nabla\pi(r_1) + \nabla\pi(r_2)\bigr) + O(\Delta t^4),

where $r_1$ and $r_2$ denote the given particle's distances to the other two particles, we can see, by subtracting $q(t)$ from both sides and dividing by $\Delta t$,

\frac{q(t + \Delta t) - q(t)}{\Delta t} = \frac{q(t) - q(t - \Delta t)}{\Delta t} - \frac{\Delta t}{m}\bigl(\nabla\pi(r_1) + \nabla\pi(r_2)\bigr) + O(\Delta t^3),

that this position integration formula can be equivalently interpreted as a velocity integration formula. In these terms, the error impinges on, and is proportional to, the velocity. But the neighborhoods of transition points are specifically those regions of phase space where nearly all energy has been converted into potential, leaving little for kinetic, implying the error would be negligibly small. This argument is confirmed by Figure 6, which displays the energy fluctuations over time for a particular simulation run. The fluctuations during time intervals around transition points – marked by arrows – are noticeably smaller than the typical fluctuations, which are themselves of order .005 energy units, 100 times smaller than the saddle energy. Obviously they could never produce errant transitions.

In conclusion, we have shown that the large-scale dynamics of our 3-body Hamiltonian system satisfy the assumptions of transition state theory. First, there exist metastable regions of phase space $A^+$ and $A^-$ in which the system remains confined for long periods of time between transitions, and which are separated by a dividing plane $\partial A$ with a low recrossing rate. Second, the waiting times between successive transitions from one region to another are statistically independent and exponentially distributed, suggesting that the discrete state-space dynamics possess the Markov property and in fact can be modeled as a Poisson process. We have also shown that the mean transition frequency and, hence, the mean visiting time in a given metastable region can be successfully predicted based on assumptions of ergodicity and smoothness of the boundary $\partial A$. Future work may involve increasing the dimensionality of the problem by adding more particles or embedding the particles in higher dimensions, or explicitly adding noise to the dynamics and deriving the mean frequency as a function of both energy and temperature.