Supplementary subject: Quantum Chemistry
Perturbation theory
6 lectures (Tuesday and Friday, weeks 4-6 of Hilary term)
Chris-Kriton Skylaris
Physical & Theoretical Chemistry Laboratory
February 24, 2006

Bibliography

All the material required is covered in "Molecular Quantum Mechanics", fourth edition, by Peter Atkins and Ronald Friedman (OUP 2005); specifically Chapter 6, the first half of Chapter 12, and Section 9.11.

Further reading:
"Quantum Chemistry", fourth edition, by Ira N. Levine (Prentice Hall 1991).
"Quantum Mechanics" by F. Mandl (Wiley 1992).
"Quantum Physics", third edition, by Stephen Gasiorowicz (Wiley 2003).
"Modern Quantum Mechanics", revised edition, by J. J. Sakurai (Addison Wesley Longman 1994).
"Modern Quantum Chemistry" by A. Szabo and N. S. Ostlund (Dover 1996).

Contents

1 Introduction
2 Time-independent perturbation theory
  2.1 Non-degenerate systems
    2.1.1 The first order correction to the energy
    2.1.2 The first order correction to the wavefunction
    2.1.3 The second order correction to the energy
    2.1.4 The closure approximation
  2.2 Perturbation theory for degenerate states
3 Time-dependent perturbation theory
  3.1 Revision: The time-dependent Schrödinger equation with a time-independent Hamiltonian
  3.2 Time-independent Hamiltonian with a time-dependent perturbation
  3.3 Two-level time-dependent system - Rabi oscillations
  3.4 Perturbation varying "slowly" with time
  3.5 Perturbation oscillating with time
    3.5.1 Transition to a single level
    3.5.2 Transition to a continuum of levels
  3.6 Emission and absorption of radiation by atoms
4 Applications of perturbation theory
  4.1 Perturbation caused by uniform electric field
  4.2 Dipole moment in uniform electric field
  4.3 Calculation of the static polarizability
  4.4 Polarizability and electronic molecular spectroscopy
  4.5 Dispersion forces
  4.6 Revision: Antisymmetry, Slater determinants and the Hartree-Fock method
  4.7 Møller-Plesset many-body perturbation theory

Lecture 1

Grouping terms according to powers of $\lambda$ gives

$$\{\hat H^{(0)}\psi_n^{(0)} - E_n^{(0)}\psi_n^{(0)}\}
+ \lambda\{\hat H^{(0)}\psi_n^{(1)} + \hat H^{(1)}\psi_n^{(0)} - E_n^{(0)}\psi_n^{(1)} - E_n^{(1)}\psi_n^{(0)}\}
+ \lambda^2\{\hat H^{(0)}\psi_n^{(2)} + \hat H^{(1)}\psi_n^{(1)} + \hat H^{(2)}\psi_n^{(0)} - E_n^{(0)}\psi_n^{(2)} - E_n^{(1)}\psi_n^{(1)} - E_n^{(2)}\psi_n^{(0)}\}
+ \cdots = 0 \qquad (7)$$

Notice how in each bracket terms of the same order are grouped (for example $\hat H^{(1)}\psi_n^{(1)}$ is a second order term because the sum of the orders of $\hat H^{(1)}$ and $\psi_n^{(1)}$ is 2). The powers of $\lambda$ are linearly independent functions, so the only way the above equation can be satisfied for all (arbitrary) values of $\lambda$ is if the coefficient of each power of $\lambda$ is zero.
By setting each such term to zero we obtain the following set of equations

$$\hat H^{(0)}\psi_n^{(0)} = E_n^{(0)}\psi_n^{(0)} \qquad (8)$$
$$(\hat H^{(0)} - E_n^{(0)})\psi_n^{(1)} = (E_n^{(1)} - \hat H^{(1)})\psi_n^{(0)} \qquad (9)$$
$$(\hat H^{(0)} - E_n^{(0)})\psi_n^{(2)} = (E_n^{(2)} - \hat H^{(2)})\psi_n^{(0)} + (E_n^{(1)} - \hat H^{(1)})\psi_n^{(1)} \qquad (10)$$
$$\cdots$$

To simplify the expressions, from now on we will use bra-ket notation, representing wavefunction corrections by their state number, so $\psi_n^{(0)} \equiv |n^{(0)}\rangle$, $\psi_n^{(1)} \equiv |n^{(1)}\rangle$, etc.

2.1.1 The first order correction to the energy

To derive an expression for the first order correction to the energy $E_n^{(1)}$, take equation (9) in ket notation,

$$(\hat H^{(0)} - E_n^{(0)})|n^{(1)}\rangle = (E_n^{(1)} - \hat H^{(1)})|n^{(0)}\rangle \qquad (11)$$

and multiply from the left by $\langle n^{(0)}|$ to obtain

$$\langle n^{(0)}|(\hat H^{(0)} - E_n^{(0)})|n^{(1)}\rangle = \langle n^{(0)}|(E_n^{(1)} - \hat H^{(1)})|n^{(0)}\rangle \qquad (12)$$
$$\langle n^{(0)}|\hat H^{(0)}|n^{(1)}\rangle - E_n^{(0)}\langle n^{(0)}|n^{(1)}\rangle = E_n^{(1)}\langle n^{(0)}|n^{(0)}\rangle - \langle n^{(0)}|\hat H^{(1)}|n^{(0)}\rangle \qquad (13)$$
$$E_n^{(0)}\langle n^{(0)}|n^{(1)}\rangle - E_n^{(0)}\langle n^{(0)}|n^{(1)}\rangle = E_n^{(1)} - \langle n^{(0)}|\hat H^{(1)}|n^{(0)}\rangle \qquad (14)$$
$$0 = E_n^{(1)} - \langle n^{(0)}|\hat H^{(1)}|n^{(0)}\rangle \qquad (15)$$

where, in order to go from (13) to (14), we have used the fact that the eigenfunctions of the unperturbed Hamiltonian $\hat H^{(0)}$ are normalised, together with the Hermiticity of $\hat H^{(0)}$, which allows it to operate on its eigenket to its left:

$$\langle n^{(0)}|\hat H^{(0)}|n^{(1)}\rangle = \langle \hat H^{(0)}n^{(0)}|n^{(1)}\rangle = \langle E_n^{(0)}n^{(0)}|n^{(1)}\rangle = E_n^{(0)}\langle n^{(0)}|n^{(1)}\rangle \qquad (16)$$

So, according to our result (15), the first order correction to the energy is

$$E_n^{(1)} = \langle n^{(0)}|\hat H^{(1)}|n^{(0)}\rangle \qquad (17)$$

which is simply the expectation value of the first order Hamiltonian in the state $|n^{(0)}\rangle \equiv \psi_n^{(0)}$ of the unperturbed system.

Example 1. Calculate the first order correction to the energy of the nth state of a harmonic oscillator whose centre of potential has been displaced from 0 to a distance $l$.

The Hamiltonian of the unperturbed harmonic oscillator is

$$\hat H^{(0)} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}k\hat x^2 \qquad (18)$$

while the Hamiltonian of the perturbed system is

$$\hat H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}k(\hat x - l)^2 \qquad (19)$$
$$= -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}k\hat x^2 - lk\hat x + l^2\frac{1}{2}k \qquad (20)$$
$$= \hat H^{(0)} + l\hat H^{(1)} + l^2\hat H^{(2)} \qquad (21)$$

where we have defined $\hat H^{(1)} \equiv -k\hat x$ and $\hat H^{(2)} \equiv \frac{1}{2}k$, and $l$ plays the role of the perturbation parameter $\lambda$. According to equation (17),

$$E_n^{(1)} = \langle n^{(0)}|\hat H^{(1)}|n^{(0)}\rangle = -k\langle n^{(0)}|\hat x|n^{(0)}\rangle. \qquad (22)$$

From the theory of the harmonic oscillator (see earlier lectures in this course) we know that the diagonal matrix elements of the position operator in any state $|n^{(0)}\rangle$ of the harmonic oscillator are zero ($\langle n^{(0)}|\hat x|n^{(0)}\rangle = 0$), from which we conclude that the first order correction to the energy in this example is zero.
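Example 1 can also be checked numerically. The sketch below (with illustrative parameter values, not part of the original notes) builds $\hat H^{(0)}$ and $\hat x$ in a truncated harmonic-oscillator number basis, confirms that $E_n^{(1)} = -k\langle n^{(0)}|\hat x|n^{(0)}\rangle$ vanishes, and verifies by direct diagonalization that displacing the well leaves the low-lying energies unchanged, as it must.

```python
import numpy as np

# Illustrative parameters (hbar = 1); not taken from the notes.
m, k, l = 1.0, 1.0, 0.1
omega = np.sqrt(k / m)
N = 60                                   # size of the truncated number basis

n = np.arange(N)
# Position operator in the number basis: <n-1|x|n> = sqrt(n / (2 m omega)) (hbar = 1)
x = np.zeros((N, N))
off = np.sqrt(n[1:] / (2.0 * m * omega))
x[n[1:] - 1, n[1:]] = off
x[n[1:], n[1:] - 1] = off

H0 = np.diag((n + 0.5) * omega)          # unperturbed energies (n + 1/2) omega
E1 = -k * np.diag(x)                     # first-order corrections, eq. (22): all zero

# Exact check: H = H0 - l*k*x + (1/2)*k*l^2 is just a displaced well, same spectrum
H = H0 - l * k * x + 0.5 * k * l**2 * np.eye(N)
exact = np.linalg.eigvalsh(H)[:5]

print("E^(1)_n for n = 0..4 :", E1[:5])          # all zero
print("unperturbed energies :", np.diag(H0)[:5])
print("exact displaced well :", exact)           # same low-lying energies
```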
2.1.2 The first order correction to the wavefunction

We will now derive an expression for the calculation of the first order correction to the wavefunction. Multiply (9) from the left by $\langle k^{(0)}|$, where $k \neq n$, to obtain

$$\langle k^{(0)}|\hat H^{(0)} - E_n^{(0)}|n^{(1)}\rangle = \langle k^{(0)}|E_n^{(1)} - \hat H^{(1)}|n^{(0)}\rangle \qquad (23)$$
$$(E_k^{(0)} - E_n^{(0)})\langle k^{(0)}|n^{(1)}\rangle = -\langle k^{(0)}|\hat H^{(1)}|n^{(0)}\rangle \qquad (24)$$
$$\langle k^{(0)}|n^{(1)}\rangle = \frac{\langle k^{(0)}|\hat H^{(1)}|n^{(0)}\rangle}{E_n^{(0)} - E_k^{(0)}} \qquad (25)$$

where in going from (23) to (24) we have made use of the orthogonality of the zeroth order wavefunctions ($\langle k^{(0)}|n^{(0)}\rangle = 0$). Also, in (25) we are allowed to divide by $E_n^{(0)} - E_k^{(0)}$ because we have assumed non-degeneracy of the zeroth order problem (i.e. $E_n^{(0)} - E_k^{(0)} \neq 0$). To proceed in our derivation of an expression for $|n^{(1)}\rangle$ we will employ the identity operator expressed in the eigenfunctions of the unperturbed system (the zeroth order eigenfunctions):

$$|n^{(1)}\rangle = \hat 1|n^{(1)}\rangle = \sum_k |k^{(0)}\rangle\langle k^{(0)}|n^{(1)}\rangle \qquad (26)$$

Before substituting (25) into the above equation we must resolve a conflict: $k$ must be different from $n$ in (25), but not necessarily so in (26). Restricting the sum to $k \neq n$ implies that the first order correction to $|n\rangle$ will contain no contribution from $|n^{(0)}\rangle$. To impose this restriction we require that $\langle n^{(0)}|n\rangle = 1$ (this leads to $\langle n^{(0)}|n^{(j)}\rangle = 0$ for $j \geq 1$ — prove it!) instead of $\langle n|n\rangle = 1$. This choice of normalisation for $|n\rangle$ is called intermediate normalisation, and of course it does not affect any physical property calculated with $|n\rangle$, since observables are independent of the normalisation of wavefunctions. So now we can substitute (25) into (26) and get

$$|n^{(1)}\rangle = \sum_{k\neq n} |k^{(0)}\rangle\,\frac{\langle k^{(0)}|\hat H^{(1)}|n^{(0)}\rangle}{E_n^{(0)} - E_k^{(0)}} = \sum_{k\neq n} |k^{(0)}\rangle\,\frac{H^{(1)}_{kn}}{E_n^{(0)} - E_k^{(0)}} \qquad (27)$$

where the matrix element $H^{(1)}_{kn}$ is defined by the above equation.

2.1.3 The second order correction to the energy

To derive an expression for the second order correction to the energy, multiply (10) from the left with $\langle n^{(0)}|$ to obtain

$$\langle n^{(0)}|\hat H^{(0)} - E_n^{(0)}|n^{(2)}\rangle = \langle n^{(0)}|E_n^{(2)} - \hat H^{(2)}|n^{(0)}\rangle + \langle n^{(0)}|E_n^{(1)} - \hat H^{(1)}|n^{(1)}\rangle$$
$$0 = E_n^{(2)} - \langle n^{(0)}|\hat H^{(2)}|n^{(0)}\rangle - \langle n^{(0)}|\hat H^{(1)}|n^{(1)}\rangle \qquad (28)$$

where we have used the fact that $\langle n^{(0)}|n^{(1)}\rangle = 0$ (section 2.1.2). We now solve (28) for $E_n^{(2)}$,

$$E_n^{(2)} = \langle n^{(0)}|\hat H^{(2)}|n^{(0)}\rangle + \langle n^{(0)}|\hat H^{(1)}|n^{(1)}\rangle = H^{(2)}_{nn} + \langle n^{(0)}|\hat H^{(1)}|n^{(1)}\rangle \qquad (29)$$

which, upon substitution of $|n^{(1)}\rangle$ by the expression (27), becomes

$$E_n^{(2)} = H^{(2)}_{nn} + \sum_{k\neq n}\frac{H^{(1)}_{nk}H^{(1)}_{kn}}{E_n^{(0)} - E_k^{(0)}}. \qquad (30)$$
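A quick numerical sanity check of (17) and (30) (not part of the original notes): for a small matrix problem with non-degenerate zeroth-order levels and a random Hermitian perturbation, the energy through second order should agree with exact diagonalization up to terms of order $\lambda^3$.

```python
import numpy as np

# Compare second-order perturbation theory with exact diagonalization for a
# small illustrative problem (H^(2) = 0 here, so eq. (30) has no H^(2)_nn term).
rng = np.random.default_rng(1)
N, lam = 6, 0.05

E0 = np.arange(N, dtype=float)                 # non-degenerate zeroth-order energies 0,1,...,5
H1 = rng.standard_normal((N, N))
H1 = 0.5 * (H1 + H1.T)                         # real symmetric (Hermitian) H^(1)

n = 0                                          # perturb the lowest state
E_1 = H1[n, n]                                                            # eq. (17)
E_2 = sum(H1[n, k]**2 / (E0[n] - E0[k]) for k in range(N) if k != n)      # eq. (30)

E_pt    = E0[n] + lam * E_1 + lam**2 * E_2
E_exact = np.linalg.eigvalsh(np.diag(E0) + lam * H1)[n]
print("perturbation theory:", E_pt)
print("exact diagonalization:", E_exact)       # agreement to O(lambda^3)
```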
Example 2. Let us apply what we have learned so far to the "toy" model of a system which has only two (non-degenerate) levels (states), $|1^{(0)}\rangle$ and $|2^{(0)}\rangle$. Let $E_1^{(0)} < E_2^{(0)}$ and assume that there is only a first order term in the perturbed Hamiltonian and that the …

Lecture 2

2.2 Perturbation theory for degenerate states

… where now we use two indices to represent each state: the first index $n$ runs over the different energy eigenvalues, while the second index $i$ runs over the $d$ degenerate states belonging to a particular energy eigenvalue. Since we have $d$ degenerate states of energy $E_n^{(0)}$, any linear combination of these states is also a valid state of energy $E_n^{(0)}$. However, as the perturbation parameter $\lambda$ is varied continuously from 0 to some finite value, it is likely that the degeneracy of the states will be lifted (either completely or partially). The question that arises then is whether the states $\psi_{n,i}^{(0)}$ of equation (39) are the "correct" ones, i.e. whether they can be continuously transformed into the (in general) non-degenerate perturbed states. It turns out that this is usually not the case, and one has first to find the "correct" zeroth order states

$$\phi_{n,j}^{(0)} = \sum_{i=1}^{d} |(n,i)^{(0)}\rangle\, c_{ij}, \qquad j = 1,\ldots,d \qquad (40)$$

where the coefficients $c_{ij}$ that mix the $\psi_{n,i}^{(0)}$ are specific to the perturbation $\hat H^{(1)}$ and are determined by its symmetry. Here we will find a way to determine the "correct" zeroth order states $\phi_{n,j}^{(0)}$ and the first order correction to the energy. To do this we start from equation (9) with $\phi_{n,i}^{(0)}$ in place of $\psi_{n,i}^{(0)}$:

$$(\hat H^{(0)} - E_n^{(0)})\psi_{n,i}^{(1)} = (E_{n,i}^{(1)} - \hat H^{(1)})\phi_{n,i}^{(0)} \qquad (41)$$

Notice that we include the index $i$ in the notation for the first order energy $E_{n,i}^{(1)}$, since the perturbation may split the degenerate energy level $E_n^{(0)}$. Figure 1 shows an example for a hypothetical system with six states and a three-fold degenerate unperturbed level; the perturbation splits the degenerate level. In some cases the perturbation may have no effect on the degeneracy, or may only partly remove it.

Figure 1: Effect of a perturbation on energy levels. In this example the perturbation removes all the degeneracies of the unperturbed levels.

The next step involves multiplication from the left by $\langle (n,j)^{(0)}|$:

$$\langle (n,j)^{(0)}|\hat H^{(0)} - E_n^{(0)}|(n,i)^{(1)}\rangle = \langle (n,j)^{(0)}|E_{n,i}^{(1)} - \hat H^{(1)}|\phi_{n,i}^{(0)}\rangle \qquad (42)$$
$$0 = \langle (n,j)^{(0)}|E_{n,i}^{(1)} - \hat H^{(1)}|\phi_{n,i}^{(0)}\rangle \qquad (43)$$
$$0 = \sum_k \langle (n,j)^{(0)}|E_{n,i}^{(1)} - \hat H^{(1)}|(n,k)^{(0)}\rangle\, c_{ki} \qquad (44)$$

where we have made use of the Hermiticity of $\hat H^{(0)}$ to set the left-hand side to zero, and we have substituted the expansion (40) for $\phi_{n,i}^{(0)}$. Some further manipulation of (44) gives

$$\sum_k \left[\langle (n,j)^{(0)}|\hat H^{(1)}|(n,k)^{(0)}\rangle - E_{n,i}^{(1)}\langle (n,j)^{(0)}|(n,k)^{(0)}\rangle\right] c_{ki} = 0 \qquad (45)$$
$$\sum_k \left(H^{(1)}_{jk} - E_{n,i}^{(1)} S_{jk}\right) c_{ki} = 0 \qquad (46)$$

We thus arrive at equation (46), which describes a system of $d$ simultaneous linear equations for the $d$ unknown coefficients $c_{ki}$ ($k = 1,\ldots,d$) of the "correct" zeroth order state $\phi_{n,i}^{(0)}$. This is in fact a homogeneous system of linear equations, as all the constant terms (the right-hand sides) are zero. The trivial solution is obviously $c_{ki} = 0$, but we reject it because it has no physical meaning. As you know from your maths course, in order to obtain a non-trivial solution for such a system we must demand that the determinant of the matrix of coefficients is zero:

$$\left|H^{(1)}_{jk} - E_{n,i}^{(1)} S_{jk}\right| = 0 \qquad (47)$$

We now observe that, since $E_{n,i}^{(1)}$ occurs in every row, this determinant is actually a polynomial of degree $d$ in $E_{n,i}^{(1)}$, and solving the above equation for its $d$ roots gives all the first order corrections $E_{n,i}^{(1)}$ ($i = 1,\ldots,d$) to the energies of the $d$ degenerate levels with energy $E_n^{(0)}$. We can then substitute each $E_{n,i}^{(1)}$ value into (46) to find the corresponding non-trivial set of coefficients $c_{ki}$ ($k = 1,\ldots,d$), in other words the function $\phi_{n,i}^{(0)}$. Finally, you should be able to verify that $E_{n,i}^{(1)} = \langle\phi_{n,i}^{(0)}|\hat H^{(1)}|\phi_{n,i}^{(0)}\rangle$, i.e. that expression (17), which gives the first order energy as the expectation value of the first order Hamiltonian in the zeroth order wavefunctions, still holds, provided the "correct" degenerate zeroth order wavefunctions are used.

Example 3. A typical example of degenerate perturbation theory is provided by the study of the n = 2 states of a hydrogen atom inside an electric field. In a hydrogen atom all four n = 2 states (one 2s orbital and three 2p orbitals) have the same energy. The lifting of this degeneracy when the atom is placed in an electric field is called the Stark effect, and here we will study it using first order perturbation theory for degenerate systems. Assuming that the electric field $\mathcal{E}$ is applied along the z-direction, the form of the perturbation is

$$\lambda\hat H^{(1)} = e\mathcal{E}_z z \qquad (48)$$

where the strength of the field $\mathcal{E}_z$ plays the role of the parameter $\lambda$. Even though we have four states, parity and symmetry considerations show that only the 2s and 2p$_z$ orbitals have a non-zero off-diagonal $\hat H^{(1)}$ matrix element, and as a result the $4\times4$ system of equations (46) reduces to the following $2\times2$ system (note that here all states are already orthogonal, so the overlap matrix is equal to the unit matrix):

$$e\mathcal{E}_z\begin{pmatrix} \langle 2s|z|2s\rangle & \langle 2s|z|2p_z\rangle \\ \langle 2p_z|z|2s\rangle & \langle 2p_z|z|2p_z\rangle \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = E^{(1)}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \qquad (49)$$

which, after evaluating the matrix elements, becomes

$$\begin{pmatrix} 0 & -3e\mathcal{E}_z a_0 \\ -3e\mathcal{E}_z a_0 & 0 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = E^{(1)}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}. \qquad (50)$$

The solution of the above system gives the following first order energies and "correct" zeroth order wavefunctions:

$$E^{(1)} = \pm 3e\mathcal{E}_z a_0 \qquad (51)$$
$$\phi_{n,1}^{(0)} = \frac{1}{\sqrt 2}\left(|2s\rangle - |2p_z\rangle\right), \qquad \phi_{n,2}^{(0)} = \frac{1}{\sqrt 2}\left(|2s\rangle + |2p_z\rangle\right) \qquad (52)$$

Therefore, the effect of the perturbation (to first order) on the energy levels can be summarised in the diagram of Figure 2.
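The $2\times2$ secular problem (50) can equally well be handed to a numerical eigensolver. The sketch below (atomic units, with an arbitrary illustrative field strength) reproduces the energies (51) and the mixed states (52).

```python
import numpy as np

# Stark effect for the n = 2 level of hydrogen in atomic units (e = hbar = a0 = 1).
# Only |2s> and |2pz> couple; <2s|z|2pz> = -3 a0, as used in eq. (50).
Ez = 1e-3                                 # illustrative field strength (a.u.)
H1 = Ez * np.array([[0.0, -3.0],
                    [-3.0, 0.0]])         # eq. (50) in the {|2s>, |2pz>} basis

E1, C = np.linalg.eigh(H1)                # solve the secular problem (46)/(47)
print("first-order energies:", E1)        # -3 Ez and +3 Ez, cf. eq. (51)
print("correct zeroth-order states (columns):")
print(C)                                  # (|2s> -/+ |2pz>)/sqrt(2), cf. eq. (52)
```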
Finally, we should mention that the energy levels of the hydrogen atom are also affected by the presence of a uniform magnetic field $\mathbf{B}$. This is called the Zeeman effect, and the form of the perturbation in that case is

$$\hat H^{(1)} = \frac{e}{2m}(\hat{\mathbf{L}} + 2\hat{\mathbf{S}})\cdot\mathbf{B}$$

where $\hat{\mathbf{L}}$ is the orbital angular momentum of the electron and $\hat{\mathbf{S}}$ is its spin angular momentum.

Lecture 3

3.2 Time-independent Hamiltonian with a time-dependent perturbation

We will use perturbation theory to approximate the solution $\Psi(\mathbf{r},t)$ of the time-dependent Schrödinger equation of the perturbed system,

$$i\hbar\frac{\partial\Psi}{\partial t} = \hat H(t)\Psi \qquad (61)$$

At any instant $t$ we can expand $\Psi(\mathbf{r},t)$ in the complete set of eigenfunctions $\psi_k^{(0)}(\mathbf{r})$ of the zeroth order Hamiltonian $\hat H^{(0)}$,

$$\Psi(\mathbf{r},t) = \sum_k b_k(t)\,\psi_k^{(0)}(\mathbf{r}) \qquad (62)$$

but of course the expansion coefficients $b_k(t)$ vary with time, as $\Psi(\mathbf{r},t)$ does. In fact, let us define $b_k(t) = a_k(t)\,e^{-iE_k^{(0)}t/\hbar}$ in the above equation to get

$$\Psi(\mathbf{r},t) = \sum_k a_k(t)\,\psi_k^{(0)}(\mathbf{r})\,e^{-iE_k^{(0)}t/\hbar}. \qquad (63)$$

Even though this expression looks messier than (62), we prefer it because it will simplify the derivation that follows, and it also directly demonstrates that when the $a_k(t)$ lose their time dependence, i.e. when $\lambda \to 0$ and $a_k(t) \to a_k$, (63) reduces to (57). We substitute the expansion (63) into the time-dependent Schrödinger equation (53) and, after taking into account the fact that the $\psi_n^{(0)} = |n^{(0)}\rangle$ are eigenfunctions of $\hat H^{(0)}$, we obtain

$$\sum_n a_n(t)\,\lambda\hat H^{(1)}(t)|n^{(0)}\rangle\,e^{-iE_n^{(0)}t/\hbar} = i\hbar\sum_n \frac{da_n(t)}{dt}|n^{(0)}\rangle\,e^{-iE_n^{(0)}t/\hbar} \qquad (64)$$

The next step is to multiply with $\langle k^{(0)}|$ from the left and use the orthogonality of the zeroth order functions to get

$$\sum_n a_n(t)\,\lambda\langle k^{(0)}|\hat H^{(1)}(t)|n^{(0)}\rangle\,e^{-iE_n^{(0)}t/\hbar} = i\hbar\,\frac{da_k(t)}{dt}\,e^{-iE_k^{(0)}t/\hbar} \qquad (65)$$

Solving this for $da_k(t)/dt$ results in the following differential equation

$$\frac{da_k(t)}{dt} = \frac{\lambda}{i\hbar}\sum_n a_n(t)\,H^{(1)}_{kn}(t)\,e^{i(E_k^{(0)}-E_n^{(0)})t/\hbar} = \frac{\lambda}{i\hbar}\sum_n a_n(t)\,H^{(1)}_{kn}(t)\,e^{i\omega_{kn}t} \qquad (66)$$

where we have defined $\omega_{kn} = (E_k^{(0)} - E_n^{(0)})/\hbar$ and $H^{(1)}_{kn}(t) = \langle k^{(0)}|\hat H^{(1)}(t)|n^{(0)}\rangle$. We now integrate this differential equation from 0 to $t$ to obtain

$$a_k(t) - a_k(0) = \frac{\lambda}{i\hbar}\sum_n \int_0^t a_n(t')\,H^{(1)}_{kn}(t')\,e^{i\omega_{kn}t'}\,dt' \qquad (67)$$

The purpose of the perturbation theory we will now develop is to determine the time-dependent coefficients $a_k(t)$. We begin by writing a perturbation expansion for the coefficient $a_k(t)$ in terms of the parameter $\lambda$,

$$a_k(t) = a_k^{(0)}(t) + \lambda a_k^{(1)}(t) + \lambda^2 a_k^{(2)}(t) + \ldots \qquad (68)$$

where you should keep in mind that, while $\lambda$ and $t$ are not related in any way, we take $t = 0$ as the "beginning of time", at which we know exactly the composition of the system, so that

$$a_k(0) = a_k^{(0)}(0) \qquad (69)$$

which means that $a_k^{(l)}(0) = 0$ for $l > 0$. Furthermore we will assume that

$$a_g^{(0)}(0) = \delta_{gj} \qquad (70)$$

which means that at $t = 0$ the system is exclusively in a particular state $|j^{(0)}\rangle$, and all other states $|g^{(0)}\rangle$ with $g \neq j$ are unoccupied. Now substitute the expansion (68) into (67) and collect equal powers of $\lambda$ to obtain the following expressions

$$a_k^{(0)}(t) - a_k^{(0)}(0) = 0 \qquad (71)$$
$$a_k^{(1)}(t) - a_k^{(1)}(0) = \frac{1}{i\hbar}\sum_n \int_0^t a_n^{(0)}(t')\,H^{(1)}_{kn}(t')\,e^{i\omega_{kn}t'}\,dt' \qquad (72)$$
$$a_k^{(2)}(t) - a_k^{(2)}(0) = \frac{1}{i\hbar}\sum_n \int_0^t a_n^{(1)}(t')\,H^{(1)}_{kn}(t')\,e^{i\omega_{kn}t'}\,dt' \qquad (73)$$
$$\ldots \qquad (74)$$

We can observe that these equations are recursive: each of them provides an expression for $a_f^{(m)}(t)$ in terms of $a_f^{(m-1)}(t)$. Let us now obtain an explicit expression for $a_f^{(1)}(t)$ by first substituting (71) into (72) and then making use of (70):

$$a_f^{(1)}(t) = \frac{1}{i\hbar}\sum_n \int_0^t a_n^{(0)}(0)\,H^{(1)}_{fn}(t')\,e^{i\omega_{fn}t'}\,dt' = \frac{1}{i\hbar}\int_0^t H^{(1)}_{fj}(t')\,e^{i\omega_{fj}t'}\,dt'. \qquad (75)$$
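The coupled equations (66), from which (75) follows, can also be integrated numerically. The sketch below (illustrative parameters, $\hbar = 1$; not part of the notes) propagates a two-level system with a constant coupling switched on at $t = 0$ and compares the exact occupation of the final state with the first-order prediction of (75).

```python
import numpy as np

# Two-level system, hbar = 1: initial state j = 0, final state f = 1, constant
# coupling V switched on at t = 0.  Integrate eq. (66) exactly with RK4 and
# compare |a_f(t)|^2 with the first-order amplitude from eq. (75).
E0, E1, V = 0.0, 1.0, 0.05                 # illustrative values
w = E1 - E0                                # omega_fj

def rhs(t, a):
    a0, a1 = a
    da0 = -1j * V * a1 * np.exp(-1j * w * t)   # eq. (66), k = 0
    da1 = -1j * V * a0 * np.exp(+1j * w * t)   # eq. (66), k = 1
    return np.array([da0, da1])

a = np.array([1.0 + 0j, 0.0 + 0j])         # system starts entirely in |j(0)>
dt, T = 0.001, 20.0
for t in np.arange(0.0, T, dt):            # classical 4th-order Runge-Kutta
    k1 = rhs(t, a); k2 = rhs(t + dt/2, a + dt*k1/2)
    k3 = rhs(t + dt/2, a + dt*k2/2); k4 = rhs(t + dt, a + dt*k3)
    a += dt * (k1 + 2*k2 + 2*k3 + k4) / 6

# First-order result, eq. (75), with a time-independent H^(1)_fj = V:
a1_first = V * (1 - np.exp(1j * w * T)) / w
print("exact       |a_f|^2 =", abs(a[1])**2)
print("first order |a_f|^2 =", abs(a1_first)**2)   # close, since the coupling is weak
```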
The probability that the system is in state $|f^{(0)}\rangle$ is obtained in a similar manner to equation (58) and is given by the squared modulus of the coefficient $a_f(t)$,

$$P_f(t) = |a_f(t)|^2 \qquad (76)$$

but of course a significant difference from (58) is that $P_f = P_f(t)$ now changes with time. Using the perturbation expansion (68) for $a_f(t)$ we have

$$P_f(t) = \left|a_f^{(0)}(t) + \lambda a_f^{(1)}(t) + \lambda^2 a_f^{(2)}(t) + \ldots\right|^2. \qquad (77)$$

Note that in most of the examples we will study in these lectures we will confine ourselves to the first order approximation, which means that we will also approximate the above expression for $P_f(t)$ by neglecting the second and higher order terms.

Note 3.1. The previous derivation of time-dependent perturbation theory is rather rigorous and is very much in line with the approach we used to derive time-independent perturbation theory. However, if we are only interested in obtaining corrections up to first order, we can follow a less strict but more physically motivated approach (see also Atkins). We begin with (67) and set $\lambda$ equal to 1 to obtain

$$a_k(t) - a_k(0) = \frac{1}{i\hbar}\sum_n \int_0^t a_n(t')\,H^{(1)}_{kn}(t')\,e^{i\omega_{kn}t'}\,dt' \qquad (78)$$

This equation is exact, but it is not useful in practice because the unknown coefficient $a_k(t)$ is given in terms of all the other unknown coefficients $a_n(t)$, including itself! To proceed we make the following approximations:

1. Assume that at $t = 0$ the system is entirely in an initial state $j$, so $a_j(0) = 1$ and $a_n(0) = 0$ if $n \neq j$.
2. Assume that the time $t$ for which the perturbation is applied is so small that the change in the values of the coefficients is negligible, or in other words that $a_j(t) \approx 1$ and $a_n(t) \approx 0$ if $n \neq j$.

Using these assumptions we can reduce the sum on the right-hand side of equation (78) to a single term (the one with $n = j$, for which $a_j(t) \approx 1$). We also rename the left-hand side index from $k$ to $f$, to denote some "final" state with $f \neq j$, and obtain

$$a_f(t) = \frac{1}{i\hbar}\int_0^t H^{(1)}_{fj}(t')\,e^{i\omega_{fj}t'}\,dt' \qquad (79)$$

This approximate expression for the coefficients $a_f(t)$ is correct to first order, as we can see by comparing it with equation (75).

Example 4. Show that with a time-dependent Hamiltonian $\hat H(t)$ the energy is not conserved.

We obviously need to use the time-dependent Schrödinger equation

$$i\hbar\frac{\partial\Psi}{\partial t} = \hat H(t)\Psi \;\Leftrightarrow\; \frac{\partial\Psi}{\partial t} = -\frac{i}{\hbar}\hat H(t)\Psi \qquad (80)$$

where the system is described by a time-dependent state $\Psi$. We now look for an expression for the derivative of the energy $\langle H\rangle = \langle\Psi|\hat H(t)|\Psi\rangle$ (the expectation value of the Hamiltonian) with respect to time. We have

$$\frac{\partial\langle H\rangle}{\partial t} = \left\langle\frac{\partial\Psi}{\partial t}\Big|\hat H(t)\Big|\Psi\right\rangle + \left\langle\Psi\Big|\frac{\partial\hat H(t)}{\partial t}\Big|\Psi\right\rangle + \left\langle\Psi\Big|\hat H(t)\Big|\frac{\partial\Psi}{\partial t}\right\rangle \qquad (81)$$

…

Lecture 4

3.4 Perturbation varying "slowly" with time

Here we will study the example of a very slow time-dependent perturbation, in order to see how time-dependent theory reduces to the time-independent theory in the limit of very slow change. We define the perturbation as follows,

$$\hat H^{(1)}(t) = \begin{cases} 0, & t < 0 \\ \hat H^{(1)}\,(1 - e^{-kt}), & t \geq 0 \end{cases} \qquad (90)$$

where $\hat H^{(1)}$ is a time-independent operator, which however need not be a constant (it may, for example, depend on $\hat x$, and so on). The entire perturbation $\hat H^{(1)}(t)$ is time-dependent, as $\hat H^{(1)}$ is multiplied by the factor $(1 - e^{-kt})$, which varies from 0 to 1 as $t$ increases from 0 to infinity.
Substituting this perturbation into equation (75) we obtain

$$a_f^{(1)}(t) = \frac{1}{i\hbar}H^{(1)}_{fj}\int_0^t (1 - e^{-kt'})\,e^{i\omega_{fj}t'}\,dt' = \frac{1}{i\hbar}H^{(1)}_{fj}\left[\frac{e^{i\omega_{fj}t}-1}{i\omega_{fj}} + \frac{e^{-(k-i\omega_{fj})t}-1}{k-i\omega_{fj}}\right] \qquad (91)$$

If we assume that we only examine times very long after the perturbation has reached its final value, in other words $kt \gg 1$, we obtain

$$a_f^{(1)}(t) = \frac{1}{i\hbar}H^{(1)}_{fj}\left[\frac{e^{i\omega_{fj}t}-1}{i\omega_{fj}} + \frac{-1}{k-i\omega_{fj}}\right] \qquad (92)$$

and if, finally, the rate at which the perturbation is switched on is slow, in the sense that $k^2 \ll \omega_{fj}^2$, we are left with

$$a_f^{(1)}(t) = -\frac{H^{(1)}_{fj}}{\hbar\omega_{fj}}\,e^{i\omega_{fj}t} \qquad (93)$$

The squared modulus of this, which is the probability of being in state $|f^{(0)}\rangle$ to first order, is

$$P_f(t) = |a_f^{(1)}(t)|^2 = \frac{|H^{(1)}_{fj}|^2}{\hbar^2\omega_{fj}^2}. \qquad (94)$$

We observe that the resulting expression for $P_f(t)$ is no longer time-dependent. In fact, it is equal to the square modulus $|\langle f^{(0)}|j^{(1)}\rangle|^2$ of the expansion coefficient of $|f^{(0)}\rangle$ in the first order state $|j^{(1)}\rangle$, as given in equation (25) of time-independent perturbation theory. Thus, in the framework of time-independent theory, (94) is interpreted as the fraction of the state $|f^{(0)}\rangle$ in the expansion of $|j^{(1)}\rangle$, while in time-dependent theory it represents the probability of the system being in state $|f^{(0)}\rangle$ at a given time.

3.5 Perturbation oscillating with time

3.5.1 Transition to a single level

We will examine here a harmonic time-dependent potential, oscillating in time with angular frequency $\omega = 2\pi\nu$. The form of such a perturbation is

$$\hat H^{(1)}(t) = 2V\cos\omega t = V(e^{i\omega t} + e^{-i\omega t}) \qquad (95)$$

where $V$ does not depend on time (but of course it could be a function of coordinates, e.g. $V = V(x)$). This is in a sense the most general type of time-dependent perturbation, as any other time-dependent perturbation can be expanded as a sum (Fourier series) of harmonic terms like those of (95). Inserting this expression for the perturbation $\hat H^{(1)}(t)$ into equation (75) we obtain

$$a_f^{(1)}(t) = \frac{1}{i\hbar}V_{fj}\int_0^t (e^{i\omega t'} + e^{-i\omega t'})\,e^{i\omega_{fj}t'}\,dt' = \frac{1}{i\hbar}V_{fj}\left[\frac{e^{i(\omega_{fj}+\omega)t}-1}{i(\omega_{fj}+\omega)} + \frac{e^{i(\omega_{fj}-\omega)t}-1}{i(\omega_{fj}-\omega)}\right] \qquad (96)$$

where $V_{fj} = \langle f^{(0)}|V|j^{(0)}\rangle$. If we assume that $\omega_{fj} - \omega \approx 0$, or in other words that $E_f^{(0)} \approx E_j^{(0)} + \hbar\omega$, only the second term in the above expression survives. We then have

$$a_f^{(1)}(t) = \frac{V_{fj}}{\hbar}\,\frac{1 - e^{i(\omega_{fj}-\omega)t}}{\omega_{fj}-\omega} \qquad (97)$$

from which we obtain

$$P_f(t) = |a_f^{(1)}(t)|^2 = \frac{4|V_{fj}|^2}{\hbar^2(\omega_{fj}-\omega)^2}\,\sin^2\!\tfrac{1}{2}(\omega_{fj}-\omega)t. \qquad (98)$$

This equation shows that, due to the time-dependent perturbation, the system can make transitions from the state $|j^{(0)}\rangle$ to the state $|f^{(0)}\rangle$ by absorbing a quantum of energy $\hbar\omega$. Now, in the case where $\omega_{fj} = \omega$ exactly, the above expression reduces to

$$\lim_{\omega\to\omega_{fj}} P_f(t) = \frac{|V_{fj}|^2}{\hbar^2}\,t^2 \qquad (99)$$

which shows that the probability increases quadratically with time. This expression allows the probability to increase without bound and even to exceed the (maximum) value of 1. That is of course not correct, so the expression should be considered valid only while $P_f(t) \ll 1$, in accordance with the assumption behind the first order perturbation theory from which it was obtained.
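It is instructive to evaluate (98) numerically for a few detunings (illustrative numbers, $\hbar = 1$; not from the notes): off resonance the probability oscillates within a fixed envelope, while exactly on resonance it grows as $t^2$ and eventually violates the $P_f \ll 1$ condition.

```python
import numpy as np

# Transition probability from eq. (98) for a harmonic perturbation (hbar = 1).
Vfj, wfj = 0.01, 1.0                       # illustrative coupling and level spacing

def P_first_order(t, w):
    if np.isclose(w, wfj):
        return Vfj**2 * t**2                                              # eq. (99)
    return 4 * Vfj**2 / (wfj - w)**2 * np.sin(0.5 * (wfj - w) * t)**2     # eq. (98)

for w in (0.90, 0.99, 1.00):
    probs = [round(P_first_order(t, w), 6) for t in (10.0, 50.0, 100.0)]
    print(f"omega = {w:4.2f}, P_f at t = 10, 50, 100:", probs)
# Off resonance P_f stays below the envelope 4|V|^2/(wfj - w)^2; on resonance it
# grows quadratically, so the first-order result is trustworthy only while P_f << 1.
```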
Our discussion of the harmonic perturbation has so far been based on the assumption that $E_f^{(0)} > E_j^{(0)}$, so that the external oscillating field causes stimulated absorption of energy in the form of quanta $\hbar\omega$. However, the original equation (96) for $a_f^{(1)}(t)$ also allows $E_f^{(0)} < E_j^{(0)}$. In this case we can have $E_f^{(0)} \approx E_j^{(0)} - \hbar\omega$, and then the first term in equation (96) dominates, from which we can derive an expression analogous to (98):

$$P_f(t) = |a_f^{(1)}(t)|^2 = \frac{4|V_{fj}|^2}{\hbar^2(\omega_{fj}+\omega)^2}\,\sin^2\!\tfrac{1}{2}(\omega_{fj}+\omega)t \qquad (100)$$

This describes stimulated emission of quanta of frequency $\omega/2\pi$, caused by the time-dependent perturbation, in transitions from the higher energy state $E_j^{(0)}$ to the lower energy state $E_f^{(0)}$. One can regard the time-dependent perturbation here as an inexhaustible source or sink of energy.

3.5.2 Transition to a continuum of levels

In many situations, instead of a single final state $|f^{(0)}\rangle$ of energy $E_f^{(0)}$ there is a group of closely-spaced final states with energies close to $E_f^{(0)}$. In that case we should calculate the probability of being in any of those final states, which is equal to the sum of the probabilities for each state, so we have

$$P(t) = \sum_{n,\;E_n^{(0)}\approx E_f^{(0)}} |a_n^{(1)}(t)|^2. \qquad (101)$$

As the final states form a continuum, it is customary to count the number of states $dN(E)$ with energy in the interval $(E, E + dE)$ in terms of the density of states $\rho(E)$ at energy $E$,

$$dN(E) = \rho(E)\,dE \qquad (102)$$

Using this formalism we can change the sum of equation (101) into an integral,

$$P(t) = \int_{E_f^{(0)}-\Delta E}^{E_f^{(0)}+\Delta E} \rho(E)\,|a_E^{(1)}(t)|^2\,dE \qquad (103)$$

where the summation index $n$ has been replaced by the continuous variable $E$. According to our assumption $E \approx E_f^{(0)}$, so the above expression, after substitution of (98), becomes

$$P(t) = \int_{E_f^{(0)}-\Delta E}^{E_f^{(0)}+\Delta E} \frac{4|V_{fj}|^2}{\hbar^2}\,\frac{\sin^2\!\tfrac{1}{2}(E/\hbar - E_j^{(0)}/\hbar - \omega)t}{(E/\hbar - E_j^{(0)}/\hbar - \omega)^2}\,\rho(E)\,dE \qquad (104)$$

where the integral is evaluated in a narrow region of energies around $E_f^{(0)}$. The integrand contains a term that, as $t$ grows larger, becomes sharply peaked at $E = E_j^{(0)} + \hbar\omega$ and decays sharply to zero away from this value (see Figure 3). This then allows us to approximate it by treating $|V_{fj}|$ as a constant and also the density of states as a …

3.6 Emission and absorption of radiation by atoms

… where the sum over $k$ runs over all the electrons, and the position vector of the $k$th electron is $\mathbf{r}_k$. The nucleus of the atom is assumed to be fixed at the origin of coordinates ($\mathbf{r} = 0$). We can immediately see that the work of section 3.5 for a perturbation oscillating harmonically in time applies here if we set $V = \mu_z\mathcal{E}_z$ in equation (95) and in all the expressions derived from it. In particular, equation (98) for the probability of absorption of radiation in the transition from state $|j^{(0)}\rangle$ to the higher energy state $|f^{(0)}\rangle$ takes the form

$$P_{fj}(t) = \frac{4|\mu_{z,fj}|^2\,\mathcal{E}_z^2(\omega)}{\hbar^2(\omega_{fj}-\omega)^2}\,\sin^2\!\tfrac{1}{2}(\omega_{fj}-\omega)t. \qquad (110)$$

You will notice that in the above expression we have written $\mathcal{E}_z$ as $\mathcal{E}_z(\omega)$, to remind ourselves that it depends on the angular frequency $\omega$ of the radiation. In fact the above expression is valid only for monochromatic radiation. Most radiation sources produce a continuum of frequencies, so in order to take this into account we need to integrate the above expression over all angular frequencies:

$$P_{fj}(t) = \int_{-\infty}^{\infty} \frac{4|\mu_{z,fj}|^2\,\mathcal{E}_z^2(\omega)}{\hbar^2(\omega_{fj}-\omega)^2}\,\sin^2\!\tfrac{1}{2}(\omega_{fj}-\omega)t\;d\omega \qquad (111)$$
$$= \frac{4|\mu_{z,fj}|^2\,\mathcal{E}_z^2(\omega_{fj})}{\hbar^2}\int_{-\infty}^{\infty} \frac{\sin^2[\tfrac{1}{2}(\omega-\omega_{fj})t]}{(\omega-\omega_{fj})^2}\,d\omega \qquad (112)$$
$$= \frac{2\pi t\,|\mu_{z,fj}|^2\,\mathcal{E}_z^2(\omega_{fj})}{\hbar^2} \qquad (113)$$

where we have evaluated the integral using the same technique we used for the derivation of Fermi's golden rule in the previous section.
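The step from (112) to (113) relies on the integral $\int_{-\infty}^{\infty}\sin^2[\tfrac{1}{2}(\omega-\omega_{fj})t]/(\omega-\omega_{fj})^2\,d\omega = \pi t/2$, the same integral that appears in the golden-rule derivation. A quick numerical check (the only approximation below is truncating the infinite range):

```python
import numpy as np

# Verify that  ∫ sin^2[(w - w_fj) t / 2] / (w - w_fj)^2 dw  =  pi * t / 2
t = 5.0                                          # illustrative time
x = np.linspace(-400.0, 400.0, 400_001)          # x = w - w_fj, truncated range
dx = x[1] - x[0]

# sin^2(x t / 2) / x^2 written via np.sinc to handle x = 0 cleanly
integrand = (t / 2.0)**2 * np.sinc(x * t / (2.0 * np.pi))**2
numeric = integrand.sum() * dx

print("numerical value:", numeric)               # agrees up to the truncated tails
print("pi * t / 2     :", np.pi * t / 2.0)
```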
The rate of absorption of radiation is equal to the time derivative of the above expression and, as we are interested in atoms in the gas phase, we average over all orientations of the atom with respect to the field. It turns out that this is equivalent to replacing $|\mu_{z,fj}|^2$ by the mean of the x, y and z components, $\tfrac{1}{3}|\mu_{fj}|^2$, which leads to

$$W_{f\leftarrow j} = \frac{2\pi|\mu_{fj}|^2\,\mathcal{E}_z^2(\omega_{fj})}{3\hbar^2} \qquad (114)$$

A standard result from the classical theory of electromagnetism is that the energy density $\rho_{\mathrm{rad}}(\omega_{fj})$ (i.e. the energy contained per unit volume of space, for radiation of angular frequency $\omega_{fj}$) of the electromagnetic field is

$$\rho_{\mathrm{rad}}(\omega_{fj}) = 2\varepsilon_0\,\mathcal{E}_z^2(\omega_{fj}) \qquad (115)$$

which upon substitution into (114) gives

$$W_{f\leftarrow j} = \frac{2\pi|\mu_{fj}|^2}{6\varepsilon_0\hbar^2}\,\rho_{\mathrm{rad}}(\omega_{fj}) \qquad (116)$$

We can also write this equation as

$$W_{f\leftarrow j} = B_{jf}\,\rho_{\mathrm{rad}}(\omega_{fj}) \qquad (117)$$

where the coefficient

$$B_{jf} = \frac{2\pi|\mu_{fj}|^2}{6\varepsilon_0\hbar^2} \qquad (118)$$

is the Einstein coefficient of stimulated absorption. As we know from the theory of section 3.5, it is also possible to write a similar equation for stimulated emission, in which case the Einstein coefficient of stimulated emission $B_{fj}$ is equal to $B_{jf}$, as a result of the Hermiticity of the dipole moment operator. If the system of atoms and radiation is in thermal equilibrium at a temperature $T$, the number of atoms $N_f$ in state $|f^{(0)}\rangle$ and the number of atoms $N_j$ in state $|j^{(0)}\rangle$ should not change with time, which means that there should be no net transfer of energy between the atoms and the radiation field:

$$N_j W_{f\leftarrow j} = N_f W_{f\to j}. \qquad (119)$$

Given that $B_{fj} = B_{jf}$, this equation leads to the result $N_j = N_f$, i.e. the populations of the two states are equal. This cannot be correct: we know from the generally applicable principles of statistical thermodynamics that the populations of the two states should obey the Boltzmann distribution

$$\frac{N_f}{N_j} = e^{-E_{fj}/kT} \qquad (120)$$

To overcome this discrepancy, Einstein postulated that there must also be a process of spontaneous emission, in which the upper state $|f^{(0)}\rangle$ decays to the lower state $|j^{(0)}\rangle$ independently of the presence of radiation of frequency $\omega_{fj}$. According to this, the rate of emission should be written as

$$W_{f\to j} = A_{fj} + B_{fj}\,\rho_{\mathrm{rad}}(\omega_{fj}) \qquad (121)$$

where $A_{fj}$ is the Einstein coefficient of spontaneous emission, which does not need to be multiplied by $\rho_{\mathrm{rad}}(\omega_{fj})$, since spontaneous emission is independent of the presence of the radiation. The expression for this coefficient is (see Atkins for a derivation)

$$A_{fj} = \frac{\hbar\omega_{fj}^3}{\pi^2c^3}\,B_{fj} \qquad (122)$$

As we saw, spontaneous emission was postulated by Einstein because it is not predicted by combining a quantum mechanical description of the atoms with a classical description of the electric field. It is predicted, though, by the theory of quantum electrodynamics, where the field is also quantized. The types of interaction of radiation with atoms that we have studied here are summarised in Figure 4.

Figure 4: Schematic representation of stimulated absorption, stimulated emission and spontaneous emission.
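For a feel of the magnitudes, the sketch below evaluates (118) and (122) in SI units for an assumed transition dipole of 1 debye and an assumed wavelength of 500 nm; both values are illustrative and not taken from the notes.

```python
import numpy as np

# Einstein coefficients from eqs. (118) and (122), SI units.
hbar, eps0, c = 1.0546e-34, 8.854e-12, 2.998e8
mu  = 3.336e-30                          # assumed transition dipole: 1 debye in C m
lam = 500e-9                             # assumed wavelength
w   = 2.0 * np.pi * c / lam              # angular frequency omega_fj

B = 2.0 * np.pi * mu**2 / (6.0 * eps0 * hbar**2)   # stimulated coefficient, eq. (118)
A = hbar * w**3 / (np.pi**2 * c**3) * B            # spontaneous coefficient, eq. (122)

print(f"B = {B:.3e} m^3 J^-1 s^-2")
print(f"A = {A:.3e} s^-1  ->  radiative lifetime ~ {1.0/A:.1e} s")
```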
Lecture 5

4.2 Dipole moment in uniform electric field

… where $(dE/d\mathcal{E}_z)_{\mathcal{E}_z=0}$ is the first derivative of the energy with respect to $\mathcal{E}_z$ evaluated at $\mathcal{E}_z = 0$, etc. Of course, the zeroth order term $E(0)$ is the value of the energy at $\mathcal{E}_z = 0$. If we now differentiate the above Taylor expansion with respect to $\mathcal{E}_z$ and substitute for the left-hand side what we found in (127), we obtain an expression for the dipole moment in a non-zero electric field,

$$\langle\mu_z\rangle = -\left(\frac{dE}{d\mathcal{E}_z}\right)_{\mathcal{E}_z=0} - \left(\frac{d^2E}{d\mathcal{E}_z^2}\right)_{\mathcal{E}_z=0}\mathcal{E}_z - \frac{1}{2}\left(\frac{d^3E}{d\mathcal{E}_z^3}\right)_{\mathcal{E}_z=0}\mathcal{E}_z^2 + \cdots \qquad (129)$$

We usually write the above expression as

$$\langle\mu_z\rangle = \mu_z^0 + \alpha_{zz}\mathcal{E}_z + \frac{1}{2}\beta_{zzz}\mathcal{E}_z^2 + \cdots \qquad (130)$$

where, by comparison with (129), we define the following quantities as derivatives of the energy with respect to the electric field at zero electric field ($\mathcal{E}_z = 0$). The permanent dipole moment,

$$\mu_z^0 = -\left(\frac{dE}{d\mathcal{E}_z}\right)_{\mathcal{E}_z=0} = -\langle 0^{(0)}|\hat\mu_z|0^{(0)}\rangle \qquad (131)$$

which is determined by the first order correction to the energy of the ground state. The polarizability,

$$\alpha_{zz} = -\left(\frac{d^2E}{d\mathcal{E}_z^2}\right)_{\mathcal{E}_z=0} \qquad (132)$$

and the first hyperpolarizability,

$$\beta_{zzz} = -\left(\frac{d^3E}{d\mathcal{E}_z^3}\right)_{\mathcal{E}_z=0}. \qquad (133)$$

4.3 Calculation of the static polarizability

We can readily derive a formula for the calculation of the polarizability from the expression for the second order correction to the energy, equation (30). Here we apply it to the polarizability of the ground state:

$$\alpha_{zz} = -2E_0^{(2)} = -2\sum_{n\neq 0}\frac{\langle 0^{(0)}|\hat\mu_z|n^{(0)}\rangle\langle n^{(0)}|\hat\mu_z|0^{(0)}\rangle}{E_0^{(0)} - E_n^{(0)}}. \qquad (134)$$

The above is an explicit expression for the polarizability of a molecule in terms of integrals over its wavefunctions. We can write it in the more compact form

$$\alpha_{zz} = 2\sum_{n\neq 0}\frac{\mu_{z,0n}\,\mu_{z,n0}}{\Delta E_{n0}} \qquad (135)$$

where we have defined the dipole moment matrix elements $\mu_{z,mn} = \langle m^{(0)}|\hat\mu_z|n^{(0)}\rangle$ and the denominator $\Delta E_{n0} = E_n^{(0)} - E_0^{(0)}$. This compact form can be used to express the mean polarizability, which is the property actually observed when a molecule rotates freely in the gas phase or in solution and one measures the average over all its orientations relative to the applied field:

$$\alpha = \frac{1}{3}(\alpha_{xx} + \alpha_{yy} + \alpha_{zz}) = \frac{2}{3}\sum_{n\neq 0}\frac{\mu_{x,0n}\mu_{x,n0} + \mu_{y,0n}\mu_{y,n0} + \mu_{z,0n}\mu_{z,n0}}{\Delta E_{n0}} \qquad (136)$$
$$= \frac{2}{3}\sum_{n\neq 0}\frac{\boldsymbol{\mu}_{0n}\cdot\boldsymbol{\mu}_{n0}}{\Delta E_{n0}} \qquad (137)$$
$$= \frac{2}{3}\sum_{n\neq 0}\frac{|\boldsymbol{\mu}_{0n}|^2}{\Delta E_{n0}} \qquad (138)$$

At this point we can also use the closure approximation (37) to eliminate the sum over states and derive a computationally much simpler, but also more approximate, expression for the polarizability:

$$\alpha \approx \frac{2}{3\Delta E}\sum_{n\neq 0}\boldsymbol{\mu}_{0n}\cdot\boldsymbol{\mu}_{n0} = \frac{2}{3\Delta E}\left(\sum_{n}\boldsymbol{\mu}_{0n}\cdot\boldsymbol{\mu}_{n0} - \boldsymbol{\mu}_{00}\cdot\boldsymbol{\mu}_{00}\right) = \frac{2\left(\langle\mu^2\rangle - \langle\mu\rangle^2\right)}{3\Delta E} \qquad (139)$$
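Equation (135) can be tested on a model system that is not discussed in the notes but has a known answer: a particle of charge $q$ in a one-dimensional harmonic well, whose exact static polarizability is $q^2/(m\omega^2)$. Only the $n = 1$ term contributes to the sum over states.

```python
import numpy as np

# Sum-over-states polarizability, eq. (135), for a charged 1D harmonic oscillator
# (model system with exact answer q^2 / (m omega^2); hbar = 1, illustrative values).
hbar, m, omega, q = 1.0, 1.0, 1.0, 1.0
N = 40                                       # truncated number basis

n = np.arange(N)
x = np.zeros((N, N))
off = np.sqrt(hbar * n[1:] / (2.0 * m * omega))
x[n[1:] - 1, n[1:]] = off
x[n[1:], n[1:] - 1] = off

mu = q * x                                   # dipole operator matrix mu_z = q x
E = (n + 0.5) * hbar * omega                 # zeroth-order energies

alpha = 2.0 * sum(mu[0, k]**2 / (E[k] - E[0]) for k in range(1, N))   # eq. (135)
print("sum over states  :", alpha)
print("exact q^2/(m w^2):", q**2 / (m * omega**2))
```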
4.4 Polarizability and electronic molecular spectroscopy

As we saw in the previous section, the polarizability depends on the squares of the transition dipole moments $\mu_{n0}$ between states $|n^{(0)}\rangle$ and $|0^{(0)}\rangle$. We can re-write expression (138) as

$$\alpha = \frac{\hbar^2e^2}{m_e}\sum_{n\neq 0}\frac{f_{n0}}{\Delta E_{n0}^2} \qquad (140)$$

where we have used the oscillator strengths $f_{n0}$, defined as

$$f_{n0} = \left(\frac{4\pi m_e}{3e^2\hbar}\right)\nu_{n0}\,|\mu_{n0}|^2. \qquad (141)$$

The oscillator strengths can be determined from the intensities of the electronic transitions of a molecule, and the energies $\Delta E_{n0}$ from the frequencies at which these transitions occur. From expression (140) we observe that a molecule has a large polarizability the higher the intensity and the lower the frequency of its electronic transitions. We can further approximate (140) by replacing $\Delta E_{n0}$ by its average $\Delta E$ to obtain

$$\alpha \approx \frac{\hbar^2e^2}{m_e\Delta E^2}\sum_{n\neq 0} f_{n0} \qquad (142)$$

This allows us to make use of the following standard result, known as the Kuhn-Thomas sum rule,

$$\sum_n f_{n0} = N_e \qquad (143)$$

where $N_e$ is the total number of electrons in the molecule. Notice that the sum rule involves a summation over all states, including $n = 0$, but this is compatible with (142) since $f_{00} = 0$ by definition. We therefore obtain

$$\alpha \approx \frac{\hbar^2e^2N_e}{m_e\Delta E^2} \qquad (144)$$

which again shows that the polarizability increases with increasing number of electrons and decreasing mean excitation energy. We therefore expect molecules composed of heavy atoms to be highly polarizable.

Example 5. Prove the Kuhn-Thomas sum rule (143).

Let us first prove the following relation in one dimension:

$$\sum_n (E_n^{(0)} - E_a^{(0)})\,|\langle n^{(0)}|\hat x|a^{(0)}\rangle|^2 = \frac{\hbar^2}{2m}. \qquad (145)$$

Start with the commutation relation

$$[\hat x, \hat H^{(0)}] = \frac{i\hbar}{m}\hat p_x \qquad (146)$$

which you can prove quite trivially if you take into account that the Hamiltonian is a sum of a kinetic energy operator and a potential energy operator. We next sandwich this commutator between $\langle n^{(0)}|$ and $|a^{(0)}\rangle$ to obtain

$$\langle n^{(0)}|\hat x\hat H^{(0)}|a^{(0)}\rangle - \langle n^{(0)}|\hat H^{(0)}\hat x|a^{(0)}\rangle = \frac{i\hbar}{m}\langle n^{(0)}|\hat p_x|a^{(0)}\rangle \qquad (147)$$
$$(E_a^{(0)} - E_n^{(0)})\,\langle n^{(0)}|\hat x|a^{(0)}\rangle = \frac{i\hbar}{m}\langle n^{(0)}|\hat p_x|a^{(0)}\rangle \qquad (148)$$
$$\langle n^{(0)}|\hat x|a^{(0)}\rangle = \frac{i\hbar\,\langle n^{(0)}|\hat p_x|a^{(0)}\rangle}{m(E_a^{(0)} - E_n^{(0)})} \qquad (149)$$

where we have made use of the Hermiticity of $\hat H^{(0)}$. …

4.5 Dispersion forces

… replacing $\Delta E^{(0)}_{n_A0_A} + \Delta E^{(0)}_{n_B0_B}$ with an average value $\Delta E_A + \Delta E_B$ and applying equation (37):

$$E^{(2)} \approx -\frac{2}{3}\left(\frac{1}{4\pi\varepsilon_0R^3}\right)^2\frac{1}{\Delta E_A + \Delta E_B}\sum_{n_A,n_B\neq(0_A,0_B)}(\boldsymbol{\mu}_{A,0_An_A}\!\cdot\!\boldsymbol{\mu}_{A,n_A0_A})(\boldsymbol{\mu}_{B,0_Bn_B}\!\cdot\!\boldsymbol{\mu}_{B,n_B0_B})$$
$$\approx -\frac{1}{24\pi^2\varepsilon_0^2R^6}\,\frac{1}{\Delta E_A + \Delta E_B}\,\langle\mu_A^2\rangle\langle\mu_B^2\rangle$$

where $\langle\mu_A^2\rangle = \langle 0_A^{(0)}|\hat\mu_A^2|0_A^{(0)}\rangle$, and there is no $\langle\mu_A\rangle^2$ term since we have assumed that the permanent dipole moments of A and B are zero. Having reached this stage, we can re-express the dispersion energy by using relation (139) between the mean square dipole moment and the polarizability (in the absence of a permanent dipole moment, $\langle\mu_A^2\rangle \approx \tfrac{3}{2}\alpha_A\Delta E_A$) to obtain

$$E^{(2)} \approx -\left(\frac{3}{32\pi^2\varepsilon_0^2}\right)\left(\frac{\Delta E_A\Delta E_B}{\Delta E_A + \Delta E_B}\right)\frac{\alpha_A\alpha_B}{R^6}. \qquad (158)$$

Finally, we approximate the mean excitation energy of each species by its ionization energy, $\Delta E_A \approx I_A$, to arrive at the London formula for the dispersion energy between two non-polar species:

$$E^{(2)} \approx -\left(\frac{3}{32\pi^2\varepsilon_0^2}\right)\left(\frac{I_AI_B}{I_A + I_B}\right)\frac{\alpha_A\alpha_B}{R^6}. \qquad (159)$$

This very approximate expression can provide chemical insight from "back of the envelope" calculations of the dispersion energy between atoms, based on readily available quantities such as polarizabilities and ionization energies. Based on this formula we expect large, highly polarizable atoms to have strong dispersion interactions.
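As an example of such a back-of-the-envelope estimate, the sketch below evaluates the London formula (159) for a pair of argon atoms; the polarizability volume, ionization energy and separation are approximate literature-style values inserted here purely for illustration, not taken from the notes.

```python
import numpy as np

# London formula, eq. (159), for two argon atoms (approximate illustrative inputs).
eps0 = 8.854e-12
eV   = 1.602e-19

alpha_vol = 1.64e-30                         # polarizability volume alpha' ~ 1.64 A^3, in m^3
alpha     = 4.0 * np.pi * eps0 * alpha_vol   # SI polarizability, C^2 m^2 J^-1
I         = 15.8 * eV                        # ionization energy ~ 15.8 eV
R         = 3.8e-10                          # separation ~ 3.8 Angstrom

E2 = -(3.0 / (32.0 * np.pi**2 * eps0**2)) * (I * I / (I + I)) * alpha * alpha / R**6
print(f"London dispersion energy: {E2 / eV * 1000:.1f} meV")   # ~ -10 meV, the right order
```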
Lecture 6

4.6 Revision: Antisymmetry, Slater determinants and the Hartree-Fock method

The Pauli exclusion principle follows from the postulate of (non-relativistic) quantum mechanics that a many-electron wavefunction must be antisymmetric with respect to interchange of the coordinates of any two electrons,³

$$\Phi(\mathbf{x}_1,\ldots,\mathbf{x}_i,\ldots,\mathbf{x}_j,\ldots,\mathbf{x}_{N_e}) = -\Phi(\mathbf{x}_1,\ldots,\mathbf{x}_j,\ldots,\mathbf{x}_i,\ldots,\mathbf{x}_{N_e}) \qquad (160)$$

where $\mathbf{x}_j = \{\mathbf{r}_j,\sigma_j\}$ collectively denotes the space ($\mathbf{r}_j$) and spin ($\sigma_j$) coordinates of electron $j$. We often choose to approximate the many-electron wavefunction as a product of single-electron wavefunctions (spinorbitals). Such a simple product of spinorbitals (also known as a Hartree product) is not antisymmetric. To overcome this limitation we define the wavefunction as a Slater determinant, which is antisymmetric: interchanging any two of its rows, which correspond to electron coordinates, changes its sign.

In Hartree-Fock theory we assume that the many-electron wavefunction has the form of a Slater determinant, and we seek the best possible such wavefunction (for the ground state). To achieve this goal we use the variational principle, which states that the total energy of the optimum determinant we seek is lower than the energy calculated from any other determinant,

$$E_0^{\mathrm{HF}} = \langle\Psi_0^{(0)}|\hat H|\Psi_0^{(0)}\rangle \leq \langle\Psi|\hat H|\Psi\rangle \qquad (161)$$

where we have assumed that the Hartree-Fock solution $\Psi_0^{(0)}$ and all trial Slater determinants $\Psi$ are normalized. $E_0^{\mathrm{HF}}$ is the Hartree-Fock energy of the ground state, which we are seeking. The full Hamiltonian for the electrons in a material (e.g. a molecule or a portion of a solid) has the form

$$\hat H = -\frac{\hbar^2}{2m_e}\sum_{i=1}^{N_e}\nabla_i^2 - \sum_{i=1}^{N_e}\sum_{I=1}^{N_N}\frac{Z_Ie^2}{4\pi\varepsilon_0|\mathbf{r}_I-\mathbf{r}_i|} + \frac{1}{2}\sum_{\substack{i,j\\ i\neq j}}^{N_e}\frac{e^2}{4\pi\varepsilon_0|\mathbf{r}_i-\mathbf{r}_j|} \qquad (162)$$

where we have assumed that the material consists of $N_e$ electrons and $N_N$ nuclei. The first term is the sum of the kinetic energies of the electrons, and the second term is the sum of the electrostatic attractions of each electron to the $N_N$ nuclei, each of which is fixed (Born-Oppenheimer approximation) at position $\mathbf{r}_I$. The final term is the repulsive electrostatic (Coulomb) interaction between the electrons and consists of a sum over all distinct pairs of electrons.

³ More generally, the postulate states that a wavefunction must be antisymmetric with respect to interchange of any pair of identical fermions (particles with half-integer spin quantum number, such as electrons and protons) and symmetric with respect to interchange of any pair of identical bosons (particles with integer spin quantum number, such as photons and α-particles).

The variational principle (161) results in single-electron Schrödinger equations of the form

$$\hat f_i\chi_i(\mathbf{x}) = \varepsilon_i\chi_i(\mathbf{x}) \qquad (163)$$

for the spinorbitals $\chi_i$ that make up $\Psi_0^{(0)}$. However, the difficulty is that the Fock operator $\hat f_i$ above is constructed from the (unknown!) solutions $\chi_i$. In practice, the way we solve these equations is by guessing a form for the $\chi_i$, using it to build an approximate $\hat f_i$, from which we solve the eigenvalue problem (163) to obtain a fresh (better) set of $\chi_i$. We then repeat this procedure until the $\chi_i$ no longer change — a condition often referred to as self-consistency. In the literature, Hartree-Fock (HF) calculations are also called Self-Consistent-Field (SCF) calculations.
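The self-consistency cycle can be illustrated with a deliberately tiny toy model (this is not a real Hartree-Fock program, and the model Hamiltonian below is an assumption made only for illustration): a 2×2 effective matrix whose "two-electron" part depends on the density built from its own lowest eigenvector, iterated by repeated diagonalization until the density stops changing.

```python
import numpy as np

# Toy two-site mean-field model illustrating the SCF fixed-point loop.
h = np.array([[0.0, -1.0],
              [-1.0, -0.5]])           # one-electron part (illustrative numbers)
U = 2.0                                # model on-site repulsion strength

P = np.zeros((2, 2))                   # initial guess for the density matrix
for it in range(100):
    F = h + 0.5 * U * np.diag(np.diag(P))     # effective "Fock" matrix built from current P
    eps, C = np.linalg.eigh(F)                # solve the eigenvalue problem, cf. eq. (163)
    c = C[:, 0]                               # doubly occupy the lowest orbital
    P_new = 2.0 * np.outer(c, c)
    if np.max(np.abs(P_new - P)) < 1e-10:     # self-consistency reached
        break
    P = 0.5 * P + 0.5 * P_new                 # damped update for a stable iteration

print("converged after", it + 1, "iterations")
print("orbital energies:", eps)
print("density matrix:\n", P_new)
```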
4.7 Møller-Plesset many-body perturbation theory

In this section we will see how time-independent perturbation theory can be used as an improvement on the Hartree-Fock approximation. Let us rewrite the Hamiltonian (162) in the following form:

$$\hat H = \sum_{i=1}^{N_e}\left[-\frac{\hbar^2}{2m_e}\nabla_i^2 - \sum_{I=1}^{N_N}\frac{Z_Ie^2}{4\pi\varepsilon_0|\mathbf{r}_I-\mathbf{r}_i|}\right] + \frac{1}{2}\sum_{\substack{i,j\\ i\neq j}}^{N_e}\frac{e^2}{4\pi\varepsilon_0|\mathbf{r}_i-\mathbf{r}_j|} \qquad (164)$$
$$= \sum_{i=1}^{N_e}\hat h_i + \frac{1}{2}\sum_{\substack{i,j\\ i\neq j}}^{N_e}\frac{e^2}{4\pi\varepsilon_0|\mathbf{r}_i-\mathbf{r}_j|} \qquad (165)$$

which demonstrates that the first two terms are "separable" into sums of one-electron Hamiltonians $\hat h_i$, while this is obviously not possible for the last term, as each $1/|\mathbf{r}_i-\mathbf{r}_j|$ cannot be "broken" into a sum of a term for electron $i$ and a term for electron $j$. The problem posed by the sum of one-electron Hamiltonians $\sum_i\hat h_i$ is computationally trivial, as its solutions are antisymmetrised products (Slater determinants) of one-electron wavefunctions (molecular spinorbitals). In contrast, because of the non-separability of the third term, such a simple solution is not possible for $\hat H$. Its solution is extremely complicated and computationally tractable only for very small systems (e.g. the hydrogen molecule). Thus this is a case where perturbation theory can be very useful for approximating the solution for $\hat H$.

As a first attempt to apply perturbation theory we might treat the $\sum_i\hat h_i$ part of (165) as the zeroth order Hamiltonian and the remaining part as the perturbation. This is not a very good choice, though, as the perturbation would be of similar magnitude to the zeroth order Hamiltonian. Instead, we define the zeroth order Hamiltonian as

$$\hat H^{(0)} = \sum_{i=1}^{N_e}\left(\hat h_i + \hat\upsilon_i^{\mathrm{HF}}\right) = \sum_{i=1}^{N_e}\hat f_i \qquad (166)$$

… According to the above we have

$$\langle\Phi_0^{(0)}|\hat H^{(1)}|\Phi_{xy}^{rs(0)}\rangle = \langle\Phi_0^{(0)}|\hat H|\Phi_{xy}^{rs(0)}\rangle = \langle xy||rs\rangle \qquad (178)$$

We now re-write (173), confining its summations to doubly-excited determinants only:

$$E_0^{(2)} = \frac{1}{2}\sum_{x,y=1}^{N_e}\frac{1}{2}\sum_{r,s=N_e+1}^{\infty}\frac{\langle\Phi_0^{(0)}|\hat H^{(1)}|\Phi_{xy}^{rs(0)}\rangle\langle\Phi_{xy}^{rs(0)}|\hat H^{(1)}|\Phi_0^{(0)}\rangle}{E_0^{(0)} - E_{xy}^{rs(0)}} \qquad (179)$$

where the factors of 1/2 are introduced to make sure that each distinct pair of indices is counted only once (for example, the pair $x = 1$, $y = 5$ appears again as $y = 1$, $x = 5$, so we multiply by 1/2 to count this distinct pair only once), while the cases $x = y$ and/or $r = s$ lead to zero matrix elements, so it does not matter that they are included in the sum. We now substitute (178) into the above expression to obtain

$$E_0^{(2)} = \frac{1}{4}\sum_{x,y=1}^{N_e}\sum_{r,s=N_e+1}^{\infty}\frac{\langle xy||rs\rangle\langle rs||xy\rangle}{E_0^{(0)} - E_{xy}^{rs(0)}} \qquad (180)$$

Finally, we need to express the denominator in terms of spinorbital energies. According to (170) we have

$$E_0^{(0)} - E_{xy}^{rs(0)} = (\varepsilon_a + \ldots + \varepsilon_x + \ldots + \varepsilon_y + \ldots + \varepsilon_z) - (\varepsilon_a + \ldots + \varepsilon_r + \ldots + \varepsilon_s + \ldots + \varepsilon_z) = \varepsilon_x + \varepsilon_y - \varepsilon_r - \varepsilon_s.$$

Using this result for $E_0^{(0)} - E_{xy}^{rs(0)}$, we arrive at the expression for the MP2 energy in terms of spinorbitals and their energies:

$$E_0^{(2)} = \frac{1}{4}\sum_{x,y=1}^{N_e}\sum_{r,s=N_e+1}^{\infty}\frac{\langle xy||rs\rangle\langle rs||xy\rangle}{\varepsilon_x + \varepsilon_y - \varepsilon_r - \varepsilon_s}. \qquad (181)$$

MP2 calculations, with their ability to include at least some of the correlation energy, are a definite improvement over HF calculations. Figure 5 demonstrates this with some examples of bond lengths of small molecules calculated with the two methods and compared with experiment.

Method      | CH4   | NH3   | H2O   | FH
----------- | ----- | ----- | ----- | -----
HF          | 2.048 | 1.897 | 1.782 | 1.703
MP2         | 2.048 | 1.912 | 1.816 | 1.740
Experiment  | 2.050 | 1.913 | 1.809 | 1.733

Figure 5: Comparison of HF and MP2 calculations of equilibrium bond lengths (in atomic units) of some hydrides of first-row elements.
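As a sketch of how (181) is evaluated in practice, assume we are handed the spinorbital energies and the antisymmetrized two-electron integrals $\langle xy||rs\rangle$ as arrays; the quadruple loop below then accumulates $E_0^{(2)}$. The inputs here are random synthetic stand-ins (no real integrals appear in the notes), so only the summation structure, not the numbers, is meaningful.

```python
import numpy as np

# Sketch of evaluating the MP2 energy, eq. (181), from spinorbital energies and
# antisymmetrized integrals <xy||rs>.  All inputs below are synthetic.
rng = np.random.default_rng(0)
n_occ, n_virt = 4, 6                        # occupied (x, y) and virtual (r, s) spinorbitals

eps_occ  = np.sort(rng.uniform(-2.0, -0.5, n_occ))    # occupied orbital energies
eps_virt = np.sort(rng.uniform(0.5, 2.0, n_virt))     # virtual orbital energies

# Synthetic <xy|rs> integrals, antisymmetrized to <xy||rs> = <xy|rs> - <xy|sr>
g = 0.1 * rng.standard_normal((n_occ, n_occ, n_virt, n_virt))
g_asym = g - g.transpose(0, 1, 3, 2)

E2 = 0.0
for x in range(n_occ):
    for y in range(n_occ):
        for r in range(n_virt):
            for s in range(n_virt):
                num = g_asym[x, y, r, s] ** 2          # <xy||rs><rs||xy> for real orbitals
                den = eps_occ[x] + eps_occ[y] - eps_virt[r] - eps_virt[s]
                E2 += 0.25 * num / den                 # the 1/4 factor of eq. (181)

print("synthetic MP2 correction E(2) =", E2)           # negative: correlation lowers the energy
```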
We should observe, however, that MP theory is also qualitatively different from HF theory. The total (perturbed) Hamiltonian in MP theory, equation (165), is the exact one, involving the true electron-electron interactions (the $1/|\mathbf{r}_i-\mathbf{r}_j|$ terms). In contrast, the HF Hamiltonian (the zeroth order $\hat H^{(0)}$) corresponds to a system of non-interacting particles that move in an effective (averaged) potential. Thus MP theory includes electron correlation, and the perturbed wavefunction does take into account the instantaneous interactions between electrons: the modulus of the wavefunction (and hence the probability distribution) decreases as any pair of electrons approach each other in space. This dynamical correlation is absent from a HF wavefunction (Slater determinant).

In section 4.5 we saw that dispersion interactions between molecules are due to instantaneous fluctuations of their electronic distributions. We would therefore expect HF calculations to be incapable of predicting dispersion interactions, while MP calculations should be able to capture them. This is indeed the case. For example, a HF calculation predicts zero binding between two Ne atoms, while an MP2 calculation predicts binding with an equilibrium distance of 6.06 a.u. and a binding energy of 2.3 meV; the "exact" values for these quantities are 5.84 a.u. and 3.6 meV, respectively. There are numerous cases where dispersion interactions play a key role; in computational simulations of such cases, methods like MP theory need to be used. Figure 6 shows some examples of materials with dispersion interactions.

Figure 6: Examples of dispersion interactions in molecular structure. (a) Fragments of polyethylene, held together by dispersion forces; (b) the two strands of DNA are held together by hydrogen bonds but are also stabilized by dispersion forces between the bases (planes) of each strand; (c) the structure of graphite consists of sheets of carbon held together by dispersion forces.