Time-Dependent Schrödinger Equation and Quantum Transitions, Exercises of Quantum Mechanics


These lecture notes cover the time-dependent Schrödinger equation (TDSE) and its application to quantum transitions. The TDSE is a first-order differential equation that governs the time evolution of a quantum system, with the Hamiltonian operator playing the role of the energy. The notes work out the wave function of a two-level system in the presence and absence of radiation and the resulting rates of absorption and stimulated emission, and they touch on spontaneous emission, which is challenging to derive within ordinary quantum mechanics.

What you will learn

  • What is the role of the Hamiltonian operator in the TDSE?
  • How does the wave function of a two-level system evolve in the presence and absence of radiation?
  • What is spontaneous emission, and why is it challenging to derive using quantum mechanics?
  • What is the time-dependent Schrödinger equation (TDSE) and how does it govern the time evolution of a quantum system?

The Time-Dependent Schrödinger Equation, with applications to the Interaction of Light and Matter and the Selection Rules in Spectroscopy.

Lecture Notes for Chemistry 452/746 by Marcel Nooijen, Department of Chemistry, University of Waterloo.

1. The time-dependent Schrödinger equation.

An important postulate in quantum mechanics concerns the time dependence of the wave function. This is governed by the time-dependent Schrödinger equation

$$ i\hbar\,\frac{\partial \Psi(x,t)}{\partial t} = \hat{H}\,\Psi(x,t) \qquad (1.1) $$

where $\hat{H}$ is the Hamiltonian operator of the system (the operator corresponding to the classical expression for the energy). This is a first-order differential equation in $t$, which means that if we specify the wave function at an initial time $t_0$, the wave function is determined at all later times. Let me emphasize that this means the wave function has to be specified for all $x$ at the initial time $t_0$. These initial conditions are familiar from wave equations as discussed in MS Chapter 2. In classical physics we often deal with second-order differential equations, and in addition the time derivative $\partial\Psi(x,t)/\partial t$ would then need to be specified for all $x$. Let me emphasize here that although the experimental results that can be predicted from QM are statistical in nature, the Schrödinger equation that determines the wave function as a function of time is completely deterministic.

Special solutions: stationary states (only if $\hat{H}$ is time-independent).

If we assume that the wave function can be written as a product $\Psi(x,t) = \phi(x)\,\gamma(t)$, we can separate the time dependence from the spatial dependence of the wave function in the usual way. The separation constant is called $E$ and will turn out to be the energy of the system for such solutions:

$$ i\hbar\,\frac{d\gamma(t)}{dt} = E\,\gamma(t) \;\rightarrow\; \gamma(t) = e^{-iE(t-t_0)/\hbar} \qquad (1.2) $$

$$ \hat{H}\,\phi(x) = E\,\phi(x) \;\rightarrow\; \hat{H}\,\phi_n(x) = E_n\,\phi_n(x) \qquad (1.3) $$

Equation (1.3) is called the time-independent Schrödinger equation and plays a central role in all of chemistry. Since the operator $\hat{H}$ is Hermitian, its eigenfunctions form a complete and (can be chosen to be an) orthonormal set of functions. Using these eigenfunctions of $\hat{H}$, special solutions to the time-dependent Schrödinger equation can be expressed as

$$ \Psi(x,t) = \phi_n(x)\,e^{-iE_n(t-t_0)/\hbar}; \qquad \Psi(x,t_0) = \phi_n(x) \qquad (1.4) $$

For these special solutions of the Schrödinger equation, all measurable properties are independent of time. For this reason they are called stationary states. For example, the probability distribution

$$ |\Psi(x,t)|^2 = |\Psi(x,t_0)|^2 = |\phi_n(x)|^2, \qquad (1.5) $$

but also

$$ \langle \hat{A} \rangle_t = \langle \hat{A} \rangle_{t_0} \quad \forall\, t, \qquad (1.6) $$

as is easily verified by substituting the product form of the wave function. Also the probabilities to measure an eigenvalue $a_k$ are independent of time, as seen below:

$$ \hat{A}\,\varphi_k(x) = a_k\,\varphi_k(x) \;\rightarrow\; P_k(t) = \left| \int_{-\infty}^{\infty} \varphi_k^*(x)\,\Psi(x,t)\,dx \right|^2 = P_k(t_0) \qquad (1.7) $$

The common element in each of these proofs is that the time-dependent phase factor cancels because we have both $\Psi(x,t)$ and $\Psi^*(x,t)$ in each expression. Let me note that the stationary solutions are determined by the initial condition: if you start off with a stationary state at $t_0$, the wave function is a stationary state for all time.
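The time-independence claimed in (1.5)–(1.7) is easy to check numerically. The following sketch is not part of the notes; it is a minimal illustration using particle-in-a-box eigenfunctions, with the box length, the units ($\hbar = m = 1$) and the helper names `phi`, `E`, `psi` all chosen just for this example. It propagates a single stationary state and, for contrast, an equal-weight superposition of the two lowest states, and reports how much the probability density changes between two times.

```python
import numpy as np

# Particle in a box of length L = 1, in units with hbar = m = 1 (assumed for
# this illustration; none of these names come from the notes).
L = 1.0
x = np.linspace(0.0, L, 501)

def phi(n, x):
    """Normalized box eigenfunction phi_n(x) = sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):
    """Box eigenvalue E_n = n^2 pi^2 / (2 L^2)."""
    return (n * np.pi / L) ** 2 / 2.0

def psi(t, coeffs):
    """Psi(x,t) = sum_n c_n phi_n(x) exp(-i E_n t), cf. Eq. (1.4)."""
    return sum(c * phi(n, x) * np.exp(-1j * E(n) * t) for n, c in coeffs.items())

cases = [("stationary state phi_1", {1: 1.0}),
         ("superposition (phi_1 + phi_2)/sqrt(2)", {1: 1 / np.sqrt(2), 2: 1 / np.sqrt(2)})]

for label, coeffs in cases:
    d0 = np.abs(psi(0.0, coeffs)) ** 2      # probability density at t = 0
    d1 = np.abs(psi(0.7, coeffs)) ** 2      # ... and at an arbitrary later time
    print(f"{label}:  max change in |Psi|^2 = {np.max(np.abs(d1 - d0)):.2e}")
```

For the stationary state the change is at the level of round-off error, while the superposition shows an order-one change, exactly as the cancellation argument above predicts: the cross term between the two eigenstates retains a time-dependent phase.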
[…] where we assume the energies and wave functions to be known. Let us assume that the wave function at time $t = 0$ is given by

$$ \Psi(t=0) = c_1\,\Phi_1(x) + c_2\,\Phi_2(x) \qquad (2.4) $$

where $c_1$ and $c_2$ are arbitrary coefficients. Then the wave function at time $t$ is given by

$$ \Psi(t) = c_1\,e^{-iE_1 t}\,\Phi_1(x) + c_2\,e^{-iE_2 t}\,\Phi_2(x) \qquad (2.5) $$

Please verify for yourself that this satisfies the TDSE Eqn. (2.2), assuming (2.3). Note that the energy has units of radians/s here because we have suppressed $\hbar$. In this case of no radiation we find that if we were to measure the energy of the system we would find $E_1$ with probability $|c_1 e^{-iE_1 t}|^2 = |c_1|^2$, or $E_2$ with probability $|c_2|^2$, independent of time. In particular, excited states would not decay and would have infinite lifetimes! Other properties would depend on time, however. For long times this picture is clearly deficient (we have not included spontaneous emission in this description, which results in finite lifetimes for excited states), but it follows from the QM we have taught you so far.

Case II. General treatment of radiation in the two-level system.

Without loss of generality we can write

$$ \Psi(t) = c_1(t)\,e^{-iE_1 t}\,\Phi_1(x) + c_2(t)\,e^{-iE_2 t}\,\Phi_2(x), \qquad (2.6) $$

where the coefficients $c_1, c_2$ now depend on time. Substituting into the TDSE and using the full Hamiltonian, we should satisfy the equation

$$ i\,\frac{\partial \Psi}{\partial t} - \hat{H}\,\Psi = 0 \qquad (2.7) $$

or

$$
\begin{aligned}
0 = {}& i\,\frac{\partial c_1(t)}{\partial t}\,\Phi_1(x)\,e^{-iE_1 t}
 + \bigl[\, E_1 c_1(t)\,\Phi_1(x)\,e^{-iE_1 t} - \hat{H}_0\, c_1(t)\,\Phi_1(x)\,e^{-iE_1 t} \,\bigr]
 - \hat{H}'(t)\, c_1(t)\,\Phi_1(x)\,e^{-iE_1 t} \\
 & {} + i\,\frac{\partial c_2(t)}{\partial t}\,\Phi_2(x)\,e^{-iE_2 t}
 + \bigl[\, E_2 c_2(t)\,\Phi_2(x)\,e^{-iE_2 t} - \hat{H}_0\, c_2(t)\,\Phi_2(x)\,e^{-iE_2 t} \,\bigr]
 - \hat{H}'(t)\, c_2(t)\,\Phi_2(x)\,e^{-iE_2 t}
\end{aligned} \qquad (2.8)
$$

We note that the terms between square brackets cancel (the reason to write $\Psi(t)$ as in Eqn. 2.6). We will now integrate this equation against $\Phi_1^*(x)$ and $\Phi_2^*(x)$ respectively. Furthermore we assume (for the sake of simplicity) that

$$ \int \Phi_1^*(x)\,\hat{H}'(t)\,\Phi_1(x)\,dx = \int \Phi_2^*(x)\,\hat{H}'(t)\,\Phi_2(x)\,dx = 0 \qquad (2.9) $$

and

$$ \int \Phi_1^*(x)\,\hat{H}'(t)\,\Phi_2(x)\,dx = \int \Phi_2^*(x)\,\hat{H}'(t)\,\Phi_1(x)\,dx
 = -\bigl[e^{i\omega t} + e^{-i\omega t}\bigr]\,\vec{E}(\omega)\cdot\int \Phi_1^*(x)\,\vec{\mu}\,\Phi_2(x)\,dx
 \equiv V(\omega)\,\bigl[e^{i\omega t} + e^{-i\omega t}\bigr] \qquad (2.10) $$

Performing the integration, we get two equations that are fully equivalent to Eqn. 2.8:

$$
\begin{aligned}
i\,\frac{\partial c_1(t)}{\partial t}\,e^{-iE_1 t} - V(\omega)\bigl[e^{i\omega t} + e^{-i\omega t}\bigr]\,c_2(t)\,e^{-iE_2 t} &= 0 \\
i\,\frac{\partial c_2(t)}{\partial t}\,e^{-iE_2 t} - V(\omega)\bigl[e^{i\omega t} + e^{-i\omega t}\bigr]\,c_1(t)\,e^{-iE_1 t} &= 0
\end{aligned} \qquad (2.11)
$$

Multiplying the first equation by $e^{iE_1 t}$ and the second by $e^{iE_2 t}$ we get

$$
\begin{aligned}
i\,\frac{\partial c_1(t)}{\partial t} - V(\omega)\bigl[e^{i\omega t} + e^{-i\omega t}\bigr]\,c_2(t)\,e^{-i(E_2 - E_1)t} &= 0 \\
i\,\frac{\partial c_2(t)}{\partial t} - V(\omega)\bigl[e^{i\omega t} + e^{-i\omega t}\bigr]\,c_1(t)\,e^{-i(E_1 - E_2)t} &= 0
\end{aligned} \qquad (2.12)
$$

and defining $E_{21} = E_2 - E_1 > 0$ we obtain

$$
\begin{aligned}
i\,\frac{\partial c_1(t)}{\partial t} &= V(\omega)\,\bigl[e^{i(\omega - E_{21})t} + e^{-i(\omega + E_{21})t}\bigr]\,c_2(t) \\
i\,\frac{\partial c_2(t)}{\partial t} &= V(\omega)\,\bigl[e^{i(\omega + E_{21})t} + e^{-i(\omega - E_{21})t}\bigr]\,c_1(t)
\end{aligned} \qquad (2.13)
$$

In principle these equations can be solved fairly easily on a computer. To get some further insight we assume that we are always close to resonance (the Bohr condition), $\omega \approx E_{21}$. Only the slowly oscillating terms matter, as the fast oscillations only provide some fine structure on top of the slow oscillations. This is true in particular if we have a distribution of frequencies around $\omega \approx E_{21}$ (why?). In this case the equations can be approximated as

$$
\begin{aligned}
i\,\frac{\partial c_1(t)}{\partial t} &= V(\omega)\,e^{i(\omega - E_{21})t}\,c_2(t) \\
i\,\frac{\partial c_2(t)}{\partial t} &= V(\omega)\,e^{-i(\omega - E_{21})t}\,c_1(t)
\end{aligned} \qquad (2.14)
$$

It is to be noted that in the first equation we keep the $e^{i\omega t}$ component of $\hat{H}'(t)$, while in the second equation we keep the $e^{-i\omega t}$ term. This is the origin of the equality of the Einstein coefficients for absorption and stimulated emission of radiation: if $\omega \approx E_{21}$ then, of course, $E_{12} = -E_{21} \approx -\omega$.
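As the notes remark, Eqns. (2.13) can be solved fairly easily on a computer. The sketch below is one such solution, not taken from the notes: the parameter values ($E_{21} = 1$, $V = 0.05$, driving frequency $\omega = E_{21}$, $\hbar$ suppressed as in the text) are assumed purely for illustration. A fixed-step Runge–Kutta integrator propagates $c_1(t)$ and $c_2(t)$ from $c_1 = 1$, $c_2 = 0$ and prints the populations, which slowly oscillate between the two levels with a period set by $V$, anticipating the exact-resonance analysis that follows; small ripples come from the fast counter-rotating terms that Eqn. (2.14) discards.

```python
import numpy as np

# Two-level system driven near resonance, Eqns. (2.13), hbar suppressed.
# Illustrative parameters (not from the notes): E21 = 1.0, V = 0.05, omega = E21.
E21, V, omega = 1.0, 0.05, 1.0

def rhs(t, c):
    """Right-hand side of Eqns. (2.13) rewritten as dc/dt = -i * (coupling) * c."""
    c1, c2 = c
    f1 = -1j * V * (np.exp(1j * (omega - E21) * t) + np.exp(-1j * (omega + E21) * t)) * c2
    f2 = -1j * V * (np.exp(1j * (omega + E21) * t) + np.exp(-1j * (omega - E21) * t)) * c1
    return np.array([f1, f2])

# Fixed-step fourth-order Runge-Kutta, starting from c1 = 1, c2 = 0.
c = np.array([1.0 + 0j, 0.0 + 0j])
t, dt = 0.0, 0.01
for step in range(int(100.0 / dt)):          # integrate to t = 100
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    if step % 2000 == 0:                     # print the populations now and then
        print(f"t = {t:6.2f}   |c1|^2 = {abs(c[0])**2:.3f}   |c2|^2 = {abs(c[1])**2:.3f}")
```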
In the following we will look at two important approximations.

II-A. Precise resonance, $\omega = E_{21}$.

In this case the phase factors are precisely unity, leading to

$$
\begin{aligned}
i\,\frac{\partial c_1(t)}{\partial t} &= V(\omega)\,c_2(t) \equiv V\,c_2(t) \\
i\,\frac{\partial c_2(t)}{\partial t} &= V(\omega)\,c_1(t) \equiv V\,c_1(t)
\end{aligned} \qquad (2.15)
$$

and hence we get second-order equations

$$ \frac{\partial^2 c_1(t)}{\partial t^2} = -V^2\,c_1(t), \qquad
   \frac{\partial^2 c_2(t)}{\partial t^2} = -V^2\,c_2(t) \qquad (2.16) $$

with known solutions

$$ c_1(t) = \cos(Vt + \varphi); \qquad c_2(t) = \sin(Vt + \varphi) \qquad (2.17) $$

where $\varphi$ is determined by the initial values of $c_1, c_2$. Under these conditions the probabilities to measure the energy $E_1$ or $E_2$ hence oscillate in time as $\cos^2(Vt)$ and $\sin^2(Vt)$ respectively. On average there is equal probability to find $E_1$ or $E_2$, independent of the initial conditions. The oscillation time depends on $V$, i.e. it is proportional to the strength of the field and to the transition dipole! It does not depend on the energy difference between the two states or the frequency of the applied field.

II-B. Near resonance, short times, weak fields.

If we assume that at $t = 0$, $\Psi(t=0) = \Phi_1(x)$, i.e. $c_1 = 1,\ c_2 = 0$, we can assume that $c_1$ remains more or less unity and integrate the equation for $c_2(t)$. This yields

$$ c_2(\tau) = -i \int_0^\tau V(\omega)\,e^{-i(\omega - E_{21})t}\,dt
 = \frac{-i\,V(\omega)}{-i(\omega - E_{21})}\,\bigl(e^{-i(\omega - E_{21})\tau} - 1\bigr) \qquad (2.18) $$

The probability to find the energy $E_2$ upon measurement at time $\tau$ would be

$$
\begin{aligned}
|c_2(\tau)|^2 &= \frac{V(\omega)^2}{(\omega - E_{21})^2}\,\bigl(e^{i(\omega - E_{21})\tau} - 1\bigr)\bigl(e^{-i(\omega - E_{21})\tau} - 1\bigr)
 = \frac{V(\omega)^2}{(\omega - E_{21})^2}\,\bigl(2 - 2\cos[(\omega - E_{21})\tau]\bigr) \\
 &= \frac{4\,V(\omega)^2\,\sin^2[\tfrac{1}{2}(\omega - E_{21})\tau]}{(\omega - E_{21})^2}
 = V(\omega)^2\,\tau^2\,\frac{\sin^2[\tfrac{1}{2}(\omega - E_{21})\tau]}{[\tfrac{1}{2}(\omega - E_{21})\tau]^2}
 \equiv V(\omega)^2\,\tau^2\,F(\omega - E_{21},\tau)
\end{aligned} \qquad (2.19)
$$

[…]

The stationary state is reached if $dN_2/dt = 0$, or $N_1(t) = N_2(t)$. This is not correct. We are missing something, namely spontaneous emission of radiation that would occur from state $\Phi_2$ even in the absence of resonant radiation. Curiously, this natural-looking phenomenon (finite lifetimes of excited states, even in the absence of radiation) is very hard to derive using QM. The theory that is needed to accomplish a rigorous derivation requires a quantization of the electromagnetic field (i.e. the introduction of photons); it is called quantum field theory. A practical way to account for many of the observed phenomena is to define the process of spontaneous emission. It is an approximation, though. We get for the rate of change

$$ \frac{dN_2}{dt} = B\,\rho(E_{21})\,[N_1(t) - N_2(t)] - A\,N_2(t) \qquad (2.28) $$

At the steady state (equilibrium) we can solve for the intensity of the radiation

$$ \rho(E_{21}) = \frac{A/B}{(N_1/N_2) - 1} \qquad (2.29) $$

For a black body at temperature $T$ that would hypothetically consist of a two-level system in equilibrium with the radiation it generates and absorbs, we know the population ratio from the Maxwell–Boltzmann distribution law

$$ N_1/N_2 = e^{h\nu_{12}/kT} \qquad (2.30) $$

The above equation for the radiation density then agrees with the black-body radiation law if

$$ A = \frac{8\pi h \nu_{12}^3}{c^3}\,B \qquad (2.31) $$

Einstein essentially knew about statistical mechanics, discrete energy levels, and black-body radiation, and from this he deduced the concepts of spontaneous and stimulated emission and the idea of lasing. He was remarkable, even for a genius.
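Equation (2.31) already explains why spontaneous emission dominates the fate of electronically excited states but is irrelevant in, say, magnetic resonance: the ratio $A/B = 8\pi h \nu^3/c^3$ grows as the cube of the transition frequency. The short sketch below is not from the notes; the two example frequencies (a 500 nm optical transition and a 500 MHz NMR transition) are chosen purely for illustration.

```python
import math

# Einstein relation (2.31): A = (8 pi h nu^3 / c^3) * B.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m / s

def A_over_B(nu_hz):
    """Ratio A/B = 8 pi h nu^3 / c^3 (SI units)."""
    return 8.0 * math.pi * h * nu_hz**3 / c**3

# Two illustrative transition frequencies (assumed, not from the notes):
nu_optical = c / 500e-9     # ~500 nm electronic transition
nu_nmr = 500e6              # ~500 MHz NMR transition

for label, nu in [("optical (500 nm)", nu_optical), ("NMR (500 MHz)", nu_nmr)]:
    print(f"{label:17s}  nu = {nu:.3e} Hz   A/B = {A_over_B(nu):.3e}")

# The nu^3 factor makes spontaneous emission fast for electronic transitions
# and utterly negligible at radio frequencies.
print(f"(A/B)_optical / (A/B)_NMR = {A_over_B(nu_optical) / A_over_B(nu_nmr):.3e}")
```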
3. Selection rules in spectroscopy.

A rigorous treatment of electromagnetic radiation (an oscillating field) involves the time-dependent Schrödinger equation, usually in the form of time-dependent perturbation theory (see MS and the lecture notes in the previous section). At a given instant in time the electric field is more or less constant over the region of a (small) molecule, as the wavelength of the radiation (> ~200 nm) is so large compared to molecular dimensions. The relevant quantity that determines intensities in the spectrum is the transition dipole moment between initial and final states,

$$ \int \Psi_f^{\mathrm{int}\,*}\,\hat{\mu}\,\Psi_i^{\mathrm{int}}\,d\tau \qquad (3.1) $$

Here $\Psi^{\mathrm{int}}$ indicates the total internal wave function involving both nuclear and electronic coordinates, but not the rotational part of the wave function. Similarly,

$$ \hat{\mu} = \sum_\alpha q_\alpha\,\vec{r}_\alpha, \qquad (3.2) $$

the total dipole moment operator, involves both nuclei and electrons (indicated through the summation over $\alpha$). In order to make the problem manageable we distinguish

$$ \text{electrons: } \vec{r}_i, \qquad \text{normal modes: } q_i, \qquad \text{molecular rotation: } \vec{R}, \qquad (3.3) $$

and the overall internal wave function (disregarding rotations) can be written (approximately) as

$$ \Psi^{\mathrm{int}}_{av} = \Psi_a(\vec{r};q)\,\Phi_v(q). \qquad (3.4) $$

The electronic wave function $\Psi_a(\vec{r};q)$ depends on all of the electrons, and in addition there is a parametric dependence of the electronic wave function on the normal modes (internal coordinates) $q$. The vibrational part $\Phi_v(q) = \phi_{v_1}(q_1)\,\phi_{v_2}(q_2)\ldots$ is assumed to be a product of harmonic-oscillator functions, one for each normal mode $q_i$. If we do not use a subscript we indicate the whole set of coordinates, e.g. $q_i \rightarrow q = \{q_1, q_2, \ldots\}$. The rotational wave function $\Omega_{J,M_J}(\theta,\varphi)$ determines the probability distribution of the orientation of the molecule in space. This rotational part is treated differently from the rest. Let us first discuss the problem at a fixed orientation. We can write the overall transition moment in (3.1) as

$$ \vec{\mu}_{fi} = \int \Psi_{bw}^*\,\hat{\mu}\,\Psi_{av}\,d\tau
 = \int\!\!\int \Phi_w^*(q)\,\Phi_v(q)\,\Psi_b^*(\vec{r};q)\,\hat{\mu}\,\Psi_a(\vec{r};q)\,d\vec{r}\,dq \qquad (3.5) $$

In order for transitions to occur this dipole moment should be non-zero. Denoting the most complicated integral, the one over the electronic coordinates, as $\vec{\mu}_{ab}(q)$, this can be written as

$$ \int \Psi_{bw}^*\,\hat{\mu}\,\Psi_{av}\,d\tau = \int \Phi_w^*(q)\,\Phi_v(q)\,\vec{\mu}_{ab}(q)\,dq \qquad (3.6) $$

Moreover, we can assume a Taylor series expansion of the $q$-dependent transition dipole by writing

$$ \vec{\mu}_{ab}(q) \approx \vec{\mu}_{ab}(q_e) + \sum_i \left(\frac{\partial \vec{\mu}_{ab}}{\partial q_i}\right)_{\!q=q_e} q_i + \ldots, \qquad (3.7) $$

where the first term indicates the electronic transition dipole moment at the equilibrium geometry, which we will denote below as $\vec{\mu}_{ab}^{\,0} = \vec{\mu}_{ab}(q_e)$. This is all we need to analyse the various cases.

A. Pure rotational transitions: $a = b$, $w = v$.

$$ \vec{\mu}_{fi} = \int \Phi_v^*(q)\,\Bigl[\vec{\mu}_{aa}^{\,0} + \sum_i \Bigl(\frac{\partial \vec{\mu}_{ab}}{\partial q_i}\Bigr) q_i\Bigr]\,\Phi_v(q)\,dq = \vec{\mu}_{aa}^{\,0} \qquad (3.8) $$

$\vec{\mu}_{aa}^{\,0}(\vec{R})$ is the permanent dipole moment in the electronic state $a$, and $\vec{\mu}_{fi} \neq 0$ only if the molecule has a so-called permanent dipole moment. Further treatment of the interaction with the field will lead to the selection rule for diatomics $\Delta J = \pm 1$, $\Delta M = 0$ (see below).

B. Vibrational transitions: $a = b$, $w \neq v$.

$$ \vec{\mu}_{fi} = \int \Phi_w^*(q)\,\Bigl[\vec{\mu}_{aa}^{\,0} + \sum_i \Bigl(\frac{\partial \vec{\mu}_{ab}}{\partial q_i}\Bigr) q_i\Bigr]\,\Phi_v(q)\,dq \qquad (3.9) $$

Due to the orthogonality of $\Phi_w$ and $\Phi_v$, the integral is non-zero only if a single normal mode is excited (say $q_j$). This means that all factors in the product function $\Phi_w(q) = \phi_{w_1}(q_1)\,\phi_{w_2}(q_2)\ldots$ stay the same except for the one involving the normal mode $q_j$. The transition moment then reduces to

$$ \vec{\mu}_{fi} = \frac{\partial \vec{\mu}_{aa}}{\partial q_j}\,\int \phi_{w_j}^*(q_j)\,q_j\,\phi_{v_j}(q_j)\,dq_j \qquad (3.10) $$
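The integral in Eq. (3.10), $\int \phi_{w_j}^*(q_j)\,q_j\,\phi_{v_j}(q_j)\,dq_j$, is non-zero for harmonic-oscillator functions only when $w_j = v_j \pm 1$, which is the familiar $\Delta v = \pm 1$ selection rule. The sketch below checks this numerically; it is not part of the notes, and the dimensionless units ($\hbar = m = \omega = 1$), the grid, and the choice of initial level $v = 2$ are assumptions made only for the illustration.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

# Harmonic-oscillator functions in the dimensionless normal coordinate q
# (hbar = m = omega = 1); all choices here are illustrative only.
q = np.linspace(-10.0, 10.0, 4001)
dq = q[1] - q[0]

def phi_ho(v, q):
    """Normalized harmonic-oscillator eigenfunction phi_v(q)."""
    norm = 1.0 / np.sqrt(2.0**v * factorial(v) * np.sqrt(np.pi))
    return norm * eval_hermite(v, q) * np.exp(-q**2 / 2.0)

v = 2   # initial vibrational quantum number (arbitrary choice)
for w in range(6):
    # Numerical estimate of <phi_w | q | phi_v> by a simple Riemann sum.
    integral = np.sum(phi_ho(w, q) * q * phi_ho(v, q)) * dq
    print(f"w = {w}:   <phi_w|q|phi_v> = {integral:+.4f}")

# Only w = v - 1 and w = v + 1 come out non-zero: the Delta v = +/- 1
# selection rule carried by the integral in Eq. (3.10).
```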