All Combined Lectures - Quantum Mechanics I - 2009 | PHYS 3810, Study notes of Quantum Mechanics

Material Type: Notes; Professor: Murdock; Class: Quantum Mechanics I; Subject: PHYS Physics; University: Tennessee Tech University; Term: Spring 2009;

Partial preview of the text

Phys 3810 (Quantum Mechanics) Notes
David Murdock, TTU
May 4, 2009

Chapter 1  The Schrödinger Equation and Its Meaning

1.1 The Schrödinger Equation

Classical mechanics is all about finding the trajectory of a particle of mass $m$ when it is subjected to a force $F(\mathbf{r},t)$. The object is to find the position as a function of time, $\mathbf{r}(t)$. One way or another, that's what we did in classical mechanics.

We cannot do this to describe motion at the atomic level, because it is essentially impossible to find such a trajectory function. What we do find is the wave function for the state of the particle. The wave function $\Psi(x,t)$ is complex-valued. (For now we will work with one space dimension.) And without further ado, the wave function is given by the Schrödinger equation,

$$ i\hbar\,\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2} + V\Psi $$

Essentially, we will spend the rest of this chapter unraveling what it tells us, and the rest of the course finding solutions to it in different cases of physical interest.

1.2 Statistical Meaning of Ψ

The statistical interpretation of $\Psi$ is due to Born and sez that $|\Psi(x,t)|^2$ gives the probability of finding the particle at point $x$ at time $t$:

$$ |\Psi(x,t)|^2\,dx = \text{Probability of finding the particle between } x \text{ and } x+dx \text{ at time } t. $$

See the discussion in the text for the different attempts to get at the meaning of this. Though the search for an understanding of QM continues to this day, the consensus is that nature, at its most basic level, is probabilistic, and we have to accept that fact and learn how to deal with it.

1.3 Probability Relations

For a discrete probability distribution $N(j)$:

$$ N = \sum_{j=0}^{\infty} N(j) \qquad P(j) = \frac{N(j)}{N} \qquad \langle f(j)\rangle = \sum_{j=0}^{\infty} f(j)\,P(j) $$

The standard deviation in $j$ for this distribution is given by

$$ \sigma^2 = \langle j^2\rangle - \langle j\rangle^2 $$

A probability distribution for a continuous variable, $\rho(x)$, satisfies

$$ P_{ab} = \int_a^b \rho(x)\,dx \qquad \int_{-\infty}^{\infty}\rho(x)\,dx = 1 \qquad \langle f(x)\rangle = \int_{-\infty}^{\infty} f(x)\,\rho(x)\,dx $$

and the standard deviation of $x$ is

$$ \sigma^2 \equiv \langle(\Delta x)^2\rangle = \langle x^2\rangle - \langle x\rangle^2 $$

If our wave function $\Psi(x,t)$ is to give a sensible probability distribution it must be normalized, that is, it must satisfy

$$ \int_{-\infty}^{\infty}|\Psi(x,t)|^2\,dx = 1 $$

An essential idea of QM (shown in a text example) is that of the probability current. One can show that with

$$ J(x,t) \equiv \frac{i\hbar}{2m}\left(\Psi\frac{\partial\Psi^*}{\partial x} - \Psi^*\frac{\partial\Psi}{\partial x}\right) $$

the rate of change of the probability of finding the particle between $a$ and $b$ is

$$ \frac{dP_{ab}}{dt} = J(a,t) - J(b,t) $$

1.4 Momentum

The expectation value of a physical quantity is the average value we get for that quantity if we make repeated measurements of it on particles all prepared in the same state $\Psi$. For position it is

$$ \langle x\rangle = \int_{-\infty}^{\infty} x\,|\Psi(x,t)|^2\,dx $$

The nearest thing to a "velocity" (or rather its expectation value) that we have in QM is the time derivative of $\langle x\rangle$. We can then relate this to the momentum $\langle p\rangle$ by multiplying by $m$. We find:

$$ \langle p\rangle = m\frac{d\langle x\rangle}{dt} = -i\hbar\int\left(\Psi^*\frac{\partial\Psi}{\partial x}\right)dx = \int\Psi^*\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)\Psi\,dx $$

and comparing the expectation values of $x$ and $p$ leads us to the associations

$$ x \leftrightarrow x \ \text{(multiplicative factor)} \qquad\qquad p \leftrightarrow \frac{\hbar}{i}\frac{\partial}{\partial x} \ \text{(operator)} $$

When we want other physical quantities we will build them up from $x$ and $p$.
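Just to see these rules in action, here is a small numerical sketch (not from the text; it assumes a Gaussian wave packet with arbitrary mean position x0 and mean momentum p0, sampled on an arbitrary grid with NumPy):

```python
import numpy as np

# Grid and a normalized Gaussian wave packet (x0, p0, width are illustration values).
hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
x0, p0, width = 1.5, 2.0, 1.0
psi = np.exp(-(x - x0)**2 / (4 * width**2)) * np.exp(1j * p0 * x / hbar)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))          # enforce normalization

# <x> = ∫ x |Ψ|² dx
x_avg = np.trapz(x * np.abs(psi)**2, x)

# <p> = ∫ Ψ* (ħ/i) dΨ/dx dx, with the derivative taken numerically
dpsi_dx = np.gradient(psi, dx)
p_avg = np.trapz(np.conj(psi) * (hbar / 1j) * dpsi_dx, x).real

print(f"<x> = {x_avg:.4f}  (expected ~{x0})")
print(f"<p> = {p_avg:.4f}  (expected ~{p0})")
```

As expected, the position rule just weights $x$ by $|\Psi|^2$, while the momentum rule really does require letting $(\hbar/i)\,\partial/\partial x$ act on $\Psi$ before integrating.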
The expectation value of a general physical quantity $Q(x,p)$ is then

$$ \langle Q(x,p)\rangle = \int\Psi^*\,Q\!\left(x,\frac{\hbar}{i}\frac{\partial}{\partial x}\right)\Psi\,dx $$

As an example, the expectation value of the kinetic energy $T = p^2/2m$ is

$$ \langle T\rangle = -\frac{\hbar^2}{2m}\int\Psi^*\frac{\partial^2\Psi}{\partial x^2}\,dx $$

An interesting result, which is as close as we can come to Newton's second law in QM, is a case of Ehrenfest's theorem (shown in one of the problems):

$$ \frac{d\langle p\rangle}{dt} = \left\langle -\frac{\partial V}{\partial x}\right\rangle $$

1.5 Uncertainty Principle

While we will do a full derivation of the famous uncertainty principle in Chapter 3, we can give a preview of what it will mean when applied to the quantities $x$ and $p$: $\sigma_x\sigma_p \ge \hbar/2$.

Chapter 2  The Time-Independent Schrödinger Equation

Some comments on our separated solution to the Schrödinger equation:

1. We have found stationary states, meaning that since the form of the solution gives $|\Psi(x,t)|^2 = |\psi(x)|^2$, there is no time dependence in the probability density. This implies that $\langle x\rangle$ is constant and $\langle p\rangle = 0$.

2. The states we find this way are states of definite total energy. The total energy operator, called the Hamiltonian, is found from the classical expression by

$$ H(x,p) = \frac{p^2}{2m} + V(x) \ \Longrightarrow\ \hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) $$

so that the time-independent Schrödinger equation is written compactly as

$$ \hat{H}\psi = E\psi $$

The expectation value of the total energy operator is

$$ \langle H\rangle = \int\psi^*\hat{H}\psi\,dx = E\int|\psi|^2\,dx = E $$

and one can show that $\sigma_H = 0$. Every measurement of the total energy will return the value $E$ for a particle in this stationary state.

3. The general solution to the time-dependent SE is a linear combination of separable solutions; our solutions of the form given in Eq. 2.2 really give us all we need to solve any problem. The general solution has the form

$$ \Psi(x,t) = \sum_{n=1}^{\infty} c_n\,\psi_n(x)\,e^{-iE_n t/\hbar} \qquad (2.3) $$

where the conditions of the problem will give us the coefficients $c_n$.

2.3 Infinite Square Well ("Box")

We now go on to solve the time-independent SE for "simple" potentials. Yep, simple, right. We should understand why we are solving such problems. In nature, particles don't move in the simple potentials that we will consider here. At best, a real-life potential might be close to one that we'll solve, as with the harmonic oscillator. (Then we might say that our simple potential is a "toy model".) But by solving these problems we will get some valuable mathematical experience. Also, since there are very few problems in QM which permit exact solutions, we might be able to use one of these examples as a starting point for a more precise calculation.

The simplest potential is

$$ V(x) = \begin{cases} 0, & 0 \le x \le a \\ \infty, & \text{otherwise} \end{cases} $$

as illustrated in Fig. 2.1 (infinite square well, of width $a$). We want to use the TISE to find valid solutions for $\psi(x)$.

Outside the well it is clear that the particle can't be found, so we expect to get $\psi = 0$ for $x < 0$ and $x > a$. For the region $0 \le x \le a$, with the definition (which we will make often)

$$ k = \frac{\sqrt{2mE}}{\hbar} $$

(it makes sense because we fully expect the energy values to be positive), the Schrödinger equation can be written as

$$ \frac{d^2\psi}{dx^2} = -k^2\psi $$

which has trig functions as solutions. The condition that the wave function be continuous (so $\psi(0) = 0$ and $\psi(a) = 0$) leads to a condition on $k$ and gives the solution

$$ \psi(x) = A\sin\!\left(\frac{n\pi x}{a}\right) $$

with energy eigenvalues

$$ E_n = \frac{n^2\pi^2\hbar^2}{2ma^2}, \qquad n = 1, 2, 3, \ldots \qquad (2.4) $$

The normalization condition then gives the full solutions

$$ \psi_n(x) = \sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi x}{a}\right) \qquad (2.5) $$
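As a quick numerical check of these results (a sketch; the electron mass, the 1 nm well width, and the grid are arbitrary illustration choices, using NumPy), we can verify the orthonormality of the $\psi_n$ and evaluate the first few energies:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg (electron, chosen just for illustration)
a    = 1.0e-9            # well width: 1 nm (arbitrary)

x = np.linspace(0.0, a, 2001)
psi = lambda n: np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

# Orthonormality: ∫ ψ_m ψ_n dx should come out as δ_mn
for mq in (1, 2):
    for n in (1, 2, 3):
        overlap = np.trapz(psi(mq) * psi(n), x)
        print(f"<psi_{mq}|psi_{n}> = {overlap:.4f}")

# Energy eigenvalues E_n = n²π²ħ²/(2ma²), converted to eV
for n in (1, 2, 3):
    E = (n * np.pi * hbar)**2 / (2 * m_e * a**2)
    print(f"E_{n} = {E / 1.602176634e-19:.3f} eV")
```

The energies grow like $n^2$, and the overlaps reproduce the Kronecker delta claimed below.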
We note:

• The solutions are alternately even and odd about $x = a/2$.
• As we increase the energy (and $n$), each successive state has one more node.
• The wave functions satisfy
$$ \int\psi_m(x)^*\psi_n(x)\,dx = 0 \quad \text{if } m \ne n, \qquad\text{and in fact}\qquad \int\psi_m(x)^*\psi_n(x)\,dx = \delta_{mn}, $$
so we say the $\psi_n$'s are orthonormal. (A generalization of the orthonormal basis of our conventional vectors.)

The $\psi_n$'s are complete, meaning that any function satisfying the boundary conditions can be expressed as a linear combination of them:

$$ f(x) = \sum_{n=1}^{\infty} c_n\psi_n(x) = \sqrt{\frac{2}{a}}\sum_{n=1}^{\infty} c_n\sin\!\left(\frac{n\pi x}{a}\right) $$

We can find the $c_n$ using "Fourier's trick": multiply both sides by $\psi_m(x)^*$ and integrate. The result is

$$ c_n = \int\psi_n(x)^*f(x)\,dx $$

It follows that the general stationary-state solution for the infinite square well is

$$ \Psi(x,t) = \sqrt{\frac{2}{a}}\sum_{n=1}^{\infty} c_n\sin\!\left(\frac{n\pi x}{a}\right)e^{-i(n^2\pi^2\hbar/2ma^2)t} \qquad\text{where}\qquad c_n = \sqrt{\frac{2}{a}}\int_0^a\sin\!\left(\frac{n\pi x}{a}\right)\Psi(x,0)\,dx $$

This shows (for this one example at least) that knowing the solutions for the stationary states gives the most general solution to the Schrödinger equation.

The meaning of the coefficients in this expansion for a general state is that $|c_n|^2$ gives the probability that a measurement of the energy of the system returns the value $E_n$. One often says (loosely) that $|c_n|^2$ is the probability that the system is in the $n$th state, but as Griffiths notes, the particle is in the state $\Psi$. The $c_n$'s must satisfy

$$ \sum_{n=1}^{\infty}|c_n|^2 = 1 $$

and one can also show that the expectation value of $H$ for the state $\Psi$ is

$$ \langle H\rangle = \sum_{n=1}^{\infty}|c_n|^2 E_n $$

2.4 The Harmonic Oscillator

The next simplest choice for the potential $V(x)$ is the "spring potential" of elementary classical physics. It is of enormous importance for quantum physics as well, because if a potential function has a minimum somewhere, then in the neighborhood of that minimum one can always approximate the potential by a quadratic function of the form $V = \frac{1}{2}kx^2$ (plus a constant, which we can ignore), $x$ being the displacement from the minimum. Defining $\omega$ (the angular frequency of the motion in the classical problem) by

$$ \omega = \sqrt{\frac{k}{m}} \ \Longrightarrow\ V(x) = \tfrac{1}{2}m\omega^2 x^2, $$

the TISE for $\psi$ is

$$ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \tfrac{1}{2}m\omega^2 x^2\,\psi = E\psi \qquad (2.6) $$

In terms of the dimensionless variable $\xi \equiv \sqrt{m\omega/\hbar}\,x$ (Eq. 2.11) and the scaled energy $K \equiv 2E/\hbar\omega$, the large-$\xi$ behavior of the solutions is $e^{-\xi^2/2}$ (there is a solution with positive exponent, but it is not permitted since that function blows up at large $x$). This suggests that we pull off a factor of $e^{-\xi^2/2}$ and define

$$ \psi(\xi) \equiv h(\xi)\,e^{-\xi^2/2} $$

so that we get a friendlier equation for $h(\xi)$. In fact, the equation that we get is

$$ \frac{d^2h}{d\xi^2} - 2\xi\frac{dh}{d\xi} + (K-1)h = 0 $$

which looks worse than the original: science takes a giant leap backwards. At this point we have to set up a power series for $h(\xi)$:

$$ h(\xi) = \sum_{j=0}^{\infty} a_j\,\xi^j $$

Substituting and doing some algebra gives a recurrence relation between the coefficients (which is what we're normally after when we use a power series),

$$ a_{j+2} = \frac{(2j+1-K)}{(j+1)(j+2)}\,a_j $$

which in fact does determine the solution (before we apply the boundary conditions). One then realizes that if this power series has an infinite number of terms, the solution will be unsuitable for a wave function, which must vanish at large $\xi$. The series must "truncate", and from this it follows that $K$ is restricted by

$$ 2n + 1 - K = 0 \qquad\text{for } n = 0, 1, 2, \ldots $$

and this gives the possible values of the energy,

$$ E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega $$

which is the same answer as before, gotten in a totally different way.
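Here is a small sketch (not from the text; plain Python, with the free starting coefficient simply set to 1) showing that when $K = 2n+1$ the recurrence terminates, and that for any other $K$ the coefficients go on forever:

```python
def series_coeffs(K, even=True, jmax=12):
    """Coefficients a_j of h(xi) from a_{j+2} = (2j+1-K)/((j+1)(j+2)) a_j."""
    a = [0.0] * (jmax + 1)
    start = 0 if even else 1
    a[start] = 1.0                                  # free starting coefficient
    for j in range(start, jmax - 1, 2):
        a[j + 2] = (2 * j + 1 - K) / ((j + 1) * (j + 2)) * a[j]
    return a

# K = 2n+1 with n = 4 (even series): coefficients vanish beyond j = 4
print(series_coeffs(K=9, even=True))
# A non-quantized value of K: the series never terminates
print(series_coeffs(K=9.3, even=True))
```

For $K = 9$ the surviving polynomial is $1 - 4\xi^2 + \tfrac{4}{3}\xi^4$, which is proportional to the Hermite polynomial $H_4$ that shows up in the general solution below.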
We can say that the reason the energy is quantized is that for all the "improper" values of $E$, the wave function found from the Schrödinger equation behaves improperly (it blows up at large $\xi$).

Reassembling the pieces to get the wave functions, we note that from the recursion relation, either

$$ a_0 \ne 0,\ a_1 = 0 \qquad\text{or}\qquad a_0 = 0,\ a_1 \ne 0. $$

For the first, the wave function is even in $x$, and for the second it is odd in $x$. The polynomials $h(\xi)$ are simply related to some well-studied functions in math called the Hermite polynomials, of which the first few are

$$ H_0 = 1 \qquad H_1 = 2\xi \qquad H_2 = 4\xi^2 - 2 \qquad H_3 = 8\xi^3 - 12\xi $$

and one can show that the normalized wave functions are

$$ \psi_n(\xi) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^n\,n!}}\,H_n(\xi)\,e^{-\xi^2/2} \qquad (2.13) $$

with $\xi$ given by Eq. 2.11.

2.5 The Free Particle

Now we turn to a case which might seem even simpler than those we have considered: a particle moving freely, that is, $V(x) = 0$ everywhere. In classical mechanics, a particle would move at constant velocity under these conditions. The TISE for this case is simple:

$$ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi \qquad (2.14) $$

which is a bit more simply written as

$$ \frac{d^2\psi}{dx^2} = -k^2\psi \qquad (2.15) $$

where $k = \sqrt{2mE}/\hbar$. (This is the same DE as we had for the interior of the box.) There, the solution was written in terms of $\sin$ and $\cos$, but here we prefer to use complex exponentials. Thus:

$$ \psi(x) = Ae^{ikx} + Be^{-ikx} $$

But we have no boundary conditions, so there's no reason to discard either of the terms here. It helps our understanding to put back the time dependence in this solution. Thus:

$$ \Psi(x,t) = A\,e^{ik\left(x - \frac{\hbar k}{2m}t\right)} + B\,e^{-ik\left(x + \frac{\hbar k}{2m}t\right)} $$

and now, from our experience (?) with wave solutions, we see that the first term is a (complex) wave moving to the right and the second is a wave moving to the left. The difference between the two terms is the sign associated with $k$ (which was defined as positive), but if we now allow $k$ to be positive or negative, we can write down just one of the terms for simplicity. So our solution for a wave traveling in some direction is

$$ \Psi_k(x,t) = A\,e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)} \qquad\text{with } k = \pm\frac{\sqrt{2mE}}{\hbar} \qquad (2.16) $$

Trying to identify the particle "velocity" for this wave gives a small problem. The speed of the quantum wave is the ratio of the $t$ coefficient to the $x$ coefficient, which gives

$$ v_{\text{quantum}} = \sqrt{\frac{E}{2m}} $$

whereas from the classical formula $E = \frac{1}{2}mv^2$ we have

$$ v_{\text{classical}} = \sqrt{\frac{2E}{m}} $$

The resolution of this "paradox" will be given shortly.

A more serious issue is that this wave function cannot be normalized! The solution in 2.16 gives

$$ \int_{-\infty}^{\infty}\Psi_k^*\Psi_k\,dx = |A|^2\int_{-\infty}^{\infty}1\,dx = \infty \quad (!) $$

which means that this wave function cannot describe a true physical state. So there is no such thing as a free particle moving with a definite energy. However, the solution found here is of enormous use to us mathematically, so we forge ahead...

We can get a proper physical state if we sum over solutions of the type in 2.16. Rather, we do an integral of these functions (over $k$), weighted by a function we'll call $\phi(k)$:

$$ \Psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\phi(k)\,e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}dk $$

where we put a $1/\sqrt{2\pi}$ in front for later convenience. Such a function can be localized in space (and is thus normalizable), but it carries a range of $k$ values and thus it surely carries a range of "speeds", regardless of the precise meaning of "speed"! In particular, at $t = 0$ the wave function we've constructed is

$$ \Psi(x,0) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\phi(k)\,e^{ikx}\,dk $$

and if you have a healthy math background you'd say that $\Psi(x,0)$ is the (inverse) Fourier transform of $\phi(k)$. Or, equivalently, $\phi(k)$ is the Fourier transform of $\Psi(x,0)$.
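A short numerical sketch of this relationship (assumptions: a Gaussian $\Psi(x,0)$ of adjustable width, and NumPy's FFT, which matches the symmetric $1/\sqrt{2\pi}$ convention above only up to a constant phase and scale that don't affect $|\phi(k)|$):

```python
import numpy as np

x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]

def phi_of_k(sigma):
    """Return (k, |phi(k)|) for a Gaussian Psi(x,0) of spatial width sigma."""
    psi0 = np.exp(-x**2 / (4 * sigma**2))
    psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))
    phi = np.fft.fftshift(np.fft.fft(psi0)) * dx / np.sqrt(2 * np.pi)
    k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    return k, np.abs(phi)

for sigma in (0.5, 2.0, 8.0):
    k, absphi = phi_of_k(sigma)
    # Crude width estimate: spread of |phi(k)|² about k = 0
    width_k = np.sqrt(np.trapz(k**2 * absphi**2, k) / np.trapz(absphi**2, k))
    print(f"sigma_x = {sigma:4.1f}  ->  spread in k ~ {width_k:.3f}")
```

Doubling the spatial width halves the spread in $k$: a "thin" $\Psi(x,0)$ has a "broad" $\phi(k)$ and vice versa, which anticipates the discussion just below.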
The Fourier transform is of great importance in the theory of waves, and also in QM (which is also all about waves). It is the mathematical means by which we alternate between the coordinate representation of a wave and the frequency representation of a wave. In QM we might say it is the means by which we go from a coordinate representation of a wave function to a momentum representation of a wave function. A function $F(k)$ and its Fourier transform $f(x)$ are related by Plancherel's theorem, which is

$$ f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}F(k)\,e^{ikx}\,dk \qquad\qquad F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)\,e^{-ikx}\,dx \qquad (2.17) $$

Note the different sign in the exponential in the two transforms. Of course, these integrals have to exist for particular choices of $f(x)$ and $F(k)$; there are lots of juicy mathematical issues. But we push onward...

From this theorem, then, we get:

$$ \phi(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\Psi(x,0)\,e^{-ikx}\,dx $$

In words, this gives the momentum wave function from the initial ($t = 0$) value of the coordinate wave function. We see from working a few examples that if $\Psi(x,0)$ is "thin" then the momentum wave function is "broad", and vice versa. This again brings us close to the content of the uncertainty principle, but we return to the question of the velocity of the wave.

We did find that for our wave $\Psi_k(x,t)$ the velocity of its wiggles is given by

$$ v_{\text{wave}} = \frac{\hbar|k|}{2m} $$

but these are not wiggles in the probability, which is what is actually measured. The velocity we need to be concerned with is the velocity of the wiggles in the probability, and this is called the group velocity, $v_{\text{group}}$. The velocity of the wave itself is properly called the phase velocity. As a concrete example one can consider a wave packet, which has the shape of a modulated but finite wiggle. The group velocity is the velocity of the modulating "envelope". While the rapid wiggles travel at the phase velocity, the envelope travels at the group velocity, and for the free particle one finds $v_{\text{group}} = \hbar k/m = 2\,v_{\text{phase}}$, which matches $v_{\text{classical}}$ and resolves the "paradox" noted above.

2.6 The Delta-Function Potential

For the scattering states ($E > 0$) of the attractive delta-function potential $V(x) = -\alpha\,\delta(x)$, the solutions on either side of the origin are

$$ \psi(x) = \begin{cases} Ae^{ikx} + Be^{-ikx}, & x < 0 \\ Fe^{ikx} + Ge^{-ikx}, & x > 0 \end{cases} \qquad k = \frac{\sqrt{2mE}}{\hbar} $$

We have to solve for these coefficients, but we don't need to solve for the energy, because any positive value is permissible. Continuity of $\psi$ at the origin gives

$$ A + B = F + G $$

If we do the same trick as before and integrate the Schrödinger equation over a small interval around the origin, one can show

$$ F - G = A(1 + 2i\beta) - B(1 - 2i\beta) \qquad\text{where}\qquad \beta \equiv \frac{m\alpha}{\hbar^2 k} $$

but this is only two equations for the four unknowns $A$, $B$, $F$ and $G$. The condition of normalization for localized states does not help us here. We actually have to make a choice of coefficients depending on the "experiment" we are considering. We have seen that the term with $e^{+ikx}$ corresponds to a wave traveling to the right; $e^{-ikx}$ gives a wave traveling to the left. We choose to consider particles (a whole lot of them, in fact) which are incident from the left. They interact with the potential and afterward have probabilities of continuing to travel to the right or of bouncing back and traveling to the left. But with this choice there are no particles at $x > 0$ which are traveling to the left; particles only come in from the left side. This means that $G = 0$ in our solution, but the other coefficients are not zero. Then the matching conditions give

$$ B = \frac{i\beta}{1 - i\beta}\,A \qquad\text{and}\qquad F = \frac{1}{1 - i\beta}\,A $$

Now a further bit of interpretation for this result. The incoming wave represents the motion of a single particle. (Though experimental setups send a "beam" of particles toward a target, the incoming wave here does not represent more than one particle!) Certainly, in the elementary sense of the word, $A$ is the amplitude of the incoming wave, $B$ is the amplitude of the reflected wave and $F$ is the amplitude of the transmitted wave.
If we take these values also to be quantum-mechanical amplitudes for the different processes, then squaring them should give probabilities, and we can take ratios to get the relative probabilities of reflection and transmission. Thus the relative probability for reflection is

$$ R \equiv \frac{|B|^2}{|A|^2} = \frac{\beta^2}{1 + \beta^2} $$

and the relative probability for transmission is

$$ T \equiv \frac{|F|^2}{|A|^2} = \frac{1}{1 + \beta^2} $$

These are also called the reflection and transmission coefficients. They give $R + T = 1$, as they should. Substituting for $\beta$ gives the results

$$ R = \frac{1}{1 + (2\hbar^2 E/m\alpha^2)} \qquad\qquad T = \frac{1}{1 + (m\alpha^2/2\hbar^2 E)} $$

(These results can be made more rigorous if one wants; the point is that we must be very careful about how we handle non-normalizable states and what the solutions mean.)

One can also solve the problem of the positive delta-function potential, i.e. a delta-function barrier. There are no bound states for that potential, but an incident wave will have a probability of passing through the barrier, in fact the same probability as for the well, since $R$ and $T$ depend only on $\alpha^2$. So the particle has a probability of passing through an infinitely high (though very thin) barrier. In classical motion, a particle cannot get through a barrier if its energy is less than the maximum potential "height" of that barrier. The phenomenon of quantum motion through barriers where it is forbidden classically is known as tunneling.
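A short sketch (plain Python with NumPy; the strength α, mass, and energy values are arbitrary illustration choices in natural units) evaluating $R$ and $T$ as functions of $E$ and confirming $R + T = 1$:

```python
import numpy as np

hbar, m, alpha = 1.0, 1.0, 1.0           # natural units, arbitrary strength
E = np.linspace(0.1, 10.0, 5)            # a few scattering energies

R = 1.0 / (1.0 + 2 * hbar**2 * E / (m * alpha**2))
T = 1.0 / (1.0 + m * alpha**2 / (2 * hbar**2 * E))

for e, r, t in zip(E, R, T):
    print(f"E = {e:5.2f}   R = {r:.4f}   T = {t:.4f}   R+T = {r + t:.4f}")
```

As expected, transmission dominates at high energy while reflection dominates as $E \to 0$.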
2.7 Finite Square Well

Now consider the finite square well potential

$$ V(x) = \begin{cases} -V_0, & -a < x < a \\ 0, & |x| > a \end{cases} \qquad (2.18) $$

where $V_0$ is a positive constant. Again we expect to get both bound and scattering states. This problem requires more algebra to solve than the delta-function potential, but the math is not hard.

First, look for bound states, for which $E < 0$. As before, define

$$ \kappa \equiv \frac{\sqrt{-2mE}}{\hbar} $$

Outside the well, where $V = 0$, the TISE can be written as

$$ \frac{d^2\psi}{dx^2} = \kappa^2\psi $$

which has exponential solutions. Choosing the ones with the proper behavior as $x \to \pm\infty$ gives

$$ \psi(x) = \begin{cases} Be^{\kappa x}, & x < -a \\ Fe^{-\kappa x}, & x > a \end{cases} $$

In the region $-a < x < a$, with $V = -V_0$, the TISE is

$$ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - V_0\psi = E\psi $$

Then if we define

$$ \ell \equiv \frac{\sqrt{2m(E + V_0)}}{\hbar} $$

it can be written

$$ \frac{d^2\psi}{dx^2} = -\ell^2\psi $$

and this is a simple DE. Here we do want to use trig functions, and the solution is

$$ \psi(x) = C\sin(\ell x) + D\cos(\ell x) \qquad\text{for } -a < x < a $$

and now we have 5 (!) unknowns to solve for: $B$, $F$, $C$, $D$ and the energy $E$ (buried inside $\kappa$). We will find them through the boundary conditions, normalization and symmetry.

First, symmetry (an important tool in quantum mechanics). Certainly the potential is symmetric under $x \leftrightarrow -x$. Are the solutions symmetric in some way? One can show that for a symmetric potential one can (at least) choose the solutions to be either symmetric or anti-symmetric, that is:

$$ \psi(-x) = \begin{cases} +\psi(x) & \text{symmetric} \\ -\psi(x) & \text{anti-symmetric} \end{cases} $$

One could have gotten this fact from the boundary conditions, but it is useful to understand, and it does save some work. The condition of symmetry gives $B = \pm F$.

We will now explicitly look for symmetric solutions of the finite square well. (The well will always have such a solution. It may also have anti-symmetric solutions; that case is dealt with in one of the problems.) So then we have $B = F$, and symmetry also gives $C = 0$, since $\sin(\ell x)$ is an odd function. Our solution is now

$$ \psi(x) = \begin{cases} Fe^{+\kappa x}, & x < -a \\ D\cos(\ell x), & -a < x < a \\ Fe^{-\kappa x}, & x > a \end{cases} $$

Applying continuity of the wave function and its derivative at $x = a$ gives

$$ Fe^{-\kappa a} = D\cos(\ell a) \qquad\qquad -\kappa Fe^{-\kappa a} = -\ell D\sin(\ell a) $$

(Note: For this problem the potential $V(x)$ makes a finite jump but otherwise is not pathological at the boundary. We are thus also required to make $\psi(x)$ smooth there, so the derivative is continuous.)

One can then show that with the definitions

$$ z \equiv \ell a \qquad\qquad z_0 \equiv \frac{a}{\hbar}\sqrt{2mV_0} $$

we have the condition

$$ \tan z = \sqrt{\left(\frac{z_0}{z}\right)^2 - 1} \qquad (2.19) $$

From this transcendental equation (transcendental meaning that it's mathematically messy, not that you're going to meditate on it) we can get $z$ and then the value of $E$. A useful strategy is to plot both sides of 2.19 together and note where the curves intersect (see text). Of course, computers can give a numerical solution. We see that even a "simple" problem can require some messy math for a solution! Such is life.

We can consider limiting cases of the potential to check these results:

• A wide, deep well (deep... and wide..., deeeep and wide...): In this case $z_0$ is big. One can see graphically that there are many crossings of the left and right sides of 2.19, always occurring just below $z_n \equiv n\pi/2$. For these solutions, one can show

$$ E_n + V_0 \approx \frac{n^2\pi^2\hbar^2}{2m(2a)^2} \qquad\text{where } n \text{ is odd;} $$

that is to say, these are the energies of the states measured from the bottom of the well. This agrees with our previous box result. But here we miss half the solutions because we've only considered symmetric states. (See Prob. 2.29 for the rest.)
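A sketch of the numerical route (assumptions: an arbitrary value of $z_0$ in natural units, and SciPy's brentq root-finder, bracketing each branch of $\tan z$ just below its asymptote):

```python
import numpy as np
from scipy.optimize import brentq

def symmetric_levels(z0):
    """Solve tan(z) = sqrt((z0/z)^2 - 1) on each branch 0 < z < z0 (Eq. 2.19)."""
    f = lambda z: np.tan(z) - np.sqrt((z0 / z)**2 - 1.0)
    roots, n = [], 0
    while True:
        lo = n * np.pi + 1e-9                               # start of a tan branch
        hi = min(n * np.pi + np.pi / 2 - 1e-9, z0 - 1e-9)   # end of branch or z = z0
        if lo >= hi:
            break
        if f(lo) < 0 < f(hi) or f(hi) < 0 < f(lo):
            roots.append(brentq(f, lo, hi))
        n += 1
    return roots

z0 = 8.0                                    # (a/ħ) sqrt(2 m V0), arbitrary
for z in symmetric_levels(z0):
    # E + V0 = ħ²ℓ²/(2m) = (ħ²/2ma²) z², so in units of V0 it is (z/z0)²
    print(f"z = {z:.4f}   (E + V0)/V0 = {(z / z0)**2:.4f}")
```

For a large $z_0$ the roots indeed sit just below the odd multiples of $\pi/2$, reproducing the wide, deep well limit above.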
Chapter 3  Formalism

3.1 Introduction

We've now covered the basics of the meaning of the wave function and gotten tons o' practice by solving the 1-D Schrödinger equation for a good assortment of potentials. What's left? Well, the obvious things: we need to solve problems in three dimensions, we need to deal with multi-particle systems, and we need to learn about spin, a dynamical quantity which occurs in the quantum world. And more mathematical methods. But we also need a firmer grounding in the fundamentals of quantum theory. This is because we will encounter other important dynamical quantities (possibly abstract, like spin) and we need to know how to work with their operators and associated quantum states. We need to know how to deal with any "observable" in QM beyond the ones we've seen: $x$ and $p$.

3.2 Hilbert Space

Quantum mechanics is, as we've seen, based on wave functions and operators. The wave functions are treated much like abstract vectors (as we saw from considering linear combinations of them), and the operators are linear operators, which makes us think of matrices. This leads us to the branch of mathematics known as linear algebra. ("The natural language of quantum mechanics", as Griffiths sez.) Alas, most of us haven't learned linear algebra so well by the time we get to QM. (Were it not for QM it might be totally neglected in our math-physics education.) In any case, it is necessary to learn some of it. Griffiths has a big, important appendix giving a "refresher" course in linear algebra.

First, we have vectors in an $N$-dimensional space, represented by an $N$-tuple of (complex) components:

$$ \text{Vector } |\alpha\rangle \ \Longrightarrow\ \mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{pmatrix} $$

Two vectors have an inner product (also called a scalar product),

$$ \langle\alpha|\beta\rangle = a_1^* b_1 + a_2^* b_2 + \cdots + a_N^* b_N. $$

Note the complex conjugation of the components of the first vector. The inner product is not commutative: switch the order and the result is complex-conjugated.

Vectors are transformed by linear operators, i.e. linear transformations. These transformations are represented by matrices. When we transform vector $|\alpha\rangle$ to vector $|\beta\rangle$ by means of the transformation $T$ (represented by the matrix $\mathsf{T}$), we have:

$$ |\beta\rangle = T|\alpha\rangle \ \Longrightarrow\ \mathbf{b} = \mathsf{T}\mathbf{a} = \begin{pmatrix} t_{11} & t_{12} & \cdots & t_{1N} \\ t_{21} & t_{22} & \cdots & t_{2N} \\ \vdots & \vdots & & \vdots \\ t_{N1} & \cdots & \cdots & t_{NN} \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{pmatrix} $$

As written here, $|\alpha\rangle$ and $|\beta\rangle$ denote the abstract vectors and $T$ the abstract operator, while $\mathbf{a}$, $\mathbf{b}$ and $\mathsf{T}$ give their representations in a particular basis. In linear algebra one studies the types of matrices which can arise in linear transformations and the operations we perform with and on these matrices. In some of this work we study how we can change between different bases, giving different representations.

The "vectors" in QM are the wave functions. The ones we've seen are functions of $x$ (could it be otherwise?), so these are vectors with an infinite number of components (that is, the values at each point, of which there are an infinite number). So things get very abstract here! What we need to do to adapt the usual language of linear algebra to these new vectors is change from the discrete index of regular vectors to the continuous index of functions. In doing this, sums go over to integrals. Actually, later on, when we encounter spin (more generally, angular momentum), we will go back to the familiar finite-dimensional vector space.

A vector (wave function) that we encounter in QM must be normalizable, since we demand $\int\Psi^*\Psi\,dx = 1$. So we will deal with a special class of functions, those for which

$$ \int |f(x)|^2\,dx < \infty $$

We very loosely refer to these functions as our "vectors" in a Hilbert space and hope that no mathematicians are listening to our sloppy definitions. These vectors (functions) have an inner product defined as

$$ \langle f|g\rangle = \int_a^b f(x)^*g(x)\,dx $$

(where $a$ and $b$ are limits appropriate for the vector space), which must exist for functions in this space. Note,

$$ \langle f|g\rangle = \langle g|f\rangle^* $$

so the inner product is not commutative. A function $f$ is normalized if $\langle f|f\rangle = 1$. A set of basis functions in the space is orthonormal if

$$ \langle f_m|f_n\rangle = \delta_{mn} $$

A set of functions $f_n$ in the space is complete if any other function $f(x)$ in the space can be expressed as a linear combination of them:

$$ f(x) = \sum_{n=1}^{\infty} c_n f_n(x) $$

If the set of functions here is orthonormal, then the $c_n$'s can be found by "Fourier's trick",

$$ c_n = \langle f_n|f\rangle $$

3.3 Observables

We use a similar notation for the expectation value of an observable $Q(x,p)$:

$$ \langle Q\rangle = \int\Psi^*\hat{Q}\Psi\,dx = \langle\Psi|\hat{Q}\Psi\rangle. $$

$\hat{Q}$ will always be a linear operator. Since $\langle Q\rangle$ is the average of many measurements, it must be real: $\langle Q\rangle = \langle Q\rangle^*$. This implies

$$ \langle\Psi|\hat{Q}\Psi\rangle = \langle\Psi|\hat{Q}\Psi\rangle^* = \langle\hat{Q}\Psi|\Psi\rangle $$

for any function $\Psi$ in the space. So operators corresponding to observables must have the property

$$ \langle f|\hat{Q}f\rangle = \langle\hat{Q}f|f\rangle \qquad\text{for all } f(x). $$

Such an operator is said to be Hermitian. So a Hermitian operator can be applied to either the second or the first member of an inner product with the same result. The operator $\hat{x} = x$ clearly is Hermitian, since it simply multiplies. One can show that the momentum operator $\hat{p} = \frac{\hbar}{i}\frac{\partial}{\partial x}$ is Hermitian; but note, the derivative operator $\frac{\partial}{\partial x}$ by itself is not Hermitian.
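We can see this concretely on a grid (a sketch, not from the text: a finite-dimensional stand-in for the operators, using a central-difference derivative with periodic boundary conditions so that the discrete $d/dx$ is exactly anti-symmetric):

```python
import numpy as np

N, L = 200, 10.0
dx = L / N
hbar = 1.0

# Central-difference d/dx with periodic boundary conditions (an N x N matrix)
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D[0, -1], D[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)

P = (hbar / 1j) * D                                   # momentum operator p = (ħ/i) d/dx
X = np.diag(np.linspace(0, L, N, endpoint=False))     # position operator (multiplication)

print("d/dx Hermitian?", np.allclose(D, D.conj().T))  # False: d/dx alone is anti-Hermitian
print("p    Hermitian?", np.allclose(P, P.conj().T))  # True: the factor 1/i fixes it
print("x    Hermitian?", np.allclose(X, X.conj().T))  # True: a real diagonal matrix
```

The factor of $1/i$ is exactly what turns the anti-Hermitian derivative into the Hermitian momentum operator.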
3.4 Determinate States

Measuring the value of the observable $Q$ on an ensemble of particles all prepared in the same state in general does not give the same value every time; generally we can only predict the expectation value $\langle Q\rangle$. Quantum mechanics is in this sense indeterminate. But could we prepare a state such that it did give the same value for $Q$ each time? Such a state would be a determinate state for the observable $Q$. Actually, we've seen this already: stationary states are determinate states of the Hamiltonian.

For such a state, where each measurement of $Q$ gives the value $q$, the standard deviation of $Q$ would be zero. One can then show that for this state $\Psi$,

$$ \hat{Q}\Psi = q\Psi $$

This is an eigenvalue equation for the operator $\hat{Q}$; $\Psi$ is an eigenfunction of $\hat{Q}$, with corresponding eigenvalue $q$. We note that an eigenvalue is a number. An eigenfunction can be multiplied by a number and it will still be an eigenfunction (with the same eigenvalue). An eigenfunction can't be zero (otherwise it wouldn't be a legal vector for our states), but an eigenvalue can certainly be zero. Sometimes several independent eigenfunctions have the same eigenvalue; in such a case we say the eigenfunctions are degenerate.

In the TISE we had $\hat{H}\psi = E\psi$; $E$ is the eigenvalue of the $\hat{H}$ operator. (And the full wave function, $\Psi(x,t) = \psi(x)e^{-iEt/\hbar}$, is still an eigenfunction of $\hat{H}$.)

3.6 General Statistical Interpretation

The wave function $\Psi(x,t)$ was introduced in Chapter 1 as something we would use to find the probability of a measurement of the coordinate $x$. We will now discuss the general theory of probabilities for the measurement of any dynamical quantity. Actually, these are the postulates of QM; we haven't derived them. But up to now we have gotten some experience with the mathematics needed to know what they are talking about!

If you measure the quantity $Q(x,p)$ of a particle in the state $\Psi(x,t)$, you will get one of the eigenvalues of the Hermitian operator $\hat{Q}(x,\frac{\hbar}{i}\frac{d}{dx})$. If the spectrum of $\hat{Q}$ is discrete, the probability of getting the eigenvalue $q_n$ is $|c_n|^2$, where

$$ c_n = \langle f_n|\Psi\rangle $$

If the spectrum of $\hat{Q}$ is continuous, the probability of getting an eigenvalue in the range $dz$ around $z$ is $|c(z)|^2\,dz$, where

$$ c(z) = \langle f_z|\Psi\rangle $$

Upon measurement of $Q$, the wave function collapses to the corresponding eigenstate.

3.7 The Uncertainty Principle

We've already examined (but not proven) a special case; we found that

$$ \sigma_x\sigma_p \ge \frac{\hbar}{2} $$

for any state satisfying the Schrödinger equation. For any two observables $A$ and $B$, with the commutator of their operators given by

$$ [\hat{A},\hat{B}] \equiv \hat{A}\hat{B} - \hat{B}\hat{A}, $$

one can show that for any quantum state $\psi$,

$$ \sigma_A^2\,\sigma_B^2 \ge \left(\frac{1}{2i}\left\langle[\hat{A},\hat{B}]\right\rangle\right)^2 \qquad (3.1) $$

This is the generalized uncertainty principle. With $\hat{A} = x$ and $\hat{B} = \hat{p} = \frac{\hbar}{i}\frac{d}{dx}$, using $[\hat{x},\hat{p}] = i\hbar$, Eq. 3.1 gives

$$ \sigma_x^2\,\sigma_p^2 \ge \left(\frac{\hbar}{2}\right)^2 \ \Longrightarrow\ \sigma_x\sigma_p \ge \frac{\hbar}{2} $$

3.7.1 Minimum Uncertainty Wavepacket

In a couple of the problems we have already encountered quantum states which give the minimum uncertainty product. They were Gaussian wavepackets, but can we show that this must be the case? Griffiths shows that if we demand that the uncertainty product equal its minimum value $\hbar/2$, then we can produce a differential equation for $\psi$:

$$ \left(\frac{\hbar}{i}\frac{d}{dx} - \langle p\rangle\right)\psi = ia\,(x - \langle x\rangle)\,\psi $$

for some constant $a$. This has the solution

$$ \psi(x) = A\,e^{-a(x-\langle x\rangle)^2/2\hbar}\,e^{i\langle p\rangle x/\hbar} $$

which, as in our examples, is a Gaussian wavepacket.
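A quick numerical confirmation (a sketch with NumPy; the constants a, x0, p0 in the minimum-uncertainty form above are arbitrary) that such a Gaussian saturates the bound, i.e. $\sigma_x\sigma_p = \hbar/2$:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-30.0, 30.0, 8001)
a, x0, p0 = 0.7, 1.0, 2.0                  # arbitrary constants in the minimum-uncertainty form

psi = np.exp(-a * (x - x0)**2 / (2 * hbar)) * np.exp(1j * p0 * x / hbar)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

dpsi  = np.gradient(psi, x)
d2psi = np.gradient(dpsi, x)

x_avg  = np.trapz(x * np.abs(psi)**2, x)
x2_avg = np.trapz(x**2 * np.abs(psi)**2, x)
p_avg  = np.trapz(np.conj(psi) * (hbar / 1j) * dpsi, x).real
p2_avg = np.trapz(np.conj(psi) * (-hbar**2) * d2psi, x).real

sigma_x = np.sqrt(x2_avg - x_avg**2)
sigma_p = np.sqrt(p2_avg - p_avg**2)
print(f"sigma_x * sigma_p = {sigma_x * sigma_p:.4f}   (hbar/2 = {hbar / 2})")
```

Up to the small error of the finite-difference derivatives, the product comes out right at $\hbar/2$.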
3.8 Energy-Time Uncertainty

One can show:

$$ \frac{d}{dt}\langle Q\rangle = \frac{i}{\hbar}\left\langle[\hat{H},\hat{Q}]\right\rangle + \left\langle\frac{\partial\hat{Q}}{\partial t}\right\rangle \qquad (3.2) $$

(Rant on the abuse of the energy-time uncertainty relation too long to be included in the notes. Perhaps next year.)

3.9 Dirac Notation

We can now express the general quantum theory, taking things far beyond our initial simplistic view of the quantum state. We work in analogy with finding the components of a vector $\mathbf{A}$. To get them we take the dot product of $\mathbf{A}$ with the different unit vectors:

$$ A_x = \hat{\imath}\cdot\mathbf{A} \qquad\qquad A_y = \hat{\jmath}\cdot\mathbf{A} $$

But if we consider a coordinate system where the axes are rotated (call it the $(x',y')$ system), then the components are given by

$$ A'_x = \hat{\imath}'\cdot\mathbf{A} \qquad\qquad A'_y = \hat{\jmath}'\cdot\mathbf{A} $$

The idea here is that the vector $\mathbf{A}$ exists independent of our choice of coordinates, but to express that vector we must find its components using a particular choice of coordinates, and those components are found by taking the dot product with the basis vectors.

Chapter 4  The Schrödinger Equation in Three Dimensions

4.3 The Angular Equation

Rearranging, we have

$$ \sin\theta\,\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial Y}{\partial\theta}\right) + \frac{\partial^2 Y}{\partial\phi^2} = -\ell(\ell+1)\sin^2\theta\;Y $$

This equation might look familiar! It also occurs in E&M when we solve the Laplace equation in spherical coordinates, except that we now include a $\phi$ dependence which is not considered in our E&M book. (The problems didn't need it.) Here in QM we must consider the $\phi$ dependence of the wave function.

As this is still a differential equation in two coordinates, we do a further separation of variables and write

$$ Y(\theta,\phi) = \Theta(\theta)\,\Phi(\phi) $$

Again we substitute, split up the equation, and set both sides equal to the same constant, which this time we call $m^2$ (again, for good reasons to be seen later). The resulting separated equations are

$$ \frac{1}{\Theta}\left[\sin\theta\,\frac{d}{d\theta}\!\left(\sin\theta\,\frac{d\Theta}{d\theta}\right)\right] + \ell(\ell+1)\sin^2\theta = m^2 \qquad\qquad \frac{1}{\Phi}\frac{d^2\Phi}{d\phi^2} = -m^2 $$

The first one is still pretty messy, but the second one is easy! The solution for $\Phi$ is

$$ \frac{d^2\Phi}{d\phi^2} = -m^2\Phi \ \Longrightarrow\ \Phi(\phi) = e^{im\phi} $$

where we "absorb" the multiplicative constant into the other factors and deal with the possibilities $e^{\pm im\phi}$ by letting $m$ be positive or negative. We chose the complex form here because it is easier to work with; wave functions don't have to be real! We do require that the wave function have one value at each point in space, so $\Phi$ must return to the same value if we increase $\phi$ by $2\pi$. One can show that this implies

$$ m = 0, \pm 1, \pm 2, \ldots $$

Using this, we go back to the $\Theta$ equation, which we now write as

$$ \sin\theta\,\frac{d}{d\theta}\!\left(\sin\theta\,\frac{d\Theta}{d\theta}\right) + \left[\ell(\ell+1)\sin^2\theta - m^2\right]\Theta = 0 \qquad (4.1) $$

We won't even attempt to "derive" the solutions to this. It's enough to present them and give their properties. The solution to 4.1 is

$$ \Theta(\theta) = A\,P_\ell^m(\cos\theta) $$

where $P_\ell^m$ is the associated Legendre function, defined by

$$ P_\ell^m(x) = (1-x^2)^{|m|/2}\left(\frac{d}{dx}\right)^{|m|}P_\ell(x) $$

and where $m$ can take on the $2\ell+1$ values

$$ m = -\ell, -\ell+1, \ldots, \ell $$

Here the $P_\ell(x)$ are the familiar (?) Legendre polynomials, which in fact are given by the Rodrigues formula,

$$ P_\ell(x) = \frac{1}{2^\ell\,\ell!}\left(\frac{d}{dx}\right)^\ell(x^2-1)^\ell $$

of which the first few are

$$ P_0(x) = 1 \qquad P_1(x) = x \qquad P_2(x) = \tfrac{1}{2}(3x^2-1) $$

Note that $P_\ell^m(x)$ is in general not a polynomial; e.g. for $\ell = 2$,

$$ P_2^0(x) = \tfrac{1}{2}(3x^2-1) \qquad P_2^1(x) = 3x\sqrt{1-x^2} \qquad P_2^2(x) = 3(1-x^2) $$

and by the definitions we are using, $P_\ell^{-m} = P_\ell^m$ and $P_\ell^0 = P_\ell$, but conventions differ amongst the reference books! While the $P_\ell^m$'s look messy, if we substitute $x = \cos\theta$ and $\sqrt{1-x^2} = \sin\theta$ then we have

$$ P_0^0 = 1 \qquad P_1^0 = \cos\theta \qquad P_1^1 = \sin\theta $$
$$ P_2^0 = \tfrac{1}{2}(3\cos^2\theta - 1) \qquad P_2^1 = 3\sin\theta\cos\theta \qquad P_2^2 = 3\sin^2\theta $$

Polar plots of these functions remind one of the orbitals from basic chemistry... for good reason, as we'll see eventually.

We should note that since the $\theta$ equation is second order, there should be two independent solutions. The others, called $Q_\ell(x)$, blow up at $\theta = 0$ and $\theta = \pi$, so they're not permitted.
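A small symbolic check of that table (a sketch using SymPy; it implements exactly the definition written above, via the Rodrigues formula, rather than calling a library associated-Legendre routine, since sign conventions vary between references):

```python
import sympy as sp

x = sp.symbols('x')

def P(l, m):
    """Associated Legendre function, P_l^m(x) = (1-x^2)^{|m|/2} (d/dx)^{|m|} P_l(x)."""
    m = abs(m)
    Pl = sp.diff((x**2 - 1)**l, x, l) / (2**l * sp.factorial(l))   # Rodrigues formula
    return sp.simplify((1 - x**2)**sp.Rational(m, 2) * sp.diff(Pl, x, m))

for l, m in [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]:
    print(f"P_{l}^{m}(x) =", P(l, m))
```

The output reproduces the $x$-forms listed above ($P_2^1 = 3x\sqrt{1-x^2}$, and so on), which become the $\cos\theta$/$\sin\theta$ table after the substitution $x = \cos\theta$.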
Now we put $\Theta(\theta)$ and $\Phi(\phi)$ together to get the full angular wave function, which we will call $Y(\theta,\phi)$. We need to choose how the product $\Theta\Phi$ is normalized. Of course we still have to find $R(r)$, but when we do, the normalization condition on the whole wave function will be

$$ \int|\psi|^2\,d^3r = \int_0^\infty|R|^2 r^2\,dr\int_0^{2\pi}\!\!\int_0^\pi|Y|^2\sin\theta\,d\theta\,d\phi = 1 $$

and for later convenience we will choose things so that the separate factors are normalized:

$$ \int_0^\infty|R|^2 r^2\,dr = 1 \qquad\text{and}\qquad \int_0^{2\pi}\!\!\int_0^\pi|Y|^2\sin\theta\,d\theta\,d\phi = 1 $$

The following formula will do the job:

$$ Y_\ell^m(\theta,\phi) = \epsilon\sqrt{\frac{(2\ell+1)}{4\pi}\frac{(\ell-|m|)!}{(\ell+|m|)!}}\;e^{im\phi}\,P_\ell^m(\cos\theta) \qquad\text{where}\qquad \epsilon = \begin{cases} (-1)^m & m \ge 0 \\ 1 & m \le 0 \end{cases} \qquad (4.2) $$

The $Y_\ell^m$'s are called the spherical harmonics, and they are of enormous importance in theoretical physics. Note that different references may use phases different from Eq. 4.2. The spherical harmonics are mutually orthonormal:

$$ \int_0^{2\pi}\!\!\int_0^\pi Y_\ell^m(\theta,\phi)^*\,Y_{\ell'}^{m'}(\theta,\phi)\,\sin\theta\,d\theta\,d\phi = \delta_{\ell\ell'}\,\delta_{mm'} $$

The first few are:

$$ Y_0^0 = \sqrt{\frac{1}{4\pi}} \qquad Y_1^0 = \sqrt{\frac{3}{4\pi}}\cos\theta \qquad Y_1^{\pm1} = \mp\sqrt{\frac{3}{8\pi}}\sin\theta\,e^{\pm i\phi} $$
$$ Y_2^0 = \sqrt{\frac{5}{16\pi}}\,(3\cos^2\theta - 1) \qquad Y_2^{\pm1} = \mp\sqrt{\frac{15}{8\pi}}\sin\theta\cos\theta\,e^{\pm i\phi} \qquad \text{etc.} $$

Note that $Y_0^0$ is not equal to 1.
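A quick numerical check of the orthonormality statement (a sketch: the $Y$'s are typed in directly from the table above and integrated on a simple grid with NumPy):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 400)
phi = np.linspace(0.0, 2 * np.pi, 400)
TH, PH = np.meshgrid(theta, phi, indexing='ij')

Y = {
    (0, 0): np.sqrt(1 / (4 * np.pi)) * np.ones_like(TH, dtype=complex),
    (1, 0): np.sqrt(3 / (4 * np.pi)) * np.cos(TH) + 0j,
    (1, 1): -np.sqrt(3 / (8 * np.pi)) * np.sin(TH) * np.exp(1j * PH),
    (2, 0): np.sqrt(5 / (16 * np.pi)) * (3 * np.cos(TH)**2 - 1) + 0j,
}

def inner(a, b):
    # ∫∫ a* b sinθ dθ dφ by the trapezoid rule
    integrand = np.conj(a) * b * np.sin(TH)
    return np.trapz(np.trapz(integrand, phi, axis=1), theta)

for k1 in Y:
    row = "   ".join(f"<Y{k1}|Y{k2}> = {inner(Y[k1], Y[k2]).real:7.4f}" for k2 in Y)
    print(row)
```

The diagonal entries come out as 1 and the off-diagonal ones as 0, as claimed.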
We will have more to say about the angular function $Y(\theta,\phi)$ and its relation to the angular momentum of a particle later on. For now, we return to the equation for the radial function $R(r)$.

4.4 The Radial Equation

The Schrödinger equation for a particular radial potential $V(r)$ gives the radial equation:

$$ \frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right) - \frac{2mr^2}{\hbar^2}\left[V(r) - E\right]R = \ell(\ell+1)R $$

And how do we solve this?? A useful trick is to define a sort of reduced radial wave function $u(r)$,

$$ u(r) \equiv rR(r). $$

We then get

$$ -\frac{\hbar^2}{2m}\frac{d^2u}{dr^2} + \left[V + \frac{\hbar^2}{2m}\frac{\ell(\ell+1)}{r^2}\right]u = Eu \qquad (4.3) $$

We can note that 4.3 has the same form as a one-dimensional Schrödinger equation, except that the potential is replaced by the effective potential

$$ V_{\text{eff}} = V + \frac{\hbar^2}{2m}\frac{\ell(\ell+1)}{r^2} $$

where the extra term is called the centrifugal term. (We have seen something similar in classical mechanics for the central force problem!) The term acts like a potential which pushes the particle outward, just like the corresponding pseudo-force in classical mechanics. Note that with this definition of $u(r)$, the normalization condition on the radial function is $\int_0^\infty|u|^2\,dr = 1$.

That's all we can do with the radial function until we make a specific choice for the potential $V(r)$, i.e. the physical problem we want to solve!

4.5 Example: Spherical "Box"

As an example leading to the more complicated H atom, Griffiths considers the case of the potential

$$ V(r) = \begin{cases} 0, & r < a \\ \infty, & r > a \end{cases} $$

which we might call a spherical infinite well, or something.

4.6 The H Atom

Though there are indeed scattering solutions for the Coulomb problem (the solution of which is a notoriously sticky point in the theory of scattering), we are definitely dealing with bound-state solutions here, so we can assume $E < 0$. We will make the definitions

$$ \kappa = \frac{\sqrt{-2mE}}{\hbar} \qquad\qquad \rho = \kappa r \qquad\qquad \rho_0 = \frac{me^2}{2\pi\epsilon_0\hbar^2\kappa} $$

The reduced radii $\rho$ and $\rho_0$ are dimensionless, allowing us to focus on the math without physical constants flying around. Substituting into the radial equation for the Coulomb potential (Eq. 4.4) gives

$$ \frac{d^2u}{d\rho^2} = \left[1 - \frac{\rho_0}{\rho} + \frac{\ell(\ell+1)}{\rho^2}\right]u \qquad (4.5) $$

which is (superficially) simpler than 4.4 but is still a nasty differential equation. At this point we recall our strategy for solving the HO differential equation. We looked at the behavior of the DE (and thus its solutions) at large distance. As we might have expected, it turned out to be some kind of exponential decay, which we then factored out of the solution; we then solved for the remaining part. For a radial function we actually have two extremes, one at large distance and another at $r \to 0$ (that is, very small $r$).

For large $\rho$ (large $r$), the radial equation and its solution become

$$ \frac{d^2u}{d\rho^2} = u \ \Longrightarrow\ u(\rho) = Ae^{-\rho} + Be^{\rho} $$

for which only the $e^{-\rho}$ term is acceptable. As $\rho \to 0$ we get

$$ \frac{d^2u}{d\rho^2} = \frac{\ell(\ell+1)}{\rho^2}\,u \ \Longrightarrow\ u(\rho) = C\rho^{\ell+1} + D\rho^{-\ell} $$

for which we can keep only the $\rho^{\ell+1}$ term for proper behavior at small $\rho$. Now the idea is to pull off the factors for these limiting solutions and solve for what's left. Define:

$$ u(\rho) = \rho^{\ell+1}\,e^{-\rho}\,v(\rho) \qquad (4.6) $$

and now solve for $v(\rho)$. One might hope that the equation for it is simpler, but that isn't really the case. Looking ahead, there is a restriction on $v(\rho)$ in that it can't be such that $u(\rho)$ blows up at large $\rho$. From 4.6, $v(\rho)$ can't "overpower" the factor $e^{-\rho}$. Substitution gives

$$ \rho\frac{d^2v}{d\rho^2} + 2(\ell+1-\rho)\frac{dv}{d\rho} + \left[\rho_0 - 2(\ell+1)\right]v = 0 \qquad (4.7) $$

The only thing we can do now is to write a power series expansion for $v(\rho)$,

$$ v(\rho) = \sum_{j=0}^{\infty}c_j\,\rho^j $$

and try to solve for the $c_j$'s. Substitution of the series solution gives the recurrence relation

$$ c_{j+1} = \left[\frac{2(j+\ell+1) - \rho_0}{(j+1)(j+2\ell+2)}\right]c_j $$

which will give all of the $c_j$'s if we have the first one. But $\rho_0$ remains unknown, because we don't have a value for the energy $E$. What fixes this value? It is the condition mentioned above, that $v(\rho)$ can't overpower the factor $e^{-\rho}$. One can show that unless the series for $v(\rho)$ stops ("truncates") at some term, the series will give a function like $e^{2\rho}$, which we can't allow. So there has to be a maximal index $j_{\max}$ for which we get

$$ c_{j_{\max}+1} = 0 \ \Longrightarrow\ 2(j_{\max} + \ell + 1) - \rho_0 = 0 $$

If we define $n \equiv j_{\max} + \ell + 1$, then $\rho_0 = 2n$, and we can derive a condition on the energy $E$:

$$ E = -\frac{\hbar^2\kappa^2}{2m} = -\frac{me^4}{8\pi^2\epsilon_0^2\hbar^2\rho_0^2} $$

which then gives

$$ E_n = -\left[\frac{m}{2\hbar^2}\left(\frac{e^2}{4\pi\epsilon_0}\right)^2\right]\frac{1}{n^2} \equiv \frac{E_1}{n^2} \qquad\text{for } n = 1, 2, 3, \ldots \qquad (4.8) $$

In 4.8, $n$ takes those values because $j_{\max}$ and $\ell$ can possibly be zero. We can scrape up everything and write down the wave functions, remembering that $\rho$ was defined using $\kappa$:

$$ \kappa = \left(\frac{me^2}{4\pi\epsilon_0\hbar^2}\right)\frac{1}{n} \equiv \frac{1}{an} \qquad\text{where}\qquad a \equiv \frac{4\pi\epsilon_0\hbar^2}{me^2} = 0.529\times10^{-10}\ \text{m} $$

and $a$ is the Bohr radius, namely the radius of the smallest orbit of the simple Bohr model. The wave function for the state with the three quantum numbers $(n,\ell,m)$ is

$$ \psi_{n\ell m}(\mathbf{r}) = R_{n\ell}(r)\,Y_\ell^m(\theta,\phi) \qquad\text{where}\qquad R_{n\ell}(r) = \frac{1}{r}\,\rho^{\ell+1}e^{-\rho}\,v(\rho) $$

and $v(\rho)$ is a polynomial of degree $j_{\max} = n - \ell - 1$ in $\rho$, with the recursion relation

$$ c_{j+1} = \frac{2(j+\ell+1-n)}{(j+1)(j+2\ell+2)}\,c_j $$

4.6.3 The States of the H Atom

The ground state has $n = 1$ and gives $E_1 = -13.6$ eV, so the binding energy of H in the ground state is 13.6 eV. The ground-state wave function is

$$ \psi_{100}(r,\theta,\phi) = R_{10}(r)\,Y_0^0(\theta,\phi) = \frac{1}{\sqrt{\pi a^3}}\,e^{-r/a} \qquad (4.9) $$

For $n = 2$ the energy is

$$ E_2 = \frac{-13.6\ \text{eV}}{4} = -3.4\ \text{eV} $$

(i.e. the energy of the first excited state), and we can have $\ell = 0$ or $\ell = 1$ for this case, the latter having the possibilities $m = -1, 0, +1$. (All 4 states have $n = 2$.)

The $\ell = 0$ and $\ell = 1$ cases give two different truncated series, hence two different radial functions. When we find and normalize them such that $\int_0^\infty|R|^2r^2\,dr = 1$, they are

$$ R_{20}(r) = \frac{1}{\sqrt{2}}\,a^{-3/2}\left(1 - \frac{1}{2}\frac{r}{a}\right)e^{-r/2a} \qquad\qquad R_{21}(r) = \frac{1}{\sqrt{24}}\,a^{-3/2}\,\frac{r}{a}\,e^{-r/2a} $$

For arbitrary $n$, $\ell$ can have the values

$$ \ell = 0, 1, 2, \ldots, n-1 $$

and for each $\ell$ there are $2\ell+1$ possible values of $m$. Thus the degeneracy of the energy level $E_n$ is

$$ d(n) = \sum_{\ell=0}^{n-1}(2\ell+1) = n^2 $$
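Here's a sketch (plain Python, with the free coefficient c0 simply set to 1, exact fractions used so the truncation is visible) that builds $v(\rho)$ from the recursion above, confirms it terminates at degree $n-\ell-1$, and prints the energies and degeneracies:

```python
from fractions import Fraction

def v_coeffs(n, l):
    """Coefficients c_j from c_{j+1} = 2(j+l+1-n)/((j+1)(j+2l+2)) c_j, with c_0 = 1."""
    c = [Fraction(1)]
    for j in range(n):                      # a few steps past the expected cutoff
        c.append(Fraction(2 * (j + l + 1 - n), (j + 1) * (j + 2 * l + 2)) * c[j])
    return c

E1 = -13.6  # eV, the ground-state energy from Eq. 4.8
for n in (1, 2, 3):
    print(f"n = {n}:  E_n = {E1 / n**2:6.2f} eV,  degeneracy n^2 = {n**2}")
    for l in range(n):
        print(f"   l = {l}:  c_j = {[str(cj) for cj in v_coeffs(n, l)]}")
```

For $n = 2$, $\ell = 0$ the series gives $v(\rho) = 1 - \rho$, which (with $\rho = r/2a$) is exactly the factor $(1 - \tfrac{1}{2}\tfrac{r}{a})$ appearing in $R_{20}$ above.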
The polynomials we get for the radial function for each $(n,\ell)$ are in fact well known to old-timey mathematicians. They even have a name: they are the associated Laguerre polynomials $L_{q-p}^p(x)$. Specifically,

$$ v(\rho) = L_{n-\ell-1}^{2\ell+1}(2\rho) \qquad\text{where}\qquad L_{q-p}^p(x) \equiv (-1)^p\left(\frac{d}{dx}\right)^p L_q(x) $$

and the $L_q(x)$ are the (regular) Laguerre polynomials,

$$ L_q(x) = e^x\left(\frac{d}{dx}\right)^q\!\left(e^{-x}x^q\right) $$

The radial function still needs to be normalized, though. When we do this and put everything together, the big result for the H atom wave function for the state given by $(n,\ell,m)$ is:

$$ \psi_{n\ell m}(r,\theta,\phi) = \sqrt{\left(\frac{2}{na}\right)^3\frac{(n-\ell-1)!}{2n\left[(n+\ell)!\right]^3}}\;e^{-r/(na)}\left(\frac{2r}{na}\right)^\ell\left[L_{n-\ell-1}^{2\ell+1}\!\left(\frac{2r}{na}\right)\right]Y_\ell^m(\theta,\phi) \qquad (4.10) $$

which is probably the messiest thing we'll ever produce in this course, but we note again that this is an "exact" answer for a realistic system, and those are rare in quantum mechanics. We note again that the full wave function depends on the numbers $(n,\ell,m)$, while the radial function depends on both $n$ and $\ell$. The energy $E$ depends only on $n$.

The H atom wave functions are orthonormal:

$$ \int\psi_{n'\ell'm'}^*\,\psi_{n\ell m}\;r^2\sin\theta\,dr\,d\theta\,d\phi = \delta_{n'n}\,\delta_{\ell'\ell}\,\delta_{m'm} $$

For states with different energy eigenvalues this has to be the case, but the wave functions were chosen so that all are mutually orthonormal.

There are various ways to visualize these solutions; one can make shaded density plots of $|\psi|^2$ or plot surfaces of constant $|\psi|^2$. Note that if we ignore the angular factor and consider a radial probability distribution, we need to plot $r^2|R(r)|^2$, because that is the function which is a normalized probability distribution.

4.7.2 Analytic Approach to Angular Momentum

By making operators out of the classical expression $\mathbf{L} = \mathbf{r}\times\mathbf{p}$, we can show that the operator for $\mathbf{L}$ is

$$ \mathbf{L} = \frac{\hbar}{i}\left(\hat{\boldsymbol{\phi}}\,\frac{\partial}{\partial\theta} - \hat{\boldsymbol{\theta}}\,\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}\right) $$

and the $L_z$ and ladder operators are

$$ L_z = \frac{\hbar}{i}\frac{\partial}{\partial\phi} \qquad\qquad L_\pm = \pm\hbar\,e^{\pm i\phi}\left(\frac{\partial}{\partial\theta} \pm i\cot\theta\,\frac{\partial}{\partial\phi}\right) $$

and the $L^2$ operator is

$$ L^2 = -\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right] $$

As this is the same as the operator in the angular (azimuthal) equation, we conclude that the spherical harmonics are the eigenfunctions of $L^2$ and $L_z$.

4.7.3 Spin

$$ [S_x, S_y] = i\hbar S_z \qquad [S_y, S_z] = i\hbar S_x \qquad [S_z, S_x] = i\hbar S_y \qquad (4.12) $$

$$ S^2|s\,m\rangle = \hbar^2 s(s+1)\,|s\,m\rangle \qquad S_z|s\,m\rangle = \hbar m\,|s\,m\rangle \qquad S_\pm|s\,m\rangle = \hbar\sqrt{s(s+1) - m(m\pm1)}\;|s\,(m\pm1)\rangle $$

For spin-$\tfrac{1}{2}$, a general spinor is

$$ \chi = \begin{pmatrix} a \\ b \end{pmatrix} = a\chi_+ + b\chi_- \qquad\text{where}\qquad \chi_+ = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \chi_- = \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$

and the spin operators are the matrices

$$ S^2 = \frac{3}{4}\hbar^2\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad S_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad S_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad S_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

that is, $\mathbf{S} = \frac{\hbar}{2}\boldsymbol{\sigma}$ with the Pauli matrices

$$ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

The eigenspinors of $S_x$ are

$$ \chi_+^{(x)} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad\qquad \chi_-^{(x)} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} $$

4.7.4 Electron in a Magnetic Field

For a uniform field $\mathbf{B} = B_0\hat{\mathbf{k}}$, the Hamiltonian is $H = -\gamma B_0 S_z$, and

$$ \chi(t) = a\,\chi_+ e^{-iE_+t/\hbar} + b\,\chi_- e^{-iE_-t/\hbar} = \begin{pmatrix} a\,e^{-iE_+t/\hbar} \\ b\,e^{-iE_-t/\hbar} \end{pmatrix} $$

Chapter 5  Multi-Particle Systems

There remains one fundamental principle of QM we haven't covered: the way to treat multi-particle systems.

5.1 The Schrödinger Equation for Two or More Particles

The extension of the Schrödinger equation to two or more particles is not conceptually difficult:

$$ -\frac{\hbar^2}{2m_1}\nabla_1^2\psi - \frac{\hbar^2}{2m_2}\nabla_2^2\psi + V(\mathbf{r}_1,\mathbf{r}_2)\,\psi = E\psi \qquad (5.1) $$

5.1.1 Central Forces; Relative and CM Coordinates

$$ \mathbf{R} \equiv \frac{m_1\mathbf{r}_1 + m_2\mathbf{r}_2}{m_1 + m_2} \qquad\quad \mathbf{r} \equiv \mathbf{r}_1 - \mathbf{r}_2 \qquad\quad \mu \equiv \frac{m_1 m_2}{m_1 + m_2} \qquad (5.2) $$

$$ -\frac{\hbar^2}{2M}\nabla_R^2\psi - \frac{\hbar^2}{2\mu}\nabla_r^2\psi + V(r)\,\psi = E\psi $$

For identical particles, the wave function must be symmetric or anti-symmetric under exchange:

$$ \psi(\mathbf{r}_1,\mathbf{r}_2) = \pm\,\psi(\mathbf{r}_2,\mathbf{r}_1) \qquad (5.3) $$

5.2 Helium

$$ H = \left\{-\frac{\hbar^2}{2m_1}\nabla_1^2 - \frac{1}{4\pi\epsilon_0}\frac{2e^2}{r_1}\right\} + \left\{-\frac{\hbar^2}{2m_2}\nabla_2^2 - \frac{1}{4\pi\epsilon_0}\frac{2e^2}{r_2}\right\} + \frac{1}{4\pi\epsilon_0}\frac{e^2}{|\mathbf{r}_1 - \mathbf{r}_2|} \qquad (5.4) $$
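As a closing numerical check (a sketch with NumPy, $\hbar$ set to 1), the spin-$\tfrac12$ matrices of Section 4.7.3 do satisfy the commutation relations (4.12) and give $S^2 = \tfrac{3}{4}\hbar^2$:

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

comm = lambda A, B: A @ B - B @ A

print(np.allclose(comm(Sx, Sy), 1j * hbar * Sz))   # [Sx, Sy] = iħ Sz
print(np.allclose(comm(Sy, Sz), 1j * hbar * Sx))   # [Sy, Sz] = iħ Sx
print(np.allclose(comm(Sz, Sx), 1j * hbar * Sy))   # [Sz, Sx] = iħ Sy

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 0.75 * hbar**2 * np.eye(2))) # S² = s(s+1)ħ² with s = 1/2
```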