Introduction to Quantum Mechanics

David J. Griffiths
Reed College

Prentice Hall, Upper Saddle River, New Jersey 07458

FUNDAMENTAL EQUATIONS

Schrödinger equation:
$$i\hbar\frac{\partial \Psi}{\partial t} = H\Psi$$

Time-independent Schrödinger equation:
$$H\psi = E\psi$$

Standard Hamiltonian:
$$H = -\frac{\hbar^2}{2m}\nabla^2 + V$$

Time dependence of an expectation value:
$$\frac{d\langle Q\rangle}{dt} = \frac{i}{\hbar}\langle[H, Q]\rangle + \left\langle\frac{\partial Q}{\partial t}\right\rangle$$

Generalized uncertainty principle:
$$\sigma_A\sigma_B \ge \left|\frac{1}{2i}\langle[A, B]\rangle\right|$$

Heisenberg uncertainty principle:
$$\sigma_x\sigma_p \ge \hbar/2$$

Canonical commutator:
$$[x, p] = i\hbar$$

Angular momentum:
$$[L_x, L_y] = i\hbar L_z,\quad [L_y, L_z] = i\hbar L_x,\quad [L_z, L_x] = i\hbar L_y$$

Pauli matrices:
$$\sigma_x = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\quad \sigma_y = \begin{pmatrix}0 & -i\\ i & 0\end{pmatrix},\quad \sigma_z = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$

CONTENTS

PREFACE, vii

PART I  THEORY

CHAPTER 1  THE WAVE FUNCTION, 1
1.1 The Schrödinger Equation, 1
1.2 The Statistical Interpretation, 2
1.3 Probability, 5
1.4 Normalization, 11
1.5 Momentum, 14
1.6 The Uncertainty Principle, 17

CHAPTER 2  THE TIME-INDEPENDENT SCHRÖDINGER EQUATION, 20
2.1 Stationary States, 20
2.2 The Infinite Square Well, 24
2.3 The Harmonic Oscillator, 31
2.4 The Free Particle, 44
2.5 The Delta-Function Potential, 50
2.6 The Finite Square Well, 60
2.7 The Scattering Matrix, 66
Further Problems for Chapter 2, 68

CHAPTER 3  FORMALISM, 75
3.1 Linear Algebra, 75
3.2 Function Spaces, 95
3.3 The Generalized Statistical Interpretation, 104
3.4 The Uncertainty Principle, 108
Further Problems for Chapter 3, 116

CHAPTER 4  QUANTUM MECHANICS IN THREE DIMENSIONS, 121
4.1 The Schrödinger Equation in Spherical Coordinates, 121
4.2 The Hydrogen Atom, 133
4.3 Angular Momentum, 145
4.4 Spin, 154
Further Problems for Chapter 4, 170

CHAPTER 5  IDENTICAL PARTICLES, 177
5.1 Two-Particle Systems, 177
5.2 Atoms, 186
5.3 Solids, 193
5.4 Quantum Statistical Mechanics, 204
Further Problems for Chapter 5, 218

PART II  APPLICATIONS

CHAPTER 6  TIME-INDEPENDENT PERTURBATION THEORY, 221
6.1 Nondegenerate Perturbation Theory, 221
6.2 Degenerate Perturbation Theory, 227
6.3 The Fine Structure of Hydrogen, 235
6.4 The Zeeman Effect, 244
6.5 Hyperfine Splitting, 250
Further Problems for Chapter 6, 252

CHAPTER 7  THE VARIATIONAL PRINCIPLE, 256
7.1 Theory, 256
7.2 The Ground State of Helium, 261
7.3 The Hydrogen Molecule Ion, 266
Further Problems for Chapter 7, 271

CHAPTER 8  THE WKB APPROXIMATION, 274
8.1 The "Classical" Region, 275
8.2 Tunneling, 280
8.3 The Connection Formulas, 284
Further Problems for Chapter 8, 293

CHAPTER 9  TIME-DEPENDENT PERTURBATION THEORY, 298
9.1 Two-Level Systems, 299
9.2 Emission and Absorption of Radiation, 306
9.3 Spontaneous Emission, 311
Further Problems for Chapter 9, 319

CHAPTER 10  THE ADIABATIC APPROXIMATION, 323
10.1 The Adiabatic Theorem, 323
10.2 Berry's Phase, 333
Further Problems for Chapter 10, 349

CHAPTER 11  SCATTERING, 352
11.1 Introduction, 352
11.2 Partial Wave Analysis, 357
11.3 The Born Approximation, 363
Further Problems for Chapter 11, 373

AFTERWORD, 374

INDEX, 386

PREFACE

This book is intended for a one-semester or one-year course at the junior or senior level. A one-semester course will have to concentrate mainly on Part I; a full-year course should have room for supplementary material beyond Part II. The reader must be familiar with the rudiments of linear algebra, complex numbers, and calculus up through partial derivatives; some acquaintance with Fourier analysis and the Dirac delta function would help. Elementary classical mechanics is essential, of course, and a little electrodynamics would be useful in places. As always, the more physics and math you know the easier it will be, and the more you will get out of your study.
But I would like to emphasize that quantum mechanics is not, in my view, something that flows smoothly and naturally from earlier theories. On the contrary, it represents an abrupt and revolutionary departure from classical ideas, calling forth a wholly new and radically counterintuitive way of thinking about the world. That, indeed, is what makes it such a fascinating subject.

At first glance, this book may strike you as forbiddingly mathematical. We encounter Legendre, Hermite, and Laguerre polynomials, spherical harmonics, Bessel, Neumann, and Hankel functions, Airy functions, and even the Riemann zeta function, not to mention Fourier transforms, Hilbert spaces, Hermitian operators, Clebsch-Gordan coefficients, and Lagrange multipliers. Is all this baggage really necessary? Perhaps not, but physics is like carpentry: Using the right tool makes the job easier, not more difficult, and teaching quantum mechanics without the appropriate mathematical equipment is like asking the student to dig a foundation with a screwdriver. (On the other hand, it can be tedious and diverting if the instructor feels obliged to give elaborate lessons on the proper use of each tool. My own instinct is to hand the students shovels and tell them to start digging. They may develop blisters at first, but I still think this is the most efficient and exciting way to learn.) At any rate, I can assure you that there is no deep mathematics in this book, and if you run into something unfamiliar, and you don't find my explanation adequate, by all means ask someone about it, or look it up. There are many good books on mathematical methods; I particularly recommend Mary Boas, Mathematical Methods in the Physical Sciences, 2nd ed., Wiley, New York (1983), and George Arfken, Mathematical Methods for Physicists, 3rd ed., Academic Press, Orlando (1985). But whatever you do, don't let the mathematics (which, for us, is only a tool) interfere with the physics.

Several readers have noted that there are fewer worked examples in this book than is customary, and that some important material is relegated to the problems. This is no accident. I don't believe you can learn quantum mechanics without doing many exercises for yourself. Instructors should, of course, go over as many problems in class as time allows, but students should be warned that this is not a subject about which anyone has natural intuitions; you're developing a whole new set of muscles here, and there is simply no substitute for calisthenics. Mark Semon suggested that I offer a "Michelin Guide" to the problems, with varying numbers of stars to indicate the level of difficulty and importance. This seemed like a good idea (though, like the quality of a restaurant, the significance of a problem is partly a matter of taste); I have adopted the following rating scheme:

*    an essential problem that every reader should study;
**   a somewhat more difficult or more peripheral problem;
***  an unusually challenging problem that may take over an hour.

(No stars at all means fast food: OK if you're hungry, but not very nourishing.) Most of the one-star problems appear at the end of the relevant section; most of the three-star problems are at the end of the chapter. A solution manual is available (to instructors only) from the publisher.

I have benefited from the comments and advice of many colleagues, who suggested problems, read early drafts, or used a preliminary version in their courses.
I would like to thank in particular Burt Brody (Bard College), Ash Carter (Drew University), Peter Collings (Swarthmore College), Jeff Dunham (Middlebury College), Greg Elliott (University of Puget Sound), Larry Hunter (Amherst College), Mark Semon (Bates College), Stavros Theodorakis (University of Cyprus), Dan Velleman (Amherst College), and all my colleagues at Reed College. Finally, I wish to thank David Park and John Rasmussen (and their publishers) for permission to reproduce Figure 8.6, which is taken from Park's Introduction to the Quantum Theory (footnote 1), adapted from I. Perlman and J. O. Rasmussen, "Alpha Radioactivity," in Encyclopedia of Physics, vol. 42, Springer-Verlag, 1957.

CHAPTER 1  THE WAVE FUNCTION

1.1 THE SCHRÖDINGER EQUATION

Imagine a particle of mass m, constrained to move along the x-axis, subject to some specified force F(x, t) (Figure 1.1). The program of classical mechanics is to determine the position of the particle at any given time: x(t). Once we know that, we can figure out the velocity (v = dx/dt), the momentum (p = mv), the kinetic energy (T = (1/2)mv²), or any other dynamical variable of interest. And how do we go about determining x(t)? We apply Newton's second law: F = ma. (For conservative systems, the only kind we shall consider and, fortunately, the only kind that occur at the microscopic level, the force can be expressed as the derivative of a potential energy function,¹ F = −∂V/∂x, and Newton's law reads m d²x/dt² = −∂V/∂x.) This, together with appropriate initial conditions (typically the position and velocity at t = 0), determines x(t).

Quantum mechanics approaches this same problem quite differently. In this case what we're looking for is the wave function, Ψ(x, t), of the particle, and we get it by solving the Schrödinger equation:

$$i\hbar\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V\Psi. \qquad [1.1]$$

¹Magnetic forces are an exception, but let's not worry about them just yet. By the way, we shall assume throughout this book that the motion is nonrelativistic (v ≪ c).

Figure 1.1: A "particle" constrained to move in one dimension under the influence of a specified force.

Here i is the square root of −1, and ħ is Planck's constant, or rather, his original constant (h) divided by 2π:

$$\hbar = \frac{h}{2\pi} = 1.054573 \times 10^{-34}\ \text{J s}. \qquad [1.2]$$

The Schrödinger equation plays a role logically analogous to Newton's second law: Given suitable initial conditions [typically, Ψ(x, 0)], the Schrödinger equation determines Ψ(x, t) for all future time, just as, in classical mechanics, Newton's law determines x(t) for all future time.

1.2 THE STATISTICAL INTERPRETATION

But what exactly is this "wave function", and what does it do for you once you've got it? After all, a particle, by its nature, is localized at a point, whereas the wave function (as its name suggests) is spread out in space (it's a function of x, for any given time t). How can such an object be said to describe the state of a particle? The answer is provided by Born's statistical interpretation of the wave function, which says that |Ψ(x, t)|² gives the probability of finding the particle at point x, at time t, or, more precisely,²

$$|\Psi(x,t)|^2\,dx = \left\{\begin{array}{l}\text{probability of finding the particle}\\ \text{between } x \text{ and } (x+dx), \text{ at time } t.\end{array}\right. \qquad [1.3]$$

For the wave function in Figure 1.2, you would be quite likely to find the particle in the vicinity of point A, and relatively unlikely to find it near point B.
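Equation 1.3 is easy to make concrete on a computer: discretize a normalized wave function and sum |Ψ|² dx over the window of interest. The following is a minimal sketch, not from the text; the Gaussian profile centered at x = 1 is an arbitrary stand-in for the Ψ of Figure 1.2:

```python
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = (1 / np.pi) ** 0.25 * np.exp(-(x - 1) ** 2 / 2)   # hypothetical normalized wave function

rho = np.abs(psi) ** 2                  # probability density |Psi|^2 (Equation 1.3)
print((rho * dx).sum())                 # total probability: ~1.000
window = (x > 0.5) & (x < 1.5)
print((rho[window] * dx).sum())         # probability of finding the particle in (0.5, 1.5): ~0.52
```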
The statistical interpretation introduces a kind of indeterminacy into quantum mechanics, for even if you know everything the theory has to tell you about the particle (to wit: its wave function), you cannot predict with certainty the outcome of a simple experiment to measure its position; all quantum mechanics has to offer is statistical information about the possible results. This indeterminacy has been profoundly disturbing to physicists and philosophers alike. Is it a peculiarity of nature, a deficiency in the theory, a fault in the measuring apparatus, or what?

²The wave function itself is complex, but |Ψ|² = Ψ*Ψ (where Ψ* is the complex conjugate of Ψ) is real and nonnegative, as a probability, of course, must be.

Figure 1.2: A typical wave function. The particle would be relatively likely to be found near A, and unlikely to be found near B. The shaded area represents the probability of finding the particle in the range dx.

Suppose I do measure the position of the particle, and I find it to be at the point C. Question: Where was the particle just before I made the measurement? There are three plausible answers to this question, and they serve to characterize the main schools of thought regarding quantum indeterminacy:

1. The realist position: The particle was at C. This certainly seems like a sensible response, and it is the one Einstein advocated. Note, however, that if this is true then quantum mechanics is an incomplete theory, since the particle really was at C, and yet quantum mechanics was unable to tell us so. To the realist, indeterminacy is not a fact of nature, but a reflection of our ignorance. As d'Espagnat put it, "the position of the particle was never indeterminate, but was merely unknown to the experimenter."³ Evidently Ψ is not the whole story; some additional information (known as a hidden variable) is needed to provide a complete description of the particle.

2. The orthodox position: The particle wasn't really anywhere. It was the act of measurement that forced the particle to "take a stand" (though how and why it decided on the point C we dare not ask). Jordan said it most starkly: "Observations not only disturb what is to be measured, they produce it. ... We compel [the particle] to assume a definite position."⁴ This view (the so-called Copenhagen interpretation) is associated with Bohr and his followers. Among physicists it has always been the ...

³Bernard d'Espagnat, "The Quantum Theory and Reality", Scientific American, Nov. 1979 (Vol. 241), p. 165.

⁴Quoted in a lovely article by N. David Mermin, "Is the moon there when nobody looks?", Physics Today, April 1985, p. 38.

1.3 PROBABILITY

Figure 1.4: Histogram showing the number of people, N(j), with age j, for the example in Section 1.3.

... N(14) = 1, N(15) = 1, N(16) = 3, N(22) = 2, N(24) = 2, N(25) = 5, while N(17), for instance, is zero. The total number of people in the room is

$$N = \sum_{j=0}^{\infty} N(j). \qquad [1.4]$$

(In this instance, of course, N = 14.) Figure 1.4 is a histogram of the data. The following are some questions one might ask about this distribution.

Question 1. If you selected one individual at random from this group, what is the probability that this person's age would be 15? Answer: One chance in 14, since there are 14 possible choices, all equally likely, of whom only one has this particular age. If P(j) is the probability of getting age j,
then P(14) = 1/14, P(15) = 1/14, P(16) = 3/14, and so on. In general,

$$P(j) = \frac{N(j)}{N}. \qquad [1.5]$$

Notice that the probability of getting either 14 or 15 is the sum of the individual probabilities (in this case, 1/7). In particular, the sum of all the probabilities is 1; you're certain to get some age:

$$\sum_{j=0}^{\infty} P(j) = 1. \qquad [1.6]$$

Question 2. What is the most probable age? Answer: 25, obviously; five people share this age, whereas at most three have any other age. In general, the most probable j is the j for which P(j) is a maximum.

Question 3. What is the median age? Answer: 23, for 7 people are younger than 23, and 7 are older. (In general, the median is that value of j such that the probability of getting a larger result is the same as the probability of getting a smaller result.)

Question 4. What is the average (or mean) age? Answer:

$$\frac{14 + 15 + 3(16) + 2(22) + 2(24) + 5(25)}{14} = \frac{294}{14} = 21.$$

In general, the average value of j (which we shall write thus: ⟨j⟩) is given by

$$\langle j \rangle = \frac{\sum j N(j)}{N} = \sum_{j=0}^{\infty} j P(j). \qquad [1.7]$$

Notice that there need not be anyone with the average age or the median age; in this example nobody happens to be 21 or 23. In quantum mechanics the average is usually the quantity of interest; in that context it has come to be called the expectation value. It's a misleading term, since it suggests that this is the outcome you would be most likely to get if you made a single measurement (that would be the most probable value, not the average value), but I'm afraid we're stuck with it.

Question 5. What is the average of the squares of the ages? Answer: You could get 14² = 196, with probability 1/14, or 15² = 225, with probability 1/14, or 16² = 256, with probability 3/14, and so on. The average, then, is

$$\langle j^2 \rangle = \sum_{j=0}^{\infty} j^2 P(j). \qquad [1.8]$$

In general, the average value of some function of j is given by

$$\langle f(j) \rangle = \sum_{j=0}^{\infty} f(j) P(j). \qquad [1.9]$$

(Equations 1.6, 1.7, and 1.8 are, if you like, special cases of this formula.) Beware: The average of the squares (⟨j²⟩) is not ordinarily equal to the square of the average (⟨j⟩²). For instance, if the room contains just two babies, aged 1 and 3, then ⟨x²⟩ = 5, but ⟨x⟩² = 4.
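The five questions above (and the Beware remark) can be checked mechanically. Here is a short sketch using this section's age data; the particular NumPy calls are just one convenient route, and the last line anticipates the variance shortcut proved below (Equation 1.12):

```python
import numpy as np

# N(14)=1, N(15)=1, N(16)=3, N(22)=2, N(24)=2, N(25)=5: fourteen people in all
ages = np.array([14, 15, 16, 16, 16, 22, 22, 24, 24, 25, 25, 25, 25, 25])
N = len(ages)

P = {j: np.sum(ages == j) / N for j in np.unique(ages)}   # P(j) = N(j)/N, Equation 1.5
print(P[15])                            # 1/14 ~ 0.0714
print(max(P, key=P.get))                # most probable age: 25
print(np.median(ages))                  # median: 23.0
print(ages.mean())                      # average <j>: 21.0
print((ages ** 2).mean())               # <j^2> ~ 459.57, not equal to <j>^2 = 441
print((ages ** 2).mean() - ages.mean() ** 2, ages.var())  # both give sigma^2 ~ 18.57
```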
Now, there is a conspicuous difference between the two histograms in Figure 1.5, even though they have the same median, the same average, the same most probable value, and the same number of elements: The first is sharply peaked about the average value, whereas the second is broad and flat. (The first might represent the age profile for students in a big-city classroom, and the second the pupils in a one-room schoolhouse.)

Figure 1.5: Two histograms with the same median, same average, and same most probable value, but different standard deviations.

We need a numerical measure of the amount of "spread" in a distribution, with respect to the average. The most obvious way to do this would be to find out how far each individual deviates from the average,

$$\Delta j = j - \langle j \rangle, \qquad [1.10]$$

and compute the average of Δj. Trouble is, of course, that you get zero, since, by the nature of the average, Δj is as often negative as positive:

$$\langle \Delta j \rangle = \sum (j - \langle j\rangle) P(j) = \sum j P(j) - \langle j\rangle \sum P(j) = \langle j\rangle - \langle j\rangle = 0.$$

(Note that ⟨j⟩ is constant; it does not change as you go from one member of the sample to another, so it can be taken outside the summation.) To avoid this irritating problem, you might decide to average the absolute value of Δj. But absolute values are nasty to work with; instead, we get around the sign problem by squaring before averaging:

$$\sigma^2 \equiv \langle (\Delta j)^2 \rangle. \qquad [1.11]$$

This quantity is known as the variance of the distribution; σ itself (the square root of the average of the square of the deviation from the average, gulp!) is called the standard deviation. The latter is the customary measure of the spread about ⟨j⟩. There is a useful little theorem involving standard deviations:

$$\sigma^2 = \langle(\Delta j)^2\rangle = \sum(\Delta j)^2 P(j) = \sum(j - \langle j\rangle)^2 P(j) = \sum(j^2 - 2j\langle j\rangle + \langle j\rangle^2)P(j)$$
$$= \sum j^2 P(j) - 2\langle j\rangle\sum jP(j) + \langle j\rangle^2\sum P(j) = \langle j^2\rangle - 2\langle j\rangle\langle j\rangle + \langle j\rangle^2,$$

or

$$\sigma^2 = \langle j^2 \rangle - \langle j \rangle^2. \qquad [1.12]$$

Equation 1.12 provides a faster method for computing σ: Simply calculate ⟨j²⟩ and ⟨j⟩², and subtract. Incidentally, I warned you a moment ago that ⟨j²⟩ is not, in general, ...

**Problem 1.5 A needle of length l is dropped at random onto a sheet of paper ruled with parallel lines a distance l apart. What is the probability that the needle will cross a line? [Hint: Refer to Problem 1.4.]

*Problem 1.6 Consider the Gaussian distribution

$$\rho(x) = A e^{-\lambda(x-a)^2},$$

where A, a, and λ are constants. (Look up any integrals you need.)
(a) Use Equation 1.16 to determine A.
(b) Find ⟨x⟩, ⟨x²⟩, and σ.
(c) Sketch the graph of ρ(x).

1.4 NORMALIZATION

We return now to the statistical interpretation of the wave function (Equation 1.3), which says that |Ψ(x, t)|² is the probability density for finding the particle at point x, at time t. It follows (Equation 1.16) that the integral of |Ψ|² must be 1 (the particle's got to be somewhere):

$$\int_{-\infty}^{+\infty} |\Psi(x,t)|^2\,dx = 1. \qquad [1.20]$$

Without this, the statistical interpretation would be nonsense. However, this requirement should disturb you: After all, the wave function is supposed to be determined by the Schrödinger equation; we can't impose an extraneous condition on Ψ without checking that the two are consistent. A glance at Equation 1.1 reveals that if Ψ(x, t) is a solution, so too is AΨ(x, t), where A is any (complex) constant. What we must do, then, is pick this undetermined multiplicative factor so as to ensure that Equation 1.20 is satisfied. This process is called normalizing the wave function. For some solutions to the Schrödinger equation, the integral is infinite; in that case no multiplicative factor is going to make it 1. The same goes for the trivial solution Ψ = 0. Such non-normalizable solutions cannot represent particles, and must be rejected. Physically realizable states correspond to the "square-integrable" solutions to Schrödinger's equation.⁸

⁸Evidently Ψ(x, t) must go to zero faster than 1/√|x|, as |x| → ∞. Incidentally, normalization only fixes the modulus of A; the phase remains undetermined. However, as we shall see, the latter carries no physical significance anyway.

But wait a minute! Suppose I have normalized the wave function at time t = 0. How do I know that it will stay normalized, as time goes on and Ψ evolves? (You can't keep renormalizing the wave function, for then A becomes a function of t, and you no longer have a solution to the Schrödinger equation.) Fortunately, the Schrödinger equation has the property that it automatically preserves the normalization of the wave function; without this crucial feature the Schrödinger equation would be incompatible with the statistical interpretation, and the whole theory would crumble. So we'd better pause for a careful proof of this point:
$$\frac{d}{dt}\int_{-\infty}^{+\infty}|\Psi(x,t)|^2\,dx = \int_{-\infty}^{+\infty}\frac{\partial}{\partial t}|\Psi(x,t)|^2\,dx. \qquad [1.21]$$

[Note that the integral is a function only of t, so I use a total derivative (d/dt) in the first term, but the integrand is a function of x as well as t, so it's a partial derivative (∂/∂t) in the second one.] By the product rule,

$$\frac{\partial}{\partial t}|\Psi|^2 = \frac{\partial}{\partial t}(\Psi^*\Psi) = \Psi^*\frac{\partial\Psi}{\partial t} + \frac{\partial\Psi^*}{\partial t}\Psi. \qquad [1.22]$$

Now the Schrödinger equation says that

$$\frac{\partial\Psi}{\partial t} = \frac{i\hbar}{2m}\frac{\partial^2\Psi}{\partial x^2} - \frac{i}{\hbar}V\Psi, \qquad [1.23]$$

and hence also (taking the complex conjugate of Equation 1.23)

$$\frac{\partial\Psi^*}{\partial t} = -\frac{i\hbar}{2m}\frac{\partial^2\Psi^*}{\partial x^2} + \frac{i}{\hbar}V\Psi^*, \qquad [1.24]$$

so

$$\frac{\partial}{\partial t}|\Psi|^2 = \frac{i\hbar}{2m}\left(\Psi^*\frac{\partial^2\Psi}{\partial x^2} - \frac{\partial^2\Psi^*}{\partial x^2}\Psi\right) = \frac{\partial}{\partial x}\left[\frac{i\hbar}{2m}\left(\Psi^*\frac{\partial\Psi}{\partial x} - \frac{\partial\Psi^*}{\partial x}\Psi\right)\right]. \qquad [1.25]$$

The integral (Equation 1.21) can now be evaluated explicitly:

$$\frac{d}{dt}\int_{-\infty}^{+\infty}|\Psi(x,t)|^2\,dx = \frac{i\hbar}{2m}\left(\Psi^*\frac{\partial\Psi}{\partial x} - \frac{\partial\Psi^*}{\partial x}\Psi\right)\Bigg|_{-\infty}^{+\infty}. \qquad [1.26]$$

But Ψ(x, t) must go to zero as x goes to (±) infinity; otherwise the wave function would not be normalizable. It follows that

$$\frac{d}{dt}\int_{-\infty}^{+\infty}|\Psi(x,t)|^2\,dx = 0, \qquad [1.27]$$

and hence that the integral on the left is constant (independent of time); if Ψ is normalized at t = 0, it stays normalized for all future time. QED

Problem 1.7 At time t = 0 a particle is represented by the wave function

$$\Psi(x,0) = \begin{cases} Ax/a, & \text{if } 0 \le x \le a, \\ A(b-x)/(b-a), & \text{if } a \le x \le b, \\ 0, & \text{otherwise,} \end{cases}$$

where A, a, and b are constants.
(a) Normalize Ψ (that is, find A in terms of a and b).
(b) Sketch Ψ(x, 0) as a function of x.
(c) Where is the particle most likely to be found, at t = 0?
(d) What is the probability of finding the particle to the left of a? Check your result in the limiting cases b = a and b = 2a.
(e) What is the expectation value of x?

*Problem 1.8 Consider the wave function

$$\Psi(x,t) = Ae^{-\lambda|x|}e^{-i\omega t},$$

where A, λ, and ω are positive real constants. [We'll see in Chapter 2 what potential (V) actually produces such a wave function.]
(a) Normalize Ψ.
(b) Determine the expectation values of x and x².
(c) Find the standard deviation of x. Sketch the graph of |Ψ|², as a function of x, and mark the points (⟨x⟩ + σ) and (⟨x⟩ − σ) to illustrate the sense in which σ represents the "spread" in x. What is the probability that the particle would be found outside this range?

Problem 1.9 Let Pab(t) be the probability of finding the particle in the range (a < x < b), at time t.
(a) Show that

$$\frac{dP_{ab}}{dt} = J(a,t) - J(b,t),$$

where

$$J(x,t) \equiv \frac{i\hbar}{2m}\left(\Psi\frac{\partial\Psi^*}{\partial x} - \Psi^*\frac{\partial\Psi}{\partial x}\right).$$

What are the units of J(x, t)? [J is called the probability current, because it tells you the rate at which probability is "flowing" past the point x. If Pab(t) is increasing, then more probability is flowing into the region at one end than flows out at the other.]

1.5 MOMENTUM

... given Ψ; for our present purposes it will suffice to postulate that the expectation value of the velocity is equal to the time derivative of the expectation value of position:

$$\langle v \rangle = \frac{d\langle x\rangle}{dt}. \qquad [1.32]$$

Equation 1.31 tells us, then, how to calculate ⟨v⟩ directly from Ψ. Actually, it is customary to work with momentum (p = mv), rather than velocity:

$$\langle p \rangle = m\frac{d\langle x\rangle}{dt} = -i\hbar\int\Psi^*\frac{\partial\Psi}{\partial x}\,dx. \qquad [1.33]$$

Let me write the expressions for ⟨x⟩ and ⟨p⟩ in a more suggestive way:

$$\langle x \rangle = \int \Psi^*(x)\Psi\,dx, \qquad [1.34]$$

$$\langle p \rangle = \int \Psi^*\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)\Psi\,dx. \qquad [1.35]$$

We say that the operator¹¹ x "represents" position, and the operator (ħ/i)(∂/∂x) "represents" momentum, in quantum mechanics; to calculate expectation values, we "sandwich" the appropriate operator between Ψ* and Ψ, and integrate.

¹¹An operator is an instruction to do something to the function that follows. The position operator tells you to multiply by x; the momentum operator tells you to differentiate with respect to x (and multiply the result by −iħ). In this book all operators will be derivatives (d/dt, d²/dt², ∂²/∂x∂y, etc.) or multipliers (2, i, x², etc.) or combinations of these.
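Equations 1.34 and 1.35 translate directly into a numerical "sandwich": multiply the operator's action on Ψ by Ψ*, and integrate. A sketch, assuming ħ = 1 and an arbitrary Gaussian wave function carrying a phase factor e^{ik₀x} (both choices are illustrative, not from the text):

```python
import numpy as np

x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
k0 = 2.0                                 # hypothetical mean wavenumber
psi = (1 / np.pi) ** 0.25 * np.exp(-x ** 2 / 2) * np.exp(1j * k0 * x)

dpsi = np.gradient(psi, dx)              # numerical d(psi)/dx
x_avg = np.real((np.conj(psi) * x * psi * dx).sum())       # <x>, Equation 1.34
p_avg = np.real((np.conj(psi) * (-1j) * dpsi * dx).sum())  # <p> with hbar = 1, Equation 1.35
print(x_avg)   # ~0.0
print(p_avg)   # ~2.0, i.e. hbar * k0
```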
That's cute, but what about other dynamical variables? The fact is, all such quantities can be written in terms of position and momentum. Kinetic energy, for example, is

$$T = \frac{1}{2}mv^2 = \frac{p^2}{2m},$$

and angular momentum is L = r × mv = r × p (the latter, of course, does not occur for motion in one dimension). To calculate the expectation value of such a quantity, we simply replace every p by (ħ/i)(∂/∂x), insert the resulting operator between Ψ* and Ψ, and integrate:

$$\langle Q(x,p)\rangle = \int\Psi^*\,Q\!\left(x,\frac{\hbar}{i}\frac{\partial}{\partial x}\right)\Psi\,dx. \qquad [1.36]$$

For example,

$$\langle T\rangle = -\frac{\hbar^2}{2m}\int\Psi^*\frac{\partial^2\Psi}{\partial x^2}\,dx. \qquad [1.37]$$

Equation 1.36 is a recipe for computing the expectation value of any dynamical quantity for a particle in state Ψ; it subsumes Equations 1.34 and 1.35 as special cases. I have tried in this section to make Equation 1.36 seem plausible, given Born's statistical interpretation, but the truth is that this equation represents such a radically new way of doing business (as compared with classical mechanics) that it's a good idea to get some practice using it before we come back (in Chapter 3) and put it on a firmer theoretical foundation. In the meantime, if you prefer to think of it as an axiom, that's fine with me.

Problem 1.11 Why can't you do integration by parts directly on the middle expression in Equation 1.29 (pull the time derivative over onto x, note that ∂x/∂t = 0) and conclude that d⟨x⟩/dt = 0?

*Problem 1.12 Calculate d⟨p⟩/dt. Answer:

$$\frac{d\langle p\rangle}{dt} = \left\langle -\frac{\partial V}{\partial x}\right\rangle. \qquad [1.38]$$

(This is known as Ehrenfest's theorem; it tells us that expectation values obey Newton's second law.)

Problem 1.13 Suppose you add a constant V₀ to the potential energy (by "constant" I mean independent of x as well as t). In classical mechanics this doesn't change anything, but what about quantum mechanics? Show that the wave function picks up a time-dependent phase factor: exp(−iV₀t/ħ). What effect does this have on the expectation value of a dynamical variable?

1.6 THE UNCERTAINTY PRINCIPLE

Imagine that you're holding one end of a very long rope, and you generate a wave by shaking it up and down rhythmically (Figure 1.6). If someone asked you, "Precisely where is that wave?" you'd probably think he was a little bit nutty: The wave isn't precisely anywhere; it's spread out over 50 feet or so. On the other hand, if he asked you what its wavelength is, you could give him a reasonable answer: It looks like about 6 feet. By contrast, if you gave the rope a sudden jerk (Figure 1.7), you'd get a relatively narrow bump traveling down the line. This time the first question (Where precisely is the wave?) is a sensible one, and the second (What is its wavelength?) seems nutty; it isn't even vaguely periodic, so how can you assign a wavelength to it?

Figure 1.6: A wave with a (fairly) well-defined wavelength but an ill-defined position.

Of course, you can draw intermediate cases, in which the wave is fairly well localized and the wavelength is fairly well defined, but there is an inescapable trade-off here: The more precise a wave's position is, the less precise is its wavelength, and vice versa.¹² A theorem in Fourier analysis makes all this rigorous, but for the moment I am only concerned with the qualitative argument.
This applies, of course, to any wave phenomenon, and hence in particular to the quantum mechanical wave function. Now the wavelength of Ψ is related to the momentum of the particle by the de Broglie formula¹³:

$$p = \frac{h}{\lambda} = \frac{2\pi\hbar}{\lambda}. \qquad [1.39]$$

Thus a spread in wavelength corresponds to a spread in momentum, and our general observation now says that the more precisely determined a particle's position is, the less precisely its momentum is determined. Quantitatively,

$$\sigma_x\sigma_p \ge \frac{\hbar}{2}, \qquad [1.40]$$

where σx is the standard deviation in x, and σp is the standard deviation in p. This is Heisenberg's famous uncertainty principle. (We'll prove it in Chapter 3, but I wanted to mention it here so you can test it out on the examples in Chapter 2.)

Figure 1.7: A wave with a (fairly) well-defined position but an ill-defined wavelength.

¹²That's why a piccolo player must be right on pitch, whereas a double-bass player can afford to wear garden gloves. For the piccolo, a sixty-fourth note contains many full cycles, and the frequency (we're working in the time domain now, instead of space) is well defined, whereas for the bass, at a much lower register, the sixty-fourth note contains only a few cycles, and all you hear is a general sort of "oomph," with no very clear pitch.

¹³I'll prove this in due course. Many authors take the de Broglie formula as an axiom, from which they then deduce the association of momentum with the operator (ħ/i)(∂/∂x). Although this is a conceptually cleaner approach, it involves diverting mathematical complications that I would rather save for later.

2.1 STATIONARY STATES

For separable solutions we have

$$\frac{\partial\Psi}{\partial t} = \psi\frac{df}{dt}, \qquad \frac{\partial^2\Psi}{\partial x^2} = \frac{d^2\psi}{dx^2}f$$

(ordinary derivatives, now), and the Schrödinger equation (Equation 1.1) reads

$$i\hbar\psi\frac{df}{dt} = -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}f + V\psi f.$$

Or, dividing through by ψf:

$$i\hbar\frac{1}{f}\frac{df}{dt} = -\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2\psi}{dx^2} + V. \qquad [2.2]$$

Now the left side is a function of t alone, and the right side is a function of x alone.² The only way this can possibly be true is if both sides are in fact constant; otherwise, by varying t, I could change the left side without touching the right side, and the two would no longer be equal. (That's a subtle but crucial argument, so if it's new to you, be sure to pause and think it through.) For reasons that will appear in a moment, we shall call the separation constant E. Then

$$i\hbar\frac{1}{f}\frac{df}{dt} = E, \quad \text{or} \quad \frac{df}{dt} = -\frac{iE}{\hbar}f, \qquad [2.3]$$

and

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\psi = E\psi. \qquad [2.4]$$

Separation of variables has turned a partial differential equation into two ordinary differential equations (Equations 2.3 and 2.4). The first of these is easy to solve (just multiply through by dt and integrate); the general solution is C exp(−iEt/ħ), but we might as well absorb the constant C into ψ (since the quantity of interest is the product ψf). Then

$$f(t) = e^{-iEt/\hbar}. \qquad [2.5]$$

The second (Equation 2.4) is called the time-independent Schrödinger equation; we can go no further with it until the potential V(x) is specified.

²Note that this would not be true if V were a function of t as well as x.

The rest of this chapter will be devoted to solving the time-independent Schrödinger equation, for a variety of simple potentials. But before we get to that I would like to consider further the question: What's so great about separable solutions? After all, most solutions to the (time-dependent) Schrödinger equation do not take the form ψ(x)f(t). I offer three answers, two of them physical and one mathematical:
1. They are stationary states. Although the wave function itself,

$$\Psi(x,t) = \psi(x)e^{-iEt/\hbar}, \qquad [2.6]$$

does (obviously) depend on t, the probability density

$$|\Psi(x,t)|^2 = \Psi^*\Psi = \psi^*e^{+iEt/\hbar}\,\psi e^{-iEt/\hbar} = |\psi(x)|^2 \qquad [2.7]$$

does not; the time dependence cancels out.³ The same thing happens in calculating the expectation value of any dynamical variable; Equation 1.36 reduces to

$$\langle Q(x,p)\rangle = \int\psi^*\,Q\!\left(x,\frac{\hbar}{i}\frac{d}{dx}\right)\psi\,dx. \qquad [2.8]$$

Every expectation value is constant in time; we might as well drop the factor f(t) altogether, and simply use ψ in place of Ψ. (Indeed, it is common to refer to ψ as "the wave function", but this is sloppy language that can be dangerous, and it is important to remember that the true wave function always carries that exponential time-dependent factor.) In particular, ⟨x⟩ is constant, and hence (Equation 1.33) ⟨p⟩ = 0. Nothing ever happens in a stationary state.

2. They are states of definite total energy. In classical mechanics, the total energy (kinetic plus potential) is called the Hamiltonian:

$$H(x,p) = \frac{p^2}{2m} + V(x). \qquad [2.9]$$

The corresponding Hamiltonian operator, obtained by the canonical substitution p → (ħ/i)(∂/∂x), is therefore⁴

$$\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x). \qquad [2.10]$$

Thus the time-independent Schrödinger equation (Equation 2.4) can be written

$$\hat{H}\psi = E\psi, \qquad [2.11]$$

and the expectation value of the total energy is

$$\langle H\rangle = \int\psi^*\hat{H}\psi\,dx = E\int|\psi|^2\,dx = E. \qquad [2.12]$$

(Note that the normalization of Ψ entails the normalization of ψ.) Moreover,

$$\hat{H}^2\psi = \hat{H}(\hat{H}\psi) = \hat{H}(E\psi) = E(\hat{H}\psi) = E^2\psi,$$

and hence

$$\langle H^2\rangle = \int\psi^*\hat{H}^2\psi\,dx = E^2\int|\psi|^2\,dx = E^2.$$

So the standard deviation in H is given by

$$\sigma_H^2 = \langle H^2\rangle - \langle H\rangle^2 = E^2 - E^2 = 0. \qquad [2.13]$$

But remember, if σ = 0, then every member of the sample must share the same value (the distribution has zero spread). Conclusion: A separable solution has the property that every measurement of the total energy is certain to return the value E. (That's why I chose that letter for the separation constant.)

3. The general solution is a linear combination of separable solutions. As we're about to discover, the time-independent Schrödinger equation (Equation 2.4) yields an infinite collection of solutions (ψ₁(x), ψ₂(x), ψ₃(x), ...), each with its associated value of the separation constant (E₁, E₂, E₃, ...); thus there is a different wave function for each allowed energy:

$$\Psi_1(x,t) = \psi_1(x)e^{-iE_1t/\hbar},\quad \Psi_2(x,t) = \psi_2(x)e^{-iE_2t/\hbar},\ \ldots$$

Now (as you can easily check for yourself) the (time-dependent) Schrödinger equation (Equation 1.1) has the property that any linear combination⁵ of solutions is itself a solution. Once we have found the separable solutions, then, we can immediately construct a much more general solution, of the form

$$\Psi(x,t) = \sum_{n=1}^{\infty}c_n\psi_n(x)e^{-iE_nt/\hbar}. \qquad [2.14]$$

It so happens that every solution to the (time-dependent) Schrödinger equation can be written in this form; it is simply a matter of finding the right constants (c₁, c₂, ...) so as to fit the initial conditions for the problem at hand. You'll see in the following sections how all this works out in practice, and in Chapter 3 we'll put it into more elegant language, but the main point is this: Once you've solved the time-independent ...

³For normalizable solutions, E must be real (see Problem 2.1a).

⁴Whenever confusion might arise, I'll put a "hat" (^) on the operator to distinguish it from the dynamical variable it represents.

⁵A linear combination of the functions f₁(z), f₂(z), ... is an expression of the form f(z) = c₁f₁(z) + c₂f₂(z) + ..., where c₁, c₂, ... are any (complex) constants.
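Points 1 and 3 can be checked symbolically: a single stationary state has a static |Ψ|², while a two-term combination (Equation 2.14 with n = 1, 2) has a density that moves in time. A sketch with generic real coefficients and placeholder values psi1, psi2 standing for the ψ's at some fixed x (all names here are my own, hypothetical choices):

```python
import sympy as sp

t = sp.symbols('t', real=True)
E1, E2, hbar = sp.symbols('E1 E2 hbar', positive=True)
c1, c2, p1, p2 = sp.symbols('c1 c2 psi1 psi2', real=True)

Psi1 = p1 * sp.exp(-sp.I * E1 * t / hbar)            # a single stationary state
d1 = sp.simplify(Psi1 * sp.conjugate(Psi1))
print(d1, sp.diff(d1, t))                            # psi1**2, 0: static density (Equation 2.7)

Psi = c1 * Psi1 + c2 * p2 * sp.exp(-sp.I * E2 * t / hbar)   # Equation 2.14, two terms
d = sp.simplify(sp.expand(Psi * sp.conjugate(Psi)))
print(sp.simplify(sp.diff(d, t)))                    # nonzero: the superposition's density oscillates
```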
2.2 THE INFINITE SQUARE WELL

... But k = 0 is no good [again, that would imply ψ(x) = 0], and the negative solutions give nothing new, since sin(−θ) = −sin(θ) and we can absorb the minus sign into A. So the distinct solutions are

$$k_n = \frac{n\pi}{a}, \quad \text{with } n = 1, 2, 3, \ldots. \qquad [2.22]$$

Curiously, the boundary condition at x = a does not determine the constant A, but rather the constant k, and hence the possible values of E:

$$E_n = \frac{\hbar^2k_n^2}{2m} = \frac{n^2\pi^2\hbar^2}{2ma^2}. \qquad [2.23]$$

In sharp contrast to the classical case, a quantum particle in the infinite square well cannot have just any old energy, only these special allowed values. Well, how do we fix the constant A? Answer: We normalize ψ:

$$\int_0^a|A|^2\sin^2(kx)\,dx = |A|^2\frac{a}{2} = 1, \quad \text{so} \quad |A|^2 = \frac{2}{a}.$$

This only determines the magnitude of A, but it is simplest to pick the positive real root: A = √(2/a) (the phase of A carries no physical significance anyway). Inside the well, then, the solutions are

$$\psi_n(x) = \sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right). \qquad [2.24]$$

As promised, the time-independent Schrödinger equation has delivered an infinite set of solutions, one for each integer n. The first few of these are plotted in Figure 2.2; they look just like the standing waves on a string of length a. ψ₁, which carries the lowest energy, is called the ground state; the others, whose energies increase in proportion to n², are called excited states. As a group, the functions ψn(x) have some interesting and important properties:

1. They are alternately even and odd, with respect to the center of the well. (ψ₁ is even, ψ₂ is odd, ψ₃ is even, and so on.⁶)

2. As you go up in energy, each successive state has one more node (zero crossing). ψ₁ has none (the end points don't count), ψ₂ has one, ψ₃ has two, and so on.

⁶To make this symmetry more apparent, some authors center the well at the origin (so that it runs from −a/2 to +a/2). The even functions are then cosines, and the odd ones are sines. See Problem 2.4.

Figure 2.2: The first three stationary states of the infinite square well (Equation 2.24).

3. They are mutually orthogonal, in the sense that

$$\int\psi_m(x)^*\psi_n(x)\,dx = 0, \qquad [2.25]$$

whenever m ≠ n. Proof:

$$\int\psi_m(x)^*\psi_n(x)\,dx = \frac{2}{a}\int_0^a\sin\left(\frac{m\pi}{a}x\right)\sin\left(\frac{n\pi}{a}x\right)dx$$
$$= \frac{1}{a}\int_0^a\left[\cos\left(\frac{m-n}{a}\pi x\right) - \cos\left(\frac{m+n}{a}\pi x\right)\right]dx$$
$$= \left\{\frac{1}{(m-n)\pi}\sin\left(\frac{m-n}{a}\pi x\right) - \frac{1}{(m+n)\pi}\sin\left(\frac{m+n}{a}\pi x\right)\right\}\Bigg|_0^a$$
$$= \frac{1}{\pi}\left\{\frac{\sin[(m-n)\pi]}{m-n} - \frac{\sin[(m+n)\pi]}{m+n}\right\} = 0.$$

Note that this argument does not work if m = n (can you spot the point at which it fails?); in that case normalization tells us that the integral is 1. In fact, we can combine orthogonality and normalization into a single statement⁷:

$$\int\psi_m(x)^*\psi_n(x)\,dx = \delta_{mn}, \qquad [2.26]$$

where δmn (the so-called Kronecker delta) is defined in the usual way,

$$\delta_{mn} = \begin{cases}0, & \text{if } m \ne n;\\ 1, & \text{if } m = n.\end{cases} \qquad [2.27]$$

We say that the ψ's are orthonormal.

⁷In this case the ψ's are real, so the * on ψm is unnecessary, but for future purposes it's a good idea to get in the habit of putting it there.

4. They are complete, in the sense that any other function, f(x), can be expressed as a linear combination of them:

$$f(x) = \sum_{n=1}^{\infty}c_n\psi_n(x) = \sqrt{\frac{2}{a}}\sum_{n=1}^{\infty}c_n\sin\left(\frac{n\pi}{a}x\right). \qquad [2.28]$$

I'm not about to prove the completeness of the functions √(2/a) sin(nπx/a), but if you've studied advanced calculus you will recognize that Equation 2.28 is nothing but the Fourier series for f(x), and the fact that "any" function can be expanded in this way is sometimes called Dirichlet's theorem.⁸
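Properties 3 and 4 are easy to test numerically. A sketch, assuming a = 1, a simple Riemann-sum integral, and a hypothetical normalized initial profile f(x) ∝ x(a − x) of my own choosing; the coefficients cₙ are computed by the "Fourier's trick" formalized in the next paragraph:

```python
import numpy as np

a = 1.0
x = np.linspace(0, a, 10001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)   # Equation 2.24

for m in (1, 2, 3):                      # orthonormality: integrals ~ delta_mn (Equation 2.26)
    print([round((psi(m) * psi(n) * dx).sum(), 6) for n in (1, 2, 3)])

f = np.sqrt(30 / a ** 5) * x * (a - x)   # a sample normalized test function (not the book's)
c = [(psi(n) * f * dx).sum() for n in range(1, 8)]
print(sum(cn ** 2 for cn in c))          # ~1.0000, consistent with completeness (Equation 2.28)
```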
The expansion coefficients (cn) can be evaluated, for a given f(x), by a method I call Fourier's trick, which beautifully exploits the orthonormality of {ψn}: Multiply both sides of Equation 2.28 by ψm(x)*, and integrate.

$$\int\psi_m(x)^*f(x)\,dx = \sum_{n=1}^{\infty}c_n\int\psi_m(x)^*\psi_n(x)\,dx = \sum_{n=1}^{\infty}c_n\delta_{mn} = c_m. \qquad [2.29]$$

(Notice how the Kronecker delta kills every term in the sum except the one for which n = m.) Thus the mth coefficient in the expansion of f(x) is given by

$$c_m = \int\psi_m(x)^*f(x)\,dx. \qquad [2.30]$$

These four properties are extremely powerful, and they are not peculiar to the infinite square well. The first is true whenever the potential itself is an even function; the second is universal, regardless of the shape of the potential.⁹ Orthogonality is also quite general; I'll show you the proof in Chapter 3. Completeness holds for all the potentials you are likely to encounter, but the proofs tend to be nasty and laborious; I'm afraid most physicists simply assume completeness and hope for the best.

The stationary states (Equation 2.6) for the infinite square well are evidently

$$\Psi_n(x,t) = \sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)e^{-i(n^2\pi^2\hbar/2ma^2)t}. \qquad [2.31]$$

I claimed (Equation 2.14) that the most general solution to the (time-dependent) Schrödinger equation is a linear combination of stationary states:

$$\Psi(x,t) = \sum_{n=1}^{\infty}c_n\sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)e^{-i(n^2\pi^2\hbar/2ma^2)t}. \qquad [2.32]$$

If you doubt that this is a solution, by all means check it! It remains only for me to demonstrate that I can fit any prescribed initial wave function, Ψ(x, 0), by appropriate choice of the coefficients cn. According to Equation 2.32,

$$\Psi(x,0) = \sum_{n=1}^{\infty}c_n\psi_n(x).$$

⁸See, for example, Mary Boas, Mathematical Methods in the Physical Sciences, 2nd ed. (New York: John Wiley & Sons, 1983), p. 313; f(x) can even have a finite number of finite discontinuities.

⁹See, for example, John L. Powell and Bernd Crasemann, Quantum Mechanics (Reading, MA: Addison-Wesley, 1961), p. 126.

Incidentally, it follows that ⟨H⟩ is constant in time, which is one manifestation of conservation of energy in quantum mechanics.

2.3 THE HARMONIC OSCILLATOR

The paradigm for a classical harmonic oscillator is a mass m attached to a spring of force constant k. The motion is governed by Hooke's law,

$$F = -kx = m\frac{d^2x}{dt^2}$$

(as always, we ignore friction), and the solution is

$$x(t) = A\sin(\omega t) + B\cos(\omega t),$$

where

$$\omega \equiv \sqrt{\frac{k}{m}} \qquad [2.36]$$

is the (angular) frequency of oscillation. The potential energy is

$$V(x) = \frac{1}{2}kx^2; \qquad [2.37]$$

its graph is a parabola.

Of course, there's no such thing as a perfect simple harmonic oscillator; if you stretch it too far the spring is going to break, and typically Hooke's law fails long before that point is reached. But practically any potential is approximately parabolic, in the neighborhood of a local minimum (Figure 2.3). Formally, if we expand V(x) in a Taylor series about the minimum:

$$V(x) = V(x_0) + V'(x_0)(x - x_0) + \frac{1}{2}V''(x_0)(x - x_0)^2 + \cdots,$$

subtract V(x₀) [you can add a constant to V(x) with impunity, since that doesn't change the force], recognize that V′(x₀) = 0 (since x₀ is a minimum), and drop the higher-order terms [which are negligible as long as (x − x₀) stays small], the potential becomes

$$V(x) \cong \frac{1}{2}V''(x_0)(x - x_0)^2, \qquad [2.38]$$

which describes simple harmonic oscillation (about the point x₀), with an effective spring constant k = V″(x₀).¹⁰ That's why the simple harmonic oscillator is so important: Virtually any oscillatory motion is approximately simple harmonic, as long as the amplitude is small.

¹⁰Note that V″(x₀) ≥ 0, since by assumption x₀ is a minimum. Only in the rare case V″(x₀) = 0 is the oscillation not even approximately simple harmonic.
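The claim that practically any potential is parabolic near a local minimum is easy to illustrate symbolically. A sketch using a Morse-type potential, my choice of example with made-up parameters D and α (not a potential treated in the text); SymPy recovers the effective spring constant k = V″(x₀) and the quadratic leading term of Equation 2.38:

```python
import sympy as sp

x = sp.symbols('x', real=True)
D, alpha = 1.0, 2.0                      # hypothetical parameters for the illustration
V = D * (1 - sp.exp(-alpha * x)) ** 2    # Morse-type potential with a minimum at x0 = 0

k_eff = sp.diff(V, x, 2).subs(x, 0)      # effective spring constant V''(x0)
print(k_eff)                             # 2*D*alpha**2 = 8.0
print(sp.series(V, x, 0, 3))             # 4.0*x**2 + O(x**3), i.e. (1/2) k_eff x^2
```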
Figure 2.3: Parabolic approximation (dashed curve) to an arbitrary potential, in the neighborhood of a local minimum.

The quantum problem is to solve the Schrödinger equation for the potential

$$V(x) = \frac{1}{2}m\omega^2x^2$$

(it is customary to eliminate the spring constant in favor of the classical frequency, using Equation 2.36). As we have seen, it suffices to solve the time-independent Schrödinger equation:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}m\omega^2x^2\psi = E\psi. \qquad [2.39]$$

In the literature you will find two entirely different approaches to this problem. The first is a straightforward "brute force" solution to the differential equation, using the method of power series expansion; it has the virtue that the same strategy can be applied to many other potentials (in fact, we'll use it in Chapter 4 to treat the Coulomb potential). The second is a diabolically clever algebraic technique, using so-called ladder operators. I'll show you the algebraic method first, because it is quicker and simpler (and more fun); if you want to skip the analytic method for now, that's fine, but you should certainly plan to study it at some stage.

2.3.1 Algebraic Method

To begin with, let's rewrite Equation 2.39 in a more suggestive form:

$$\frac{1}{2m}\left[\left(\frac{\hbar}{i}\frac{d}{dx}\right)^2 + (m\omega x)^2\right]\psi = E\psi. \qquad [2.40]$$

The idea is to factor the term in square brackets. If these were numbers, it would be easy:

$$u^2 + v^2 = (u - iv)(u + iv).$$

Here, however, it's not quite so simple, because u and v are operators, and operators do not, in general, commute (uv is not the same as vu). Still, this does invite us to take a look at the expressions

$$a_\pm \equiv \frac{1}{\sqrt{2m}}\left(\frac{\hbar}{i}\frac{d}{dx} \pm im\omega x\right). \qquad [2.41]$$

What is their product, a₋a₊? Warning: Operators can be slippery to work with in the abstract, and you are bound to make mistakes unless you give them a "test function", f(x), to act on. At the end you can throw away the test function, and you'll be left with an equation involving the operators alone. In the present case, we have

$$(a_-a_+)f(x) = \frac{1}{2m}\left(\frac{\hbar}{i}\frac{d}{dx} - im\omega x\right)\left(\frac{\hbar}{i}\frac{df}{dx} + im\omega xf\right)$$
$$= \frac{1}{2m}\left[-\hbar^2\frac{d^2f}{dx^2} + \hbar m\omega\frac{d}{dx}(xf) - \hbar m\omega x\frac{df}{dx} + (m\omega x)^2f\right]$$
$$= \frac{1}{2m}\left[\left(\frac{\hbar}{i}\frac{d}{dx}\right)^2 + (m\omega x)^2 + \hbar m\omega\right]f(x). \qquad [2.42]$$

[I used d(xf)/dx = x(df/dx) + f in the last step.] Discarding the test function, we conclude that

$$a_-a_+ = \frac{1}{2m}\left[\left(\frac{\hbar}{i}\frac{d}{dx}\right)^2 + (m\omega x)^2\right] + \frac{1}{2}\hbar\omega.$$

Evidently Equation 2.40 does not factor perfectly; there's an extra term (1/2)ħω. However, if we pull this over to the other side, the Schrödinger equation¹¹ becomes

$$\left(a_-a_+ - \frac{1}{2}\hbar\omega\right)\psi = E\psi. \qquad [2.43]$$

Notice that the ordering of the factors a₊ and a₋ is important here; the same argument, with a₊ on the left, yields

$$a_+a_- = \frac{1}{2m}\left[\left(\frac{\hbar}{i}\frac{d}{dx}\right)^2 + (m\omega x)^2\right] - \frac{1}{2}\hbar\omega. \qquad [2.44]$$

Thus

$$a_-a_+ - a_+a_- = \hbar\omega. \qquad [2.45]$$

¹¹I'm getting tired of writing "time-independent Schrödinger equation", so when it's clear from the context which one I mean, I'll just call it the Schrödinger equation.

(This method does not immediately determine the normalization factor An; I'll let you work that out for yourself in Problem 2.12.) For example,

$$\psi_1(x) = A_1a_+e^{-\frac{m\omega}{2\hbar}x^2} = iA_1\sqrt{2m}\,\omega\,x\,e^{-\frac{m\omega}{2\hbar}x^2}. \qquad [2.51]$$

I wouldn't want to calculate ψ₅₀ in this way, but never mind: We have found all the allowed energies, and in principle we have determined the stationary states; the rest is just computation.
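Heeding the warning about test functions, Equations 2.42-2.45 can be verified symbolically: apply a₋a₊ − a₊a₋ to an arbitrary f(x) and confirm that what is left over is ħω f(x). A sketch in SymPy (the helper a(sign, g) is simply my encoding of Equation 2.41):

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, w, hbar = sp.symbols('m omega hbar', positive=True)
f = sp.Function('f')(x)

def a(sign, g):
    # a_(+/-) g = (1/sqrt(2m)) [ (hbar/i) dg/dx  (+/-)  i m omega x g ], Equation 2.41
    return ((hbar / sp.I) * sp.diff(g, x) + sign * sp.I * m * w * x * g) / sp.sqrt(2 * m)

comm = sp.simplify(a(-1, a(+1, f)) - a(+1, a(-1, f)))
print(comm)    # hbar*omega*f(x): the operators fail to commute by exactly hbar*omega
```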
Problem 2.11 Show that the lowering operator cannot generate a state of infinite norm (i.e., ∫|a₋ψ|²dx < ∞, if ψ itself is a normalized solution to the Schrödinger equation). What does this tell you in the case ψ = ψ₀? Hint: Use integration by parts to show that

$$\int_{-\infty}^{+\infty}|a_-\psi|^2\,dx = \int_{-\infty}^{+\infty}\psi^*(a_+a_-\psi)\,dx.$$

Then invoke the Schrödinger equation (Equation 2.46) to obtain

$$\int_{-\infty}^{+\infty}|a_-\psi|^2\,dx = E - \frac{1}{2}\hbar\omega,$$

where E is the energy of the state ψ.

**Problem 2.12
(a) The raising and lowering operators generate new solutions to the Schrödinger equation, but these new solutions are not correctly normalized. Thus a₊ψn is proportional to ψn+1, and a₋ψn is proportional to ψn−1, but we'd like to know the precise proportionality constants. Use integration by parts and the Schrödinger equation (Equations 2.43 and 2.46) to show that

$$\int_{-\infty}^{+\infty}|a_+\psi_n|^2\,dx = (n+1)\hbar\omega, \qquad \int_{-\infty}^{+\infty}|a_-\psi_n|^2\,dx = n\hbar\omega,$$

and hence (with i's to keep the wave functions real)

$$a_+\psi_n = i\sqrt{(n+1)\hbar\omega}\,\psi_{n+1}, \qquad [2.52]$$
$$a_-\psi_n = -i\sqrt{n\hbar\omega}\,\psi_{n-1}. \qquad [2.53]$$

(b) Use Equation 2.52 to determine the normalization constant An in Equation 2.50. (You'll have to normalize ψ₀ "by hand".) Answer:

$$A_n = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{(-i)^n}{\sqrt{n!(\hbar\omega)^n}}. \qquad [2.54]$$

*Problem 2.13 Using the methods and results of this section:
(a) Normalize ψ₁ (Equation 2.51) by direct integration. Check your answer against the general formula (Equation 2.54).
(b) Find ψ₂, but don't bother to normalize it.
(c) Sketch ψ₀, ψ₁, and ψ₂.
(d) Check the orthogonality of ψ₀, ψ₁, and ψ₂. Note: If you exploit the evenness and oddness of the functions, there is really only one integral left to evaluate explicitly.

*Problem 2.14 Using the results of Problems 2.12 and 2.13:
(a) Compute ⟨x⟩, ⟨p⟩, ⟨x²⟩, and ⟨p²⟩ for the states ψ₀ and ψ₁. Note: In this and most problems involving the harmonic oscillator, it simplifies the notation if you introduce the variable ξ ≡ √(mω/ħ) x and the constant α ≡ (mω/πħ)^{1/4}.
(b) Check the uncertainty principle for these states.
(c) Compute ⟨T⟩ and ⟨V⟩ for these states (no new integration allowed!). Is their sum what you would expect?

2.3.2 Analytic Method

We return now to the Schrödinger equation for the harmonic oscillator (Equation 2.39):

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}m\omega^2x^2\psi = E\psi.$$

Things look a little cleaner if we introduce the dimensionless variable

$$\xi \equiv \sqrt{\frac{m\omega}{\hbar}}\,x; \qquad [2.55]$$

in terms of ξ, the Schrödinger equation reads

$$\frac{d^2\psi}{d\xi^2} = (\xi^2 - K)\psi, \qquad [2.56]$$

where K is the energy, in units of (1/2)ħω:

$$K \equiv \frac{2E}{\hbar\omega}. \qquad [2.57]$$

Our problem is to solve Equation 2.56, and in the process obtain the "allowed" values of K (and hence of E). To begin with, note that at very large ξ (which is to say, at very large x), ξ² completely dominates over the constant K, so in this regime

$$\frac{d^2\psi}{d\xi^2} \approx \xi^2\psi, \qquad [2.58]$$

which has the approximate solution (check it!)

$$\psi(\xi) \approx Ae^{-\xi^2/2} + Be^{+\xi^2/2}. \qquad [2.59]$$

The B term is clearly not normalizable (it blows up as |x| → ∞); the physically acceptable solutions, then, have the asymptotic form

$$\psi(\xi) \to (\ )e^{-\xi^2/2}, \quad \text{at large } \xi. \qquad [2.60]$$

This suggests that we "peel off" the exponential part,

$$\psi(\xi) = h(\xi)e^{-\xi^2/2}, \qquad [2.61]$$

in hopes that what remains [h(ξ)] has a simpler functional form than ψ(ξ) itself.¹⁴ Differentiating Equation 2.61, we have

$$\frac{d\psi}{d\xi} = \left(\frac{dh}{d\xi} - \xi h\right)e^{-\xi^2/2}$$

and

$$\frac{d^2\psi}{d\xi^2} = \left(\frac{d^2h}{d\xi^2} - 2\xi\frac{dh}{d\xi} + (\xi^2 - 1)h\right)e^{-\xi^2/2},$$

so the Schrödinger equation (Equation 2.56) becomes

$$\frac{d^2h}{d\xi^2} - 2\xi\frac{dh}{d\xi} + (K - 1)h = 0. \qquad [2.62]$$

I propose to look for a solution to Equation 2.62 in the form of a power series in ξ¹⁵:
$$h(\xi) = a_0 + a_1\xi + a_2\xi^2 + \cdots = \sum_{j=0}^{\infty}a_j\xi^j. \qquad [2.63]$$

¹⁴Note that although we invoked some approximations to motivate Equation 2.61, what follows is exact. The device of stripping off the asymptotic behavior is the standard first step in the power series method for solving differential equations; see, for example, Boas (cited in footnote 8), Chapter 12.

¹⁵According to Taylor's theorem, any reasonably well-behaved function can be expressed as a power series, so Equation 2.63 involves no real loss of generality. For conditions on the applicability of the series method, see Boas (cited in footnote 8) or George Arfken, Mathematical Methods for Physicists, 3rd ed. (Orlando, FL: Academic Press, 1985), Section 8.5.

... and hence

$$\psi_1(\xi) = a_1\xi e^{-\xi^2/2}$$

(confirming Equation 2.51). For n = 2, j = 0 yields a₂ = −2a₀, and j = 2 gives a₄ = 0, so

$$h_2(\xi) = a_0(1 - 2\xi^2)$$

and

$$\psi_2(\xi) = a_0(1 - 2\xi^2)e^{-\xi^2/2},$$

and so on. (Compare Problem 2.13, where the same result was obtained by algebraic means.)

In general, hn(ξ) will be a polynomial of degree n in ξ, involving even powers only, if n is an even integer, and odd powers only, if n is an odd integer. Apart from the overall factor (a₀ or a₁) they are the so-called Hermite polynomials, Hn(ξ).¹⁹ The first few of them are listed in Table 2.1. By tradition, the arbitrary multiplicative factor is chosen so that the coefficient of the highest power of ξ is 2ⁿ. With this convention, the normalized²⁰ stationary states for the harmonic oscillator are

$$\psi_n(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^nn!}}H_n(\xi)e^{-\xi^2/2}. \qquad [2.69]$$

They are identical (of course) to the ones we obtained algebraically in Equation 2.50. In Figure 2.5a I have plotted ψn(x) for the first few n's.

Table 2.1: The first few Hermite polynomials, Hn(x).

H₀ = 1
H₁ = 2x
H₂ = 4x² − 2
H₃ = 8x³ − 12x
H₄ = 16x⁴ − 48x² + 12
H₅ = 32x⁵ − 160x³ + 120x

The quantum oscillator is strikingly different from its classical counterpart; not only are the energies quantized, but the position distributions have some bizarre features. For instance, the probability of finding the particle outside the classically allowed range (that is, with x greater than the classical amplitude for the energy in question) is not zero (see Problem 2.15), and in all odd states the probability of finding the particle at the center of the potential well is zero.

¹⁹The Hermite polynomials have been studied extensively in the mathematical literature, and there are many tools and tricks for working with them. A few of these are explored in Problem 2.18.

²⁰I shall not work out the normalization constant here; if you are interested in knowing how it is done, see, for example, Leonard Schiff, Quantum Mechanics, 3rd ed. (New York: McGraw-Hill, 1968), Section 13.
In Figure 2.5b I have superimposed the classical position distribution on the quantum one (for n = 100); if you smoothed out the bumps in the latter, the two would fit pretty well (however, in the classical case we are talking about the distribution of positions over time for one oscillator, whereas in the quantum case we are talking about the distribution over an ensemble of identically-prepared systems)." Problem 2.15 In the ground state of the harmonic oscillator, what is the probability (correct to three significant digits) offinding the particle outside the classically allowed region? Hint: Look in a math table under "Normal Distribution" or "Error Function", Problem 2.16 Use the recursion formula (Equation 2.68) to work out H5(~) and H6(~). -Problem 2.17 A particle in the harmonic oscillator potential has the initial wave function \}l(x,O) = A[Vro(x) + Vrl(X)] for some constant A. (a) Normalize \}l (x, 0). (b) Find \}lex, t) and l\}l(x, t)12• (c) Find the expectation value of x as a function of time. Notice that it oscillates sinusoidally. What is the amplitude of the oscillation? What is its (angular) frequency? (d) Use your result in (c) to determine (p). Check that Ehrenfest's theorem holds for this wave function. (e) Referring to Figure 2.5, sketch the graph of I\}l I at t = 0, n / to, 2rr/ to, 3rr/ to, and 4rr/ co. (Your graphs don't have to be fancy-just a rough picture to show the oscillation.) **Problem 2.18 In this problem we explore some of the more useful theorems (stated without proof) involving Hermite polynomials. (a) The Rodrigues formula states that [2.70] Use it to derive H3 and H4 . 21The analogy is perhaps more telling if you interpret the classical distribution as an ensemble of oscillators all with the same energy, but with random starting times. 46 Chap. 2 The Time-Independent Schrodinqer Equation combination of separable solutions (only this time it's an integral over the continuous variable k, instead of a sum over the discrete index n): I /+00 2'lJ(x, z) = -- ¢(k)ei(kx-~~t) dk. ..j2ii -00 [2.83] [The quantity 1j..j2ii is factored out for convenience; what plays the role of the coefficient c; in Equation 2.14 is the combination (lj..j2ii)¢(k) dk.] Now this wave function can be normalized [for appropriate ¢ (k)]. But it necessarily carries a range of k's, and hence a range of energies and speeds. We call it a wave packet. In the generic quantum problem, we are given 'lJ(x, 0), and we are to find 'lJ(x, t). For a free particle the solution has the form of Equation 2.83; the only remaining question is how to determine ¢(k) so as to fit the initial wave function: 1 /+00'lJ(x,O)= r-c ¢(k)eikxdk. v 2rr -00 [2.84] This is a classic problem in Fourier analysis; the answer is provided by Plancherel's theorem (see Problem 2.20): 1 /+00 1 /+00I(x) = r-c F(k)e ikx dk {:::::::} F(k) = r-c l(x)e-ikXdx. v 2rr -00 v2rr -00 [2.85] F (k) is called the Fourier transform of I(x); I(x) is the inverse Fourier transform of F(k) (the only difference is in the sign of the exponent). There is, of course, some restriction on the allowable functions: The integrals have to exist," For our purposes this is guaranteed by the physical requirement that 'lJ(x, 0) itself be normalized. So the solution to the generic quantum problem, for the free particle, is Equation 2.83, with I /+00¢(k)= r-c 'lJ(x,O)e-ikxdx. 
v 2rr -00 [2.86] I'd love to work out an example for you-starting with a specific function 'lJ(x, 0) for which we could actually calculate ¢(k), and then doing the integral in Equation 2.83 to obtain 'lJ(x, t) in closed form. Unfortunately, manageable cases are hard to 22The necessary and sufficient condition on f(x) is that J::oo If(x) r2dx be finite. (In that case J~ IF(k)12dk is also finite, and in fact the two integrals are equal.) See Arfken (footnote 15), Sec- tion 15.5. Sec. 2.4: The Free Particle 47 come by, and I want to save the best example for you to work out yourself. Be sure, therefore, to study Problem 2.22 with particular care. I return now to the paradox noted earlier-the fact that the separable solution \Ilk (X, t) travels at the "wrong" speed forthe particle it ostensibly represents. Strictly speaking, the problem evaporated when we discovered that \Ilk is not a physically achievable state. Nevertheless, it is of interest to discover how information about the particle velocity is carried by the wave function (Equation 2.83). The essential idea is this: A wave packet is a sinusoidal function whose amplitude is modulated by ¢ (Figure 2.6); it consists of "ripples" contained within an "envelope." What corresponds to the particle velocity is not the speed of the individual ripples (the so- called phase velocity), but rather the speed of the envelope (the group velocity)- which, depending on the nature of the waves, can be greater than, less than, or equal to the velocity of the ripples that go to make it up. For waves on a string, the group velocity is the same as the phase velocity. For water waves it is one half the phase velocity, as you may have noticed when you toss a rock into a pond: If you concentrate on a particular ripple, you will see it build up from the rear, move forward through the group, and fade away at the front, while the group as a whole propagates out at half the speed. What I need to show is that for the wave function of a free particle in quantum mechanics the group velocity is twice the phase velocity-just right to represent the classical particle speed. The problem, then, is to determine the group velocity of a wave packet with the general form I 1+00\II (x , t) = -- ¢(k)ei(kx-wt) dk. J2ii -00 [In our case ca = (1ik2/ 2m), but what I have to say now applies to any kind of wave packet, regardless of its dispersion relation-the formula for w as a function of k.] Let us assume that ¢(k) is narrowly peaked about some particular value ko. [There is nothing illegal about a broad spread in k, but such wave packets change shape rapidly (since different components travel at different speeds), so the whole notion of a "group," with a well-defined velocity, loses its meaning.] Since the integrand x Figure 2.6: A wave packet. The "envelope" travels at the group velocity; the "ripples" travel at the phase velocity. 48 Chap. 2 The Time-Independent Schrodinqer Equation is negligible except in the vicinity of ko, we may as well Taylor-expand the function w(k) about that point and keep only the leading terms: w(k) r-v Wo + w~(k - ko), where wb is the derivative of os with respect to k, at the point ko. Changing variables from k to s =k - ko, to center the integral at ko, we have 1 1+00\II(x, t) r-v r::L ¢(ko+ s)ei[(ko+s)x-(Wo+w~s)tl ds. v 2rr -00 At t = 0, I 1+00\II (x, 0) = ~- ¢(ko+ s)ei(ko+s)x ds,..;zrr -00 and at later times Except for the shift from x to (x - wbt), the integral is the same as the one in \II (x ,0). 
Thus

$$\Psi(x,t) \cong e^{-i(\omega_0-k_0\omega_0')t}\,\Psi(x-\omega_0' t,\,0). \qquad [2.87]$$

Apart from the phase factor in front (which won't affect |Ψ|² in any event), the wave packet evidently moves along at a speed

$$v_{\text{group}} = \frac{d\omega}{dk} \qquad [2.88]$$

(evaluated at k = k₀), which is to be contrasted with the ordinary phase velocity

$$v_{\text{phase}} = \frac{\omega}{k}. \qquad [2.89]$$

In our case, ω = (ℏk²/2m), so ω/k = (ℏk/2m), whereas dω/dk = (ℏk/m), which is twice as great. This confirms that it is the group velocity of the wave packet, not the phase velocity of the stationary states, that matches the classical particle velocity:

$$v_{\text{classical}} = v_{\text{group}} = 2v_{\text{phase}}. \qquad [2.90]$$

Problem 2.19 Show that the expressions [Ae^{ikx} + Be^{−ikx}], [C cos kx + D sin kx], [F cos(kx + α)], and [G sin(kx + β)] are equivalent ways of writing the same function of x, and determine the constants C, D, F, G, α, and β in terms of A and B. (In quantum mechanics, with V = 0, the exponentials give rise to traveling waves, and are most convenient in discussing the free particle, whereas sines and cosines correspond to standing waves, which arise naturally in discussing the infinite square well.)

2.5 THE DELTA-FUNCTION POTENTIAL

For the infinite square well and the harmonic oscillator the general solution is a sum (over n), whereas for the free particle it is an integral (over k). What is the physical significance of this distinction?

In classical mechanics a one-dimensional time-independent potential can give rise to two rather different kinds of motion. If V(x) rises higher than the particle's total energy (E) on either side (Figure 2.7a), then the particle is "stuck" in the potential well; it rocks back and forth between the turning points, but it cannot escape (unless, of course, you provide it with a source of extra energy, such as a motor, but we're not talking about that). We call this a bound state. If, on the other hand, E exceeds V(x) on one side (or both), then the particle comes in from "infinity," slows down or speeds up under the influence of the potential, and returns to infinity (Figure 2.7b). (It can't get trapped in the potential unless there is some mechanism, such as friction, to dissipate energy, but again, we're not talking about that.) We call this a scattering state. Some potentials admit only bound states (for instance, the harmonic oscillator); some allow only scattering states (a potential hill with no dips in it, for example); some permit both kinds, depending on the energy of the particle.

Figure 2.7: (a) A bound state. (b) Scattering states. (c) A classical bound state, but a quantum scattering state.

As you have probably guessed, the two kinds of solutions to the Schrödinger equation correspond precisely to bound and scattering states. The distinction is even cleaner in the quantum domain, because the phenomenon of tunneling (which we'll come to shortly) allows the particle to "leak" through any finite potential barrier, so the only thing that matters is the potential at infinity (Figure 2.7c):

$$\begin{cases} E < V(-\infty) \text{ and } E < V(+\infty) & \Rightarrow \text{bound state}, \\ E > V(-\infty) \text{ or } E > V(+\infty) & \Rightarrow \text{scattering state}. \end{cases} \qquad [2.91]$$

In "real life" most potentials go to zero at infinity, in which case the criterion simplifies even further:

$$\begin{cases} E < 0 & \Rightarrow \text{bound state}, \\ E > 0 & \Rightarrow \text{scattering state}. \end{cases} \qquad [2.92]$$

Because the infinite square well and harmonic oscillator potentials go to infinity as x → ±∞, they admit bound states only; because the free particle potential is zero everywhere, it allows only scattering states.²³ In this section (and the following one) we shall explore potentials that give rise to both kinds of states.

²³If you are very observant, and awfully fastidious, you may have noticed that the general theorem requiring E > V_min (Problem 2.2) does not really apply to scattering states, since they are not normalizable anyway. If this bothers you, try solving the Schrödinger equation with E < 0 for the free particle, and note that even linear combinations of these solutions cannot be normalized. The positive-energy solutions by themselves constitute a complete set.
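Before turning to specific potentials, it is worth seeing Equation 2.90 emerge numerically. The following sketch is my own illustration, not from the text: it assumes natural units ℏ = m = 1 and a hypothetical Gaussian spectral function φ(k) centered at k₀ = 5 (both choices are arbitrary), builds Ψ(x, t) by discretizing the integral in Equation 2.83, and tracks the peak of |Ψ|².

```python
import numpy as np

# Natural units: hbar = m = 1, so the dispersion relation is omega(k) = k**2 / 2.
hbar = m = 1.0
k0, sigma = 5.0, 0.5          # center and width of phi(k); illustrative values

k = np.linspace(k0 - 6 * sigma, k0 + 6 * sigma, 400)   # k-grid covering the peak
dk = k[1] - k[0]
phi = np.exp(-(k - k0)**2 / (4 * sigma**2))            # Gaussian spectral amplitude

def psi(x, t):
    """Equation 2.83, discretized: (1/sqrt(2 pi)) * sum of phi(k) e^{i(kx - wt)} dk."""
    omega = hbar * k**2 / (2 * m)
    # Broadcast over the x-grid: rows are x-values, columns are k-values.
    integrand = phi * np.exp(1j * (np.outer(x, k) - omega * t))
    return integrand.sum(axis=1) * dk / np.sqrt(2 * np.pi)

x = np.linspace(-5, 25, 2000)
for t in (0.0, 2.0, 4.0):
    peak = x[np.argmax(np.abs(psi(x, t))**2)]
    print(f"t = {t:3.1f}   envelope peak at x = {peak:6.2f}")

print("group velocity (hbar*k0/m)   =", hbar * k0 / m)
print("phase velocity (hbar*k0/2m)  =", hbar * k0 / (2 * m))
```

The peak advances by about 5 units of x per unit time (the group velocity ℏk₀/m), while the individual ripples crawl at half that rate, in line with Equation 2.90.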
The Dirac delta function, δ(x), is defined informally as follows:

$$\delta(x) = \begin{cases} 0, & \text{if } x \neq 0 \\ \infty, & \text{if } x = 0 \end{cases}, \qquad \text{with} \quad \int_{-\infty}^{+\infty}\delta(x)\,dx = 1. \qquad [2.93]$$

It is an infinitely high, infinitesimally narrow spike at the origin, whose area is 1 (Figure 2.8). Technically, it's not a function at all, since it is not finite at x = 0 (mathematicians call it a generalized function, or distribution).²⁴ Nevertheless, it is an extremely useful construct in theoretical physics. (For example, in electrodynamics the charge density of a point charge is a delta function.) Notice that δ(x − a) would be a spike of area 1 at the point a.

²⁴The delta function can be thought of as the limit of a sequence of functions, such as rectangles (or triangles) of ever-increasing height and ever-decreasing width.

Figure 2.8: The Dirac delta function (Equation 2.93).

If you multiply δ(x − a) by an ordinary function f(x), it's the same as multiplying by f(a):

$$f(x)\delta(x-a) = f(a)\delta(x-a), \qquad [2.94]$$

because the product is zero anyway except at the point a. In particular,

$$\int_{-\infty}^{+\infty}f(x)\delta(x-a)\,dx = f(a)\int_{-\infty}^{+\infty}\delta(x-a)\,dx = f(a). \qquad [2.95]$$

That's the most important property of the delta function: under the integral sign it serves to "pick out" the value of f(x) at the point a. (Of course, the integral need not go from −∞ to +∞; all that matters is that the domain of integration include the point a, so a − ε to a + ε would do, for any ε > 0.)

Let's consider a potential of the form

$$V(x) = -\alpha\delta(x), \qquad [2.96]$$

where α is some constant. This is an artificial potential (so was the infinite square well), but it's beautifully simple and in some respects closer to reality than any of the potentials we have considered so far. The Schrödinger equation reads

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - \alpha\delta(x)\psi = E\psi. \qquad [2.97]$$

This potential yields both bound states (E < 0) and scattering states (E > 0); we'll look first at the bound states. In the region x < 0, V(x) = 0, so

$$\frac{d^2\psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = \kappa^2\psi, \qquad [2.98]$$

where

$$\kappa \equiv \frac{\sqrt{-2mE}}{\hbar}. \qquad [2.99]$$

What about scattering states, with E > 0? For x < 0 the Schrödinger equation reads

$$\frac{d^2\psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = -k^2\psi,$$

where

$$k \equiv \frac{\sqrt{2mE}}{\hbar} \qquad [2.112]$$

is real and positive. The general solution is

$$\psi(x) = Ae^{ikx} + Be^{-ikx}, \qquad [2.113]$$

and this time we cannot rule out either term, since neither of them blows up. Similarly, for x > 0,

$$\psi(x) = Fe^{ikx} + Ge^{-ikx}. \qquad [2.114]$$

The continuity of ψ(x) at x = 0 requires that

$$F + G = A + B. \qquad [2.115]$$

The derivatives are

$$\frac{d\psi}{dx} = ik\left(Fe^{ikx} - Ge^{-ikx}\right), \ \text{for } (x>0), \ \text{so} \ \frac{d\psi}{dx}\Big|_{+} = ik(F-G),$$
$$\frac{d\psi}{dx} = ik\left(Ae^{ikx} - Be^{-ikx}\right), \ \text{for } (x<0), \ \text{so} \ \frac{d\psi}{dx}\Big|_{-} = ik(A-B),$$

and hence Δ(dψ/dx) = ik(F − G − A + B). Meanwhile, ψ(0) = (A + B), so the second boundary condition (Equation 2.107) says

$$ik(F - G - A + B) = -\frac{2m\alpha}{\hbar^2}(A + B), \qquad [2.116]$$

or, more compactly,

$$F - G = A(1 + 2i\beta) - B(1 - 2i\beta), \quad \text{where } \beta \equiv \frac{m\alpha}{\hbar^2 k}. \qquad [2.117]$$

Having imposed the boundary conditions, we are left with two equations (Equations 2.115 and 2.117) in four unknowns (A, B, F, and G); five, if you count k.
Normalization won't help; this isn't a normalizable state. Perhaps we'd better pause, then, and examine the physical significance of these various constants. Recall that exp(ikx) gives rise [when coupled with the time-dependent factor exp(−iEt/ℏ)] to a wave function propagating to the right, and exp(−ikx) leads to a wave propagating to the left. It follows that A (in Equation 2.113) is the amplitude of a wave coming in from the left, B is the amplitude of a wave returning to the left, F (in Equation 2.114) is the amplitude of a wave traveling off to the right, and G is the amplitude of a wave coming in from the right (Figure 2.10). In a typical scattering experiment particles are fired in from one direction, let's say from the left. In that case the amplitude of the wave coming in from the right will be zero:

$$G = 0 \quad \text{(for scattering from the left)}. \qquad [2.118]$$

Figure 2.10: Scattering from a delta-function well.

A is then the amplitude of the incident wave, B is the amplitude of the reflected wave, and F is the amplitude of the transmitted wave. Solving Equations 2.115 and 2.117 for B and F, we find

$$B = \frac{i\beta}{1 - i\beta}A, \qquad [2.119]$$

$$F = \frac{1}{1 - i\beta}A. \qquad [2.120]$$

(If you want to study scattering from the right, set A = 0; then G is the incident amplitude, F is the reflected amplitude, and B is the transmitted amplitude.)

Now, the probability of finding the particle at a specified location is given by |Ψ|², so the relative²⁵ probability that an incident particle will be reflected back is

$$R \equiv \frac{|B|^2}{|A|^2} = \frac{\beta^2}{1+\beta^2}. \qquad [2.121]$$

R is called the reflection coefficient. (If you have a beam of particles, it tells you the fraction of the incoming number that will bounce back.) Meanwhile, the probability of transmission is given by the transmission coefficient

$$T \equiv \frac{|F|^2}{|A|^2} = \frac{1}{1+\beta^2}.$$

Of course, the sum of these probabilities should be 1, and it is:

$$R + T = 1. \qquad [2.122]$$

Notice that R and T are functions of β, and hence (Equations 2.112 and 2.117) of E:

$$T = \frac{1}{1 + (m\alpha^2/2\hbar^2 E)}. \qquad [2.123]$$

The higher the energy, the greater the probability of transmission (which seems reasonable).

This is all very tidy, but there is a sticky matter of principle that we cannot altogether ignore: these scattering wave functions are not normalizable, so they don't actually represent possible particle states. But we know what the resolution to this problem is: we must form normalizable linear combinations of the stationary states, just as we did for the free particle; true physical particles are represented by the resulting wave packets. Though straightforward in principle, this is a messy business in practice, and at this point it is best to turn the problem over to a computer.²⁶ Meanwhile, since it is impossible to create a normalizable free-particle wave function without involving a range of energies, R and T should be interpreted as the approximate reflection and transmission probabilities for particles in a narrow energy range about E.

²⁵This is not a normalizable wave function, so the absolute probability of finding the particle at a particular location is not well defined; nevertheless, the ratio of probabilities for two different locations is meaningful. More on this in the next paragraph.

²⁶There exist some powerful programs for analyzing the scattering of a wave packet from a one-dimensional potential; see, for instance, A. Goldberg, H. M. Schey, and J. L. Schwartz, Am. J. Phys. 35, 177 (1967).
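As a quick sanity check on Equations 2.121 through 2.123, here is a short sketch of my own (not from the text; it assumes units in which ℏ = m = α = 1): it tabulates R and T over a range of energies, confirms R + T = 1, and verifies that computing T from β agrees with the energy form of Equation 2.123.

```python
import numpy as np

hbar = m = alpha = 1.0   # illustrative natural units

for E in (0.1, 0.5, 1.0, 5.0, 20.0):
    k = np.sqrt(2 * m * E) / hbar          # Equation 2.112
    beta = m * alpha / (hbar**2 * k)       # Equation 2.117
    R = beta**2 / (1 + beta**2)            # Equation 2.121 (reflection)
    T = 1 / (1 + beta**2)                  # transmission coefficient
    T_of_E = 1 / (1 + m * alpha**2 / (2 * hbar**2 * E))   # Equation 2.123
    print(f"E = {E:5.2f}   R = {R:.4f}   T = {T:.4f}   "
          f"R+T = {R+T:.4f}   T(E) = {T_of_E:.4f}")
```

As expected, T climbs toward 1 as E grows.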
Incidentally, it might strike you as peculiar that we were able to analyze a quintessentially time-dependent problem (particle comes in, scatters off a potential, and flies off to infinity) using stationary states. After all, ψ (in Equations 2.113 and 2.114) is simply a complex, time-independent, sinusoidal function, extending (with constant amplitude) to infinity in both directions. And yet, by imposing appropriate boundary conditions on this function, we were able to determine the probability that a particle (represented by a localized wave packet) would bounce off, or pass through, the potential. The mathematical miracle behind this is, I suppose, the fact that by taking linear combinations of states spread over all space, and with essentially trivial time dependence, we can construct wave functions that are concentrated about a (moving) point, with quite elaborate behavior in time (see Problem 2.40).

As long as we've got the relevant equations on the table, let's look briefly at the case of a delta-function barrier (Figure 2.11). Formally, all we have to do is change the sign of α. This kills the bound state, of course (see Problem 2.2). On the other hand, the reflection and transmission coefficients, which depend only on α², are unchanged. Strange to say, the particle is just as likely to pass through the barrier as to cross over the well! Classically, of course, the particle could not make it over an infinitely high barrier, regardless of its energy. In fact, the classical scattering …

Figure 2.11: The delta-function barrier, V(x) = αδ(x).

2.6 THE FINITE SQUARE WELL

For bound states (E < 0) of the finite square well (Equation 2.127), in the region x < −a the potential is zero, so the Schrödinger equation gives d²ψ/dx² = κ²ψ, where

$$\kappa \equiv \frac{\sqrt{-2mE}}{\hbar} \qquad [2.128]$$

is real and positive. The general solution is ψ(x) = A exp(−κx) + B exp(κx), but the first term blows up (as x → −∞), so the physically admissible solution (as before; see Equation 2.101) is

$$\psi(x) = Be^{\kappa x}, \quad \text{for } (x < -a). \qquad [2.129]$$

In the region −a < x < a, V(x) = −V₀, and the Schrödinger equation reads

$$\frac{d^2\psi}{dx^2} = -l^2\psi, \quad \text{where } l \equiv \frac{\sqrt{2m(E+V_0)}}{\hbar}. \qquad [2.130]$$

Although E is negative, for a bound state, it must be greater than −V₀, by the old theorem E > V_min (Problem 2.2); so l is also real and positive. The general solution is

$$\psi(x) = C\sin(lx) + D\cos(lx), \quad \text{for } (-a < x < a), \qquad [2.131]$$

where C and D are arbitrary constants. Finally, in the region x > a the potential is again zero; the general solution is ψ(x) = F exp(−κx) + G exp(κx), but the second term blows up (as x → ∞), so we are left with

$$\psi(x) = Fe^{-\kappa x}, \quad \text{for } (x > a). \qquad [2.132]$$

The next step is to impose boundary conditions: ψ and dψ/dx continuous at −a and +a. But we can save a little time by noting that this potential is an even function, so we can assume with no loss of generality that the solutions are either even or odd (Problem 2.1c). The advantage of this is that we need only impose the boundary conditions on one side (say, at +a); the other side is then automatic, since ψ(−x) = ±ψ(x). I'll work out the even solutions; you get to do the odd ones in Problem 2.28. The cosine is even (and the sine is odd), so I'm looking for solutions of the form

$$\psi(x) = \begin{cases} Fe^{-\kappa x}, & \text{for } (x > a), \\ D\cos(lx), & \text{for } (0 < x < a), \\ \psi(-x), & \text{for } (x < 0). \end{cases} \qquad [2.133]$$

The continuity of ψ(x), at x = a, says

$$Fe^{-\kappa a} = D\cos(la), \qquad [2.134]$$

and the continuity of dψ/dx says

$$-\kappa Fe^{-\kappa a} = -lD\sin(la). \qquad [2.135]$$
Dividing Equation 2.135 by Equation 2.134, we find that

$$\kappa = l\tan(la). \qquad [2.136]$$

Equation 2.136 is a formula for the allowed energies, since κ and l are both functions of E. To solve for E, it pays to adopt some nicer notation. Let

$$z \equiv la, \quad \text{and} \quad z_0 \equiv \frac{a}{\hbar}\sqrt{2mV_0}. \qquad [2.137]$$

According to Equations 2.128 and 2.130, (κ² + l²) = 2mV₀/ℏ², so κa = √(z₀² − z²), and Equation 2.136 reads

$$\tan z = \sqrt{(z_0/z)^2 - 1}. \qquad [2.138]$$

This is a transcendental equation for z (and hence for E) as a function of z₀ (which is a measure of the "size" of the well). It can be solved numerically, using a calculator or a computer, or graphically, by plotting tan z and √((z₀/z)² − 1) on the same grid and looking for points of intersection (see Figure 2.13). Two limiting cases are of special interest:

1. Wide, deep well. If z₀ is very large, the intersections occur just slightly below zₙ = nπ/2, with n odd; it follows that

$$E_n + V_0 \cong \frac{n^2\pi^2\hbar^2}{2m(2a)^2}. \qquad [2.139]$$

Here (E + V₀) is the energy above the bottom of the well, and on the right we have precisely the infinite square well energies, for a well of width 2a (see Equation 2.23); or rather, half of them, since n is odd. (The other ones, of course, come from the odd wave functions, as you'll find in Problem 2.28.) So the finite square well goes over to the infinite square well as V₀ → ∞; however, for any finite V₀ there are only finitely many bound states.

2. Shallow, narrow well. As z₀ decreases, there are fewer and fewer bound states, until finally (for z₀ < π/2, where the lowest odd state disappears) only one remains. It is interesting to note, however, that there is always one bound state, no matter how "weak" the well becomes.

Figure 2.13: Graphical solution to Equation 2.138, for z₀ = 8 (even states).

You're welcome to normalize ψ (Equation 2.133) if you're interested (see Problem 2.29), but I'm going to move on now to the scattering states (E > 0). To the left, where V(x) = 0, we have

$$\psi(x) = Ae^{ikx} + Be^{-ikx}, \quad \text{for } (x < -a), \qquad [2.140]$$

where (as usual)

$$k \equiv \frac{\sqrt{2mE}}{\hbar}. \qquad [2.141]$$

Inside the well, where V(x) = −V₀,

$$\psi(x) = C\sin(lx) + D\cos(lx), \quad \text{for } (-a < x < a), \qquad [2.142]$$

where, as before,

$$l \equiv \frac{\sqrt{2m(E+V_0)}}{\hbar}. \qquad [2.143]$$

To the right, assuming there is no incoming wave in this region, we have

$$\psi(x) = Fe^{ikx}. \qquad [2.144]$$

A is the incident amplitude, B is the reflected amplitude, and F is the transmitted amplitude.²⁷

There are four boundary conditions: continuity of ψ(x) at −a says

$$Ae^{-ika} + Be^{ika} = -C\sin(la) + D\cos(la), \qquad [2.145]$$

continuity of dψ/dx at −a gives

$$ik\left[Ae^{-ika} - Be^{ika}\right] = l\left[C\cos(la) + D\sin(la)\right], \qquad [2.146]$$

continuity of ψ(x) at +a yields

$$C\sin(la) + D\cos(la) = Fe^{ika}, \qquad [2.147]$$

and continuity of dψ/dx at +a requires

$$l\left[C\cos(la) - D\sin(la)\right] = ikFe^{ika}. \qquad [2.148]$$

We can use two of these to eliminate C and D, and solve the remaining two for B and F (see Problem 2.31):

$$B = i\,\frac{\sin(2la)}{2kl}\left(l^2 - k^2\right)F. \qquad [2.149]$$

²⁷We could use even and odd functions, as we did for bound states, but these would represent standing waves, and the scattering problem is more naturally formulated in terms of traveling waves.
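The bound-state condition of Equation 2.138 is also easy to attack numerically, as a companion to the graphical method of Figure 2.13. The sketch below is my own (units ℏ = m = a = 1, and z₀ = 8 as in the figure, so V₀ = z₀²/2): it brackets one root of tan z = √((z₀/z)² − 1) on each branch of the tangent and hands it to a root finder.

```python
import numpy as np
from scipy.optimize import brentq

z0 = 8.0                        # well "size" parameter (Equation 2.137)
V0 = z0**2 / 2                  # with hbar = m = a = 1, z0 = sqrt(2*V0)

def f(z):
    # Roots of Equation 2.138: tan z - sqrt((z0/z)**2 - 1) = 0.
    return np.tan(z) - np.sqrt((z0 / z)**2 - 1)

# One intersection sits on each branch (n*pi, n*pi + pi/2) with n*pi + pi/2 < z0;
# this simple bracketing is adequate for z0 = 8 (compare Figure 2.13).
roots = []
n = 0
while n * np.pi + np.pi / 2 < z0:
    lo = n * np.pi + 1e-9           # tan ~ 0 here, f < 0
    hi = n * np.pi + np.pi / 2 - 1e-9   # tan -> +infinity here, f > 0
    roots.append(brentq(f, lo, hi))
    n += 1

for z in roots:
    E = z**2 / 2 - V0               # since l = z/a and E + V0 = (hbar*l)**2 / (2m)
    print(f"z = {z:.4f}   E = {E:8.4f}   (E + V0 = {z**2/2:7.4f})")
```

Three even states turn up for z₀ = 8, in agreement with Figure 2.13; for very large z₀ the roots crowd up against the odd multiples of π/2, reproducing the wide, deep well limit of Equation 2.139.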
… (d) For E > V₀, calculate the transmission coefficient for the step potential, and check that T + R = 1.

2.7 THE SCATTERING MATRIX

The theory of scattering generalizes in a pretty obvious way to arbitrary localized potentials (Figure 2.15). To the left (Region I), V(x) = 0, so

$$\psi(x) = Ae^{ikx} + Be^{-ikx}, \quad \text{where } k \equiv \frac{\sqrt{2mE}}{\hbar}. \qquad [2.155]$$

To the right (Region III), V(x) is again zero, so

$$\psi(x) = Fe^{ikx} + Ge^{-ikx}. \qquad [2.156]$$

In between (Region II), of course, I can't tell you what ψ is until you specify the potential, but because the Schrödinger equation is a linear, second-order differential equation, the general solution has got to be of the form

$$\psi(x) = Cf(x) + Dg(x), \qquad [2.157]$$

where f(x) and g(x) are any two linearly independent particular solutions.²⁹ There will be four boundary conditions (two joining Regions I and II, and two joining Regions II and III). Two of these can be used to eliminate C and D, and the other two can be "solved" for B and F in terms of A and G:

$$B = S_{11}A + S_{12}G, \quad F = S_{21}A + S_{22}G. \qquad [2.158]$$

Figure 2.15: Scattering from an arbitrary localized potential (V(x) = 0 except in Region II).

²⁹See any book on differential equations; for example, J. L. Van Iwaarden, Ordinary Differential Equations with Numerical Techniques (San Diego, CA: Harcourt Brace Jovanovich, 1985), Chapter 3.

The four coefficients S_ij, which depend on k (and hence on E), constitute a 2 × 2 matrix

$$\mathbf{S} = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}, \qquad [2.159]$$

called the scattering matrix (or S-matrix, for short). The S-matrix tells you the outgoing amplitudes (B and F) in terms of the incoming amplitudes (A and G):

$$\begin{pmatrix} B \\ F \end{pmatrix} = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}\begin{pmatrix} A \\ G \end{pmatrix}. \qquad [2.160]$$

In the typical case of scattering from the left, G = 0, so the reflection and transmission coefficients are

$$R_l = \frac{|B|^2}{|A|^2}\bigg|_{G=0} = |S_{11}|^2, \quad T_l = \frac{|F|^2}{|A|^2}\bigg|_{G=0} = |S_{21}|^2. \qquad [2.161]$$

For scattering from the right, A = 0, and

$$R_r = \frac{|F|^2}{|G|^2}\bigg|_{A=0} = |S_{22}|^2, \quad T_r = \frac{|B|^2}{|G|^2}\bigg|_{A=0} = |S_{12}|^2. \qquad [2.162]$$

The S-matrix tells you everything there is to know about scattering from a localized potential. Surprisingly, it also contains (albeit in a concealed form) information about the bound states (if there are any). For if E < 0, then ψ(x) has the form

$$\psi(x) = \begin{cases} Be^{\kappa x} & \text{(Region I)}, \\ Cf(x) + Dg(x) & \text{(Region II)}, \\ Fe^{-\kappa x} & \text{(Region III)}, \end{cases} \qquad [2.163]$$

with

$$\kappa \equiv \frac{\sqrt{-2mE}}{\hbar}. \qquad [2.164]$$

The boundary conditions are the same as before, so the S-matrix has the same structure; only now E is negative, so k → iκ. But this time A and G are necessarily zero, whereas B and F are not, and hence (Equation 2.158) at least two elements in the S-matrix must be infinite. To put it the other way around, if you've got the S-matrix (for E > 0) and you want to locate the bound states, put in k → iκ and look for energies at which the S-matrix blows up.

For example, in the case of the finite square well,

$$S_{21} = \frac{e^{-2ika}}{\cos(2la) - i\,\dfrac{(k^2+l^2)}{2kl}\sin(2la)}$$

(Equation 2.150). Substituting k → iκ, we see that S₂₁ blows up whenever

$$\cot(2la) = \frac{l^2 - \kappa^2}{2\kappa l}.$$

Using the trigonometric identity

$$\tan\left(\frac{\theta}{2}\right) = \pm\sqrt{1+\cot^2\theta}\; - \cot\theta,$$

we obtain tan(la) = κ/l (plus sign), and cot(la) = −κ/l (minus sign). These are precisely the conditions for bound states of the finite square well (Equation 2.136 and Problem 2.28).

*Problem 2.34 Construct the S-matrix for scattering from a delta-function well (Equation 2.96). Use it to obtain the bound state energy, and check your answer against Equation 2.111.

Problem 2.35 Find the S-matrix for the finite square well (Equation 2.127). Hint: This requires no new work if you carefully exploit the symmetry of the problem.
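In the spirit of Problem 2.34, here is an illustrative sketch of my own (units ℏ = m = α = 1). For the delta-function well, Equations 2.119 and 2.120 give S₁₁ = iβ/(1 − iβ) and S₂₁ = 1/(1 − iβ), and the symmetry of the well supplies the other two elements (S₂₂ = S₁₁, S₁₂ = S₂₁). Continuing k → iκ, the elements blow up as κ → mα/ℏ², i.e. at E = −mα²/2ℏ², the bound-state energy of the stationary delta function (it is quoted again in Problem 2.47 below).

```python
import numpy as np

hbar = m = alpha = 1.0

def S_matrix(k):
    """S-matrix of the delta-function well, built from Equations 2.119-2.120.
    (S22 = S11 and S12 = S21 because the potential is symmetric.)"""
    beta = m * alpha / (hbar**2 * k)       # Equation 2.117; k may be complex
    S11 = 1j * beta / (1 - 1j * beta)
    S21 = 1.0 / (1 - 1j * beta)
    return np.array([[S11, S21], [S21, S11]])

# For real k: check R_l + T_l = 1 (Equation 2.161).
k = np.sqrt(2 * m * 1.0) / hbar            # E = 1
S = S_matrix(k)
print("R_l + T_l =", abs(S[0, 0])**2 + abs(S[1, 0])**2)

# For k -> i*kappa: the elements diverge as kappa -> m*alpha/hbar**2 = 1.
for kappa in (0.9, 0.99, 0.999, 1.001):
    S = S_matrix(1j * kappa)
    print(f"kappa = {kappa:5.3f}   |S21| = {abs(S[1, 0]):10.1f}")

print("pole at kappa = 1  ->  E = -m*alpha**2/(2*hbar**2) =",
      -m * alpha**2 / (2 * hbar**2))
```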
FURTHER PROBLEMS FOR CHAPTER 2

Problem 2.36 A particle in the infinite square well (Equation 2.15) has the initial wave function Ψ(x,0) = A sin³(πx/a). Find ⟨x⟩ as a function of time.

*Problem 2.37 Find ⟨x⟩, ⟨p⟩, ⟨x²⟩, ⟨p²⟩, ⟨T⟩, and ⟨V(x)⟩ for the nth stationary state of the harmonic oscillator. Check that the uncertainty principle is satisfied. Hint: Express x and (ℏ/i)(d/dx) in terms of (a₊ ± a₋), and use Equations 2.52 and 2.53; you may assume that the states are orthogonal.

Problem 2.38 Find the allowed energies of the half-harmonic oscillator

$$V(x) = \begin{cases} (1/2)m\omega^2 x^2, & \text{for } (x > 0), \\ \infty, & \text{for } (x < 0). \end{cases}$$

(This represents, for example, a spring that can be stretched, but not compressed.) Hint: This requires some careful thought, but very little actual computation.

**Problem 2.39 Solve the time-independent Schrödinger equation for an infinite square well with a delta-function barrier at the center:

$$V(x) = \begin{cases} \alpha\delta(x), & \text{for } (-a < x < +a), \\ \infty, & \text{for } (|x| \geq a). \end{cases}$$

Figure 2.17: The double square well (Problem 2.44).

Problem 2.45
(a) … satisfies the time-dependent Schrödinger equation for the harmonic oscillator potential (Equation 2.38). Here a is any real constant (with the dimensions of length).³²
(b) Find |Ψ(x,t)|², and describe the motion of the wave packet.
(c) Compute ⟨x⟩ and ⟨p⟩, and check that Ehrenfest's theorem (Equation 1.38) is satisfied.

³²This rare example of an exact closed-form solution to the time-dependent Schrödinger equation was discovered by Schrödinger himself, in 1926.

Problem 2.46 Consider the potential

$$V(x) = \begin{cases} \infty, & \text{if } x < 0, \\ \alpha\delta(x-a), & \text{if } x \geq 0, \end{cases}$$

where α and a are positive real constants with the appropriate units (see Figure 2.18). A particle starts out in the "well" (0 < x < a), but because of tunneling its wave function gradually "leaks" out through the delta-function barrier.
(a) Solve the (time-independent) Schrödinger equation for this potential; impose appropriate boundary conditions, and determine the "energy," E. (An implicit equation will do.)
(b) I put the word "energy" in quotes because you'll notice that it is a complex number! How do you account for this, in view of the theorem you proved in Problem 2.1a?
(c) Writing E = E₀ + iΓ (with E₀ and Γ real), calculate (in terms of Γ) the characteristic time it takes the particle to leak out of the well (that is, the time it takes before the probability is 1/e that it's still in the region 0 < x < a).

Figure 2.18: The potential for Problem 2.46.

**Problem 2.47 Consider the moving delta-function well:

$$V(x,t) = -\alpha\delta(x - vt),$$

where v is the (constant) velocity of the well.
(a) Show that the time-dependent Schrödinger equation admits the exact solution

$$\Psi(x,t) = \frac{\sqrt{m\alpha}}{\hbar}\,e^{-m\alpha|x-vt|/\hbar^2}\,e^{-i\left[\left(E + \frac{1}{2}mv^2\right)t - mvx\right]/\hbar},$$

where E = −mα²/2ℏ² is the bound-state energy of the stationary delta function. Hint: Plug it in and check it! Use Problem 2.24b.
(b) Find the expectation value of the Hamiltonian in this state, and comment on the result.

***Problem 2.48 Consider the potential

$$V(x) = -\frac{\hbar^2 a^2}{m}\,\mathrm{sech}^2(ax),$$

where a is a positive constant and "sech" stands for the hyperbolic secant.
(a) Show that this potential has the bound state

$$\psi_0(x) = A\,\mathrm{sech}(ax),$$

and find its energy. Normalize ψ₀, and sketch its graph.
(b) Show that the function

$$\psi_k(x) = A\left(\frac{ik - a\tanh(ax)}{ik + a}\right)e^{ikx}$$

(where k ≡ √(2mE)/ℏ, as usual) solves the Schrödinger equation for any (positive) energy E. Since tanh z → −1 as z → −∞,

$$\psi_k(x) \approx Ae^{ikx}, \quad \text{for large negative } x.$$

This represents, then, a wave coming in from the left with no accompanying reflected wave [i.e., no term exp(−ikx)]. What is the asymptotic form of ψ_k(x) at large positive x? What are R and T for this potential?
Note: sech² is a famous example of a "reflectionless" potential; every incident particle, regardless of its energy, passes right through. See R. E. Crandall and B. R. Litt, Annals of Physics 146, 458 (1983).
(c) Construct the S-matrix for this potential, and use it to locate the bound states. How many of them are there? What are their energies? Check that your answer is consistent with part (a).

***Problem 2.49 The S-matrix tells you the outgoing amplitudes (B and F) in terms of the incoming amplitudes (A and G):

$$\begin{pmatrix} B \\ F \end{pmatrix} = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}\begin{pmatrix} A \\ G \end{pmatrix}.$$

For some purposes it is more convenient to work with the transfer matrix, M, which gives you the amplitudes to the right of the potential (F and G) in terms of those to the left (A and B):

$$\begin{pmatrix} F \\ G \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}\begin{pmatrix} A \\ B \end{pmatrix}.$$

(a) Find the four elements of the M-matrix in terms of the elements of the S-matrix, and vice versa. Express R_l, T_l, R_r, and T_r (Equations 2.161 and 2.162) in terms of elements of the M-matrix.
(b) Suppose you have a potential consisting of two isolated pieces (Figure 2.19). Show that the M-matrix for the combination is the product of the two M-matrices for each section separately:

$$\mathbf{M} = \mathbf{M}_2\mathbf{M}_1.$$

(This obviously generalizes to any number of pieces, and accounts for the usefulness of the M-matrix.)

Figure 2.19: A potential consisting of two isolated pieces (Problem 2.49).

CHAPTER 3 FORMALISM

3.1 LINEAR ALGEBRA

3.1.1 Vectors

… Vector addition. The "sum" of any two vectors is another vector:

$$|\alpha\rangle + |\beta\rangle = |\gamma\rangle. \qquad [3.1]$$

Vector addition is commutative,

$$|\alpha\rangle + |\beta\rangle = |\beta\rangle + |\alpha\rangle, \qquad [3.2]$$

and associative,

$$|\alpha\rangle + (|\beta\rangle + |\gamma\rangle) = (|\alpha\rangle + |\beta\rangle) + |\gamma\rangle. \qquad [3.3]$$

There exists a zero (or null) vector,³ |0⟩, with the property that

$$|\alpha\rangle + |0\rangle = |\alpha\rangle, \qquad [3.4]$$

for every vector |α⟩. And for every vector |α⟩ there is an associated inverse vector (|−α⟩), such that

$$|\alpha\rangle + |-\alpha\rangle = |0\rangle. \qquad [3.5]$$

Scalar multiplication. The "product" of any scalar with any vector is another vector:

$$a|\alpha\rangle = |\gamma\rangle. \qquad [3.6]$$

Scalar multiplication is distributive with respect to vector addition,

$$a(|\alpha\rangle + |\beta\rangle) = a|\alpha\rangle + a|\beta\rangle, \qquad [3.7]$$

and with respect to scalar addition,

$$(a+b)|\alpha\rangle = a|\alpha\rangle + b|\alpha\rangle. \qquad [3.8]$$

It is also associative with respect to the ordinary multiplication of scalars:

$$a(b|\alpha\rangle) = (ab)|\alpha\rangle. \qquad [3.9]$$

Multiplication by the scalars 0 and 1 has the effect you would expect:

$$0|\alpha\rangle = |0\rangle; \quad 1|\alpha\rangle = |\alpha\rangle. \qquad [3.10]$$

Evidently |−α⟩ = (−1)|α⟩.

There's a lot less here than meets the eye; all I have done is to write down in abstract language the familiar rules for manipulating vectors. The virtue of such abstraction is that we will be able to apply our knowledge and intuition about the behavior of ordinary vectors to other systems that happen to share the same formal properties.

³It is customary, where no confusion can arise, to write the null vector without the adorning bracket: |0⟩ → 0.
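To make the abstraction concrete, here is a throwaway numerical sketch (mine, not the book's) that takes complex 3-tuples as the "vectors" and spot-checks the axioms of Equations 3.1 through 3.10; nothing in it depends on the tuples being arrows in space.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_vec(n=3):
    """A random 'ket' in C^n, represented by its n-tuple of components."""
    return rng.normal(size=n) + 1j * rng.normal(size=n)

alpha, beta, gamma = rand_vec(), rand_vec(), rand_vec()
a, b = 2.0 - 1.0j, 0.5 + 3.0j          # scalars (complex numbers)
zero = np.zeros(3)

checks = {
    "commutativity  [3.2]":  np.allclose(alpha + beta, beta + alpha),
    "associativity  [3.3]":  np.allclose(alpha + (beta + gamma), (alpha + beta) + gamma),
    "null vector    [3.4]":  np.allclose(alpha + zero, alpha),
    "inverse        [3.5]":  np.allclose(alpha + (-1) * alpha, zero),
    "distributivity [3.7]":  np.allclose(a * (alpha + beta), a * alpha + a * beta),
    "scalar sums    [3.8]":  np.allclose((a + b) * alpha, a * alpha + b * alpha),
    "associative *  [3.9]":  np.allclose(a * (b * alpha), (a * b) * alpha),
    "0 and 1        [3.10]": np.allclose(0 * alpha, zero) and np.allclose(1 * alpha, alpha),
}
for name, ok in checks.items():
    print(f"{name}: {ok}")
```

The same few lines would work verbatim for any objects supporting + and scalar *, which is the whole point of the abstract formulation.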
A set of linearly independent vectors that spans the space is called a basis. The number of vectors in any basis is called the dimension of the space. For the moment we shall assume that the dimension (n) isfinite. With respect to a prescribed basis any given vector [o) = aIlel) + azlez) + ... + an len) is uniquely represented by the (ordered) n-tuple ofits components: la) +* (al, az, ... , an). [3.12] [3.13] [3.14] It is often easier to work with the components than with the abstract vectors them- selves. To add vectors, you add their corresponding components: to multiply by a scalar you multiply each component: cia) +* (cal, caz, ... ,can); the null vector is represented by a string of zeroes: 10) +* (0,0, ... ,0); and the components of the inverse vector have their signs reversed: [3.15] [3.16] [3.17] [3.18] The only disadvantage of working with components is that you have to commit your- self to a particular basis, and the same manipulations will look very different to someone working in a different basis. Problem 3.1 Consider the ordinary vectors in three dimensions (ax! +ayi +azk) with complex components. 4A set of vectors that spans the space is also called complete, though I personally reserve that word for the infinite-dimensional case, where SUbtle questions of convergence arise. 78 Chap. 3 Formalism (a) Does the subset of all vectors with a, = 0 constitute a vector space? If so, what is its dimension; if not, why not? (b) What about the subset of all vectors whose z component is I? (C) How about the subset of vectors whose components are all equal? -Problem 3.2 Consider the collection ofall polynomials (with complex coefficients) of degree < N in x. (a) Does this set constitute a vector space (with the polynomials as "vectors")? If so, suggest a convenient basis, and give the dimension of the space. If not, which of the defining properties does it lack? (b) What if we require that the polynomials be even functions? (c) What if we require that the leading coefficient (i.e., the number multiplying X N- 1) be I? (d) What if we require that the polynomials have the value 0 at x = I? (e) What if we require that the polynomials have the value 1 at x = O? Problem 3.3 Prove that the components of a vector with respect to a given basis are unique. 3.1.2 Inner Products In three dimensions we encounter two kinds of vector products: the dot product and the cross product. The latter does not generalize in any natural way to n-dimensional vector spaces, but the former does-in this context it is usually called the inner product. The inner product of two vectors (Ia) and 1.8)) is a complex number (which we write as (a 1.8)), with the following properties: (.8la) = (al,B)*, (ala) ::: 0, and (ala) = 0 ¢> [o) = 10), (al (bl.8) +cIY)) =b{al,B) +c{aly)· [3.19] [3.20] [3.21] Apart from the generalization to complex numbers, these axioms simply codify the familiar behavior of dot products. A vector space with an inner product is called an inner product space. Because the inner product of any vector with itself is a nonnegative number (Equation 3.20), its square root is real-we call this the norm of the vector: Iiall - J{ala); [3.22] Sec. 3.1: Linear Algebra 81 If la) is an arbitrary vector: n [o) = allel) +a2Ie2) + ... +anlen) = Lajlej), j=1 then n n n n n Tla) = Laj(Tlej) = LLaj1ijlei) = L(L1ijaj)lei). j=1 j=l i=l i=l j=l [3.31] [3.32] [3.33] Evidently T takes a vector with components al, a2, ... 
If |α⟩ is an arbitrary vector,

$$|\alpha\rangle = a_1|e_1\rangle + a_2|e_2\rangle + \cdots + a_n|e_n\rangle = \sum_{j=1}^{n}a_j|e_j\rangle, \qquad [3.31]$$

then

$$\hat{T}|\alpha\rangle = \sum_{j=1}^{n}a_j\left(\hat{T}|e_j\rangle\right) = \sum_{j=1}^{n}\sum_{i=1}^{n}a_j T_{ij}|e_i\rangle = \sum_{i=1}^{n}\left(\sum_{j=1}^{n}T_{ij}a_j\right)|e_i\rangle. \qquad [3.32]$$

Evidently T̂ takes a vector with components a₁, a₂, ..., aₙ into a vector with components⁶

$$a_i' = \sum_{j=1}^{n}T_{ij}a_j. \qquad [3.33]$$

Thus the n² elements T_ij uniquely characterize the linear transformation T̂ (with respect to a given basis), just as the n components a_i uniquely characterize the vector |α⟩ (with respect to the same basis):

$$\hat{T} \leftrightarrow (T_{11}, T_{12}, \ldots, T_{nn}). \qquad [3.34]$$

If the basis is orthonormal, it follows from Equation 3.30 that

$$T_{ij} = \langle e_i|\hat{T}|e_j\rangle. \qquad [3.35]$$

It is convenient to display these complex numbers in the form of a matrix⁷:

$$\mathbf{T} = \begin{pmatrix} T_{11} & T_{12} & \cdots & T_{1n} \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & \vdots & & \vdots \\ T_{n1} & T_{n2} & \cdots & T_{nn} \end{pmatrix}. \qquad [3.36]$$

The study of linear transformations, then, reduces to the theory of matrices. The sum of two linear transformations (Ŝ + T̂) is defined in the natural way:

$$(\hat{S} + \hat{T})|\alpha\rangle = \hat{S}|\alpha\rangle + \hat{T}|\alpha\rangle; \qquad [3.37]$$

this matches the usual rule for adding matrices (you add their corresponding elements):

$$\mathbf{U} = \mathbf{S} + \mathbf{T} \Leftrightarrow U_{ij} = S_{ij} + T_{ij}. \qquad [3.38]$$

⁶Notice the reversal of indices between Equations 3.30 and 3.33. This is not a typographical error. Another way of putting it (switching i ↔ j in Equation 3.30) is that if the components transform with T_ij, the basis vectors transform with the transposed elements.
⁷I'll use boldface to denote matrices.

The product of two linear transformations (ŜT̂) is the net effect of performing them in succession; first T̂, then Ŝ:

$$|\alpha\rangle \to |\alpha'\rangle = \hat{T}|\alpha\rangle \to |\alpha''\rangle = \hat{S}|\alpha'\rangle = \hat{S}\left(\hat{T}|\alpha\rangle\right) = \hat{S}\hat{T}|\alpha\rangle. \qquad [3.39]$$

What matrix U represents the combined transformation Û = ŜT̂? It's not hard to work it out:

$$a_i'' = \sum_{j=1}^{n}S_{ij}a_j' = \sum_{j=1}^{n}S_{ij}\left(\sum_{k=1}^{n}T_{jk}a_k\right) = \sum_{k=1}^{n}\left(\sum_{j=1}^{n}S_{ij}T_{jk}\right)a_k.$$

Evidently

$$\mathbf{U} = \mathbf{ST} \Leftrightarrow U_{ik} = \sum_{j=1}^{n}S_{ij}T_{jk}; \qquad [3.40]$$

this is the standard rule for matrix multiplication: to find the ik-th element of the product, you look at the i-th row of S and the k-th column of T, multiply corresponding entries, and add. The same procedure allows you to multiply rectangular matrices, as long as the number of columns in the first matches the number of rows in the second. In particular, if we write the n-tuple of components of |α⟩ as an n × 1 column matrix

$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}, \qquad [3.41]$$

the transformation rule (Equation 3.33) can be written

$$\mathbf{a}' = \mathbf{T}\mathbf{a}. \qquad [3.42]$$

And now, some useful matrix terminology: The transpose of a matrix (which we shall write with a tilde: T̃) is the same set of elements, but with rows and columns interchanged:

$$\tilde{\mathbf{T}} = \begin{pmatrix} T_{11} & T_{21} & \cdots & T_{n1} \\ T_{12} & T_{22} & \cdots & T_{n2} \\ \vdots & \vdots & & \vdots \\ T_{1n} & T_{2n} & \cdots & T_{nn} \end{pmatrix}. \qquad [3.43]$$

Notice that the transpose of a column matrix is a row matrix:

$$\tilde{\mathbf{a}} = (a_1\; a_2\; \cdots\; a_n). \qquad [3.44]$$

A square matrix is symmetric if it is equal to its transpose (reflection in the main diagonal, upper left to lower right, leaves it unchanged); it is antisymmetric if this operation reverses the sign:

$$\text{SYMMETRIC: } \tilde{\mathbf{T}} = \mathbf{T}; \quad \text{ANTISYMMETRIC: } \tilde{\mathbf{T}} = -\mathbf{T}. \qquad [3.45]$$

To construct the (complex) conjugate of a matrix (which we denote, as usual, with an asterisk: T*), you take the complex conjugate of every element:

$$\mathbf{T}^* = \begin{pmatrix} T_{11}^* & T_{12}^* & \cdots & T_{1n}^* \\ T_{21}^* & T_{22}^* & \cdots & T_{2n}^* \\ \vdots & \vdots & & \vdots \\ T_{n1}^* & T_{n2}^* & \cdots & T_{nn}^* \end{pmatrix}; \quad \mathbf{a}^* = \begin{pmatrix} a_1^* \\ a_2^* \\ \vdots \\ a_n^* \end{pmatrix}. \qquad [3.46]$$

A matrix is real if all its elements are real and imaginary if they are all imaginary:

$$\text{REAL: } \mathbf{T}^* = \mathbf{T}; \quad \text{IMAGINARY: } \mathbf{T}^* = -\mathbf{T}. \qquad [3.47]$$

The Hermitian conjugate (or adjoint) of a matrix (indicated by a dagger: T†) is the transposed conjugate:

$$\mathbf{T}^\dagger \equiv \tilde{\mathbf{T}}^* = \begin{pmatrix} T_{11}^* & T_{21}^* & \cdots & T_{n1}^* \\ T_{12}^* & T_{22}^* & \cdots & T_{n2}^* \\ \vdots & \vdots & & \vdots \\ T_{1n}^* & T_{2n}^* & \cdots & T_{nn}^* \end{pmatrix}; \quad \mathbf{a}^\dagger = (a_1^*\; a_2^*\; \cdots\; a_n^*). \qquad [3.48]$$

A square matrix is Hermitian (or self-adjoint) if it is equal to its Hermitian conjugate; if Hermitian conjugation introduces a minus sign, the matrix is skew Hermitian (or anti-Hermitian):

$$\text{HERMITIAN: } \mathbf{T}^\dagger = \mathbf{T}; \quad \text{SKEW HERMITIAN: } \mathbf{T}^\dagger = -\mathbf{T}. \qquad [3.49]$$

With this notation the inner product of two vectors (with respect to an orthonormal basis; Equation 3.24) can be written very neatly in matrix form:

$$\langle\alpha|\beta\rangle = \mathbf{a}^\dagger\mathbf{b}. \qquad [3.50]$$

(Notice that each of the three operations discussed in this paragraph, if applied twice, returns you to the original matrix.)
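Equations 3.35, 3.40, and 3.42 are easy to exercise numerically. In this sketch (my own illustration; the particular maps T̂ and Ŝ are arbitrary inventions) the matrix of each transformation is built column by column from its action on the basis vectors, and matrix algebra is then checked against the abstract operations.

```python
import numpy as np

# Two concrete linear transformations on C^3 (arbitrary examples).
def T_hat(v):
    return np.array([2.0 * v[2], v[0], v[1] + v[2]])

def S_hat(v):
    return np.array([v[0] + 1j * v[1], v[1], -v[2]])

n = 3
e = np.eye(n)            # orthonormal basis |e_1>, |e_2>, |e_3> as columns

# Equation 3.35: T_ij = <e_i|T|e_j>, i.e. the j-th COLUMN is T acting on |e_j>.
T = np.column_stack([T_hat(e[:, j]) for j in range(n)])
S = np.column_stack([S_hat(e[:, j]) for j in range(n)])

a = np.array([1.0 + 2.0j, -0.5, 3.0j])
print("a' = Ta matches T_hat(a):", np.allclose(T @ a, T_hat(a)))   # Equation 3.42

# Equation 3.40: the matrix of the composition S_hat(T_hat(.)) is the product ST.
comp = np.column_stack([S_hat(T_hat(e[:, j])) for j in range(n)])
print("matrix of composition equals ST:", np.allclose(S @ T, comp))
```

The column-by-column construction is just Equation 3.35 in disguise: applying T̂ to |e_j⟩ produces exactly the j-th column of T.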
Matrix multiplication is not, in general, commutative (ST ≠ TS); the difference between the two orderings is called the commutator:

$$[\mathbf{S}, \mathbf{T}] \equiv \mathbf{ST} - \mathbf{TS}. \qquad [3.51]$$

The transpose of a product is the product of the transposes in reverse order:

$$\widetilde{(\mathbf{ST})} = \tilde{\mathbf{T}}\tilde{\mathbf{S}} \qquad [3.52]$$

(see Problem 3.12), and the same goes for Hermitian conjugates:

$$(\mathbf{ST})^\dagger = \mathbf{T}^\dagger\mathbf{S}^\dagger. \qquad [3.53]$$
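These last rules invite a quick numerical spot check. The sketch below (an illustration of mine, not from the text) builds two random complex matrices, forms the commutator of Equation 3.51, and verifies Equations 3.52 and 3.53; it also confirms the handy fact that S + S† is Hermitian.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
T = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

dagger = lambda M: M.conj().T          # Hermitian conjugate: conjugate + transpose

comm = S @ T - T @ S                   # commutator [S, T], Equation 3.51
print("[S,T] nonzero (matrices don't commute):", not np.allclose(comm, 0))

print("transpose of product [3.52]:", np.allclose((S @ T).T, T.T @ S.T))
print("dagger of product    [3.53]:", np.allclose(dagger(S @ T), dagger(T) @ dagger(S)))

H = S + dagger(S)                      # a Hermitian combination
print("S + S-dagger is Hermitian:", np.allclose(H, dagger(H)))
```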