INTRODUCTION TO
QUANTUM
MECHANICS
Second Edition
DAVID J. GRIFFITHS
Fundamental Equations

Schrödinger equation:

i\hbar\frac{\partial\Psi}{\partial t} = H\Psi

Time-independent Schrödinger equation:

H\psi = E\psi, \qquad \Psi = \psi\, e^{-iEt/\hbar}

Hamiltonian operator:

H = -\frac{\hbar^2}{2m}\nabla^2 + V

Momentum operator:

\mathbf{p} = -i\hbar\nabla

Time dependence of an expectation value:

\frac{d\langle Q\rangle}{dt} = \frac{i}{\hbar}\langle[H, Q]\rangle + \left\langle\frac{\partial Q}{\partial t}\right\rangle

Generalized uncertainty principle:

\sigma_A\sigma_B \ge \left|\frac{1}{2i}\langle[A, B]\rangle\right|

Heisenberg uncertainty principle:

\sigma_x\sigma_p \ge \hbar/2

Canonical commutator:

[x, p] = i\hbar

Angular momentum:

[L_x, L_y] = i\hbar L_z, \quad [L_y, L_z] = i\hbar L_x, \quad [L_z, L_x] = i\hbar L_y

Pauli matrices:

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
Editor-in-Chief, Science: John Challice
Senior Editor: Erik Fahlgren
Associate Editor: Christian Botting
Editorial Assistant: Andrew Sobel
Vice President and Director of Production and Manufacturing, ESM: David W. Riccardi
Production Editor: Beth Lew
Director of Creative Services: Paul Belfanti
Art Director: Jayne Conte
Cover Designer: Bruce Kenselaar
Managing Editor, AV Management and Production: Patricia Burns
Art Editor: Abigail Bass
Manufacturing Manager: Trudy Pisciotti
Manufacturing Buyer: Lynda Castillo
Executive Marketing Manager: Mark Pfaltzgraff
© 2005, 1995 Pearson Education, Inc.
Pearson Prentice Hall
Pearson Education, Inc.
Upper Saddle River, NJ 07458
All rights reserved. No part of this book may be reproduced in any form or by any means, without
permission in writing from the publisher.
Pearson Prentice Hall® is a trademark of Pearson Education, Inc.
Printed in the United States of America
10 9 8 7
ISBN 0-13-191175-9
If you purchased this book within the United States or Canada you should be aware that it has been
wrongfully imported without the approval of the Publisher or the Author.
Pearson Education Ltd., London
Pearson Education Australia Pty. Ltd., Sydney
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada, Inc., Toronto
Pearson Educación de México, S.A. de C.V.
Pearson Education Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.
Pearson Education. Upper Saddle River, New Jersey
CONTENTS

PREFACE vii

PART I THEORY

1 THE WAVE FUNCTION 1
1.1 The Schrödinger Equation 1
1.2 The Statistical Interpretation 2
1.3 Probability 5
1.4 Normalization 12
1.5 Momentum 15
1.6 The Uncertainty Principle 18

2 TIME-INDEPENDENT SCHRÖDINGER EQUATION 24
2.1 Stationary States 24
2.2 The Infinite Square Well 30
2.3 The Harmonic Oscillator 40
2.4 The Free Particle 59
2.5 The Delta-Function Potential 68
2.6 The Finite Square Well 78

3 FORMALISM 93
3.1 Hilbert Space 93
3.2 Observables 96
3.3 Eigenfunctions of a Hermitian Operator 100
3.4 Generalized Statistical Interpretation 106
3.5 The Uncertainty Principle 110
3.6 Dirac Notation 118

4 QUANTUM MECHANICS IN THREE DIMENSIONS 131
4.1 Schrödinger Equation in Spherical Coordinates 131
4.2 The Hydrogen Atom 145
4.3 Angular Momentum 160
4.4 Spin 171

5 IDENTICAL PARTICLES 201
5.1 Two-Particle Systems 201
5.2 Atoms 210
5.3 Solids 218
5.4 Quantum Statistical Mechanics 230

PART II APPLICATIONS

6 TIME-INDEPENDENT PERTURBATION THEORY 249
6.1 Nondegenerate Perturbation Theory 249
6.2 Degenerate Perturbation Theory 257
6.3 The Fine Structure of Hydrogen 266
6.4 The Zeeman Effect 277
6.5 Hyperfine Splitting 283

7 THE VARIATIONAL PRINCIPLE 293
7.1 Theory 293
7.2 The Ground State of Helium 299
7.3 The Hydrogen Molecule Ion 304

8 THE WKB APPROXIMATION 315
8.1 The “Classical” Region 316
8.2 Tunneling 320
8.3 The Connection Formulas 325

9 TIME-DEPENDENT PERTURBATION THEORY 340
9.1 Two-Level Systems 341
9.2 Emission and Absorption of Radiation 348
9.3 Spontaneous Emission 355

10 THE ADIABATIC APPROXIMATION 368
10.1 The Adiabatic Theorem 368
10.2 Berry’s Phase 376
Preface
instructors, for example, may wish to treat time-independent perturbation theory
immediately after Chapter 2.
This book is intended for a one-semester or one-year course at the junior or
senior level. A one-semester course will have to concentrate mainly on Part I;
a full-year course should have room for supplementary material beyond Part II.
The reader must be familiar with the rudiments of linear algebra (as summarized
in the Appendix), complex numbers, and calculus up through partial derivatives;
some acquaintance with Fourier analysis and the Dirac delta function would help.
Elementary classical mechanics is essential, of course, and a little electrodynamics
would be useful in places. As always, the more physics and math you know the
easier it will be, and the more you will get out of your study. But I would like
to emphasize that quantum mechanics is not, in my view, something that flows
smoothly and naturally from earlier theories. On the contrary, it represents an
abrupt and revolutionary departure from classical ideas, calling forth a wholly new
and radically counterintuitive way of thinking about the world. That, indeed, is
what makes it such a fascinating subject.
At first glance, this book may strike you as forbiddingly mathematical. We
encounter Legendre, Hermite, and Laguerre polynomials, spherical harmonics,
Bessel, Neumann, and Hankel functions, Airy functions, and even the Riemann
zeta function—not to mention Fourier transforms, Hilbert spaces, hermitian oper-
ators, Clebsch-Gordan coefficients, and Lagrange multipliers. Is all this baggage
really necessary? Perhaps not, but physics is like carpentry: Using the right tool
makes the job easier, not more difficult, and teaching quantum mechanics without
the appropriate mathematical equipment is like asking the student to dig a foun-
dation with a screwdriver. (On the other hand, it can be tedious and diverting if
the instructor feels obliged to give elaborate lessons on the proper use of each
tool. My own instinct is to hand the students shovels and tell them to start dig-
ging. They may develop blisters at first, but I still think this is the most efficient
and exciting way to learn.) At any rate, I can assure you that there is no deep
mathematics in this book, and if you run into something unfamiliar, and you don’t
find my explanation adequate, by all means ask someone about it, or look it up.
There are many good books on mathematical methods—I particularly recommend
Mary Boas, Mathematical Methods in the Physical Sciences, 2nd ed., Wiley, New
York (1983), or George Arfken and Hans-Jürgen Weber, Mathematical Methods for
Physicists, 5th ed., Academic Press, Orlando (2000). But whatever you do, don’t
let the mathematics—which, for us, is only a tool—interfere with the physics.
Several readers have noted that there are fewer worked examples in this book
than is customary, and that some important material is relegated to the problems.
This is no accident. I don’t believe you can learn quantum mechanics without doing
many exercises for yourself. Instructors should of course go over as many problems
in class as time allows, but students should be warned that this is not a subject
about which anyone has natural intuitions—you’re developing a whole new set
of muscles here, and there is simply no substitute for calisthenics. Mark Semon
suggested that I offer a “Michelin Guide” to the problems, with varying numbers
of stars to indicate the level of difficulty and importance. This seemed like a good
idea (though, like the quality of a restaurant, the significance of a problem is partly
a matter of taste); I have adopted the following rating scheme:
* an essential problem that every reader should study;
** a somewhat more difficult or more peripheral problem;
*** an unusually challenging problem, that may take over an hour.
(No stars at all means fast food: OK if you're hungry, but not very nourishing.)
Most of the one-star problems appear at the end of the relevant section; most of
the three-star problems are at the end of the chapter. A solution manual is available
(to instructors only) from the publisher.
In preparing the second edition I have tried to retain as much as possible the
spirit of the first. The only wholesale change is Chapter 3, which was much too
long and diverting; it has been completely rewritten, with the background material
on finite-dimensional vector spaces (a subject with which most students at this level
are already comfortable) relegated to the Appendix. I have added some examples
in Chapter 2 (and fixed the awkward definition of raising and lowering operators
for the harmonic oscillator). In later chapters I have made as few changes as I
could, even preserving the numbering of problems and equations, where possible.
The treatment is streamlined in places (a better introduction to angular momentum
in Chapter 4, for instance, a simpler proof of the adiabatic theorem in Chapter
10, and a new section on partial wave phase shifts in Chapter 11). Inevitably, the
second edition is a bit longer than the first, which I regret, but I hope it is cleaner
and more accessible.
I have benefited from the comments and advice of many colleagues, who
read the original manuscript, pointed out weaknesses (or errors) in the first edition,
suggested improvements in the presentation, and supplied interesting problems. I
would like to thank in particular P. K. Aravind (Worcester Polytech), Greg Benesh
(Baylor), David Boness (Seattle), Burt Brody (Bard), Ash Carter (Drew), Edward
Chang (Massachusetts), Peter Collings (Swarthmore), Richard Crandall (Reed),
Jeff Dunham (Middlebury), Greg Elliott (Puget Sound), John Essick (Reed), Gregg
Franklin (Carnegie Mellon), Henry Greenside (Duke), Paul Haines (Dartmouth),
J. R. Huddle (Navy), Larry Hunter (Amherst), David Kaplan (Washington), Alex
Kuzmich (Georgia Tech), Peter Leung (Portland State), Tony Liss (Illinois), Jeffry
Mallow (Chicago Loyola), James McTavish (Liverpool), James Nearing (Miami),
Johnny Powell (Reed), Krishna Rajagopal (MIT), Brian Raue (Florida Interna-
tional), Robert Reynolds (Reed), Keith Riles (Michigan), Mark Semon (Bates),
Herschel Snodgrass (Lewis and Clark), John Taylor (Colorado), Stavros Theodor-
akis (Cyprus), A. S. Tremsin (Berkeley), Dan Velleman (Amherst), Nicholas
Wheeler (Reed), Scott Willenbrock (Illinois), William Wootters (Williams), Sam
Wurzel (Brown), and Jens Zorn (Michigan).
Section 1.2: The Statistical Interpretation 3
FIGURE 1.2: A typical wave function. The shaded area represents the probability of
finding the particle between a and b. The particle would be relatively likely to be found
near A, and unlikely to be found near B.
The statistical interpretation introduces a kind of indeterminacy into quan-
tum mechanics, for even if you know everything the theory has to tell you about
the particle (to wit: its wave function), still you cannot predict with certainty the
outcome of a simple experiment to measure its position—all quantum mechan-
ics has to offer is statistical information about the possible results. This inde-
terminacy has been profoundly disturbing to physicists and philosophers alike,
and it is natural to wonder whether it is a fact of nature, or a defect in the
theory.
Suppose I do measure the position of the particle, and I find it to be at point
C.4 Question: Where was the particle just before I made the measurement? There
are three plausible answers to this question, and they serve to characterize the main
schools of thought regarding quantum indeterminacy:
1. The realist position: The particle was at C. This certainly seems like a sen-
sible response, and it is the one Einstein advocated. Note, however, that if this is
true then quantum mechanics is an incomplete theory, since the particle really was
at C, and yet quantum mechanics was unable to tell us so. To the realist, indeter-
minacy is not a fact of nature, but a reflection of our ignorance. As d’Espagnat put
it, “the position of the particle was never indeterminate, but was merely unknown
to the experimenter.”5 Evidently Ψ is not the whole story—some additional infor-
mation (known as a hidden variable) is needed to provide a complete description
of the particle.
2. The orthodox position: The particle wasn't really anywhere. It was the act
of measurement that forced the particle to “take a stand” (though how and why it
decided on the point C we dare not ask). Jordan said it most starkly: “Observations
not only disturb what is to be measured, they produce it... We compel (the
4 Of course, no measuring instrument is perfectly precise; what I mean is that the particle was
found in the vicinity of C, to within the tolerance of the equipment.
5 Bernard d’Espagnat, “The Quantum Theory and Reality” (Scientific American, November 1979,
p. 165).
particle) to assume a definite position.”6 This view (the so-called Copenhagen
interpretation) is associated with Bohr and his followers. Among physicists it
has always been the most widely accepted position. Note, however, that if it is
correct there is something very peculiar about the act of measurement—something
that over half a century of debate has done precious little to illuminate.
3. The agnostic position: Refuse to answer. This is not quite as silly as it
sounds—after all, what sense can there be in making assertions about the status
of a particle before a measurement, when the only way of knowing whether you
were right is precisely to conduct a measurement, in which case what you get is no
longer “before the measurement”? It is metaphysics (in the pejorative sense of the
word) to worry about something that cannot, by its nature, be tested. Pauli said:
“One should no more rack one’s brain about the problem of whether something one
cannot know anything about exists all the same, than about the ancient question of
how many angels are able to sit on the point of a needle.”7 For decades this was the
“fall-back” position of most physicists: They’d try to sell you the orthodox answer,
but if you were persistent they'd retreat to the agnostic response, and terminate the
conversation.
Until fairly recently, all three positions (realist, orthodox, and agnostic) had
their partisans. But in 1964 John Bell astonished the physics community by showing
that it makes an observable difference whether the particle had a precise (though
unknown) position prior to the measurement, or not. Bell’s discovery effectively
eliminated agnosticism as a viable option, and made it an experimental question
whether 1 or 2 is the correct choice. I’ll return to this story at the end of the book,
when you will be in a better position to appreciate Bell’s argument; for now, suffice
it to say that the experiments have decisively confirmed the orthodox interpreta-
tion:8 A particle simply does not have a precise position prior to measurement, any
more than the ripples on a pond do; it is the measurement process that insists on
one particular number, and thereby in a sense creates the specific result, limited
only by the statistical weighting imposed by the wave function.
What if I made a second measurement, immediately after the first? Would I
get C again, or does the act of measurement cough up some completely new num-
ber each time? On this question everyone is in agreement: A repeated measurement
(on the same particle) must return the same value. Indeed, it would be tough to
prove that the particle was really found at C in the first instance, if this could not
be confirmed by immediate repetition of the measurement. How does the orthodox
6 Quoted in a lovely article by N. David Mermin, “Is the moon there when nobody looks?”
(Physics Today, April 1985, p. 38).
7 Quoted by Mermin (footnote 6), p. 40.
8 This statement is a little too strong: There remain a few theoretical and experimental loopholes,
some of which I shall discuss in the Afterword. There exist viable nonlocal hidden variable theories
(notably David Bohm’s), and other formulations (such as the many worlds interpretation) that do not
fit cleanly into any of my three categories. But I think it is wise, at least from a pedagogical point of
view, to adopt a clear and coherent platform at this stage, and worry about the alternatives later.
FIGURE 1.3: Collapse of the wave function: graph of |Ψ|² immediately after a
measurement has found the particle at point C.
interpretation account for the fact that the second measurement is bound to yield
the value C? Evidently the first measurement radically alters the wave function,
so that it is now sharply peaked about C (Figure 1.3). We say that the wave func-
tion collapses, upon measurement, to a spike at the point C (it soon spreads out
again, in accordance with the Schrödinger equation, so the second measurement
must be made quickly). There are, then, two entirely distinct kinds of physical pro-
cesses: “ordinary” ones, in which the wave function evolves in a leisurely fashion
under the Schrödinger equation, and “measurements,” in which Ψ suddenly and
discontinuously collapses.
1.3 PROBABILITY
1.3.1 Discrete Variables
Because of the statistical interpretation, probability plays a central role in quantum
mechanics, so I digress now for a brief discussion of probability theory. It is mainly
a question of introducing some notation and terminology, and I shall do it in the
context of a simple example.
Imagine a room containing fourteen people, whose ages are as follows:
one person aged 14,
one person aged 15,
three people aged 16,
9 The role of measurement in quantum mechanics is so critical and so bizarre that you may
well be wondering what precisely constitutes a measurement. Does it have to do with the interaction
between a microscopic (quantum) system and a macroscopic (classical) measuring apparatus (as Bohr
insisted), or is it characterized by the leaving of a permanent “record” (as Heisenberg claimed), or does
it involve the intervention of a conscious “observer” (as Wigner proposed)? I’ll return to this thorny
issue in the Afterword; for the moment let’s take the naive view: A measurement is the kind of thing
that a scientist does in the laboratory, with rulers, stopwatches, Geiger counters, and so on.
FIGURE 1.5: Two histograms with the same median, same average, and same most
probable value, but different standard deviations.
(Equations 1.6, 1.7, and 1.8 are, if you like, special cases of this formula.) Beware:
The average of the squares, ⟨j²⟩, is not equal, in general, to the square of the
average, ⟨j⟩². For instance, if the room contains just two babies, aged 1 and 3,
then ⟨j²⟩ = 5, but ⟨j⟩² = 4.
Now, there is a conspicuous difference between the two histograms in Figure 1.5,
even though they have the same median, the same average, the same most probable
value, and the same number of elements: The first is sharply peaked about the average
value, whereas the second is broad and flat. (The first might represent the age profile
for students in a big-city classroom, the second, perhaps, a rural one-room school-
house.) We need a numerical measure of the amount of “spread” in a distribution,
with respect to the average. The most obvious way to do this would be to find out
how far each individual deviates from the average,
\Delta j = j - \langle j \rangle, \qquad [1.10]

and compute the average of Δj. Trouble is, of course, that you get zero, since, by
the nature of the average, Δj is as often negative as positive:

\langle \Delta j \rangle = \sum \left( j - \langle j \rangle \right) P(j) = \sum j\,P(j) - \langle j \rangle \sum P(j) = \langle j \rangle - \langle j \rangle = 0.
(Note that ⟨j⟩ is constant—it does not change as you go from one member of
the sample to another—so it can be taken outside the summation.) To avoid this
irritating problem you might decide to average the absolute value of Δj. But
absolute values are nasty to work with; instead, we get around the sign problem
by squaring before averaging:

\sigma^2 = \langle (\Delta j)^2 \rangle. \qquad [1.11]
This quantity is known as the variance of the distribution; σ itself (the square
root of the average of the square of the deviation from the average—gulp!) is
called the standard deviation. The latter is the customary measure of the spread
about ⟨j⟩.
There is a useful little theorem on variances:

\sigma^2 = \langle (\Delta j)^2 \rangle = \sum (\Delta j)^2 P(j) = \sum \left( j - \langle j \rangle \right)^2 P(j)
= \sum \left( j^2 - 2j\langle j \rangle + \langle j \rangle^2 \right) P(j)
= \sum j^2 P(j) - 2\langle j \rangle \sum j\,P(j) + \langle j \rangle^2 \sum P(j)
= \langle j^2 \rangle - 2\langle j \rangle \langle j \rangle + \langle j \rangle^2 = \langle j^2 \rangle - \langle j \rangle^2.

Taking the square root, the standard deviation itself can be written as

\sigma = \sqrt{\langle j^2 \rangle - \langle j \rangle^2}. \qquad [1.12]
In practice, this is a much faster way to get σ: Simply calculate ⟨j²⟩ and ⟨j⟩²,
subtract, and take the square root. Incidentally, I warned you a moment ago that
⟨j²⟩ is not, in general, equal to ⟨j⟩². Since σ² is plainly nonnegative (from its
definition in Equation 1.11), Equation 1.12 implies that

\langle j^2 \rangle \ge \langle j \rangle^2, \qquad [1.13]

and the two are equal only when σ = 0, which is to say, for distributions with no
spread at all (every member having the same value).
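A quick numeric check of Equations 1.11 through 1.13 (my addition, not part of the text): the Python sketch below takes the two-babies distribution mentioned above (ages 1 and 3, equal probabilities) and computes the variance both ways, directly as ⟨(Δj)²⟩ and via the shortcut ⟨j²⟩ − ⟨j⟩².

```python
# Numeric check of Equations 1.11-1.13 on the two-babies distribution:
# sigma^2 = <(Delta j)^2> versus the shortcut sigma^2 = <j^2> - <j>^2.
from math import sqrt

ages = {1: 0.5, 3: 0.5}                  # j -> P(j); probabilities sum to 1

j_avg  = sum(j * P for j, P in ages.items())        # <j>   = 2.0
j2_avg = sum(j**2 * P for j, P in ages.items())     # <j^2> = 5.0

var_direct   = sum((j - j_avg)**2 * P for j, P in ages.items())  # <(Delta j)^2>
var_shortcut = j2_avg - j_avg**2                                 # <j^2> - <j>^2

print(var_direct, var_shortcut, sqrt(var_shortcut))  # 1.0 1.0 1.0
```

Both routes give σ² = 1 (so σ = 1), and ⟨j²⟩ = 5 exceeds ⟨j⟩² = 4, as Equation 1.13 demands.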
1.3.2 Continuous Variables
So far, I have assumed that we are dealing with a discrete variable—that is, one
that can take on only certain isolated values (in the example, j had to be an
integer, since I gave ages only in years). But it is simple enough to generalize to
continuous distributions. If 1 select a random person off the street, the probability
that her age is precisely 16 years, 4 hours, 27 minutes, and 3.333 ... seconds is
zero. The only sensible thing to speak about is the probability that her age lies in
some interval—say, between 16 and 17. If the interval is sufficiently short, this
probability is proportional to the length of the interval. For example, the chance that
her age is between 16 and 16 plus nwo days is presumably twice the probability
that it is between 16 and 16 plus one day. (Unless, I suppose, there was some
extraordinary baby boom 16 years ago, on exactly that day—in which case we
have simply chosen an interval too long for the rule to apply. If the baby boom
10
Chapter 1 The Wave Function
lasted six hours, we'll take intervals of a second or less, to be on the safe side.
Technically, we're talking about infinitesimal intervals.) Thus
probability that an individual (chosen
al random) lies between x and (x + dx)
| = pide, [1.14]
The proportionality factor, o(x), is often loosely called “the probability of getting
x,” but this is sloppy language; a better term is probability density. The probability
that x lies between a and 6 (a finite interval) is given by the integral of p(x):
b
Pa = [ p(x) dx, [1.15]
a
and the rules we deduced for discrete distributions translate in the obvious way:
HOO
1 -/ p(x) dx, [1.16]
mood
+00
(x) -/ xp(x) dx, [1.17]
20
oo
(f@)) = F)e@) dx, {1.18]
OO
2 = (Ax?) = (7) — Gy, [1.19]
Example 1.1 Suppose I drop a rock off a cliff of height h. As it falls, I snap a
million photographs, at random intervals. On each picture I measure the distance
the rock has fallen. Question: What is the average of all these distances? That is
to say, what is the time average of the distance traveled?10

Solution: The rock starts out at rest, and picks up speed as it falls; it spends more
time near the top, so the average distance must be less than h/2. Ignoring air
resistance, the distance x at time t is

x(t) = \frac{1}{2} g t^2.

The velocity is dx/dt = gt, and the total flight time is T = \sqrt{2h/g}. The probability
that the camera flashes in the interval dt is dt/T, so the probability that a given
10 A statistician will complain that I am confusing the average of a finite sample (a million, in
this case) with the “true” average (over the whole continuum). This can be an awkward problem for
the experimentalist, especially when the sample size is small, but here I am only concerned, of course,
with the true average, to which the sample average is presumably a good approximation.
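Example 1.1 can also be checked by brute force (a sketch I have added, not part of the text): draw the flash times uniformly from [0, T], convert each to a distance x = ½gt², and average. The values g = 9.8 m/s² and h = 100 m are arbitrary illustrative choices; analytically the time average is (1/T)∫₀ᵀ ½gt² dt = h/3, which is indeed less than h/2.

```python
# Monte Carlo version of Example 1.1: photograph a falling rock at random
# times, record the distance fallen each time, and average.
import random
from math import sqrt

g = 9.8        # m/s^2 (illustrative value, not from the text)
h = 100.0      # cliff height in m (illustrative value)
T = sqrt(2 * h / g)              # total flight time

rng = random.Random(0)
n = 1_000_000                    # "a million photographs"
x_avg = sum(0.5 * g * rng.uniform(0.0, T)**2 for _ in range(n)) / n

print(x_avg)   # close to h/3 = 33.3, well below h/2 = 50
```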
Section 1.4: Normalization 13
glance at Equation 1.1 reveals that if Ψ(x, t) is a solution, so too is AΨ(x, t), where
A is any (complex) constant. What we must do, then, is pick this undetermined
multiplicative factor so as to ensure that Equation 1.20 is satisfied. This process
is called normalizing the wave function. For some solutions to the Schrödinger
equation the integral is infinite; in that case no multiplicative factor is going to
make it 1. The same goes for the trivial solution Ψ = 0. Such non-normalizable
solutions cannot represent particles, and must be rejected. Physically realizable
states correspond to the square-integrable solutions to Schrödinger’s equation.11
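Numerically, normalizing is just a rescaling (a sketch I have added, not part of the text): integrate |ψ|², then divide by the square root. The Gaussian ψ(x) = e^(−x²) is an arbitrary illustrative choice, for which the exact answer is A = (2/π)^(1/4).

```python
# Normalizing a wave function numerically: choose A so that the integral of
# |A psi|^2 equals 1. Here psi(x) = exp(-x^2) (an illustrative choice, not
# from the text); analytically its normalization constant is A = (2/pi)**0.25.
from math import exp, pi, sqrt

half_width, N = 10.0, 4000
dx = 2 * half_width / N
xs = [-half_width + (j + 0.5) * dx for j in range(N)]   # midpoint grid

psi = [exp(-xj**2) for xj in xs]
raw = sum(p * p for p in psi) * dx      # integral of |psi|^2 (psi is real here)
A = 1.0 / sqrt(raw)                     # normalization constant

check = sum((A * p)**2 for p in psi) * dx
print(A, (2 / pi)**0.25, check)         # A agrees with (2/pi)^(1/4); check = 1.0
```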
But wait a minute! Suppose I have normalized the wave function at time t = 0.
How do I know that it will stay normalized, as time goes on, and Ψ evolves? (You
can’t keep renormalizing the wave function, for then A becomes a function of t,
and you no longer have a solution to the Schrödinger equation.) Fortunately, the
Schrödinger equation has the remarkable property that it automatically preserves the
normalization of the wave function—without this crucial feature the Schrödinger
equation would be incompatible with the statistical interpretation, and the whole
theory would crumble.
This is important, so we’d better pause for a careful proof. To begin with,
\frac{d}{dt}\int_{-\infty}^{+\infty} |\Psi(x,t)|^2\,dx = \int_{-\infty}^{+\infty} \frac{\partial}{\partial t}|\Psi(x,t)|^2\,dx. \qquad [1.21]

(Note that the integral is a function only of t, so I use a total derivative (d/dt)
in the first expression, but the integrand is a function of x as well as t, so it’s a
partial derivative (∂/∂t) in the second one.) By the product rule,

\frac{\partial}{\partial t}|\Psi|^2 = \frac{\partial}{\partial t}\left(\Psi^*\Psi\right) = \Psi^*\frac{\partial \Psi}{\partial t} + \frac{\partial \Psi^*}{\partial t}\Psi. \qquad [1.22]

Now the Schrödinger equation says that

\frac{\partial \Psi}{\partial t} = \frac{i\hbar}{2m}\frac{\partial^2 \Psi}{\partial x^2} - \frac{i}{\hbar}V\Psi, \qquad [1.23]

and hence also (taking the complex conjugate of Equation 1.23)

\frac{\partial \Psi^*}{\partial t} = -\frac{i\hbar}{2m}\frac{\partial^2 \Psi^*}{\partial x^2} + \frac{i}{\hbar}V\Psi^*, \qquad [1.24]

so

\frac{\partial}{\partial t}|\Psi|^2 = \frac{i\hbar}{2m}\left(\Psi^*\frac{\partial^2 \Psi}{\partial x^2} - \frac{\partial^2 \Psi^*}{\partial x^2}\Psi\right) = \frac{\partial}{\partial x}\left[\frac{i\hbar}{2m}\left(\Psi^*\frac{\partial \Psi}{\partial x} - \frac{\partial \Psi^*}{\partial x}\Psi\right)\right]. \qquad [1.25]
11 Evidently Ψ(x, t) must go to zero faster than 1/√|x|, as |x| → ∞. Incidentally, normalization
only fixes the modulus of A; the phase remains undetermined. However, as we shall see, the latter
carries no physical significance anyway.
The integral in Equation 1.21 can now be evaluated explicitly:

\frac{d}{dt}\int_{-\infty}^{+\infty} |\Psi(x,t)|^2\,dx = \frac{i\hbar}{2m}\left(\Psi^*\frac{\partial \Psi}{\partial x} - \frac{\partial \Psi^*}{\partial x}\Psi\right)\Bigg|_{-\infty}^{+\infty}. \qquad [1.26]

But Ψ(x, t) must go to zero as x goes to (±) infinity—otherwise the wave function
would not be normalizable.12 It follows that

\frac{d}{dt}\int_{-\infty}^{+\infty} |\Psi(x,t)|^2\,dx = 0, \qquad [1.27]

and hence that the integral is constant (independent of time); if Ψ is normalized
at t = 0, it stays normalized for all future time. QED
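This conservation law can also be watched happening on a computer, provided the discretization respects the Hermiticity of the Hamiltonian. The Python sketch below is my illustration, not part of the text: in units with ℏ = m = 1 and V = 0, it evolves a Gaussian packet with the Crank–Nicolson scheme, whose update (1 + iHΔt/2)Ψ_new = (1 − iHΔt/2)Ψ_old is exactly unitary, and confirms that ∫|Ψ|² dx stays put.

```python
# Watching norm conservation (hbar = m = 1, V = 0): evolve a Gaussian packet
# with Crank-Nicolson, (1 + i*H*dt/2) psi_new = (1 - i*H*dt/2) psi_old,
# which is exactly unitary because H is Hermitian.
from math import exp, pi

N, L, dt, steps = 400, 40.0, 0.01, 200
dx = L / N
xgrid = [-L / 2 + j * dx for j in range(N)]

# Normalized Gaussian initial state, psi(x) = (2 pi s^2)^(-1/4) exp(-x^2/(4 s^2))
s = 1.0
psi = [complex((2 * pi * s**2) ** -0.25 * exp(-x**2 / (4 * s**2))) for x in xgrid]

# Finite-difference H psi = -(1/2) psi'' with Dirichlet boundaries:
hd = 1.0 / dx**2          # main diagonal of H
ho = -0.5 / dx**2         # off-diagonals of H

def thomas(a, b, c, d):
    """Solve a constant-coefficient tridiagonal system (a sub, b main, c super)."""
    n = len(d)
    cp, dp = [0j] * n, [0j] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    out = [0j] * n
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out

A_d, A_o = 1 + 0.5j * dt * hd, 0.5j * dt * ho      # (1 + i H dt/2)
B_d, B_o = 1 - 0.5j * dt * hd, -0.5j * dt * ho     # (1 - i H dt/2)

def norm(p):
    return sum(abs(v) ** 2 for v in p) * dx        # discrete integral of |psi|^2

n0 = norm(psi)
for _ in range(steps):
    rhs = [B_d * psi[j]
           + (B_o * psi[j - 1] if j > 0 else 0)
           + (B_o * psi[j + 1] if j < N - 1 else 0)
           for j in range(N)]
    psi = thomas(A_o, A_d, A_o, rhs)

print(n0, norm(psi))   # both stay at 1 to machine precision
```

A scheme that is not unitary (plain forward Euler, for instance) would show the norm drifting; Crank–Nicolson holds it fixed up to roundoff.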
Problem 1.4 At time t = 0 a particle is represented by the wave function

\Psi(x, 0) = \begin{cases} A\,x/a, & \text{if } 0 \le x \le a, \\ A\,(b - x)/(b - a), & \text{if } a \le x \le b, \\ 0, & \text{otherwise}, \end{cases}

where A, a, and b are constants.

(a) Normalize Ψ (that is, find A, in terms of a and b).
(b) Sketch Ψ(x, 0), as a function of x.
(c) Where is the particle most likely to be found, at t = 0?
(d) What is the probability of finding the particle to the left of a? Check your
result in the limiting cases b = a and b = 2a.
(e) What is the expectation value of x?
*Problem 1.5 Consider the wave function

\Psi(x, t) = A e^{-\lambda |x|} e^{-i\omega t},

where A, λ, and ω are positive real constants. (We’ll see in Chapter 2 what potential
(V) actually produces such a wave function.)

(a) Normalize Ψ.
(b) Determine the expectation values of x and x².
12 A good mathematician can supply you with pathological counterexamples, but they do not arise
in physics; for us the wave function always goes to zero at infinity.
(c) Find the standard deviation of x. Sketch the graph of |Ψ|², as a function
of x, and mark the points (⟨x⟩ + σ) and (⟨x⟩ − σ), to illustrate the sense in
which σ represents the “spread” in x. What is the probability that the particle
would be found outside this range?
1.5 MOMENTUM

For a particle in state Ψ, the expectation value of x is

\langle x \rangle = \int_{-\infty}^{+\infty} x\,|\Psi(x, t)|^2\,dx. \qquad [1.28]
What exactly does this mean? It emphatically does not mean that if you measure
the position of one particle over and over again, ∫x|Ψ|² dx is the average of the
results you’ll get. On the contrary: The first measurement (whose outcome is inde-
terminate) will collapse the wave function to a spike at the value actually obtained,
and the subsequent measurements (if they’re performed quickly) will simply repeat
that same result. Rather, ⟨x⟩ is the average of measurements performed on particles
all in the state Ψ, which means that either you must find some way of returning the
particle to its original state after each measurement, or else you have to prepare a
whole ensemble of particles, each in the same state Ψ, and measure the positions of
all of them: ⟨x⟩ is the average of these results. (I like to picture a row of bottles on
a shelf, each containing a particle in the state Ψ (relative to the center of the bottle).
A graduate student with a ruler is assigned to each bottle, and at a signal they all
measure the positions of their respective particles. We then construct a histogram
of the results, which should match |Ψ|², and compute the average, which should
agree with ⟨x⟩. (Of course, since we’re only using a finite sample, we can’t expect
perfect agreement, but the more bottles we use, the closer we ought to come.)) In
short, the expectation value is the average of repeated measurements on an ensem-
ble of identically prepared systems, not the average of repeated measurements on
one and the same system.
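The bottles-on-a-shelf picture is easy to simulate (my addition, not part of the text): draw one measured position per “bottle” from a probability density standing in for |Ψ|², and compare the sample average with ⟨x⟩. A Gaussian density centered at x₀ = 2 is used purely because it is easy to sample.

```python
# The "row of bottles" ensemble: one position measurement per bottle, drawn
# from a density standing in for |Psi|^2 (a Gaussian centered at x0 = 2,
# chosen only because it is easy to sample). The sample mean estimates <x>.
import random

x0, spread = 2.0, 1.0            # <x> and width of the chosen density
rng = random.Random(42)

for n in (100, 1_000_000):       # a few bottles, then a great many
    mean = sum(rng.gauss(x0, spread) for _ in range(n)) / n
    print(n, mean)               # sample average of n measurements
```

With a million bottles the sample mean sits within a few thousandths of ⟨x⟩ = 2; with only a hundred it typically strays by a tenth or so, which is exactly the finite-sample disagreement conceded in the text.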
Now, as time goes on, ⟨x⟩ will change (because of the time dependence
of Ψ), and we might be interested in knowing how fast it moves. Referring to
Equations 1.25 and 1.28, we see that13

\frac{d\langle x \rangle}{dt} = \int x\,\frac{\partial}{\partial t}|\Psi|^2\,dx = \frac{i\hbar}{2m}\int x\,\frac{\partial}{\partial x}\left(\Psi^*\frac{\partial \Psi}{\partial x} - \frac{\partial \Psi^*}{\partial x}\Psi\right)dx. \qquad [1.29]

13 To keep things from getting too cluttered, I’ll suppress the limits of integration.
Problem 1.6 Why can’t you do integration-by-parts directly on the middle expres-
sion in Equation 1.29—pull the time derivative over onto x, note that ∂x/∂t = 0,
and conclude that d⟨x⟩/dt = 0?

*Problem 1.7 Calculate d⟨p⟩/dt. Answer:

\frac{d\langle p \rangle}{dt} = \left\langle -\frac{\partial V}{\partial x} \right\rangle. \qquad [1.38]
Equations 1.32 (or the first part of 1.33) and 1.38 are instances of Ehrenfest’s
theorem, which tells us that expectation values obey classical laws.
Problem 1.8 Suppose you add a constant V₀ to the potential energy (by “constant”
I mean independent of x as well as t). In classical mechanics this doesn’t change
anything, but what about quantum mechanics? Show that the wave function picks
up a time-dependent phase factor: exp(−iV₀t/ℏ). What effect does this have on
the expectation value of a dynamical variable?
1.6 THE UNCERTAINTY PRINCIPLE
Imagine that you’re holding one end of a very long rope, and you generate a
wave by shaking it up and down rhythmically (Figure 1.7). If someone asked you
“Precisely where is that wave?” you’d probably think he was a little bit nutty: The
wave isn’t precisely anywhere—it’s spread out over 50 feet or so. On the other
hand, if he asked you what its wavelength is, you could give him a reasonable
answer: It looks like about 6 feet. By contrast, if you gave the rope a sudden jerk
(Figure 1.8), you'd get a relatively narrow bump traveling down the line. This time
the first question (Where precisely is the wave?) is a sensible one, and the second
(What is its wavelength?) seems nutty—it isn’t even vaguely periodic, so how
can you assign a wavelength to it? Of course, you can draw intermediate cases, in
which the wave is fairly well localized and the wavelength is fairly well defined,
but there is an inescapable trade-off here: The more precise a wave’s position is,
the less precise is its wavelength, and vice versa.!® A theorem in Fourier analysis
makes all this rigorous, but for the moment I am only concerned with the qualitative
argument.
16 That’s why a piccolo player must be right on pitch, whereas a double-bass player can afford to
wear garden gloves. For the piccolo, a sixty-fourth note contains many full cycles, and the frequency
(we’re working in the time domain now, instead of space) is well defined, whereas for the bass, at a
much lower register, the sixty-fourth note contains only a few cycles, and all you hear is a general sort
of “oomph,” with no very clear pitch.
FIGURE 1.7: A wave with a (fairly) well-defined wavelength, but an ill-defined position.
FIGURE 1.8: A wave with a (fairly) well-defined position, but an ill-defined wavelength.
This applies, of course, to any wave phenomenon, and hence in particular to
the quantum mechanical wave function. Now the wavelength of Ψ is related to the
momentum of the particle by the de Broglie formula:¹⁷

$$p = \frac{h}{\lambda} = \frac{2\pi\hbar}{\lambda}.$$   [1.39]
Thus a spread in wavelength corresponds to a spread in momentum, and our general
observation now says that the more precisely determined a particle's position is,
the less precisely is its momentum. Quantitatively,

$$\sigma_x \sigma_p \ge \frac{\hbar}{2},$$   [1.40]

where σ_x is the standard deviation in x, and σ_p is the standard deviation in p.
This is Heisenberg’s famous uncertainty principle. (We'll prove it in Chapter 3,
but I wanted to mention it right away, so you can test it out on the examples in
Chapter 2.)
Please understand what the uncertainty principle means: Like position measurements,
momentum measurements yield precise answers—the "spread" here
refers to the fact that measurements on identically prepared systems do not yield
identical results. You can, if you want, construct a state such that repeated position
measurements will be very close together (by making Ψ a localized "spike"),
but you will pay a price: Momentum measurements on this state will be widely
scattered. Or you can prepare a state with a reproducible momentum (by making
¹⁷I'll prove this in due course. Many authors take the de Broglie formula as an axiom, from
which they then deduce the association of momentum with the operator (ℏ/i)(∂/∂x). Although this is
a conceptually cleaner approach, it involves diverting mathematical complications that I would rather
save for later.
Ψ a long sinusoidal wave), but in that case, position measurements will be widely
scattered. And, of course, if you're in a really bad mood you can create a state for
which neither position nor momentum is well defined: Equation 1.40 is an inequality,
and there's no limit on how big σ_x and σ_p can be—just make Ψ some long
wiggly line with lots of bumps and potholes and no periodic structure.
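Equation 1.40 is easy to check numerically. The sketch below is not part of the text: it assumes a Gaussian wave packet, sets ℏ = 1, and uses NumPy; σ_x comes directly from |ψ|², and σ_p from the fact that, for a real wave function with ⟨p⟩ = 0, ⟨p²⟩ = ℏ²∫|dψ/dx|² dx. The Gaussian sits right at the Heisenberg minimum:

```python
import numpy as np

# A minimal numerical check of sigma_x * sigma_p >= hbar/2, with hbar = 1.
# For a real wave function with <p> = 0, <p^2> = hbar^2 * int |dpsi/dx|^2 dx.
hbar = 1.0
sigma = 0.7                                  # built-in position spread
x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2))

prob = psi**2                                # |psi|^2, already normalized
mean_x = np.sum(x * prob) * dx
sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)

dpsi = np.gradient(psi, x)                   # d(psi)/dx on the grid
sigma_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(sigma_x * sigma_p)                     # ~ 0.5: a Gaussian saturates Eq. 1.40
```

A narrower spike (smaller sigma) drives σ_p up in exact proportion, so the product never dips below ℏ/2.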
∗Problem 1.9 A particle of mass m is in the state

$$\Psi(x,t) = Ae^{-a[(mx^2/\hbar)+it]},$$

where A and a are positive real constants.

(a) Find A.

(b) For what potential energy function V(x) does Ψ satisfy the Schrödinger
equation?

(c) Calculate the expectation values of x, x², p, and p².

(d) Find σ_x and σ_p. Is their product consistent with the uncertainty principle?
FURTHER PROBLEMS FOR CHAPTER 1
Problem 1.10 Consider the first 25 digits in the decimal expansion of π (3, 1, 4,
1, 5, 9, ...).

(a) If you selected one number at random from this set, what are the probabilities
of getting each of the 10 digits?

(b) What is the most probable digit? What is the median digit? What is the
average value?

(c) Find the standard deviation for this distribution.
Problem 1.11 The needle on a broken car speedometer is free to swing, and
bounces perfectly off the pins at either end, so that if you give it a flick it is
equally likely to come to rest at any angle between 0 and π.

(a) What is the probability density, ρ(θ)? Hint: ρ(θ)dθ is the probability that
the needle will come to rest between θ and (θ + dθ). Graph ρ(θ) as a function
of θ, from −π/2 to 3π/2. (Of course, part of this interval is excluded, so ρ
is zero there.) Make sure that the total probability is 1.
(g) Find the uncertainty in p (σ_p).

(h) Check that your results are consistent with the uncertainty principle.
Problem 1.18 In general, quantum mechanics is relevant when the de Broglie
wavelength of the particle in question (h/p) is greater than the characteristic size
of the system (d). In thermal equilibrium at (Kelvin) temperature T, the average
kinetic energy of a particle is

$$\frac{p^2}{2m} = \frac{3}{2}k_B T$$

(where k_B is Boltzmann's constant), so the typical de Broglie wavelength is

$$\lambda = \frac{h}{\sqrt{3mk_B T}}.$$   [1.41]
The purpose of this problem is to anticipate which systems will have to be treated
quantum mechanically, and which can safely be described classically.
(a) Solids. The lattice spacing in a typical solid is around d = 0.3 nm. Find the
temperature below which the free¹⁸ electrons in a solid are quantum mechanical.
Below what temperature are the nuclei in a solid quantum mechanical?
(Use sodium as a typical case.) Moral: The free electrons in a solid are
always quantum mechanical; the nuclei are almost never quantum mechanical.
The same goes for liquids (for which the interatomic spacing is roughly
the same), with the exception of helium below 4 K.
(b) Gases. For what temperatures are the atoms in an ideal gas at pressure P
quantum mechanical? Hint: Use the ideal gas law (PV = Nk_BT) to deduce
the interatomic spacing. Answer: T < (1/k_B)(h²/3m)^{3/5} P^{2/5}. Obviously
(for the gas to show quantum behavior) we want m to be as small as possible,
and P as large as possible. Put in the numbers for helium at atmospheric
pressure. Is hydrogen in outer space (where the interatomic spacing is about
1 cm and the temperature is 3 K) quantum mechanical?
¹⁸In a solid the inner electrons are attached to a particular nucleus, and for them the relevant
size would be the radius of the atom. But the outermost electrons are not attached, and for them the
relevant distance is the lattice spacing. This problem pertains to the outer electrons.
CHAPTER 2
TIME-INDEPENDENT
SCHRÖDINGER EQUATION
2.1 STATIONARY STATES
In Chapter 1 we talked a lot about the wave function, and how you use it to
calculate various quantities of interest. The time has come to stop procrastinating,
and confront what is, logically, the prior question: How do you get Ψ(x, t) in the
first place? We need to solve the Schrödinger equation,

$$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2} + V\Psi,$$   [2.1]

for a specified potential¹ V(x, t). In this chapter (and most of this book) I shall
assume that V is independent of t. In that case the Schrödinger equation can be
solved by the method of separation of variables (the physicist's first line of attack
on any partial differential equation): We look for solutions that are simple products,

$$\Psi(x,t) = \psi(x)\,\varphi(t),$$   [2.2]

where ψ (lower-case) is a function of x alone, and φ is a function of t alone. On
its face, this is an absurd restriction, and we cannot hope to get more than a tiny

¹It is tiresome to keep saying "potential energy function," so most people just call V the
"potential," even though this invites occasional confusion with electric potential, which is actually
potential energy per unit charge.
subset of all solutions in this way. But hang on, because the solutions we do obtain
turn out to be of great interest. Moreover (as is typically the case with separation
of variables) we will be able at the end to patch together the separable solutions
in such a way as to construct the most general solution.
For separable solutions we have

$$\frac{\partial\Psi}{\partial t} = \psi\frac{d\varphi}{dt}, \qquad \frac{\partial^2\Psi}{\partial x^2} = \frac{d^2\psi}{dx^2}\varphi$$

(ordinary derivatives, now), and the Schrödinger equation reads

$$i\hbar\psi\frac{d\varphi}{dt} = -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}\varphi + V\psi\varphi.$$

Or, dividing through by ψφ:

$$i\hbar\frac{1}{\varphi}\frac{d\varphi}{dt} = -\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2\psi}{dx^2} + V.$$   [2.3]
Now, the left side is a function of t alone, and the right side is a function of
x alone.” The only way this can possibly be true is if both sides are in fact
constant — otherwise, by varying t, I could change the left side without touching
the right side, and the two would no longer be equal. (That’s a subtle but crucial
argument, so if it’s new to you, be sure to pause and think it through.) For reasons
that will appear in a moment, we shall call the separation constant E. Then
$$i\hbar\frac{1}{\varphi}\frac{d\varphi}{dt} = E,$$

or

$$\frac{d\varphi}{dt} = -\frac{iE}{\hbar}\varphi,$$   [2.4]

and

$$-\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2\psi}{dx^2} + V = E,$$

or

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\psi = E\psi.$$   [2.5]
Separation of variables has turned a partial differential equation into two ordi-
nary differential equations (Equations 2.4 and 2.5). The first of these (Equation 2.4)
²Note that this would not be true if V were a function of t as well as x.
is itself a solution. Once we have found the separable solutions, then, we can
immediately construct a much more general solution, of the form
$$\Psi(x,t) = \sum_{n=1}^{\infty} c_n \psi_n(x)\, e^{-iE_n t/\hbar}.$$   [2.15]
It so happens that every solution to the (time-dependent) Schrödinger equation
can be written in this form—it is simply a matter of finding the right constants
(c₁, c₂, ...) so as to fit the initial conditions for the problem at hand. You'll see
in the following sections how all this works out in practice, and in Chapter 3 we'll
put it into more elegant language, but the main point is this: Once you've solved
the time-independent Schrödinger equation, you're essentially done; getting from
there to the general solution of the time-dependent Schrödinger equation is, in
principle, simple and straightforward.
A lot has happened in the last four pages, so let me recapitulate, from a
somewhat different perspective. Here's the generic problem: You're given a
(time-independent) potential V(x), and the starting wave function Ψ(x, 0); your job is
to find the wave function, Ψ(x, t), for any subsequent time t. To do this you must
solve the (time-dependent) Schrödinger equation (Equation 2.1). The strategy⁶ is
first to solve the time-independent Schrödinger equation (Equation 2.5); this yields,
in general, an infinite set of solutions (ψ₁(x), ψ₂(x), ψ₃(x), ...), each with its own
associated energy (E₁, E₂, E₃, ...). To fit Ψ(x, 0) you write down the general
linear combination of these solutions:

$$\Psi(x,0) = \sum_{n=1}^{\infty} c_n \psi_n(x);$$   [2.16]

the miracle is that you can always match the specified initial state by appropriate
choice of the constants c₁, c₂, c₃, .... To construct Ψ(x, t) you simply tack onto
each term its characteristic time dependence, exp(−iE_n t/ℏ):

$$\Psi(x,t) = \sum_{n=1}^{\infty} c_n \psi_n(x)\, e^{-iE_n t/\hbar} = \sum_{n=1}^{\infty} c_n \Psi_n(x,t).$$   [2.17]
The separable solutions themselves,
$$\Psi_n(x,t) = \psi_n(x)\, e^{-iE_n t/\hbar},$$   [2.18]
⁶Occasionally you can solve the time-dependent Schrödinger equation without recourse to
separation of variables—see, for instance, Problems 2.49 and 2.50. But such cases are extremely rare.
are stationary states, in the sense that all probabilities and expectation values are
independent of time, but this property is emphatically not shared by the general
solution (Equation 2.17); the energies are different, for different stationary states,
and the exponentials do not cancel, when you calculate |Ψ|².
Example 2.1 Suppose a particle starts out in a linear combination of just two
stationary states:

$$\Psi(x,0) = c_1\psi_1(x) + c_2\psi_2(x).$$

(To keep things simple I'll assume that the constants c_n and the states ψ_n(x) are
real.) What is the wave function Ψ(x, t) at subsequent times? Find the probability
density, and describe its motion.

Solution: The first part is easy:

$$\Psi(x,t) = c_1\psi_1(x)e^{-iE_1 t/\hbar} + c_2\psi_2(x)e^{-iE_2 t/\hbar},$$

where E₁ and E₂ are the energies associated with ψ₁ and ψ₂. It follows that

$$|\Psi|^2 = c_1^2\psi_1^2 + c_2^2\psi_2^2 + 2c_1 c_2 \psi_1\psi_2\cos[(E_2 - E_1)t/\hbar].$$

(I used Euler's formula, e^{iθ} = cos θ + i sin θ, to simplify the result.) Evidently
the probability density oscillates sinusoidally, at an angular frequency (E₂ − E₁)/ℏ;
this is certainly not a stationary state. But notice that it took a linear combination
of states (with different energies) to produce motion.⁷
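Example 2.1 can be made concrete with a specific pair of stationary states. The sketch below is my own illustration, not the author's: it borrows the infinite-square-well states of Section 2.2 (ψ_n = √(2/a) sin(nπx/a), E_n = n²π²ℏ²/2ma²), sets ℏ = m = a = 1 with equal real coefficients, and verifies that |Ψ(x, t)|² returns to itself after one full period 2πℏ/(E₂ − E₁):

```python
import numpy as np

# Example 2.1 made concrete: a two-state superposition in the infinite
# square well (hbar = m = a = 1), Psi = (psi1 e^{-iE1 t} + psi2 e^{-iE2 t})/sqrt(2).
x = np.linspace(0.0, 1.0, 501)
psi1 = np.sqrt(2) * np.sin(np.pi * x)
psi2 = np.sqrt(2) * np.sin(2 * np.pi * x)
E1, E2 = np.pi**2 / 2, 2 * np.pi**2        # E_n = n^2 pi^2 / 2 in these units

def density(t):
    """|Psi(x,t)|^2 for the equal-weight two-state superposition."""
    Psi = (psi1 * np.exp(-1j * E1 * t) + psi2 * np.exp(-1j * E2 * t)) / np.sqrt(2)
    return np.abs(Psi) ** 2

period = 2 * np.pi / (E2 - E1)   # the angular frequency is (E2 - E1)/hbar
d0 = density(0.3)
d1 = density(0.3 + period)       # one full period later: the density repeats
print(np.max(np.abs(d1 - d0)))
```

The probability lump sloshes back and forth in the well at exactly the beat frequency of the two energies, just as the cosine in Example 2.1 predicts.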
∗Problem 2.1 Prove the following three theorems:

(a) For normalizable solutions, the separation constant E must be real. Hint:
Write E (in Equation 2.7) as E₀ + iΓ (with E₀ and Γ real), and show that
if Equation 1.20 is to hold for all t, Γ must be zero.

(b) The time-independent wave function ψ(x) can always be taken to be real
(unlike Ψ(x, t), which is necessarily complex). This doesn't mean that every
solution to the time-independent Schrödinger equation is real; what it says
is that if you've got one that is not, it can always be expressed as a linear
combination of solutions (with the same energy) that are. So you might as
well stick to ψ's that are real. Hint: If ψ(x) satisfies Equation 2.5, for a
given E, so too does its complex conjugate, and hence also the real linear
combinations (ψ + ψ*) and i(ψ − ψ*).
⁷This is nicely illustrated by an applet at the Web site http://thorin.adnc.com/~topquark/quantum/deepwellmain.html.
(c) If V(x) is an even function (that is, V(−x) = V(x)) then ψ(x) can always
be taken to be either even or odd. Hint: If ψ(x) satisfies Equation 2.5, for
a given E, so too does ψ(−x), and hence also the even and odd linear
combinations ψ(x) ± ψ(−x).
∗Problem 2.2 Show that E must exceed the minimum value of V(x), for every
normalizable solution to the time-independent Schrödinger equation. What is the
classical analog to this statement? Hint: Rewrite Equation 2.5 in the form

$$\frac{d^2\psi}{dx^2} = \frac{2m}{\hbar^2}[V(x) - E]\psi;$$

if E < V_min, then ψ and its second derivative always have the same sign—argue
that such a function cannot be normalized.
2.2 THE INFINITE SQUARE WELL
Suppose

$$V(x) = \begin{cases} 0, & \text{if } 0 \le x \le a,\\ \infty, & \text{otherwise}\end{cases}$$   [2.19]
(Figure 2.1). A particle in this potential is completely free, except at the two ends
(x = 0 and x = a), where an infinite force prevents it from escaping. A classical
model would be a cart on a frictionless horizontal air track, with perfectly elastic
bumpers—it just keeps bouncing back and forth forever. (This potential is artifi-
cial, of course, but I urge you to treat it with respect. Despite its simplicity—or
rather, precisely because of its simplicity—it serves as a wonderfully accessi-
ble test case for all the fancy machinery that comes later. We’ll refer back to it
frequently.)
FIGURE 2.1: The infinite square well potential (Equation 2.19).
2. As you go up in energy, each successive state has one more node (zero-crossing):
ψ₁ has none (the end points don't count), ψ₂ has one, ψ₃ has two, and
so on.
3. They are mutually orthogonal, in the sense that

$$\int \psi_m(x)^*\psi_n(x)\,dx = 0,$$   [2.29]

whenever m ≠ n. Proof:

$$\int \psi_m(x)^*\psi_n(x)\,dx = \frac{2}{a}\int_0^a \sin\!\left(\frac{m\pi}{a}x\right)\sin\!\left(\frac{n\pi}{a}x\right)dx$$

$$= \frac{1}{a}\int_0^a\left[\cos\!\left(\frac{m-n}{a}\pi x\right) - \cos\!\left(\frac{m+n}{a}\pi x\right)\right]dx$$

$$= \left.\left\{\frac{1}{(m-n)\pi}\sin\!\left(\frac{m-n}{a}\pi x\right) - \frac{1}{(m+n)\pi}\sin\!\left(\frac{m+n}{a}\pi x\right)\right\}\right|_0^a$$

$$= \frac{1}{\pi}\left\{\frac{\sin[(m-n)\pi]}{m-n} - \frac{\sin[(m+n)\pi]}{m+n}\right\} = 0.$$
Note that this argument does not work if m = n. (Can you spot the point at which
it fails?) In that case normalization tells us that the integral is 1. In fact, we can
combine orthogonality and normalization into a single statement:¹⁰
$$\int \psi_m(x)^*\psi_n(x)\,dx = \delta_{mn},$$   [2.30]

where δ_mn (the so-called Kronecker delta) is defined in the usual way,

$$\delta_{mn} = \begin{cases} 0, & \text{if } m \ne n;\\ 1, & \text{if } m = n.\end{cases}$$   [2.31]
We say that the ψ's are orthonormal.
4. They are complete, in the sense that any other function, f(x), can be
expressed as a linear combination of them:

$$f(x) = \sum_{n=1}^{\infty} c_n\psi_n(x) = \sqrt{\frac{2}{a}}\sum_{n=1}^{\infty} c_n \sin\!\left(\frac{n\pi}{a}x\right).$$   [2.32]
¹⁰In this case the ψ's are real, so the * on ψ_m is unnecessary, but for future purposes it's a good
idea to get in the habit of putting it there.
I'm not about to prove the completeness of the functions sin(nπx/a), but if you've
studied advanced calculus you will recognize that Equation 2.32 is nothing but the
Fourier series for f(x), and the fact that "any" function can be expanded in this
way is sometimes called Dirichlet's theorem.¹¹
The coefficients c_n can be evaluated—for a given f(x)—by a method I call
Fourier's trick, which beautifully exploits the orthonormality of {ψ_n}: Multiply
both sides of Equation 2.32 by ψ_m(x)*, and integrate.

$$\int \psi_m(x)^* f(x)\,dx = \sum_{n=1}^{\infty} c_n\int \psi_m(x)^*\psi_n(x)\,dx = \sum_{n=1}^{\infty} c_n\delta_{mn} = c_m.$$   [2.33]

(Notice how the Kronecker delta kills every term in the sum except the one for
which n = m.) Thus the nth coefficient in the expansion of f(x) is¹²

$$c_n = \int \psi_n(x)^* f(x)\,dx.$$   [2.34]
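Fourier's trick is easy to test numerically. The sketch below is my own check, not part of the text: with a = 1 and a sample function f(x) = x(a − x), the Gram matrix confirms orthonormality (Equation 2.30), and the coefficients from Equation 2.34 rebuild f through the series of Equation 2.32:

```python
import numpy as np

# Fourier's trick (Eq. 2.34) in the infinite square well, with a = 1.
a = 1.0
x = np.linspace(0.0, a, 20001)
dx = x[1] - x[0]
N = 100
psi = [np.sqrt(2 / a) * np.sin(n * np.pi * x / a) for n in range(1, N + 1)]

# Orthonormality (Eq. 2.30): the Gram matrix should be the identity.
gram = np.array([[np.sum(pm * pn) * dx for pn in psi[:5]] for pm in psi[:5]])

# Expand a sample function f(x) = x(a - x) and rebuild it from the c_n.
f = x * (a - x)
c = np.array([np.sum(pn * f) * dx for pn in psi])       # Eq. 2.34
f_rebuilt = sum(cn * pn for cn, pn in zip(c, psi))      # Eq. 2.32, truncated
print(np.max(np.abs(gram - np.eye(5))), np.max(np.abs(f_rebuilt - f)))
```

The coefficients fall off like 1/n³ here, so one hundred terms already reproduce f to better than a part in a thousand everywhere in the well.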
These four properties are extremely powerful, and they are not peculiar to the
infinite square well. The first is true whenever the potential itself is a symmetric
function; the second is universal, regardless of the shape of the potential.¹³ Orthogonality
is also quite general—I'll show you the proof in Chapter 3. Completeness
holds for all the potentials you are likely to encounter, but the proofs tend to be
nasty and laborious; I’m afraid most physicists simply assume completeness, and
hope for the best.
The stationary states (Equation 2.18) of the infinite square well are evidently

$$\Psi_n(x,t) = \sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi}{a}x\right) e^{-i(n^2\pi^2\hbar/2ma^2)t}.$$   [2.35]
I claimed (Equation 2.17) that the most general solution to the (time-dependent)
Schrödinger equation is a linear combination of stationary states:

$$\Psi(x,t) = \sum_{n=1}^{\infty} c_n\sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi}{a}x\right) e^{-i(n^2\pi^2\hbar/2ma^2)t}.$$   [2.36]
¹¹See, for example, Mary Boas, Mathematical Methods in the Physical Sciences, 2nd ed. (New
York: John Wiley, 1983), p. 313; f(x) can even have a finite number of finite discontinuities.
¹²It doesn't matter whether you use m or n as the "dummy index" here (as long as you are
consistent on the two sides of the equation, of course); whatever letter you use, it just stands for "any
positive integer."
¹³See, for example, John L. Powell and Bernd Crasemann, Quantum Mechanics (Addison-Wesley,
Reading, MA, 1961), p. 126.
(If you doubt that this is a solution, by all means check it!) It remains only for
me to demonstrate that I can fit any prescribed initial wave function, Ψ(x, 0), by
appropriate choice of the coefficients c_n:

$$\Psi(x,0) = \sum_{n=1}^{\infty} c_n\psi_n(x).$$
The completeness of the ψ's (confirmed in this case by Dirichlet's theorem) guarantees
that I can always express Ψ(x, 0) in this way, and their orthonormality
licenses the use of Fourier's trick to determine the actual coefficients:

$$c_n = \sqrt{\frac{2}{a}}\int_0^a \sin\!\left(\frac{n\pi}{a}x\right)\Psi(x,0)\,dx.$$   [2.37]
That does it: Given the initial wave function, Ψ(x, 0), we first compute the
expansion coefficients c_n, using Equation 2.37, and then plug these into Equation 2.36
to obtain Ψ(x, t). Armed with the wave function, we are in a position to compute any
dynamical quantities of interest, using the procedures in Chapter 1. And this same
ritual applies to any potential—the only things that change are the functional form
of the ψ's and the equation for the allowed energies.
Example 2.2 A particle in the infinite square well has the initial wave function

$$\Psi(x,0) = Ax(a-x) \quad (0 \le x \le a),$$

for some constant A (see Figure 2.3). Outside the well, of course, Ψ = 0. Find
Ψ(x, t).
FIGURE 2.3: The starting wave function in Example 2.2.
Example 2.3 In Example 2.2 the starting wave function (Figure 2.3) closely
resembles the ground state ψ₁ (Figure 2.2). This suggests that |c₁|² should dominate,
and in fact

$$|c_1|^2 = \left(\frac{8\sqrt{15}}{\pi^3}\right)^2 = 0.998555\ldots$$

The rest of the coefficients make up the difference:¹⁴

$$\sum_{n=1}^{\infty} |c_n|^2 = \left(\frac{8\sqrt{15}}{\pi^3}\right)^2 \sum_{n=1,3,5,\ldots}\frac{1}{n^6} = 1.$$

The expectation value of the energy, in this example, is

$$\langle H\rangle = \sum_{n=1,3,5,\ldots} |c_n|^2 E_n = \left(\frac{8\sqrt{15}}{\pi^3}\right)^2 \sum_{n=1,3,5,\ldots}\frac{1}{n^6}\,\frac{n^2\pi^2\hbar^2}{2ma^2} = \frac{480\hbar^2}{\pi^4 ma^2}\sum_{n=1,3,5,\ldots}\frac{1}{n^4} = \frac{5\hbar^2}{ma^2}.$$

As one might expect, it is very close to E₁ = π²ℏ²/2ma²—slightly larger, because
of the admixture of excited states.
Problem 2.3 Show that there is no acceptable solution to the (time-independent)
Schrödinger equation for the infinite square well with E = 0 or E < 0. (This is a
special case of the general theorem in Problem 2.2, but this time do it by explicitly
solving the Schrödinger equation, and showing that you cannot meet the boundary
conditions.)
∗Problem 2.4 Calculate ⟨x⟩, ⟨x²⟩, ⟨p⟩, ⟨p²⟩, σ_x, and σ_p, for the nth stationary state
of the infinite square well. Check that the uncertainty principle is satisfied. Which
state comes closest to the uncertainty limit?
∗Problem 2.5 A particle in the infinite square well has as its initial wave function
an even mixture of the first two stationary states:

$$\Psi(x,0) = A[\psi_1(x) + \psi_2(x)].$$
¹⁴You can look up the series

$$1 + \frac{1}{3^6} + \frac{1}{5^6} + \cdots = \frac{\pi^6}{960}$$

and

$$1 + \frac{1}{3^4} + \frac{1}{5^4} + \cdots = \frac{\pi^4}{96}$$

in math tables, under "Sums of Reciprocal Powers" or "Riemann Zeta Function."
(a) Normalize Ψ(x, 0). (That is, find A. This is very easy, if you exploit the
orthonormality of ψ₁ and ψ₂. Recall that, having normalized Ψ at t = 0,
you can rest assured that it stays normalized—if you doubt this, check it
explicitly after doing part (b).)

(b) Find Ψ(x, t) and |Ψ(x, t)|². Express the latter as a sinusoidal function of
time, as in Example 2.1. To simplify the result, let ω ≡ π²ℏ/2ma².

(c) Compute ⟨x⟩. Notice that it oscillates in time. What is the angular frequency
of the oscillation? What is the amplitude of the oscillation? (If your amplitude
is greater than a/2, go directly to jail.)

(d) Compute ⟨p⟩. (As Peter Lorre would say, "Do it ze kveek vay, Johnny!")

(e) If you measured the energy of this particle, what values might you get, and
what is the probability of getting each of them? Find the expectation value
of H. How does it compare with E₁ and E₂?
Problem 2.6 Although the overall phase constant of the wave function is of no
physical significance (it cancels out whenever you calculate a measurable quantity),
the relative phase of the coefficients in Equation 2.17 does matter. For example,
suppose we change the relative phase of ψ₁ and ψ₂ in Problem 2.5:

$$\Psi(x,0) = A[\psi_1(x) + e^{i\phi}\psi_2(x)],$$

where φ is some constant. Find Ψ(x, t), |Ψ(x, t)|², and ⟨x⟩, and compare your
results with what you got before. Study the special cases φ = π/2 and φ = π.
(For a graphical exploration of this problem see the applet in footnote 7.)
∗Problem 2.7 A particle in the infinite square well has the initial wave function¹⁵

$$\Psi(x,0) = \begin{cases} Ax, & 0 \le x \le a/2,\\ A(a-x), & a/2 \le x \le a.\end{cases}$$
(a) Sketch Ψ(x, 0), and determine the constant A.

(b) Find Ψ(x, t).
¹⁵There is no restriction in principle on the shape of the starting wave function, as long
as it is normalizable. In particular, Ψ(x, 0) need not have a continuous derivative—in fact, it
doesn't even have to be a continuous function. However, if you try to calculate ⟨H⟩ using
$\int \Psi(x,0)^* H\Psi(x,0)\,dx$ in such a case, you may encounter technical difficulties, because the second
derivative of Ψ(x, 0) is ill-defined. It works in Problem 2.9 because the discontinuities occur at the end
points, where the wave function is zero anyway. In Problem 2.48 you'll see how to manage cases like
Problem 2.7.
(c) What is the probability that a measurement of the energy would yield the
value E₁?
(d) Find the expectation value of the energy.
Problem 2.8 A particle of mass m in the infinite square well (of width a) starts
out in the left half of the well, and is (at t = 0) equally likely to be found at any
point in that region.

(a) What is its initial wave function, Ψ(x, 0)? (Assume it is real. Don't forget
to normalize it.)

(b) What is the probability that a measurement of the energy would yield the
value π²ℏ²/2ma²?
Problem 2.9 For the wave function in Example 2.2, find the expectation value of
H, at time t = 0, the "old fashioned" way:

$$\langle H\rangle = \int \Psi(x,0)^*\,H\,\Psi(x,0)\,dx.$$

Compare the result obtained in Example 2.3. Note: Because ⟨H⟩ is independent of
time, there is no loss of generality in using t = 0.
2.3 THE HARMONIC OSCILLATOR
The paradigm for a classical harmonic oscillator is a mass m attached to a spring
of force constant k. The motion is governed by Hooke's law,

$$F = -kx = m\frac{d^2x}{dt^2}$$

(ignoring friction), and the solution is

$$x(t) = A\sin(\omega t) + B\cos(\omega t),$$

where

$$\omega \equiv \sqrt{\frac{k}{m}}$$   [2.41]

is the (angular) frequency of oscillation. The potential energy is

$$V(x) = \frac{1}{2}kx^2;$$   [2.42]

its graph is a parabola.
As anticipated, there's an extra term, involving (xp − px). We call this the commutator
of x and p; it is a measure of how badly they fail to commute. In general,
the commutator of operators A and B (written with square brackets) is

$$[A, B] \equiv AB - BA.$$   [2.48]

In this notation,

$$a_-a_+ = \frac{1}{2\hbar m\omega}\left[p^2 + (m\omega x)^2\right] - \frac{i}{2\hbar}[x, p].$$   [2.49]

We need to figure out the commutator of x and p. Warning: Operators are
notoriously slippery to work with in the abstract, and you are bound to make
mistakes unless you give them a "test function," f(x), to act on. At the end you
can throw away the test function, and you'll be left with an equation involving the
operators alone. In the present case we have:

$$[x, p]f(x) = \left[x\frac{\hbar}{i}\frac{df}{dx} - \frac{\hbar}{i}\frac{d}{dx}(xf)\right] = \frac{\hbar}{i}\left(x\frac{df}{dx} - x\frac{df}{dx} - f\right) = i\hbar f(x).$$   [2.50]
Dropping the test function, which has served its purpose,
$$[x, p] = i\hbar.$$   [2.51]

This lovely and ubiquitous result is known as the canonical commutation relation.¹⁸
With this, Equation 2.49 becomes

$$a_-a_+ = \frac{1}{\hbar\omega}H + \frac{1}{2},$$   [2.52]

or

$$H = \hbar\omega\left(a_-a_+ - \frac{1}{2}\right).$$   [2.53]

Evidently the Hamiltonian does not factor perfectly—there's that extra −1/2 on the
right. Notice that the ordering of a₊ and a₋ is important here; the same argument,
with a₊ on the left, yields

$$a_+a_- = \frac{1}{\hbar\omega}H - \frac{1}{2}.$$   [2.54]

In particular,

$$[a_-, a_+] = 1.$$   [2.55]
¹⁸In a deep sense all of the mysteries of quantum mechanics can be traced to the fact that position
and momentum do not commute. Indeed, some authors take the canonical commutation relation as an
axiom of the theory, and use it to derive p = (ℏ/i)d/dx.
So the Hamiltonian can equally well be written

$$H = \hbar\omega\left(a_+a_- + \frac{1}{2}\right).$$   [2.56]

In terms of a±, then, the Schrödinger equation¹⁹ for the harmonic oscillator takes
the form

$$\hbar\omega\left(a_\pm a_\mp \pm \frac{1}{2}\right)\psi = E\psi$$   [2.57]

(in equations like this you read the upper signs all the way across, or else the lower
signs).
Now, here comes the crucial step: I claim that if ψ satisfies the Schrödinger
equation with energy E (that is: Hψ = Eψ), then a₊ψ satisfies the Schrödinger
equation with energy (E + ℏω): H(a₊ψ) = (E + ℏω)(a₊ψ). Proof:

$$H(a_+\psi) = \hbar\omega\left(a_+a_- + \frac{1}{2}\right)(a_+\psi) = \hbar\omega\left(a_+a_-a_+ + \frac{1}{2}a_+\right)\psi$$

$$= \hbar\omega a_+\left(a_-a_+ + \frac{1}{2}\right)\psi = a_+\left[\hbar\omega\left(a_+a_- + 1 + \frac{1}{2}\right)\psi\right]$$

$$= a_+(H + \hbar\omega)\psi = a_+(E + \hbar\omega)\psi = (E + \hbar\omega)(a_+\psi).$$

(I used Equation 2.55 to replace a₋a₊ by a₊a₋ + 1, in the second line. Notice
that whereas the ordering of a₊ and a₋ does matter, the ordering of a± and
any constants—such as ℏ, ω, and E—does not; an operator commutes with any
constant.)
By the same token, a₋ψ is a solution with energy (E − ℏω):

$$H(a_-\psi) = \hbar\omega\left(a_-a_+ - \frac{1}{2}\right)(a_-\psi) = \hbar\omega a_-\left(a_+a_- - \frac{1}{2}\right)\psi$$

$$= a_-\left[\hbar\omega\left(a_-a_+ - 1 - \frac{1}{2}\right)\psi\right] = a_-(H - \hbar\omega)\psi = a_-(E - \hbar\omega)\psi = (E - \hbar\omega)(a_-\psi).$$
Here, then, is a wonderful machine for generating new solutions, with higher and
lower energies—if we could just find one solution, to get started! We call a±
ladder operators, because they allow us to climb up and down in energy; a₊ is
the raising operator, and a₋ the lowering operator. The "ladder" of states is
illustrated in Figure 2.5.
¹⁹I'm getting tired of writing "time-independent Schrödinger equation," so when it's clear from
the context which one I mean, I'll just call it the "Schrödinger equation."
FIGURE 2.5: The “ladder” of states for the harmonic oscillator.
But wait! What if I apply the lowering operator repeatedly? Eventually I’m
going to reach a state with energy less than zero, which (according to the general
theorem in Problem 2.2) does not exist! At some point the machine must fail.
How can that happen? We know that a₋ψ is a new solution to the Schrödinger
equation, but there is no guarantee that it will be normalizable—it might be zero,
or its square-integral might be infinite. In practice it is the former: There occurs a
"lowest rung" (call it ψ₀) such that

$$a_-\psi_0 = 0.$$   [2.58]
We can use this to determine ψ₀(x):

$$\frac{1}{\sqrt{2\hbar m\omega}}\left(\hbar\frac{d\psi_0}{dx} + m\omega x\,\psi_0\right) = 0,$$
and integration by parts takes ∫ f*(dg/dx) dx to −∫ (df/dx)* g dx (the boundary
terms vanish, for the reason indicated in footnote 22), so

$$\int_{-\infty}^{\infty} f^*(a_\pm g)\,dx = \frac{1}{\sqrt{2\hbar m\omega}}\int_{-\infty}^{\infty}\left[\left(\mp\hbar\frac{d}{dx} + m\omega x\right)f\right]^* g\,dx = \int_{-\infty}^{\infty}(a_\mp f)^* g\,dx.$$

QED
In particular,

$$\int_{-\infty}^{\infty}(a_\pm\psi_n)^*(a_\pm\psi_n)\,dx = \int_{-\infty}^{\infty}(a_\mp a_\pm\psi_n)^*\psi_n\,dx.$$

But (invoking Equations 2.57 and 2.61)

$$a_+a_-\psi_n = n\psi_n, \qquad a_-a_+\psi_n = (n+1)\psi_n,$$   [2.65]

so

$$\int_{-\infty}^{\infty}(a_+\psi_n)^*(a_+\psi_n)\,dx = |c_n|^2\int_{-\infty}^{\infty}|\psi_{n+1}|^2\,dx = (n+1)\int_{-\infty}^{\infty}|\psi_n|^2\,dx,$$

$$\int_{-\infty}^{\infty}(a_-\psi_n)^*(a_-\psi_n)\,dx = |d_n|^2\int_{-\infty}^{\infty}|\psi_{n-1}|^2\,dx = n\int_{-\infty}^{\infty}|\psi_n|^2\,dx.$$

But since ψ_n and ψ_{n±1} are normalized, it follows that |c_n|² = n + 1 and |d_n|² = n,
and hence

$$a_+\psi_n = \sqrt{n+1}\,\psi_{n+1}, \qquad a_-\psi_n = \sqrt{n}\,\psi_{n-1}.$$   [2.66]
Thus

$$\psi_1 = a_+\psi_0, \qquad \psi_2 = \frac{1}{\sqrt{2}}a_+\psi_1 = \frac{1}{\sqrt{2}}(a_+)^2\psi_0,$$

$$\psi_3 = \frac{1}{\sqrt{3}}a_+\psi_2 = \frac{1}{\sqrt{3\cdot 2}}(a_+)^3\psi_0, \qquad \psi_4 = \frac{1}{\sqrt{4}}a_+\psi_3 = \frac{1}{\sqrt{4\cdot 3\cdot 2}}(a_+)^4\psi_0,$$

and so on. Clearly

$$\psi_n = \frac{1}{\sqrt{n!}}(a_+)^n\psi_0,$$   [2.67]

which is to say that the normalization factor in Equation 2.61 is A_n = 1/√n! (in
particular, A₁ = 1, confirming our result in Example 2.4).
As in the case of the infinite square well, the stationary states of the harmonic
oscillator are orthogonal:

$$\int_{-\infty}^{\infty}\psi_m^*\psi_n\,dx = \delta_{mn}.$$   [2.68]
This can be proved using Equation 2.65, and Equation 2.64 twice—first moving
a₊ and then moving a₋:

$$\int_{-\infty}^{\infty}\psi_m^*(a_+a_-)\psi_n\,dx = n\int_{-\infty}^{\infty}\psi_m^*\psi_n\,dx;$$

$$\int_{-\infty}^{\infty}\psi_m^*(a_+a_-)\psi_n\,dx = \int_{-\infty}^{\infty}(a_-\psi_m)^*(a_-\psi_n)\,dx = \int_{-\infty}^{\infty}(a_+a_-\psi_m)^*\psi_n\,dx = m\int_{-\infty}^{\infty}\psi_m^*\psi_n\,dx.$$

Unless m = n, then, ∫ψ_m*ψ_n dx must be zero. Orthonormality means that we
can again use Fourier's trick (Equation 2.34) to evaluate the coefficients, when we
expand Ψ(x, 0) as a linear combination of stationary states (Equation 2.16), and
|c_n|² is again the probability that a measurement of the energy would yield the
value E_n.
Example 2.5 Find the expectation value of the potential energy in the nth state
of the harmonic oscillator.

Solution:

$$\langle V\rangle = \left\langle\frac{1}{2}m\omega^2x^2\right\rangle = \frac{1}{2}m\omega^2\int_{-\infty}^{\infty}\psi_n^*\,x^2\,\psi_n\,dx.$$

There's a beautiful device for evaluating integrals of this kind (involving powers
of x or p): Use the definition (Equation 2.47) to express x and p in terms of the
raising and lowering operators:

$$x = \sqrt{\frac{\hbar}{2m\omega}}(a_+ + a_-); \qquad p = i\sqrt{\frac{\hbar m\omega}{2}}(a_+ - a_-).$$   [2.69]

In this example we are interested in x²:

$$x^2 = \frac{\hbar}{2m\omega}\left[(a_+)^2 + (a_+a_-) + (a_-a_+) + (a_-)^2\right].$$

So

$$\langle V\rangle = \frac{\hbar\omega}{4}\int_{-\infty}^{\infty}\psi_n^*\left[(a_+)^2 + (a_+a_-) + (a_-a_+) + (a_-)^2\right]\psi_n\,dx.$$
But (a₊)²ψ_n is (apart from normalization) ψ_{n+2}, which is orthogonal to ψ_n, and
the same goes for (a₋)²ψ_n, which is proportional to ψ_{n−2}. So those terms drop
out, and we can use Equation 2.65 to evaluate the remaining two:

$$\langle V\rangle = \frac{\hbar\omega}{4}(n + n + 1) = \frac{1}{2}\hbar\omega\left(n + \frac{1}{2}\right).$$

As it happens, the expectation value of the potential energy is exactly half the
total (the other half, of course, is kinetic). This is a peculiarity of the harmonic
oscillator, as we'll see later on.
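The same matrix representation of a± gives a quick check of Example 2.5 (again my own sketch, with ℏ = m = ω = 1; only entries well below the truncation are trustworthy):

```python
import numpy as np

# <V> for the oscillator via x = sqrt(hbar/2m omega)(a+ + a-), Eq. 2.69,
# with hbar = m = omega = 1 and a truncated N-rung ladder.
N = 10
a_minus = np.diag(np.sqrt(np.arange(1, N)), k=1)
a_plus = a_minus.T
x_op = (a_plus + a_minus) / np.sqrt(2)      # sqrt(hbar/2m omega) = 1/sqrt(2)

V = 0.5 * (x_op @ x_op)                     # V = (1/2) m omega^2 x^2
# Diagonal entries are <n|V|n> = (1/2)(n + 1/2), i.e. exactly half of E_n,
# as Example 2.5 found (trust only n well below the truncation).
print(np.diag(V)[:5])   # [0.25, 0.75, 1.25, 1.75, 2.25]
```

The off-diagonal (a±)² pieces of x² land two rungs away and never touch the diagonal, which is the matrix version of the orthogonality argument above.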
∗Problem 2.10

(a) Construct ψ₂(x).

(b) Sketch ψ₀, ψ₁, and ψ₂.

(c) Check the orthogonality of ψ₀, ψ₁, and ψ₂, by explicit integration. Hint: If
you exploit the even-ness and odd-ness of the functions, there is really only
one integral left to do.
∗Problem 2.11

(a) Compute ⟨x⟩, ⟨p⟩, ⟨x²⟩, and ⟨p²⟩, for the states ψ₀ (Equation 2.59) and ψ₁
(Equation 2.62), by explicit integration. Comment: In this and other problems
involving the harmonic oscillator it simplifies matters if you introduce the
variable ξ ≡ √(mω/ℏ) x and the constant α ≡ (mω/πℏ)^{1/4}.

(b) Check the uncertainty principle for these states.

(c) Compute ⟨T⟩ (the average kinetic energy) and ⟨V⟩ (the average potential
energy) for these states. (No new integration allowed!) Is their sum what you
would expect?
∗Problem 2.12 Find ⟨x⟩, ⟨p⟩, ⟨x²⟩, ⟨p²⟩, and ⟨T⟩, for the nth stationary state of the
harmonic oscillator, using the method of Example 2.5. Check that the uncertainty
principle is satisfied.
Problem 2.13 A particle in the harmonic oscillator potential starts out in the state

$$\Psi(x,0) = A[3\psi_0(x) + 4\psi_1(x)].$$

(a) Find A.

(b) Construct Ψ(x, t) and |Ψ(x, t)|².
Putting these into Equation 2.78, we find

$$\sum_{j=0}^{\infty}\left[(j+1)(j+2)a_{j+2} - 2ja_j + (K-1)a_j\right]\xi^j = 0.$$   [2.80]

It follows (from the uniqueness of power series expansions²⁵) that the coefficient
of each power of ξ must vanish,

$$(j+1)(j+2)a_{j+2} - 2ja_j + (K-1)a_j = 0,$$

and hence that

$$a_{j+2} = \frac{2j+1-K}{(j+1)(j+2)}\,a_j.$$   [2.81]

This recursion formula is entirely equivalent to the Schrödinger equation.
Starting with a₀, it generates all the even-numbered coefficients:
$$a_2 = \frac{(1-K)}{2}a_0, \qquad a_4 = \frac{(5-K)}{12}a_2 = \frac{(5-K)(1-K)}{24}a_0, \;\ldots$$

and starting with a₁, it generates the odd coefficients:

$$a_3 = \frac{(3-K)}{6}a_1, \qquad a_5 = \frac{(7-K)}{20}a_3 = \frac{(7-K)(3-K)}{120}a_1, \;\ldots$$
We write the complete solution as
$$h(\xi) = h_{\text{even}}(\xi) + h_{\text{odd}}(\xi), \tag{2.82}$$
where
$$h_{\text{even}}(\xi) \equiv a_0 + a_2\xi^2 + a_4\xi^4 + \cdots$$
is an even function of $\xi$, built on $a_0$, and
$$h_{\text{odd}}(\xi) \equiv a_1\xi + a_3\xi^3 + a_5\xi^5 + \cdots$$
is an odd function, built on $a_1$. Thus Equation 2.81 determines $h(\xi)$ in terms of two arbitrary constants ($a_0$ and $a_1$)—which is just what we would expect, for a second-order differential equation.
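Since the recursion is so central, it is worth iterating it explicitly. The following Python sketch (my illustration, not part of the text) generates the $a_j$ of Equation 2.81 in exact rational arithmetic; choosing $K = 5$ (that is, $n = 2$) with $a_1 = 0$ shows the even series terminating after the $\xi^2$ term:

```python
# Illustrative sketch (not from the text): iterate Equation 2.81,
#   a_{j+2} = (2j + 1 - K) a_j / ((j+1)(j+2)),
# in exact rational arithmetic.
from fractions import Fraction

def series_coefficients(K, a0, a1, jmax):
    """Return [a_0, ..., a_jmax] generated by the recursion formula."""
    a = [Fraction(0)] * (jmax + 1)
    a[0], a[1] = Fraction(a0), Fraction(a1)
    for j in range(jmax - 1):
        a[j + 2] = Fraction(2*j + 1 - K) * a[j] / ((j + 1) * (j + 2))
    return a

# K = 2n + 1 with n = 2: the even series stops after xi^2, giving
# h = a0 (1 - 2 xi^2), proportional to the Hermite polynomial H_2;
# all higher coefficients vanish.
coeffs = series_coefficients(K=5, a0=1, a1=0, jmax=8)
print([str(c) for c in coeffs])
```

With $a_0 = 0$, $a_1 = 1$, and $K = 2n + 1$ for odd $n$, the odd series terminates in the same way.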
However, not all the solutions so obtained are normalizable. For at very large $j$, the recursion formula becomes (approximately)
$$a_{j+2} \approx \frac{2}{j}\,a_j,$$
²⁵See, for example, Arfken (footnote 24), Section 5.7.
with the (approximate) solution
$$a_j \approx \frac{C}{(j/2)!},$$
for some constant $C$, and this yields (at large $\xi$, where the higher powers dominate)
$$h(\xi) \approx C\sum \frac{1}{(j/2)!}\,\xi^j \approx C\sum \frac{1}{k!}\,\xi^{2k} \approx C e^{\xi^2}.$$
Now, if $h$ goes like $\exp(\xi^2)$, then $\psi$ (remember $\psi$?—that's what we're trying to calculate) goes like $\exp(\xi^2/2)$ (Equation 2.77), which is precisely the asymptotic behavior we didn't want.²⁶ There is only one way to wiggle out of this: For normalizable solutions the power series must terminate. There must occur some "highest" $j$ (call it $n$), such that the recursion formula spits out $a_{n+2} = 0$ (this will truncate either the series $h_{\text{even}}$ or the series $h_{\text{odd}}$; the other one must be zero from the start: $a_1 = 0$ if $n$ is even, and $a_0 = 0$ if $n$ is odd). For physically acceptable solutions, then, Equation 2.81 requires that
$$K = 2n + 1,$$
for some non-negative integer $n$, which is to say (referring to Equation 2.73) that the energy must be
$$E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \quad \text{for } n = 0, 1, 2, \ldots. \tag{2.83}$$
Thus we recover, by a completely different method, the fundamental quantization condition we found algebraically in Equation 2.61.

It seems at first rather surprising that the quantization of energy should emerge from a technical detail in the power series solution to the Schrödinger equation, but let's look at it from a different perspective. Equation 2.70 has solutions, of course, for any value of $E$ (in fact, it has two linearly independent solutions for every $E$). But almost all of these solutions blow up exponentially at large $x$, and hence are not normalizable. Imagine, for example, using an $E$ that is slightly less than one of the allowed values (say, $0.49\,\hbar\omega$), and plotting the solution (Figure 2.6(a)): the "tails" fly off to infinity. Now try an $E$ slightly larger (say, $0.51\,\hbar\omega$); the "tails" now blow up in the other direction (Figure 2.6(b)). As you tweak the parameter in tiny increments from 0.49 to 0.51, the tails flip over when you pass through 0.5—only at precisely 0.5 do the tails go to zero, leaving a normalizable solution.²⁷
²⁶It's no surprise that the ill-behaved solutions are still contained in Equation 2.81; this recursion relation is equivalent to the Schrödinger equation, so it's got to include both the asymptotic forms we found in Equation 2.75.
²⁷It is possible to set this up on a computer, and discover the allowed energies "experimentally." You might call it the wag the dog method: When the tail wags, you know you've just passed over an allowed value. See Problems 2.54-2.56.
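The "wag the dog" experiment of footnote 27 takes only a few lines. In the sketch below (mine, not the text's) I work in the dimensionless variable $\xi$, where the equation reads $d^2\psi/d\xi^2 = (\xi^2 - K)\psi$ with $K = 2E/\hbar\omega$, and integrate outward from even initial data; the tail at large $\xi$ changes sign as $E$ crosses the allowed value $0.5\,\hbar\omega$:

```python
# Illustrative "wag the dog" sketch (not from the text): shoot the
# dimensionless oscillator equation u'' = (xi^2 - K) u outward and
# watch the tail flip sign as K crosses an allowed value (K = 1).
def tail(K, xi_max=5.0, n=5000):
    """RK4-integrate u'' = (xi^2 - K) u from xi = 0 with even initial
    data u(0) = 1, u'(0) = 0; return u at xi_max."""
    h = xi_max / n
    u, v, xi = 1.0, 0.0, 0.0
    def f(xi, u, v):                       # (u', v') with v = u'
        return v, (xi**2 - K) * u
    for _ in range(n):
        k1 = f(xi, u, v)
        k2 = f(xi + h/2, u + h/2*k1[0], v + h/2*k1[1])
        k3 = f(xi + h/2, u + h/2*k2[0], v + h/2*k2[1])
        k4 = f(xi + h, u + h*k3[0], v + h*k3[1])
        u += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h
    return u

# E = 0.49 hbar*omega -> K = 0.98;  E = 0.51 hbar*omega -> K = 1.02:
print(tail(0.98), tail(1.02))   # tails blow up with opposite signs
```

At $K = 1$ exactly, the returned tail is tiny: the normalizable solution $e^{-\xi^2/2}$ has been found.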
FIGURE 2.6: Solutions to the Schrödinger equation for (a) $E = 0.49\,\hbar\omega$, and (b) $E = 0.51\,\hbar\omega$.
For the allowed values of $K$, the recursion formula reads
$$a_{j+2} = \frac{-2(n-j)}{(j+1)(j+2)}\,a_j. \tag{2.84}$$
If $n = 0$, there is only one term in the series (we must pick $a_1 = 0$ to kill $h_{\text{odd}}$, and $j = 0$ in Equation 2.84 yields $a_2 = 0$):
$$h_0(\xi) = a_0,$$
and hence
$$\psi_0(\xi) = a_0 e^{-\xi^2/2}$$
FIGURE 2.7: (a) The first four stationary states of the harmonic oscillator. This material is used by permission of John Wiley & Sons, Inc.; Stephen Gasiorowicz, Quantum Physics, John Wiley & Sons, Inc., 1974. (b) Graph of $|\psi_{100}|^2$, with the classical distribution (dashed curve) superimposed.
(c) If you differentiate an $n$th-order polynomial, you get a polynomial of order $(n-1)$. For the Hermite polynomials, in fact,
$$\frac{dH_n}{d\xi} = 2nH_{n-1}(\xi). \tag{2.88}$$
Check this, by differentiating $H_5$ and $H_6$.
(d) $H_n(\xi)$ is the $n$th $z$-derivative, at $z = 0$, of the generating function $\exp(-z^2 + 2z\xi)$; or, to put it another way, it is the coefficient of $z^n/n!$ in the Taylor series expansion for this function:
$$e^{-z^2 + 2z\xi} = \sum_{n=0}^{\infty} \frac{z^n}{n!}\,H_n(\xi). \tag{2.89}$$
Use this to rederive $H_0$, $H_1$, and $H_2$.
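Both identities are easy to verify mechanically. The sketch below (my illustration) builds the polynomials from the standard Hermite recursion $H_{n+1} = 2\xi H_n - 2nH_{n-1}$ (a well-known relation, assumed here rather than taken from this excerpt) and checks Equation 2.88 for $n = 5$ and $n = 6$:

```python
# Illustrative sketch (not from the text): polynomials as coefficient
# lists, lowest power first, built by H_{n+1} = 2x H_n - 2n H_{n-1}.
def hermite(n):
    """Coefficients of the Hermite polynomial H_n."""
    H = [[1], [0, 2]]                     # H_0 = 1, H_1 = 2x
    for m in range(1, n):
        nxt = [0] + [2*c for c in H[m]]   # 2x * H_m
        for i, c in enumerate(H[m - 1]):  # ... minus 2m * H_{m-1}
            nxt[i] -= 2*m*c
        H.append(nxt)
    return H[n]

def deriv(p):
    """Coefficients of dp/dx."""
    return [i*c for i, c in enumerate(p)][1:]

# Check Equation 2.88, dH_n/dxi = 2n H_{n-1}, for n = 5:
H5, H4 = hermite(5), hermite(4)
print(deriv(H5) == [10*c for c in H4])
```

The same comparison with `hermite(6)` and `hermite(5)` checks the $n = 6$ case.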
2.4 THE FREE PARTICLE
We turn next to what should have been the simplest case of all: the free particle ($V(x) = 0$ everywhere). Classically this would just mean motion at constant velocity, but in quantum mechanics the problem is surprisingly subtle and tricky. The time-independent Schrödinger equation reads
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi, \tag{2.90}$$
or
$$\frac{d^2\psi}{dx^2} = -k^2\psi, \quad \text{where } k \equiv \frac{\sqrt{2mE}}{\hbar}. \tag{2.91}$$
So far, it's the same as inside the infinite square well (Equation 2.21), where the potential is also zero; this time, however, I prefer to write the general solution in exponential form (instead of sines and cosines), for reasons that will appear in due course:
$$\psi(x) = Ae^{ikx} + Be^{-ikx}. \tag{2.92}$$
Unlike the infinite square well, there are no boundary conditions to restrict the possible values of $k$ (and hence of $E$); the free particle can carry any (positive) energy. Tacking on the standard time dependence, $\exp(-iEt/\hbar)$,
$$\Psi(x, t) = Ae^{ik\left(x - \frac{\hbar k}{2m}t\right)} + Be^{-ik\left(x + \frac{\hbar k}{2m}t\right)}. \tag{2.93}$$
Now, any function of $x$ and $t$ that depends on these variables in the special combination $(x \mp vt)$ (for some constant $v$) represents a wave of fixed profile, traveling in the $\pm x$-direction, at speed $v$. A fixed point on the waveform (for example, a maximum or a minimum) corresponds to a fixed value of the argument, and hence to $x$ and $t$ such that
$$x \mp vt = \text{constant}, \quad \text{or} \quad x = \pm vt + \text{constant}.$$
Since every point on the waveform is moving along with the same velocity, its shape doesn't change as it propagates. Thus the first term in Equation 2.93 represents a wave traveling to the right, and the second represents a wave (of the same energy) going to the left. By the way, since they only differ by the sign in front of $k$, we might as well write
$$\Psi_k(x, t) = Ae^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}, \tag{2.94}$$
and let $k$ run negative to cover the case of waves traveling to the left:
$$k \equiv \pm\frac{\sqrt{2mE}}{\hbar}, \quad \text{with} \quad \begin{cases} k > 0 \Rightarrow \text{traveling to the right}, \\ k < 0 \Rightarrow \text{traveling to the left}. \end{cases} \tag{2.95}$$
Evidently the "stationary states" of the free particle are propagating waves; their wavelength is $\lambda = 2\pi/|k|$, and, according to the de Broglie formula (Equation 1.39), they carry momentum
$$p = \hbar k. \tag{2.96}$$
The speed of these waves (the coefficient of $t$ over the coefficient of $x$) is
$$v_{\text{quantum}} = \frac{\hbar|k|}{2m} = \sqrt{\frac{E}{2m}}. \tag{2.97}$$
On the other hand, the classical speed of a free particle with energy $E$ is given by $E = (1/2)mv^2$ (pure kinetic, since $V = 0$), so
$$v_{\text{classical}} = \sqrt{\frac{2E}{m}} = 2v_{\text{quantum}}. \tag{2.98}$$
Apparently the quantum mechanical wave function travels at half the speed of the particle it is supposed to represent! We'll return to this paradox in a moment—there is an even more serious problem we need to confront first: This wave function is not normalizable. For
$$\int_{-\infty}^{+\infty} \Psi_k^*\Psi_k\,dx = |A|^2\int_{-\infty}^{+\infty} dx = |A|^2(\infty). \tag{2.99}$$
In the case of the free particle, then, the separable solutions do not represent
physically realizable states. A free particle cannot exist in a stationary state; or,
to put it another way, there is no such thing as a free particle with a definite
energy.
FIGURE 2.8: Graph of $|\Psi(x, t)|^2$ (Equation 2.104) at $t = 0$ (the rectangle) and at $t = ma^2/\hbar$ (the curve).
FIGURE 2.9: Example 2.6, for small $a$. (a) Graph of $\Psi(x, 0)$. (b) Graph of $\phi(k)$.
it's flat, since the $k$'s cancelled out (Figure 2.9(b)). This is an example of the uncertainty principle: If the spread in position is small, the spread in momentum (and hence in $k$—see Equation 2.96) must be large. At the other extreme (large $a$) the spread in position is broad (Figure 2.10(a)) and
$$\phi(k) = \sqrt{\frac{a}{\pi}}\,\frac{\sin(ka)}{ka}.$$
Now, $\sin z/z$ has its maximum at $z = 0$, and drops to zero at $z = \pm\pi$ (which, in this context, means $k = \pm\pi/a$). So for large $a$, $\phi(k)$ is a sharp spike about $k = 0$ (Figure 2.10(b)). This time it's got a well-defined momentum but an ill-defined
position.
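The Fourier transform behind this example can be reproduced by brute-force quadrature. In this sketch (mine, not the text's) `phi(k)` evaluates $\phi(k) = \frac{1}{\sqrt{2\pi}}\int_{-a}^{a}\frac{1}{\sqrt{2a}}\,e^{-ikx}\,dx$ numerically and lands on the closed form $\sqrt{a/\pi}\,\sin(ka)/ka$, with its first zero at $k = \pi/a$:

```python
# Illustrative sketch (not from the text): numerical Fourier transform
# of the rectangular Psi(x,0) = 1/sqrt(2a) on (-a, a), trapezoid rule.
import math, cmath

def phi(k, a, n=20000):
    """(1/sqrt(2 pi)) * Integral_{-a}^{a} (1/sqrt(2a)) e^{-ikx} dx."""
    h = 2*a / n
    s = 0.5*(cmath.exp(1j*k*a) + cmath.exp(-1j*k*a))   # endpoint terms
    for i in range(1, n):
        s += cmath.exp(-1j*k*(-a + i*h))
    return (s * h / math.sqrt(2*a) / math.sqrt(2*math.pi)).real

# Peak height grows like sqrt(a/pi) while the first zero k = pi/a moves
# inward: a broad position spread means a narrow momentum spread.
for a in (1.0, 10.0):
    print(a, phi(0.0, a), phi(math.pi/a, a))
```

Comparing `phi(k, a)` against $\sin(ka)/(k\sqrt{\pi a})$ at any $k$ confirms the closed form quoted above.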
FIGURE 2.10: Example 2.6, for large $a$. (a) Graph of $\Psi(x, 0)$. (b) Graph of $\phi(k)$.
I return now to the paradox noted earlier: the fact that the separable solution $\Psi_k(x, t)$ in Equation 2.94 travels at the "wrong" speed for the particle it ostensibly represents. Strictly speaking, the problem evaporated when we discovered that $\Psi_k$ is not a physically realizable state. Nevertheless, it is of interest to discover how information about velocity is contained in the free particle wave function (Equation 2.100). The essential idea is this: A wave packet is a superposition of sinusoidal functions whose amplitude is modulated by $\phi$ (Figure 2.11); it consists of
“ripples” contained within an “envelope.” What corresponds to the particle velocity
is not the speed of the individual ripples (the so-called phase velocity), but rather
the speed of the envelope (the group velocity)—which, depending on the nature
of the waves, can be greater than, less than, or equal to, the velocity of the ripples
that go to make it up. For waves on a string, the group velocity is the same as the
phase velocity. For water waves it is one-half the phase velocity, as you may have
noticed when you toss a rock into a pond (if you concentrate on a particular ripple,
you will see it build up from the rear, move forward through the group, and fade
away at the front, while the group as a whole propagates out at half the speed). What
I need to show is that for the wave function of a free particle in quantum mechanics
FIGURE 2.11: A wave packet. The "envelope" travels at the group velocity; the "ripples" travel at the phase velocity.
the group velocity is twice the phase velocity—just right to represent the classical particle speed.
The problem, then, is to determine the group velocity of a wave packet with the general form
$$\Psi(x, t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\phi(k)\,e^{i(kx - \omega t)}\,dk.$$
(In our case $\omega = \hbar k^2/2m$, but what I have to say now applies to any kind of wave packet, regardless of its dispersion relation—the formula for $\omega$ as a function of $k$.) Let us assume that $\phi(k)$ is narrowly peaked about some particular value $k_0$. (There is nothing illegal about a broad spread in $k$, but such wave packets change shape rapidly—since different components travel at different speeds—so the whole notion of a "group," with a well-defined velocity, loses its meaning.) Since the integrand is negligible except in the vicinity of $k_0$, we may as well Taylor-expand the function $\omega(k)$ about that point, and keep only the leading terms:
$$\omega(k) \cong \omega_0 + \omega_0'(k - k_0),$$
where $\omega_0'$ is the derivative of $\omega$ with respect to $k$, at the point $k_0$.
Changing variables from $k$ to $s \equiv k - k_0$ (to center the integral at $k_0$), we have
$$\Psi(x, t) \cong \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\phi(k_0+s)\,e^{i[(k_0+s)x - (\omega_0 + \omega_0's)t]}\,ds.$$
At $t = 0$,
$$\Psi(x, 0) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\phi(k_0+s)\,e^{i(k_0+s)x}\,ds,$$
and at later times
$$\Psi(x, t) \cong \frac{1}{\sqrt{2\pi}}\,e^{i(-\omega_0 + k_0\omega_0')t}\int_{-\infty}^{+\infty}\phi(k_0+s)\,e^{i(k_0+s)(x - \omega_0' t)}\,ds.$$
Except for the shift from $x$ to $(x - \omega_0' t)$, the integral is the same as the one in $\Psi(x, 0)$. Thus
$$\Psi(x, t) \cong e^{-i(\omega_0 - k_0\omega_0')t}\,\Psi(x - \omega_0' t,\, 0). \tag{2.105}$$
Apart from the phase factor in front (which won't affect $|\Psi|^2$ in any event) the wave packet evidently moves along at a speed $\omega_0'$:
$$v_{\text{group}} = \frac{d\omega}{dk} \tag{2.106}$$
(evaluated at $k = k_0$). This is to be contrasted with the ordinary phase velocity
$$v_{\text{phase}} = \frac{\omega}{k}. \tag{2.107}$$
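For the free-particle dispersion $\omega = \hbar k^2/2m$, Equation 2.106 gives $v_{\text{group}} = \hbar k_0/m$, exactly twice $v_{\text{phase}} = \hbar k_0/2m$. The numerical sketch below (my illustration; units $\hbar = m = 1$ and a Gaussian $\phi$ are assumptions) builds a packet about $k_0 = 5$ and tracks the centroid of $|\Psi|^2$:

```python
# Illustrative sketch (not from the text): group velocity of a free-
# particle wave packet, in units hbar = m = 1, with a Gaussian phi(k).
import numpy as np

hbar = m = 1.0
k0 = 5.0                               # packet centered at k0, narrow in k
k = np.linspace(k0 - 4, k0 + 4, 801)
dk = k[1] - k[0]
phi = np.exp(-(k - k0)**2)             # Gaussian profile; normalization irrelevant
x = np.linspace(-10.0, 30.0, 1501)

def centroid(t):
    """<x> for Psi(x,t) = Sum_k phi(k) exp(i(kx - hbar k^2 t / 2m)) dk."""
    w = hbar * k**2 / (2*m)
    psi = (phi * np.exp(1j*(np.outer(x, k) - w*t))).sum(axis=1) * dk
    p = np.abs(psi)**2
    return (x*p).sum() / p.sum()

v_envelope = (centroid(2.0) - centroid(0.0)) / 2.0
print(v_envelope)       # ~ hbar*k0/m = 5, twice the phase velocity 2.5
```

The envelope marches at the classical particle speed even though each ripple moves at half that rate.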
2.5 THE DELTA-FUNCTION POTENTIAL
2.5.1 Bound States and Scattering States
We have encountered two very different kinds of solutions to the time-independent Schrödinger equation: For the infinite square well and the harmonic oscillator they are normalizable, and labeled by a discrete index $n$; for the free particle they are non-normalizable, and labeled by a continuous variable $k$. The former represent physically realizable states in their own right, the latter do not; but in both cases the general solution to the time-dependent Schrödinger equation is a linear combination of stationary states—for the first type this combination takes the form of a sum (over $n$), whereas for the second it is an integral (over $k$). What is the physical significance of this distinction?
In classical mechanics a one-dimensional time-independent potential can give rise to two rather different kinds of motion. If $V(x)$ rises higher than the particle's total energy ($E$) on either side (Figure 2.12(a)), then the particle is "stuck" in the potential well—it rocks back and forth between the turning points, but it cannot escape (unless, of course, you provide it with a source of extra energy, such as a motor, but we're not talking about that). We call this a bound state. If, on the other hand, $E$ exceeds $V(x)$ on one side (or both), then the particle comes in from "infinity," slows down or speeds up under the influence of the potential, and returns to infinity (Figure 2.12(b)). (It can't get trapped in the potential unless there is some mechanism, such as friction, to dissipate energy, but again, we're not talking about that.) We call this a scattering state. Some potentials admit only bound states (for instance, the harmonic oscillator); some allow only scattering states (a potential hill with no dips in it, for example); some permit both kinds, depending on the energy of the particle.
The two kinds of solutions to the Schrödinger equation correspond precisely to bound and scattering states. The distinction is even cleaner in the quantum domain, because the phenomenon of tunneling (which we'll come to shortly) allows the particle to "leak" through any finite potential barrier, so the only thing that matters is the potential at infinity (Figure 2.12(c)):
$$\begin{cases} E < [V(-\infty) \text{ and } V(+\infty)] \Rightarrow \text{bound state}, \\ E > [V(-\infty) \text{ or } V(+\infty)] \Rightarrow \text{scattering state}. \end{cases} \tag{2.109}$$
In "real life" most potentials go to zero at infinity, in which case the criterion simplifies even further:
$$\begin{cases} E < 0 \Rightarrow \text{bound state}, \\ E > 0 \Rightarrow \text{scattering state}. \end{cases} \tag{2.110}$$
Because the infinite square well and harmonic oscillator potentials go to infinity as $x \to \pm\infty$, they admit bound states only; because the free particle potential is zero
FIGURE 2.12: (a) A bound state. (b) Scattering states. (c) A classical bound state, but a quantum scattering state.
everywhere, it only allows scattering states.³⁴ In this section (and the following one) we shall explore potentials that give rise to both kinds of states.
³⁴If you are irritatingly observant, you may have noticed that the general theorem requiring $E > V_{\min}$ (Problem 2.2) does not really apply to scattering states, since they are not normalizable anyway. If this bothers you, try solving the Schrödinger equation with $E < 0$, for the free particle, and
FIGURE 2.13: The Dirac delta function (Equation 2.111).
2.5.2 The Delta-Function Well
The Dirac delta function is an infinitely high, infinitesimally narrow spike at the origin, whose area is 1 (Figure 2.13):
$$\delta(x) \equiv \begin{cases} 0, & \text{if } x \neq 0 \\ \infty, & \text{if } x = 0 \end{cases}, \quad \text{with} \quad \int_{-\infty}^{+\infty}\delta(x)\,dx = 1. \tag{2.111}$$
Technically, it isn't a function at all, since it is not finite at $x = 0$ (mathematicians call it a generalized function, or distribution).³⁵ Nevertheless, it is an extremely useful construct in theoretical physics. (For example, in electrodynamics the charge density of a point charge is a delta function.) Notice that $\delta(x-a)$ would be a spike of area 1 at the point $a$. If you multiply $\delta(x-a)$ by an ordinary function $f(x)$, it's the same as multiplying by $f(a)$,
$$f(x)\delta(x-a) = f(a)\delta(x-a), \tag{2.112}$$
because the product is zero anyway except at the point $a$. In particular,
$$\int_{-\infty}^{+\infty} f(x)\delta(x-a)\,dx = f(a)\int_{-\infty}^{+\infty}\delta(x-a)\,dx = f(a). \tag{2.113}$$
That's the most important property of the delta function: Under the integral sign it serves to "pick out" the value of $f(x)$ at the point $a$. (Of course, the integral need not go from $-\infty$ to $+\infty$; all that matters is that the domain of integration include the point $a$, so $a - \epsilon$ to $a + \epsilon$ would do, for any $\epsilon > 0$.)
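The rectangle-or-Gaussian picture of footnote 35 also makes the sifting property easy to check numerically: replace $\delta(x-a)$ by a normalized Gaussian of width $\epsilon$ and let $\epsilon$ shrink. This sketch (mine, not the text's) does exactly that:

```python
# Illustrative sketch (not from the text): the sifting property
# Integral f(x) delta(x - a) dx = f(a), with delta modeled by a
# normalized Gaussian of shrinking width eps.
import math

def delta_eps(x, eps):
    """Normalized Gaussian of width eps, one standard model of delta(x)."""
    return math.exp(-x**2 / (2*eps**2)) / (eps * math.sqrt(2*math.pi))

def sift(f, a, eps, lo=-10.0, hi=10.0, n=100000):
    """Integral f(x) delta_eps(x - a) dx, by the midpoint rule."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5)*h) * delta_eps(lo + (i + 0.5)*h - a, eps) * h
               for i in range(n))

for eps in (0.5, 0.1, 0.02):
    print(eps, sift(math.cos, 1.0, eps))   # approaches cos(1) as eps -> 0
```

As $\epsilon \to 0$ the integral converges to $f(a)$, whatever smooth $f$ is used.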
Let's consider a potential of the form
$$V(x) = -\alpha\delta(x), \tag{2.114}$$
note that even linear combinations of these solutions cannot be normalized. The positive energy solutions by themselves constitute a complete set.
³⁵The delta function can be thought of as the limit of a sequence of functions, such as rectangles (or triangles) of ever-increasing height and ever-decreasing width.
and the allowed energy (Equation 2.117) is
$$E = -\frac{\hbar^2\kappa^2}{2m} = -\frac{m\alpha^2}{2\hbar^2}. \tag{2.127}$$
Finally, we normalize $\psi$:
$$\int_{-\infty}^{+\infty}|\psi(x)|^2\,dx = 2|B|^2\int_0^{\infty}e^{-2\kappa x}\,dx = \frac{|B|^2}{\kappa} = 1,$$
so (choosing, for convenience, the positive real root):
$$B = \sqrt{\kappa} = \frac{\sqrt{m\alpha}}{\hbar}. \tag{2.128}$$
Evidently the delta-function well, regardless of its "strength" $\alpha$, has exactly one bound state:
$$\psi(x) = \frac{\sqrt{m\alpha}}{\hbar}\,e^{-m\alpha|x|/\hbar^2}; \qquad E = -\frac{m\alpha^2}{2\hbar^2}. \tag{2.129}$$
What about scattering states, with $E > 0$? For $x < 0$ the Schrödinger equation reads
$$\frac{d^2\psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = -k^2\psi,$$
where
$$k \equiv \frac{\sqrt{2mE}}{\hbar} \tag{2.130}$$
is real and positive. The general solution is
$$\psi(x) = Ae^{ikx} + Be^{-ikx}, \tag{2.131}$$
and this time we cannot rule out either term, since neither of them blows up. Similarly, for $x > 0$,
$$\psi(x) = Fe^{ikx} + Ge^{-ikx}. \tag{2.132}$$
The continuity of $\psi(x)$ at $x = 0$ requires that
$$F + G = A + B. \tag{2.133}$$
The derivatives are
$$\begin{cases} d\psi/dx = ik\left(Fe^{ikx} - Ge^{-ikx}\right), & \text{for } (x > 0), \quad \text{so } d\psi/dx\big|_+ = ik(F - G), \\ d\psi/dx = ik\left(Ae^{ikx} - Be^{-ikx}\right), & \text{for } (x < 0), \quad \text{so } d\psi/dx\big|_- = ik(A - B), \end{cases}$$
and hence $\Delta(d\psi/dx) = ik(F - G - A + B)$. Meanwhile, $\psi(0) = (A + B)$, so the second boundary condition (Equation 2.125) says
$$ik(F - G - A + B) = -\frac{2m\alpha}{\hbar^2}(A + B), \tag{2.134}$$
or, more compactly,
$$F - G = A(1 + 2i\beta) - B(1 - 2i\beta), \quad \text{where } \beta \equiv \frac{m\alpha}{\hbar^2 k}. \tag{2.135}$$
Having imposed both boundary conditions, we are left with two equations (Equations 2.133 and 2.135) in four unknowns ($A$, $B$, $F$, and $G$)—five, if you count $k$. Normalization won't help—this isn't a normalizable state. Perhaps we'd better pause, then, and examine the physical significance of these various constants. Recall that $\exp(ikx)$ gives rise (when coupled with the time-dependent factor $\exp(-iEt/\hbar)$) to a wave function propagating to the right, and $\exp(-ikx)$ leads to a wave propagating to the left. It follows that $A$ (in Equation 2.131) is the amplitude of a wave coming in from the left, $B$ is the amplitude of a wave returning to the left, $F$ (Equation 2.132) is the amplitude of a wave traveling off to the right, and $G$ is the amplitude of a wave coming in from the right (see Figure 2.15). In a typical scattering experiment particles are fired in from one direction—let's say, from the left. In that case the amplitude of the wave coming in from the right will be zero:
$$G = 0 \quad \text{(for scattering from the left)}; \tag{2.136}$$
$A$ is the amplitude of the incident wave, $B$ is the amplitude of the reflected wave, and $F$ is the amplitude of the transmitted wave. Solving Equations 2.133 and 2.135 for $B$ and $F$, we find
$$B = \frac{i\beta}{1 - i\beta}A, \qquad F = \frac{1}{1 - i\beta}A. \tag{2.137}$$
(If you want to study scattering from the right, set $A = 0$; then $G$ is the incident amplitude, $F$ is the reflected amplitude, and $B$ is the transmitted amplitude.)
FIGURE 2.15: Scattering from a delta-function well.
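Equations 2.133 and 2.135, with $G = 0$, are just a pair of linear equations for $B$ and $F$; solving them numerically (an illustrative sketch of mine, not the text's method) reproduces Equation 2.137:

```python
# Illustrative sketch (not from the text): solve the matching conditions
#   F + G = A + B   and   F - G = A(1 + 2i*beta) - B(1 - 2i*beta),
# with G = 0, for the unknowns (B, F).
import numpy as np

def scatter(beta, A=1.0):
    M = np.array([[-1.0, 1.0],                 # -B + F = A
                  [(1 - 2j*beta), 1.0]])       #  B(1 - 2ib) + F = A(1 + 2ib)
    rhs = np.array([A, A*(1 + 2j*beta)])
    B, F = np.linalg.solve(M, rhs)
    return B, F

beta = 0.7
B, F = scatter(beta)
print(B, 1j*beta/(1 - 1j*beta))    # matches Equation 2.137
print(F, 1/(1 - 1j*beta))
print(abs(B)**2 + abs(F)**2)       # |B|^2 + |F|^2 = 1 (Equation 2.140)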
Now, the probability of finding the particle at a specified location is given by $|\Psi|^2$, so the relative³⁷ probability that an incident particle will be reflected back is
$$R \equiv \frac{|B|^2}{|A|^2} = \frac{\beta^2}{1 + \beta^2}. \tag{2.138}$$
$R$ is called the reflection coefficient. (If you have a beam of particles, it tells you the fraction of the incoming number that will bounce back.) Meanwhile, the probability of transmission is given by the transmission coefficient
$$T \equiv \frac{|F|^2}{|A|^2} = \frac{1}{1 + \beta^2}. \tag{2.139}$$
Of course, the sum of these probabilities should be 1—and it is:
$$R + T = 1. \tag{2.140}$$
Notice that $R$ and $T$ are functions of $\beta$, and hence (Equations 2.130 and 2.135) of $E$:
$$R = \frac{1}{1 + (2\hbar^2 E/m\alpha^2)}, \qquad T = \frac{1}{1 + (m\alpha^2/2\hbar^2 E)}. \tag{2.141}$$
The higher the energy, the greater the probability of transmission (which certainly seems reasonable).
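In natural units these formulas are one-liners; the sketch below (my illustration, with $\hbar = m = \alpha = 1$ assumed) tabulates Equation 2.141 and confirms that $R + T = 1$ while $T$ grows with $E$:

```python
# Illustrative sketch (not from the text): R and T for the delta-function
# well, Equation 2.141, in natural units hbar = m = alpha = 1.
hbar = m = alpha = 1.0

def R_T(E):
    R = 1.0 / (1.0 + 2*hbar**2*E/(m*alpha**2))
    T = 1.0 / (1.0 + m*alpha**2/(2*hbar**2*E))
    return R, T

for E in (0.1, 1.0, 10.0):
    R, T = R_T(E)
    print(E, R, T, R + T)   # R + T = 1; T grows monotonically with E
```

Low-energy particles are mostly reflected; high-energy particles sail over the well almost untouched.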
This is all very tidy, but there is a sticky matter of principle that we cannot altogether ignore: These scattering wave functions are not normalizable, so they don't actually represent possible particle states. But we know what the resolution to this problem is: We must form normalizable linear combinations of the stationary states, just as we did for the free particle—true physical particles are represented by the resulting wave packets. Though straightforward in principle, this is a messy business in practice, and at this point it is best to turn the problem over to a computer.³⁸ Meanwhile, since it is impossible to create a normalizable free-particle wave function without involving a range of energies, $R$ and $T$ should be interpreted as the approximate reflection and transmission probabilities for particles in the vicinity of $E$.
Incidentally, it might strike you as peculiar that we were able to analyze a quintessentially time-dependent problem (particle comes in, scatters off a potential,
³⁷This is not a normalizable wave function, so the absolute probability of finding the particle at a particular location is not well defined; nevertheless, the ratio of probabilities for the incident and reflected waves is meaningful. More on this in the next paragraph.
³⁸Numerical studies of wave packets scattering off wells and barriers reveal extraordinarily rich structure. The classic analysis is A. Goldberg, H. M. Schey, and J. L. Schwartz, Am. J. Phys. 35, 177 (1967); more recent work can be found on the Web.
(a) Sketch this potential.
(b) How many bound states does it possess? Find the allowed energies, for $\alpha = \hbar^2/ma$ and for $\alpha = \hbar^2/4ma$, and sketch the wave functions.
∗Problem 2.28 Find the transmission coefficient for the potential in Problem 2.27.
2.6 THE FINITE SQUARE WELL
As a last example, consider the finite square well potential
$$V(x) = \begin{cases} -V_0, & \text{for } -a \leq x \leq a, \\ 0, & \text{for } |x| > a, \end{cases} \tag{2.145}$$
where $V_0$ is a (positive) constant (Figure 2.17). Like the delta-function well, this potential admits both bound states (with $E < 0$) and scattering states (with $E > 0$). We'll look first at the bound states.
In the region $x < -a$ the potential is zero, so the Schrödinger equation reads
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi, \quad \text{or} \quad \frac{d^2\psi}{dx^2} = \kappa^2\psi,$$
where
$$\kappa \equiv \frac{\sqrt{-2mE}}{\hbar} \tag{2.146}$$
is real and positive. The general solution is $\psi(x) = A\exp(-\kappa x) + B\exp(\kappa x)$, but the first term blows up (as $x \to -\infty$), so the physically admissible solution (as before—see Equation 2.119) is
$$\psi(x) = Be^{\kappa x}, \quad \text{for } x < -a. \tag{2.147}$$
FIGURE 2.17: The finite square well (Equation 2.145).
In the region $-a < x < a$, $V(x) = -V_0$, and the Schrödinger equation reads
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - V_0\psi = E\psi, \quad \text{or} \quad \frac{d^2\psi}{dx^2} = -l^2\psi,$$
where
$$l \equiv \frac{\sqrt{2m(E + V_0)}}{\hbar}. \tag{2.148}$$
Although $E$ is negative, for bound states, it must be greater than $-V_0$, by the old theorem $E > V_{\min}$ (Problem 2.2); so $l$ is also real and positive. The general solution is³⁹
$$\psi(x) = C\sin(lx) + D\cos(lx), \quad \text{for } -a < x < a, \tag{2.149}$$
where $C$ and $D$ are arbitrary constants. Finally, in the region $x > a$ the potential is again zero; the general solution is $\psi(x) = F\exp(-\kappa x) + G\exp(\kappa x)$, but the second term blows up (as $x \to \infty$), so we are left with
$$\psi(x) = Fe^{-\kappa x}, \quad \text{for } x > a. \tag{2.150}$$
The next step is to impose boundary conditions: $\psi$ and $d\psi/dx$ continuous at $-a$ and $+a$. But we can save a little time by noting that this potential is an even function, so we can assume with no loss of generality that the solutions are either even or odd (Problem 2.1(c)). The advantage of this is that we need only impose the boundary conditions on one side (say, at $+a$); the other side is then automatic, since $\psi(-x) = \pm\psi(x)$. I'll work out the even solutions; you get to do the odd ones in Problem 2.29. The cosine is even (and the sine is odd), so I'm looking for solutions of the form
$$\psi(x) = \begin{cases} Fe^{-\kappa x}, & \text{for } x > a, \\ D\cos(lx), & \text{for } 0 < x < a, \\ \psi(-x), & \text{for } x < 0. \end{cases} \tag{2.151}$$
The continuity of $\psi(x)$, at $x = a$, says
$$Fe^{-\kappa a} = D\cos(la), \tag{2.152}$$
and the continuity of $d\psi/dx$, says
$$-\kappa Fe^{-\kappa a} = -lD\sin(la). \tag{2.153}$$
Dividing Equation 2.153 by Equation 2.152, we find that
$$\kappa = l\tan(la). \tag{2.154}$$
³⁹You can, if you like, write the general solution in exponential form ($C'e^{ilx} + D'e^{-ilx}$). This leads to the same final result, but since the potential is symmetric we know the solutions will be either even or odd, and the sine/cosine notation allows us to exploit this directly.
FIGURE 2.18: Graphical solution to Equation 2.156, for $z_0 = 8$ (even states).
This is a formula for the allowed energies, since $\kappa$ and $l$ are both functions of $E$. To solve for $E$, we first adopt some nicer notation: Let
$$z \equiv la, \quad \text{and} \quad z_0 \equiv \frac{a}{\hbar}\sqrt{2mV_0}. \tag{2.155}$$
According to Equations 2.146 and 2.148, $(\kappa^2 + l^2) = 2mV_0/\hbar^2$, so $\kappa a = \sqrt{z_0^2 - z^2}$, and Equation 2.154 reads
$$\tan z = \sqrt{(z_0/z)^2 - 1}. \tag{2.156}$$
This is a transcendental equation for $z$ (and hence for $E$) as a function of $z_0$ (which is a measure of the "size" of the well). It can be solved numerically, using a computer, or graphically, by plotting $\tan z$ and $\sqrt{(z_0/z)^2 - 1}$ on the same grid, and looking for points of intersection (see Figure 2.18). Two limiting cases are of special interest:
1. Wide, deep well. If $z_0$ is very large, the intersections occur just slightly below $z_n = n\pi/2$, with $n$ odd; it follows that
$$E_n + V_0 \cong \frac{n^2\pi^2\hbar^2}{2m(2a)^2}. \tag{2.157}$$
But $E + V_0$ is the energy above the bottom of the well, and on the right side we have precisely the infinite square well energies, for a well of width $2a$ (see Equation 2.27)—or rather, half of them, since this $n$ is odd. (The other ones, of course, come from the odd wave functions, as you'll discover in Problem 2.29.) So the finite square well goes over to the infinite square well, as $V_0 \to \infty$; however, for any finite $V_0$ there are only a finite number of bound states.
2. Shallow, narrow well. As $z_0$ decreases, there are fewer and fewer bound states, until finally (for $z_0 < \pi/2$, where the lowest odd state disappears) only one remains. It is interesting to note, however, that there is always one bound state, no matter how "weak" the well becomes.
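Both limits are easy to confirm numerically. The sketch below (my illustration, not the text's method) solves Equation 2.156 by bisection on each branch of $\tan z$: for $z_0 = 8$ it finds three even states (compare Figure 2.18), while a tiny well ($z_0 = 0.5$) still retains one:

```python
# Illustrative sketch (not from the text): solve tan(z) = sqrt((z0/z)^2 - 1)
# for the even bound states of the finite square well, by bisection on
# each branch (n pi, n pi + pi/2) of tan z.
import math

def even_states(z0):
    g = lambda z: math.tan(z) - math.sqrt((z0/z)**2 - 1)
    roots, n = [], 0
    while n*math.pi < z0:
        lo = n*math.pi + 1e-9
        hi = min(n*math.pi + math.pi/2 - 1e-9, z0 - 1e-9)
        if lo < hi and g(lo) < 0 < g(hi):      # sign change brackets a root
            for _ in range(60):
                mid = 0.5*(lo + hi)
                if g(mid) < 0: lo = mid
                else: hi = mid
            roots.append(0.5*(lo + hi))
        n += 1
    return roots

print(even_states(8.0))        # three even states for z0 = 8
print(even_states(0.5))        # shallow well: still one bound state
```

Each root $z$ gives an allowed energy through $z = la$ and Equation 2.148.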
Problem 2.30 Normalize $\psi(x)$ in Equation 2.151, to determine the constants $D$ and $F$.
Problem 2.31 The Dirac delta function can be thought of as the limiting case of a rectangle of area 1, as the height goes to infinity and the width goes to zero. Show that the delta-function well (Equation 2.114) is a "weak" potential (even though it is infinitely deep), in the sense that $z_0 \to 0$. Determine the bound state energy for the delta-function potential, by treating it as the limit of a finite square well. Check that your answer is consistent with Equation 2.129. Also show that Equation 2.169 reduces to Equation 2.141 in the appropriate limit.
Problem 2.32 Derive Equations 2.167 and 2.168. Hint: Use Equations 2.165 and 2.166 to solve for $C$ and $D$ in terms of $F$:
$$C = \left[\sin(la) + i\frac{k}{l}\cos(la)\right]e^{ika}F, \qquad D = \left[\cos(la) - i\frac{k}{l}\sin(la)\right]e^{ika}F.$$
Plug these back into Equations 2.163 and 2.164. Obtain the transmission coefficient, and confirm Equation 2.169.
∗∗Problem 2.33 Determine the transmission coefficient for a rectangular barrier (same as Equation 2.145, only with $V(x) = +V_0 > 0$ in the region $-a < x < a$). Treat separately the three cases $E < V_0$, $E = V_0$, and $E > V_0$ (note that the wave function inside the barrier is different in the three cases). Partial answer: For $E < V_0$,⁴⁰
$$T^{-1} = 1 + \frac{V_0^2}{4E(V_0 - E)}\sinh^2\!\left(\frac{2a}{\hbar}\sqrt{2m(V_0 - E)}\right).$$
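The quoted partial answer makes the hallmark of tunneling, exponential suppression with barrier width, easy to see. A quick sketch (mine, not the text's; natural units $\hbar = m = 1$ assumed):

```python
# Illustrative sketch (not from the text): transmission through a
# rectangular barrier of height V0 and width 2a, for E < V0, using
# the partial answer quoted in Problem 2.33.
import math

def T_barrier(E, V0, a, hbar=1.0, m=1.0):
    kappa = math.sqrt(2*m*(V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 / (4*E*(V0 - E)) * math.sinh(2*a*kappa)**2)

# Tunneling probability falls off sharply as the barrier widens:
for a in (0.5, 1.0, 2.0):
    print(a, T_barrier(E=0.5, V0=1.0, a=a))
```

For wide barriers $\sinh^2$ behaves like $e^{4a\kappa}/4$, so $T$ decays roughly exponentially in the width.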
∗Problem 2.34 Consider the "step" potential:
$$V(x) = \begin{cases} 0, & \text{if } x \leq 0, \\ V_0, & \text{if } x > 0. \end{cases}$$
(a) Calculate the reflection coefficient, for the case $E < V_0$, and comment on the answer.
(b) Calculate the reflection coefficient for the case $E > V_0$.
(c) For a potential such as this, which does not go back to zero to the right of the barrier, the transmission coefficient is not simply $|F|^2/|A|^2$ (with $A$ the
⁴⁰This is a good example of tunneling—classically the particle would bounce back.
FIGURE 2.20: Scattering from a "cliff" (Problem 2.35).
incident amplitude and $F$ the transmitted amplitude), because the transmitted wave travels at a different speed. Show that
$$T = \frac{E - V_0}{E}\,\frac{|F|^2}{|A|^2}, \tag{2.172}$$
for $E > V_0$. Hint: You can figure it out using Equation 2.98, or—more elegantly, but less informatively—from the probability current (Problem 2.19). What is $T$, for $E < V_0$?
(d) For $E > V_0$, calculate the transmission coefficient for the step potential, and check that $T + R = 1$.
Problem 2.35 A particle of mass $m$ and kinetic energy $E > 0$ approaches an abrupt potential drop $V_0$ (Figure 2.20).
(a) What is the probability that it will "reflect" back, if $E = V_0/3$? Hint: This is just like Problem 2.34, except that the step now goes down, instead of up.
(b) I drew the figure so as to make you think of a car approaching a cliff, but obviously the probability of "bouncing back" from the edge of a cliff is far smaller than what you got in (a)—unless you're Bugs Bunny. Explain why this potential does not correctly represent a cliff. Hint: In Figure 2.20 the potential energy of the car drops discontinuously to $-V_0$, as it passes $x = 0$; would this be true for a falling car?
(c) When a free neutron enters a nucleus, it experiences a sudden drop in potential energy, from $V = 0$ outside to around $-12$ MeV (million electron volts) inside. Suppose a neutron, emitted with kinetic energy 4 MeV by a fission event, strikes such a nucleus. What is the probability it will be absorbed, thereby initiating another fission? Hint: You calculated the probability of reflection in part (a); use $T = 1 - R$ to get the probability of transmission through the surface.
FURTHER PROBLEMS FOR CHAPTER 2
Problem 2.36 Solve the time-independent Schrödinger equation with appropriate boundary conditions for the "centered" infinite square well: $V(x) = 0$ (for $-a < x < +a$), $V(x) = \infty$ (otherwise). Check that your allowed energies are consistent with mine (Equation 2.27), and confirm that your $\psi$'s can be obtained from mine (Equation 2.28) by the substitution $x \to (x + a)/2$ (and appropriate renormalization). Sketch your first three solutions, and compare Figure 2.2. Note that the width of the well is now $2a$.

Problem 2.37 A particle in the infinite square well (Equation 2.19) has the initial wave function
$$\Psi(x, 0) = A\sin^3(\pi x/a) \quad (0 \leq x \leq a).$$
Determine $A$, find $\Psi(x, t)$, and calculate $\langle x\rangle$, as a function of time. What is the expectation value of the energy? Hint: $\sin^n\theta$ and $\cos^n\theta$ can be reduced, by repeated application of the trigonometric sum formulas, to linear combinations of $\sin(m\theta)$ and $\cos(m\theta)$, with $m = 0, 1, 2, \ldots, n$.
∗Problem 2.38 A particle of mass $m$ is in the ground state of the infinite square well (Equation 2.19). Suddenly the well expands to twice its original size—the right wall moving from $a$ to $2a$—leaving the wave function (momentarily) undisturbed. The energy of the particle is now measured.
(a) What is the most probable result? What is the probability of getting that result?
(b) What is the next most probable result, and what is its probability?
(c) What is the expectation value of the energy? Hint: If you find yourself confronted with an infinite series, try another method.
Problem 2.39
(a) Show that the wave function of a particle in the infinite square well returns to its original form after a quantum revival time $T = 4ma^2/\pi\hbar$. That is: $\Psi(x, T) = \Psi(x, 0)$ for any state (not just a stationary state).
(b) What is the classical revival time, for a particle of energy $E$ bouncing back and forth between the walls?
(c) For what energy are the two revival times equal?⁴¹
⁴¹The fact that the classical and quantum revival times bear no obvious relation to one another (and the quantum one doesn't even depend on the energy) is a curious paradox; see Daniel Styer, Am. J. Phys. 69, 56 (2001).
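Part (a) hinges on the phases $e^{-iE_nT/\hbar}$: with $E_n = n^2\pi^2\hbar^2/2ma^2$ and $T = 4ma^2/\pi\hbar$, each phase is $e^{-2\pi i n^2} = 1$, so every term in the stationary-state expansion returns simultaneously. A quick numerical check (my sketch, units $\hbar = m = a = 1$ assumed):

```python
# Illustrative sketch (not from the text): stationary-state phases of the
# infinite square well after one quantum revival time, in units
# hbar = m = a = 1.
import cmath, math

hbar = m = a = 1.0
T = 4*m*a**2 / (math.pi*hbar)       # quantum revival time

def phase(n):
    """exp(-i E_n T / hbar) for level E_n = (n pi hbar)^2 / (2 m a^2)."""
    E = (n*math.pi*hbar)**2 / (2*m*a**2)
    return cmath.exp(-1j*E*T/hbar)

# E_n T / hbar = 2 pi n^2, so each phase returns to 1 and
# Psi(x, T) = Psi(x, 0) for any superposition:
print([phase(n) for n in (1, 2, 3)])
```

Since the phase is 1 for every $n$, the result holds for arbitrary superpositions, not just stationary states.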