Quantum mechanics solutions for Griffiths, Exercises of Physics Fundamentals

A guide to solving problems from Griffiths' quantum mechanics book; it will also help you prepare for competitive exams.

What you will learn

  • What is the role of the given orthonormal basis vectors in solving this problem?
  • What are the eigenvectors and eigenvalues of the given operator?
  • How do you calculate the probabilities of obtaining certain energy levels if the energy is measured?

Typology: Exercises

2019/2020

Uploaded on 07/19/2020

komal-sharma-8

1. Basic Developments

The Schrödinger Equation

EXAMPLE 1
Let two functions ψ and φ be defined for 0 ≤ x < ∞. Explain why ψ(x) = x cannot be a wavefunction, but φ(x) = e^(−x²) could be a valid wavefunction.

SOLUTION
Both functions are continuous and defined on the interval of interest. They are both single valued and differentiable. However, consider the integral of x:

∫₀^∞ |ψ(x)|² dx = ∫₀^∞ x² dx = x³/3 |₀^∞ = ∞

Since ψ(x) = x is not square integrable over this range, it cannot be a valid wavefunction. On the other hand:

∫₀^∞ |φ(x)|² dx = ∫₀^∞ e^(−2x²) dx = √(π/8)

so φ(x) = e^(−x²) can be a valid wavefunction.

EXAMPLE 2
Consider a particle trapped in a well with potential given by:

V(x) = 0 for 0 ≤ x ≤ a, ∞ otherwise

Show that ψ(x, t) = A sin(kx) exp(−iEt/ħ) solves the Schrödinger equation provided that E = ħ²k²/2m.

SOLUTION
The potential is infinite at x = 0 and x = a, so the particle can never be found outside this range. We therefore only need to consider the Schrödinger equation inside the well, where V = 0. With this condition the Schrödinger equation takes the form:

iħ ∂ψ(x, t)/∂t = −(ħ²/2m) ∂²ψ(x, t)/∂x²

Setting ψ(x, t) = A sin(kx) exp(−iEt/ħ), we consider the left side first:

iħ ∂ψ/∂t = iħ(−iE/ħ) A sin(kx) exp(−iEt/ħ) = E [A sin(kx) exp(−iEt/ħ)] = Eψ

Now consider the derivatives with respect to x:

∂ψ/∂x = kA cos(kx) exp(−iEt/ħ)
⇒ −(ħ²/2m) ∂²ψ/∂x² = −(ħ²/2m)[−k² A sin(kx) exp(−iEt/ħ)] = (ħ²k²/2m) ψ

Equating the two sides of the Schrödinger equation, we find:

Eψ = (ħ²k²/2m) ψ

And so we conclude that the Schrödinger equation is satisfied if E = ħ²k²/2m.

EXAMPLE 3
Suppose ψ(x, t) = A(x − x³) e^(−iEt/ħ). Find V(x) such that the Schrödinger equation is satisfied.

SOLUTION
The wavefunction is written as a product, ψ(x, t) = Ψ(x) exp(−iEt/ħ), so it is not necessary to work with the full time-dependent Schrödinger equation.
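The square-integrability test of Example 1 can be confirmed numerically; the following is a sketch using SciPy, with finite cutoffs standing in for the divergent integral:

```python
# Numerical check of Example 1 (a sketch; interval [0, inf) as in the text).
# psi(x) = x is not square integrable, while for phi(x) = exp(-x^2) the
# squared integral equals sqrt(pi/8).
import math
from scipy.integrate import quad

# Integral of |phi|^2 = exp(-2 x^2) over [0, inf)
phi_norm_sq, _ = quad(lambda x: math.exp(-2 * x**2), 0, math.inf)
print(phi_norm_sq)  # ~0.6267, which is sqrt(pi/8)

# Integral of |psi|^2 = x^2 grows without bound as the cutoff R increases
partial = [quad(lambda x: x**2, 0, R)[0] for R in (10, 100, 1000)]
print(partial)  # R^3 / 3 for each cutoff: diverges with R
```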
We are given ψ(x, t) = A(x − x³) e^(−iEt/ħ) and must find V(x). Recalling the time-independent Schrödinger equation:

−(ħ²/2m) ∂²Ψ(x)/∂x² + V(x)Ψ(x) = EΨ(x)

We set Ψ(x) = A(x − x³) and solve to find V. The right-hand side is simply:

EΨ(x) = EA(x − x³)

To find the form of the left side, we begin by computing the first derivative:

∂Ψ/∂x = ∂/∂x [A(x − x³)] = A(1 − 3x²)

For the second derivative, we obtain:

∂²Ψ/∂x² = ∂/∂x [A(1 − 3x²)] = −6Ax, ⇒ −(ħ²/2m) ∂²Ψ/∂x² = (ħ²/2m) 6Ax

Putting this in the left side of the time-independent Schrödinger equation and equating it to EA(x − x³) gives:

(ħ²/2m) 6Ax + V(x)A(x − x³) = EA(x − x³)

Now subtract (ħ²/2m) 6Ax from both sides:

V(x)A(x − x³) = EA(x − x³) − (ħ²/2m) 6Ax

Dividing both sides by A(x − x³) gives us the potential:

V(x) = E − (ħ²/2m) · 6x/(x − x³)

For a particle in the ground state of a box of width a, the probability of finding it between x = a/2 and x = 3a/4 is (continuing an earlier probability calculation, with |ψ|² = 1/a − (1/a)cos(2πx/a)):

P = (1/a)∫_{a/2}^{3a/4} dx − (1/a)∫_{a/2}^{3a/4} cos(2πx/a) dx
  = (1/a) x |_{a/2}^{3a/4} − (1/2π) sin(2πx/a) |_{a/2}^{3a/4}
  = (1/a)[3a/4 − a/2] − (1/2π)[sin(3π/2) − sin(π)]
  = 1/4 + 1/2π = (π + 2)/4π ≈ 0.41

EXAMPLE 7
Find A and B so that

φ(x) = A for 0 ≤ x ≤ a; Bx for a ≤ x ≤ b

is normalized.

SOLUTION

∫_{−∞}^{∞} |φ(x)|² dx = ∫₀^a A² dx + ∫_a^b B²x² dx = A²x |₀^a + B²x³/3 |_a^b = A²a + B²(b³ − a³)/3

Using ∫_{−∞}^{∞} |φ(x)|² dx = 1, we obtain:

A²a + B²(b³ − a³)/3 = 1, ⇒ A² = (1/a)[1 − B²(b³ − a³)/3]

As long as the normalization condition is satisfied, we are free to choose one of the constants arbitrarily, as long as it is not zero. So we set B = 1:

A = √{(1/a)[1 − (b³ − a³)/3]}

EXAMPLE 8
Normalize the wavefunction

ψ(x) = C/(x² + a²)

SOLUTION
We start by finding the square of the wavefunction:

|ψ(x)|² = C²/(x² + a²)²

To compute ∫|ψ(x)|² dx, we will need two integrals which can be found in integration tables.
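The potential found in Example 3 can be verified symbolically; this sketch checks that the time-independent Schrödinger equation residual vanishes identically:

```python
# Symbolic check (sketch): with Psi(x) = A(x - x^3) and
# V(x) = E - (hbar^2 / 2m) * 6x / (x - x^3), the combination
# -(hbar^2 / 2m) Psi'' + V Psi - E Psi should simplify to zero.
import sympy as sp

x, A, E, hbar, m = sp.symbols('x A E hbar m', positive=True)
Psi = A * (x - x**3)
V = E - (hbar**2 / (2 * m)) * 6 * x / (x - x**3)
residual = -(hbar**2 / (2 * m)) * sp.diff(Psi, x, 2) + V * Psi - E * Psi
print(sp.simplify(residual))  # 0
```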
These are:

∫ du/(u² + a²) = (1/a) tan⁻¹(u/a)
∫ du/(u² + a²)² = (1/2a²)[ u/(u² + a²) + ∫ du/(u² + a²) ]

We begin by using the second of these:

∫_{−∞}^{∞} |ψ(x)|² dx = C² ∫_{−∞}^{∞} dx/(x² + a²)² = (C²/2a²)[ x/(x² + a²) |_{−∞}^{∞} + ∫_{−∞}^{∞} dx/(x² + a²) ]

where the first term is to be evaluated at ±∞. Consider the limit as x → ∞, using L'Hôpital's rule:

lim_{x→∞} x/(x² + a²) = lim_{x→∞} 1/2x = 0

Similarly, for the lower limit we find lim_{x→−∞} x/(x² + a²) = 0, so we can discard the first term altogether. Now we use ∫ du/(u² + a²) = (1/a) tan⁻¹(u/a) for the remaining piece:

∫_{−∞}^{∞} |ψ(x)|² dx = (C²/2a²)(1/a) tan⁻¹(x/a) |_{−∞}^{∞} = (C²/2a³)[π/2 − (−π/2)] = C²π/2a³

Again we recall that the normalization condition is ∫_{−∞}^{∞} |ψ(x)|² dx = 1; setting our result equal to one we find:

C²π/2a³ = 1 ⇒ C = √(2a³/π)

And so the normalized wavefunction is:

ψ(x) = √(2a³/π) · 1/(x² + a²)

In the next examples, we consider the normalization of some Gaussian functions. Three frequently seen integrals we will use are:

∫_{−∞}^{∞} e^(−z²) dz = √π
∫_{−∞}^{∞} z^(2n) e^(−z²) dz = √π · 1·3·5⋯(2n − 1)/2ⁿ,  n = 1, 2, …
∫_{−∞}^{∞} z e^(−z²) dz = 0

We also introduce the error function:

erf(z) = (2/√π) ∫₀^z e^(−u²) du

EXAMPLE 9
Let ψ(x) = A e^(−λ(x−x₀)²). Find A such that ψ(x) is normalized. The constants λ and x₀ are real.

SOLUTION

|ψ(x)|² = A² e^(−2λ(x−x₀)²) ⇒ ∫_{−∞}^{∞} |ψ(x)|² dx = A² ∫_{−∞}^{∞} e^(−2λ(x−x₀)²) dx

We can perform this integral by using the substitution z = √(2λ)(x − x₀), dz = √(2λ) dx:

A² ∫_{−∞}^{∞} e^(−2λ(x−x₀)²) dx = (A²/√(2λ)) ∫_{−∞}^{∞} e^(−z²) dz = A² √(π/2λ)

Using ∫_{−∞}^{∞} |ψ(x)|² dx = 1, we find that A = (2λ/π)^(1/4).

EXAMPLE 10
Let ψ(x, t) = (A₁ e^(−x²/a) + A₂ x e^(−x²/b)) e^(−ict) for −∞ < x < ∞. (a) Write the normalization condition for A₁ and A₂. (b) Normalize ψ(x) for A₁ = A₂ and a = 32, b = 8. (c) Find the probability that the particle is found in the region 0 < x < 32.
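The normalization constant of Example 8 can be verified numerically; a sketch with the illustrative choice a = 1 (assumed, not from the text):

```python
# Numerical check (sketch, with a = 1): the table integral gives
# ∫ dx / (x^2 + a^2)^2 = pi / (2 a^3), so C = sqrt(2 a^3 / pi)
# normalizes psi(x) = C / (x^2 + a^2).
import math
from scipy.integrate import quad

a = 1.0
I, _ = quad(lambda x: 1.0 / (x**2 + a**2)**2, -math.inf, math.inf)
C = math.sqrt(2 * a**3 / math.pi)
norm, _ = quad(lambda x: (C / (x**2 + a**2))**2, -math.inf, math.inf)
print(I, norm)  # pi/2 and 1.0
```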
SOLUTION
(a) ψ*(x, t) = (A₁ e^(−x²/a) + A₂ x e^(−x²/b)) e^(ict), so:

|ψ(x, t)|² = ψ*ψ = A₁² e^(−2x²/a) + 2A₁A₂ x e^(−x²(1/a + 1/b)) + A₂² x² e^(−2x²/b)

and the normalization condition is that this expression integrate to one over −∞ < x < ∞.

(c) With the normalized wavefunction from part (b), ψ(x, t) = (1/√(8√π))(e^(−x²/32) + x e^(−x²/8)) e^(−ict), the probability integrals are evaluated term by term. The second term is handled by parts in terms of the error function, giving approximately 62.8, so that:

∫₀³² x e^(−3x²/16) dx = 65.5 − 62.8 = 2.7

The final term, using the antiderivative −2x e^(−x²/4) + 2√π erf(x/2), is:

∫₀³² x² e^(−x²/4) dx = −64 e^(−256) + 2√π erf(16) ≈ 3.54

Pulling all of these results together, and using ∫₀³² e^(−x²/16) dx ≈ 2√π, the probability that the particle is found between 0 < x < 32 is:

P(0 < x < 32) = (1/8√π)∫₀³² e^(−x²/16) dx + (2/8√π)∫₀³² x e^(−3x²/16) dx + (1/8√π)∫₀³² x² e^(−x²/4) dx
             = (1/8√π)(2√π) + (2/8√π)(2.7) + (1/8√π)(3.54) ≈ 0.88

Conclusion: for ψ(x, t) = (1/√(8√π))(e^(−x²/32) + x e^(−x²/8)) e^(−ict) defined for −∞ < x < ∞, there is an 88% chance that we will find the particle between 0 < x < 32.

EXAMPLE 11
Let ψ(x) = A e^(−|x|/2a) e^(i(x−x₀)). Find the constant A by normalizing the wavefunction.

SOLUTION
First we compute:

ψ*(x) = A* e^(−|x|/2a) e^(−i(x−x₀))
⇒ ψ*ψ = |A|² e^(−|x|/a) [e^(−i(x−x₀)) e^(i(x−x₀))] = |A|² e^(−|x|/a)

To integrate e^(−|x|/a), recall that it is defined in terms of the absolute value function: for x < 0 this term is e^(x/a), while for x > 0 it is e^(−x/a). So we split the integral into two parts:

∫_{−∞}^{∞} |ψ(x)|² dx = A² ∫_{−∞}^0 e^(x/a) dx + A² ∫₀^{∞} e^(−x/a) dx

Let's look at the first term (the second can be calculated in the same way, modulo a minus sign). Let u = x/a, then du = dx/a.
And so:

∫_{−∞}^0 e^(x/a) dx = a ∫ e^u du = a e^(x/a) |_{−∞}^0 = a [e⁰ − e^(−∞)] = a [1 − 0] = a

Application of the same technique to the second term also gives a, and so:

∫_{−∞}^{∞} |ψ(x)|² dx = A²a + A²a = 2A²a

Using the normalization condition ∫_{−∞}^{∞} |ψ(x)|² dx = 1, we obtain:

A = 1/√(2a)

The normalized wavefunction is then:

ψ(x) = (1/√(2a)) e^(−|x|/2a) e^(i(x−x₀))

EXAMPLE 12
A particle of mass m is trapped in a one-dimensional box of width a. The wavefunction is known to be:

ψ(x) = (i/2)√(2/a) sin(πx/a) + √(1/a) sin(3πx/a) − (1/2)√(2/a) sin(4πx/a)

If the energy is measured, what are the possible results, and what is the probability of obtaining each result? What is the most probable energy for this state?

SOLUTION
We begin by recalling that the nth state of a particle in a one-dimensional box is described by the wavefunction:

φₙ(x) = √(2/a) sin(nπx/a),  with energy  Eₙ = n²ħ²π²/2ma²

Table 2-1 gives the first few wavefunctions and their associated energies.
Table 2-1

n    φₙ(x)                    Eₙ
1    √(2/a) sin(πx/a)         ħ²π²/2ma²
2    √(2/a) sin(2πx/a)        4ħ²π²/2ma²
3    √(2/a) sin(3πx/a)        9ħ²π²/2ma²
4    √(2/a) sin(4πx/a)        16ħ²π²/2ma²

Noting that all of the φₙ are multiplied by the constant √(2/a), we rewrite the given wavefunction so that all three terms look this way:

ψ(x) = (i/2)√(2/a) sin(πx/a) + (√2/√2)√(1/a) sin(3πx/a) − (1/2)√(2/a) sin(4πx/a)
     = (i/2)√(2/a) sin(πx/a) + (1/√2)√(2/a) sin(3πx/a) − (1/2)√(2/a) sin(4πx/a)

Now we compare each term to the table, allowing us to write this as:

ψ(x) = (i/2)φ₁(x) + (1/√2)φ₃(x) − (1/2)φ₄(x)

Since the wavefunction is written in the form ψ(x) = Σ cₙφₙ(x), the expansion coefficients can be read off, as shown in Table 2-2:

Table 2-2

n    cₙ      Associated Basis Function      Associated Energy
1    i/2     φ₁(x) = √(2/a) sin(πx/a)       ħ²π²/2ma²
2    0       φ₂(x) = √(2/a) sin(2πx/a)      2ħ²π²/ma²
3    1/√2    φ₃(x) = √(2/a) sin(3πx/a)      9ħ²π²/2ma²
4    −1/2    φ₄(x) = √(2/a) sin(4πx/a)      8ħ²π²/ma²

Since c₂ is not present in the expansion, there is no chance of measuring the energy E₂ = 4ħ²π²/2ma². The squared magnitude of each remaining coefficient gives the probability of measuring the corresponding energy. So the possible energies that can be measured, with their respective probabilities, are:

E₁ = ħ²π²/2ma²,   P(E₁) = |c₁|² = c₁*c₁ = (−i/2)(i/2) = 1/4 = 0.25
E₃ = 9ħ²π²/2ma²,  P(E₃) = |c₃|² = (1/√2)² = 1/2 = 0.50
E₄ = 16ħ²π²/2ma², P(E₄) = |c₄|² = (1/2)² = 1/4 = 0.25

⇒ the most probable energy is E₃ = 9ħ²π²/2ma².

In the next example, the box is widened; the energy of each state in the widened box is found by letting a → 2a:

Eₙ = n²π²ħ²/2ma² (for width a) ⇒ Eₙ = n²π²ħ²/2m(2a)² = n²π²ħ²/8ma² (for width 2a)

Given that the system is in the state ψ(x) = √(2/a) sin(3πx/a), we use the inner product (φₙ, ψ) to find the probability that the particle is found in the state φₙ(x) of the widened box when a measurement is made.
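Returning to Example 12, the coefficient bookkeeping can be checked numerically; a minimal sketch:

```python
# Check (sketch) that the expansion coefficients of Example 12 are
# properly normalized and reproduce the probabilities in the text.
c = {1: 0.5j, 3: 1 / 2**0.5, 4: -0.5}   # c1 = i/2, c3 = 1/sqrt(2), c4 = -1/2
P = {n: abs(cn)**2 for n, cn in c.items()}
total = sum(P.values())
print(P, total)           # probabilities 0.25, 0.5, 0.25; total 1.0
most_probable = max(P, key=P.get)
print(most_probable)      # n = 3 is the most probable energy level
```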
(φₙ, ψ) = ∫ φₙ*(x)ψ(x) dx = ∫₀^{2a} [√(1/a) sin(nπx/2a)][√(2/a) sin(3πx/a)] dx = (√2/a) ∫₀^a sin(nπx/2a) sin(3πx/a) dx

(ψ vanishes for x > a, so the integral reduces to the interval 0 ≤ x ≤ a.) To perform this integral, we use the trig identity:

sin A sin B = ½[cos(A − B) − cos(A + B)]

⇒ sin(nπx/2a) sin(3πx/a) = ½ cos[(πx/a)(n/2 − 3)] − ½ cos[(πx/a)(n/2 + 3)]

Putting this result into the integral gives:

(φₙ, ψ) = (√2/a)(1/2) ∫₀^a {cos[(πx/a)(n/2 − 3)] − cos[(πx/a)(n/2 + 3)]} dx

Each term is easily evaluated with a substitution. We set u = (πx/a)(n/2 − 3) in the first integral and u = (πx/a)(n/2 + 3) in the second. Therefore:

∫₀^a cos[(πx/a)(n/2 − 3)] dx = [a/π(n/2 − 3)] sin[π(n/2 − 3)]
∫₀^a cos[(πx/a)(n/2 + 3)] dx = [a/π(n/2 + 3)] sin[π(n/2 + 3)]

If n is even, we get the sine of an integer multiple of π and both terms are zero. Putting all of these results together, we have:

(φₙ, ψ) = (1/√2 π){ sin[π(n/2 − 3)]/(n/2 − 3) − sin[π(n/2 + 3)]/(n/2 + 3) }

The first excited state has n = 2. Since:

sin[π(2/2 − 3)] = sin(−2π) = 0
sin[π(2/2 + 3)] = sin(4π) = 0

we get (φ₂, ψ) = 0, and so the probability of finding the system in the first excited state vanishes. For the ground state, n = 1, and we have:

(φ₁, ψ) = (1/√2 π){ sin[π(1/2 − 3)]/(1/2 − 3) − sin[π(1/2 + 3)]/(1/2 + 3) }
        = (1/√2 π){ (2/5) sin(5π/2) − (2/7) sin(7π/2) }
        = (1/√2 π)(2/5 + 2/7) = 12√2/35π ≈ 0.154

The probability is found by squaring this number. So the probability of finding the system in the ground state of the newly widened box is:

P(n = 1) = (0.154)² ≈ 0.024
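The ground-state overlap for the widened box can be checked numerically; a sketch with the illustrative choice a = 1:

```python
# Numerical check (sketch, a = 1) of the widened-box overlap:
# (phi_1, psi) = (sqrt(2)/a) * ∫_0^a sin(pi x / 2a) sin(3 pi x / a) dx,
# which should equal 12 * sqrt(2) / (35 * pi) ~ 0.154.
import math
from scipy.integrate import quad

a = 1.0
integrand = lambda x: math.sin(math.pi * x / (2 * a)) * math.sin(3 * math.pi * x / a)
overlap = (math.sqrt(2) / a) * quad(integrand, 0, a)[0]
expected = 12 * math.sqrt(2) / (35 * math.pi)
print(overlap, overlap**2)  # ~0.154 and a probability of ~0.024
```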
EXAMPLE 15
A particle in a one-dimensional box 0 ≤ x ≤ a is in the ground state. Find ⟨x⟩ and ⟨p⟩.

SOLUTION
The ground-state wavefunction is ψ(x) = √(2/a) sin(πx/a), so:

⟨x⟩ = ∫ ψ*(x) x ψ(x) dx = (2/a) ∫₀^a x sin²(πx/a) dx = (2/a) ∫₀^a x [1 − cos(2πx/a)]/2 dx
    = (1/a) ∫₀^a x dx − (1/a) ∫₀^a x cos(2πx/a) dx

Integrating the first term yields:

∫₀^a x dx = x²/2 |₀^a = a²/2

The second term can be integrated by parts, giving:

∫₀^a x cos(2πx/a) dx = [ (a²/4π²) cos(2πx/a) + (ax/2π) sin(2πx/a) ] |₀^a = (a²/4π²)[cos 2π − cos 0] = 0

And so we find that:

⟨x⟩ = (1/a)(a²/2) = a/2

To calculate ⟨p⟩, we write p as a derivative operator:

⟨p⟩ = ∫ ψ*(x) (−iħ d/dx) ψ(x) dx = −iħ (2/a)(π/a) ∫₀^a sin(πx/a) cos(πx/a) dx

You may already know this integral is zero, but we can calculate it easily, so we proceed. Let u = sin(πx/a), then du = (π/a) cos(πx/a) dx. This gives:

∫₀^a sin(πx/a) cos(πx/a) dx = (a/2π) sin²(πx/a) |₀^a = (a/2π)[sin²(π) − sin²(0)] = 0

And so, for the ground state of the particle in a box, we have found ⟨p⟩ = 0.

EXAMPLE 16
Let ψ(x) = (2a/π)^(1/4) exp(−ax²). Assuming that a is real and positive, find ⟨xⁿ⟩ for arbitrary integers n > 0.

SOLUTION
ψ is real, so ψ*(x) = ψ(x) = (2a/π)^(1/4) exp(−ax²), and the expectation value of xⁿ is given by:

⟨xⁿ⟩ = ∫ ψ*(x) xⁿ ψ(x) dx = √(2a/π) ∫_{−∞}^{∞} xⁿ e^(−2ax²) dx

For odd n the integrand is odd and the integral vanishes, so ⟨xⁿ⟩ = 0; for even n the result follows from the tabulated Gaussian moment ∫ z^(2n) e^(−z²) dz after the substitution z = √(2a) x.

The Hamiltonian operator is:

H = −(ħ²/2m) d²/dx² + V(x, t) = p²/2m + V(x, t)

As you can see, this is just one side of the Schrödinger equation. Acting with H on a wavefunction therefore gives:

Hψ(x, t) = iħ ∂ψ(x, t)/∂t

When the potential is time independent, we have the time-independent Schrödinger equation, and we arrive at an eigenvalue equation for H:

Hψ(x) = Eψ(x)

The eigenvalues E of the Hamiltonian are the energies of the system.
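The expectation values of Example 15 can be checked numerically; a sketch with the illustrative choice a = 1:

```python
# Numerical check (sketch, a = 1) of Example 15: for the box ground state,
# <x> = a/2 and <p> = 0.
import math
from scipy.integrate import quad

a = 1.0
psi = lambda x: math.sqrt(2 / a) * math.sin(math.pi * x / a)
x_mean = quad(lambda x: psi(x) * x * psi(x), 0, a)[0]

# <p> = -i hbar ∫ psi psi' dx; the real integral below is the only piece
dpsi = lambda x: math.sqrt(2 / a) * (math.pi / a) * math.cos(math.pi * x / a)
p_integral = quad(lambda x: psi(x) * dpsi(x), 0, a)[0]
print(x_mean, p_integral)  # a/2 = 0.5 and 0
```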
Equivalently, the allowed energies of the system are the eigenvalues of the Hamiltonian operator H. The average or mean energy of a system expanded in the basis states φₙ is found from:

⟨H⟩ = Σ Eₙ|cₙ|² = Σ EₙPₙ

where Pₙ is the probability of measuring the energy Eₙ.

EXAMPLE 18
Earlier we considered the following state for a particle trapped in a one-dimensional box:

ψ(x) = (i/2)√(2/a) sin(πx/a) + √(1/a) sin(3πx/a) − (1/2)√(2/a) sin(4πx/a)

What is the mean energy for this system?

SOLUTION
For this state, we found that the possible energies that could be measured, and their respective probabilities, were:

E₁ = ħ²π²/2ma²,   P(E₁) = |c₁|² = (−i/2)(i/2) = 1/4 = 0.25
E₃ = 9ħ²π²/2ma²,  P(E₃) = |c₃|² = (1/√2)² = 1/2 = 0.50
E₄ = 8ħ²π²/ma²,   P(E₄) = |c₄|² = (1/2)² = 1/4 = 0.25

So the mean energy is:

⟨H⟩ = Σ EₙPₙ = (ħ²π²/2ma²)(1/4) + (9ħ²π²/2ma²)(1/2) + (16ħ²π²/2ma²)(1/4)
    = (ħ²π²/2ma²)[1/4 + 9/2 + 4] = (ħ²π²/2ma²)(35/4) = 35ħ²π²/8ma²

Note that the mean energy itself would never actually be measured for this system.

EXAMPLE 19
Is the momentum operator p Hermitian?

SOLUTION
We need to show that:

∫ φ*(x)[pψ(x)] dx = ∫ [pφ(x)]*ψ(x) dx

Using pψ = −iħ(dψ/dx), we have:

∫ φ*(x)[pψ(x)] dx = −iħ ∫ φ*(x) (dψ/dx) dx

Recalling the formula for integration by parts, ∫ u dv = uv − ∫ v du, we let:

u = φ*(x), ⇒ du = (dφ*/dx) dx
dv = (dψ/dx) dx, ⇒ v = ψ(x)

The wavefunctions must vanish as x → ±∞, and so the boundary term uv = φ*(x)ψ(x) |_{−∞}^{∞} vanishes. We are then left with:

−iħ ∫ φ*(x) (dψ/dx) dx = −iħ[ φ*ψ |_{−∞}^{∞} − ∫ (dφ*/dx) ψ dx ] = iħ ∫ (dφ*/dx) ψ dx = ∫ (−iħ dφ/dx)* ψ dx = ∫ [pφ(x)]*ψ(x) dx

We have shown that ∫ φ*(pψ) dx = ∫ (pφ)*ψ dx; therefore p is Hermitian.

EXAMPLE 20
Suppose that

ψ(x) = 1/√a for −a ≤ x ≤ a

Find the momentum space wavefunction φ(p).
SOLUTION

φ(p) = (1/√(2πħ)) ∫_{−∞}^{∞} ψ(x) e^(−ipx/ħ) dx = (1/√(2πħa)) ∫_{−a}^{a} e^(−ipx/ħ) dx
     = (1/√(2πħa)) (ħ/−ip) e^(−ipx/ħ) |_{−a}^{a}
     = (1/√(2πħa)) (2ħ/p) [ (e^(ipa/ħ) − e^(−ipa/ħ))/2i ]
     = (1/√(2πħa)) (2ħ/p) sin(pa/ħ)
     = √(2a/πħ) · sin(pa/ħ)/(pa/ħ) = √(2a/πħ) sinc(pa/ħ)

A plot of the so-called sinc function (Figure 2-6) shows that the momentum-space wavefunction, like the position-space wavefunction in this case, is also localized. It is a fact of Fourier theory and wave mechanics that the spatial extension of the wave described by ψ(x) and the spread of momenta described by its Fourier transform φ(p) cannot both be made arbitrarily small. This observation is described mathematically by the Heisenberg uncertainty principle:

Δx Δp ≥ ħ

or, using p = ħk, Δx Δk ≥ 1.

EXAMPLE 21
Let φ(k) = e^(−(a/b)(k−k₀)²). Use the Fourier transform to find ψ(x).

EXAMPLE
Find the probability current for the box-superposition state ψ(x, t) = √(1/a) sin(πx/a) e^(−iE₁t/ħ) + √(1/a) sin(2πx/a) e^(−iE₂t/ħ).

SOLUTION
By definition:

j(x, t) = (ħ/2im)[ ψ*(x, t) ∂ψ(x, t)/∂x − ψ(x, t) ∂ψ*(x, t)/∂x ]

Now ψ(x, t) = √(1/a) sin(πx/a) e^(−iE₁t/ħ) + √(1/a) sin(2πx/a) e^(−iE₂t/ħ), and:

∂ψ/∂x = √(1/a)(π/a) cos(πx/a) e^(−iE₁t/ħ) + √(1/a)(2π/a) cos(2πx/a) e^(−iE₂t/ħ)
∂ψ*/∂x = √(1/a)(π/a) cos(πx/a) e^(iE₁t/ħ) + √(1/a)(2π/a) cos(2πx/a) e^(iE₂t/ħ)

Algebra shows that:

ψ* ∂ψ/∂x = (π/a²) sin(πx/a)cos(πx/a) + (2π/a²) sin(2πx/a)cos(2πx/a)
          + (2π/a²) sin(πx/a)cos(2πx/a) e^(i(E₁−E₂)t/ħ) + (π/a²) sin(2πx/a)cos(πx/a) e^(−i(E₁−E₂)t/ħ)

We also find that:

ψ ∂ψ*/∂x = (π/a²) sin(πx/a)cos(πx/a) + (2π/a²) sin(2πx/a)cos(2πx/a)
          + (2π/a²) sin(πx/a)cos(2πx/a) e^(−i(E₁−E₂)t/ħ) + (π/a²) sin(2πx/a)cos(πx/a) e^(i(E₁−E₂)t/ħ)

Subtracting, the time-independent terms cancel and:

ψ* ∂ψ/∂x − ψ ∂ψ*/∂x = (2i) sin[(E₁−E₂)t/ħ] (π/a²)[ 2 sin(πx/a)cos(2πx/a) − sin(2πx/a)cos(πx/a) ]

Using j(x, t) = (ħ/2im)[ ψ* ∂ψ/∂x − ψ ∂ψ*/∂x ], we find
that for this wavefunction:

j(x, t) = (ħ/2im)(2i) sin[(E₁−E₂)t/ħ] (π/a²)[ 2 sin(πx/a)cos(2πx/a) − sin(2πx/a)cos(πx/a) ]
        = (ħπ/ma²) sin[(E₁−E₂)t/ħ][ 2 sin(πx/a)cos(2πx/a) − sin(2πx/a)cos(πx/a) ]

2. The Time-Independent Schrödinger Equation

EXAMPLE 1
Let ψ₁(x) = cos(2x) and ψ₂(x) = cos(10x). Describe and plot the wavepacket formed by the superposition of these two waves.

SOLUTION
To form the wavepacket, we use the trig identity:

cos A − cos B = −2 sin[(A + B)/2] sin[(A − B)/2]

ψ₂(x) − ψ₁(x) = cos(10x) − cos(2x) = −2 sin(6x) sin(4x)

The plot of ψ₂(x) is shown in Figure 3-5, and the plot of ψ₁(x) in Figure 3-6. Finally we plot the superposition; notice how it shows hints of attaining a localized waveform (Figure 3-7).

EXAMPLE
A beam of particles of energy E, coming from the direction of x = −∞, is incident on a potential step of height V at x = 0, with E > V. Find the transmission and reflection coefficients.

SOLUTION
This time the energy of the incoming beam of particles is greater than the height of the potential step. A schematic plot of this situation is shown in Figure 3-11. We call the region left of the step (x < 0) Region I, while Region II is x > 0. To the left of the step, in Region I, the situation is exactly the same as in the previous example, so the wavefunctions are:

ψinc(x) = A e^(ik₁x),  ψref(x) = B e^(−ik₁x)

This time, since E > V, the Schrödinger equation in Region II has the form:

d²ψ/dx² + (2m/ħ²)(E − V)ψ = 0,  i.e.  d²ψ/dx² + k₂²ψ = 0

where k₂² = (2m/ħ²)(E − V) represents the squared wavenumber in Region II. The general solution in Region II is:

ψ(x) = C e^(ik₂x) + D e^(−ik₂x)

However, the term D e^(−ik₂x) represents a beam of particles moving to the left, towards −x, from the direction of x = +∞. The problem states that the incoming beam comes from the left, therefore there are no particles coming from the right in Region II. Therefore D = 0 and we take the wavefunction to be:

ψtrans(x) = C e^(ik₂x)

The procedure to determine the current densities in Region I is exactly the same as it was in the last problem.
Therefore:

Jinc = (ħk₁/m)A²,  Jref = (ħk₁/m)B²

Continuity of the wavefunction at x = 0 leads to the same condition that was found in the previous example:

A + B = C ⇒ 1 + B/A = C/A

In Region I, the derivative at x = 0 is also the same as it was in the previous example:

ψ′(x = 0) = ik₁(A − B)

In Region II, ψtrans(x) = C e^(ik₂x) ⇒ ψ′trans(x) = ik₂ C e^(ik₂x), and so the matching condition of the derivatives at x = 0 becomes:

ik₁(A − B) = ik₂C, ⇒ C = k₁(A − B)/k₂

From A + B = C, we substitute B = C − A and obtain:

C = k₁(A − (C − A))/k₂ ⇒ C(1 + k₁/k₂) = 2(k₁/k₂)A

or:

C/A = 2k₁/(k₁ + k₂)

This leads to:

T = 4k₁k₂/(k₁ + k₂)²,  R = (k₁ − k₂)²/(k₁ + k₂)²

(the transmitted current carries a factor k₂, Jtrans = (ħk₂/m)C², so T = (k₂/k₁)|C/A|²). It can be verified that:

T + R = 1

Recalling that in Region I we have a free particle, so k₁² = 2mE/ħ², while in Region II we found k₂² = (2m/ħ²)(E − V), the ratio between the squared wavenumbers in the two regions is:

k₁²/k₂² = E/(E − V),  or equivalently  k₂²/k₁² = 1 − V/E

EXAMPLE 4
Incident particles of energy E coming from the direction of x = −∞ are incident on a square potential barrier V, where E < V (Figure 3-12). The potential is V for 0 < x < a and is 0 otherwise. Find the transmission coefficient.

SOLUTION
We divide this problem into three regions:

Region I: −∞ < x < 0
Region II: 0 < x < a
Region III: a < x < ∞

With the definitions:

k² = 2mE/ħ²,  β² = 2m(V − E)/ħ²

the wavefunctions for each of the three regions are:

φI(x) = A e^(ikx) + B e^(−ikx)
φII(x) = C e^(βx) + D e^(−βx)
φIII(x) = E e^(ikx) + F e^(−ikx)

The incident particles are coming from x = −∞, so F = 0.
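For the step potential above, T + R = 1 can be verified numerically; a sketch with illustrative values E = 2, V = 1 and ħ = m = 1 (assumed, not from the text):

```python
# Sketch: with the step-potential results T = 4 k1 k2 / (k1 + k2)^2 and
# R = (k1 - k2)^2 / (k1 + k2)^2, the two coefficients sum to 1.
import math

hbar = m = 1.0              # illustrative units (assumed)
E, V = 2.0, 1.0             # illustrative energies with E > V (assumed)
k1 = math.sqrt(2 * m * E) / hbar
k2 = math.sqrt(2 * m * (E - V)) / hbar
T = 4 * k1 * k2 / (k1 + k2)**2
R = ((k1 - k2) / (k1 + k2))**2
print(T, R, T + R)  # T + R = 1
```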
Matching the wavefunctions at x = 0 gives:

A + B = C + D

Matching at x = a gives:

C e^(βa) + D e^(−βa) = E e^(ika)

Continuity of the first derivative at x = 0 gives:

ik(A − B) = β(C − D)

and at x = a:

Cβ e^(βa) − Dβ e^(−βa) = ikE e^(ika)

Combining the matching conditions of the wavefunction and the first derivative at x = 0 yields:

A = [(ik + β)/2ik] C + [(ik − β)/2ik] D

Doing the same at x = a expresses C and D in terms of E, from which the transmission coefficient follows.

EXAMPLE 5
Find the possible energies for the square well defined by:

V(x) = −V for −a/2 < x < a/2, 0 otherwise

SOLUTION
A qualitative plot of the potential is shown in Figure 3-13. Following the procedure used in the previous problem, we define three regions:

Region I: −∞ < x < −a/2
Region II: −a/2 < x < a/2
Region III: a/2 < x < ∞

We again define:

k² = 2mE/ħ²,  β² = 2m(V − E)/ħ²

with the wavefunctions:

φI(x) = A e^(βx) + B e^(−βx)
φII(x) = C e^(ikx) + D e^(−ikx)
φIII(x) = E e^(βx) + F e^(−βx)

The condition that φ → 0 as x → ±∞ requires us to set B = 0 and E = 0:

φI(x) = A e^(βx)
φII(x) = C e^(ikx) + D e^(−ikx)
φIII(x) = F e^(−βx)

In Region II, notice that the well is centered about the origin. Therefore the solutions will be either even functions or odd functions: even solutions are given in terms of cosines, while odd solutions are given in terms of sines. We proceed with the even solutions (the odd case is similar):

φI(x) = A e^(βx)
φII(x) = C cos(kx)
φIII(x) = F e^(−βx)

The derivatives of these functions are:

φI′(x) = βA e^(βx)
φII′(x) = −kC sin(kx)
φIII′(x) = −βF e^(−βx)

The wavefunction must match at the boundaries of the well. Matching at x = a/2:

F e^(−βa/2) = C cos(ka/2)

The first derivative must also be continuous at this boundary, so:

−βF e^(−βa/2) = −kC sin(ka/2)

Dividing this equation by the first one gives:

k tan(ka/2) = β

Recalling that k² = 2mE/ħ² and β² = 2m(V − E)/ħ², the relation above is a transcendental equation that can be used to find the allowed energies.
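The transcendental condition k tan(ka/2) = β can be solved numerically with a root finder; a sketch with ħ = m = 1 and illustrative parameters V = 20, a = 1 (assumed, not from the text):

```python
# Numerical solution (sketch) of the even-state condition k tan(ka/2) = beta,
# with hbar = m = 1 and illustrative well parameters (assumed).
import math
from scipy.optimize import brentq

V, a = 20.0, 1.0

def f(E):
    k = math.sqrt(2 * E)
    beta = math.sqrt(2 * (V - E))
    return k * math.tan(k * a / 2) - beta

# The lowest even solution lies below the first tan singularity at ka/2 = pi/2,
# i.e. E < pi^2/2 ~ 4.93 for these parameters.
E0 = brentq(f, 0.1, 4.8)
print(E0, f(E0))  # allowed energy, with residual ~ 0
```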
This can be done numerically or graphically. We rewrite the equation slightly:

tan(ka/2) = β/k

The places where these two curves intersect give the allowed energies. These eigenvalues are discrete; the number of them that are found depends on the parameter:

λ = 2mVa²/ħ²

If λ is large, there will be several allowed energies, while if λ is small, there might be two or even just one bound-state energy. A plot in Figure 3-14 shows an example. The procedure for the odd solutions is similar, except that you will arrive at a cotangent instead of the tangent.

EXAMPLE 6
Suppose that V(x) = −Vδ(x), where V > 0. (a) Let E < 0 and find the bound-state wavefunction and the energy. (b) Let an incident beam of particles with E > 0 approach from the direction of x = −∞ and find the reflection and transmission coefficients.

SOLUTION
(a) If we let β² = −2mE/ħ² (positive, since E < 0), the Schrödinger equation is:

0 = d²ψ/dx² + (2m/ħ²)(Vδ(x) + E)ψ = d²ψ/dx² + (2m/ħ²)Vδ(x)ψ − β²ψ

We consider two regions. Region I, for −∞ < x < 0, has:

ψI(x) = A e^(βx) + B e^(−βx)

We take a function of the form Ae^(βx) because we are seeking a bound state: the requirement that ψ → 0 as x → −∞ forces us to set B = 0. In Region II,

ψII(x) = C e^(βx) + D e^(−βx)

and the requirement that ψ → 0 as x → ∞ forces us to set C = 0. So the wavefunctions are:

ψI(x) = A e^(βx),  ψII(x) = D e^(−βx)

Even with a delta-function potential, continuity of the wavefunction is required. Continuity of ψ at x = 0 tells us that A = D.

The derivative of the wavefunction, however, is not continuous. We can find the discontinuity by integrating the term (2m/ħ²)Vδ(x)ψ: integrate each term of the Schrödinger equation from −ǫ to +ǫ, where ǫ is a small parameter, and then let ǫ → 0 to find out how the derivative behaves. Let's recall the Schrödinger equation:

d²ψ/dx² + (2m/ħ²)Vδ(x)ψ − β²ψ = 0

Examining the last term, we integrate over ±ǫ and use the fact that ψ(0) = A:

∫_{−ǫ}^{ǫ} β²ψ dx ≈ β²A ∫_{−ǫ}^{ǫ} dx = 2β²Aǫ

Letting ǫ → 0, we see that this term vanishes.
This leaves two terms we need to calculate:

∫_{−ǫ}^{ǫ} (d²ψ/dx²) dx + ∫_{−ǫ}^{ǫ} (2m/ħ²)Vδ(x)ψ(x) dx = 0

The first term is:

∫_{−ǫ}^{ǫ} (d²ψ/dx²) dx = dψ/dx |_{x=+ǫ} − dψ/dx |_{x=−ǫ}

x = +ǫ corresponds to the wavefunction in Region II. Using ψII(x) = Ae^(−βx) and ψI(x) = Ae^(βx), at x = 0:

ψII′(0) − ψI′(0) = −βA − βA = −2βA

The delta function picks out ψ(0) = A in the second term, so −2βA + (2mV/ħ²)A = 0, giving β = mV/ħ² and hence the bound-state energy E = −ħ²β²/2m = −mV²/2ħ².

3. An Introduction to State Space

EXAMPLE 1
Show that f(x) = x e^(−x) [θ(x) − θ(x − 10)], where θ is the "Heaviside" or "unit step" function, belongs to the space L².

SOLUTION
The unit step function θ(x) is defined to be 1 for x ≥ 0 and zero otherwise; a plot is shown in Figure 4-1. θ(x − 10) shifts the discontinuity to x = 10, so that function is 1 for x ≥ 10 and zero otherwise (Figure 4-2). Subtracting this from θ(x), we obtain θ(x) − θ(x − 10), a function that is 1 for 0 ≤ x ≤ 10 and zero otherwise (Figure 4-3). Therefore f(x) = x e^(−x)[θ(x) − θ(x − 10)] is non-zero only for 0 ≤ x ≤ 10; Figure 4-4 plots the function to give an idea of its behavior.

Since this function is non-zero over a finite interval, we expect it to have a finite integral and to belong to L². We compute ∫ f²(x) dx:

∫ x² e^(−2x) dx = −(1/4) e^(−2x)(1 + 2x + 2x²)

(this integral can be computed using integration by parts). Evaluating it at the limits x = 10 and x = 0, we find:

∫₀^{10} x² e^(−2x) dx = −(1/4)e^(−20)(221) + 1/4 ≈ 0.25

(e^(−20) is very small, so we can take it to be zero). The norm of the function is finite, and so f belongs to the Hilbert space L².

EXAMPLE 2
Let a Hilbert space consist of functions defined over the range 0 < x < 3. Does the function ϕ(x) = sin(πx/3) satisfy the requirement that ∫₀³ ϕ*(x)ϕ(x) dx < ∞?

SOLUTION
A plot of the function is shown in Figure 4-5, and a plot of its square in Figure 4-6. From the plots, it is apparent that the function does belong to the Hilbert space. But let's compute the integral explicitly.
∫₀³ ϕ*(x)ϕ(x) dx = ∫₀³ sin²(πx/3) dx

We use a familiar trig identity to rewrite the integrand:

∫₀³ sin²(πx/3) dx = (1/2)∫₀³ dx − (1/2)∫₀³ cos(2πx/3) dx

The result of the first integral is immediately apparent:

(1/2)∫₀³ dx = (1/2) x |₀³ = 3/2

For the second integral, ignoring the 1/2 out in front, we obtain:

∫₀³ cos(2πx/3) dx = (3/2π) sin(2πx/3) |₀³ = (3/2π)[sin(2π) − sin(0)] = 0

So the integral is finite:

∫₀³ sin²(πx/3) dx = 3/2

and this function satisfies the requirement that ∫₀³ ϕ*(x)ϕ(x) dx < ∞.

EXAMPLE 3
The task of expanding a function in terms of a given basis is one that is encountered frequently in quantum mechanics; we have already seen this in previous chapters. Let's examine how to find the components of an arbitrary function expanded in some basis. Suppose that we have an infinite square well of width a. The wavefunctions that are the solutions to the Schrödinger equation are the basis functions for a Hilbert space defined over 0 ≤ x ≤ a. We recall that these basis functions are given by:

φₙ(x) = √(2/a) sin(nπx/a)

We can expand any function in terms of these basis functions using ϕ(x) = Σᵢ cᵢφᵢ(x), where the expansion coefficients are found from the inner product, cᵢ = ∫₀^a φᵢ(x)ϕ(x) dx.

4. The Mathematical Structure of Quantum Mechanics I

EXAMPLE 1
Consider sets or lists of n complex numbers (z₁, z₂, …, zₙ), called "n-tuples." We can define a vector space of n-tuples of complex numbers, where we represent a vector by an n × 1 matrix called a column vector. For example, consider two vectors |ψ⟩, |φ⟩ given by:

|ψ⟩ = (z₁, z₂, …, zₙ)ᵀ,  |φ⟩ = (w₁, w₂, …, wₙ)ᵀ

Vector addition in this space is carried out by adding together the individual components of the vectors:

|ψ⟩ + |φ⟩ = (z₁ + w₁, z₂ + w₂, …, zₙ + wₙ)ᵀ

We see that the operation of addition has generated a new list of n complex numbers (a new n-tuple), so we have produced a new vector that still belongs to the space.
Since the addition of complex numbers is commutative and associative, we see that the vectors in this space automatically satisfy the other required properties as well. Scalar multiplication is carried out by multiplying each component of the vector in the following manner:

α|ψ⟩ = (αz₁, αz₂, …, αzₙ)ᵀ

EXAMPLE 2
Let A = A₁î + A₂ĵ + A₃k̂ be an ordinary vector in three-dimensional space. Is the set of all vectors with A₁ = 13 a vector space?

SOLUTION
If A₁ = 13, then A = 13î + A₂ĵ + A₃k̂. It is easy to see that a set of vectors of this form cannot form a vector space. This is because the addition of two vectors does not produce another vector in the set. Suppose that B = 13î + B₂ĵ + B₃k̂ is another vector in this set. If this set constitutes a vector space, then A + B must be another vector in the set. But:

A + B = 13î + A₂ĵ + A₃k̂ + (13î + B₂ĵ + B₃k̂) = 26î + (A₂ + B₂)ĵ + (A₃ + B₃)k̂

Since the component associated with the basis vector î is 26 ≠ 13, this vector does not belong to the set, and axiom (1) (closure under addition) is not satisfied. Therefore the set of all vectors with A₁ = 13 is not a vector space.

DEFINITION: Dual Vector
In order to carry over the notion of a "dot product" to an abstract vector space, we will need to construct a dual vector. In the language of kets, the dual vector is called a "bra." Using Dirac notation, the dual of a vector |ψ⟩ is written as ⟨ψ|.

Returning to the example of complex n-tuples, we write the list of complex numbers out in a row and then take their complex conjugates to obtain the dual vector. In other words:

|ψ⟩ = (z₁, z₂, …, zₙ)ᵀ ⇒ ⟨ψ| = [z₁*, z₂*, …, zₙ*]

The dual space of V is denoted V*. With a definition of dual vectors in hand, we can generalize the dot product to an "inner product" between two abstract vectors.
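The bra and inner-product construction for n-tuples can be sketched in NumPy; the vectors below are arbitrary illustrative values:

```python
# Sketch of the n-tuple dual vector and inner product:
# the bra is the complex conjugate (row), and <psi|phi> = sum(conj(psi) * phi).
import numpy as np

psi = np.array([1 + 1j, 2 - 1j, 3j])   # illustrative kets (assumed values)
phi = np.array([2, 1j, 1 - 1j])
bra_psi = psi.conj()                   # dual vector: conjugated components
inner = np.sum(bra_psi * phi)          # same as np.vdot(psi, phi)
print(inner, np.vdot(psi, phi))
```

Note that `np.vdot` conjugates its first argument, which matches the bra-ket convention used here.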
EXAMPLE 3
Two vectors in a three-dimensional complex vector space are defined by:

|A⟩ = (2, −7i, 1)ᵀ,  |B⟩ = (1 + 3i, 4, 8)ᵀ

Let a = 6 + 5i. (a) Compute a|A⟩, a|B⟩, and a(|A⟩ + |B⟩), and show that a(|A⟩ + |B⟩) = a|A⟩ + a|B⟩. (b) Compute the inner products ⟨A|B⟩ and ⟨B|A⟩.

SOLUTION
(a)

a|A⟩ = (6 + 5i)(2, −7i, 1)ᵀ = (12 + 10i, 35 − 42i, 6 + 5i)ᵀ
a|B⟩ = (6 + 5i)(1 + 3i, 4, 8)ᵀ = (−9 + 23i, 24 + 20i, 48 + 40i)ᵀ
⇒ a|A⟩ + a|B⟩ = (3 + 33i, 59 − 22i, 54 + 45i)ᵀ

Now adding the vectors first, we have:

|A⟩ + |B⟩ = (2 + 1 + 3i, −7i + 4, 1 + 8)ᵀ = (3 + 3i, 4 − 7i, 9)ᵀ
⇒ a(|A⟩ + |B⟩) = (6 + 5i)(3 + 3i, 4 − 7i, 9)ᵀ = (18 + 15i + 18i − 15, 24 − 42i + 20i + 35, 54 + 45i)ᵀ
               = (3 + 33i, 59 − 22i, 54 + 45i)ᵀ = a|A⟩ + a|B⟩

(b) First we compute ⟨A|B⟩. To form the dual vector of |A⟩, we compute the complex conjugate of its elements and then transpose the result to form a row vector:

⟨A| = (2, 7i, 1)

And so the inner product is:

⟨A|B⟩ = 2(1 + 3i) + 7i(4) + 1(8) = 2 + 6i + 28i + 8 = 10 + 34i

Now we compute ⟨B|A⟩. The dual vector of |B⟩ is ⟨B| = (1 − 3i, 4, 8), and so:

⟨B|A⟩ = (1 − 3i)(2) + 4(−7i) + 8(1) = 2 − 6i − 28i + 8 = 10 − 34i

Notice that ⟨B|A⟩ = ⟨A|B⟩*, a result that holds in general for the inner product in a complex vector space. We now list this and other important properties of the inner product.

PROPERTIES OF THE INNER PRODUCT
Let |ψ⟩, |φ⟩, |ω⟩ be vectors belonging to a complex vector space V and let α and β be complex numbers. Then the following hold for the inner product:

1. ⟨ψ|φ⟩ = ⟨φ|ψ⟩*
2. ⟨ψ|(α|φ⟩ + β|ω⟩) = α⟨ψ|φ⟩ + β⟨ψ|ω⟩
(〈αψ| + 〈βω|) |φ〉 = α∗ 〈ψ|φ〉 + β∗ 〈ω|φ〉
4. 〈ψ|ψ〉 ≥ 0, with equality if and only if |ψ〉 = 0
5. The norm of a vector is ‖ψ‖ = √〈ψ|ψ〉

If the inner product between two vectors is zero, 〈ψ|φ〉 = 0, we say that the vectors are orthogonal. We now say some more about property (5), which generalizes the notion of length to give us the "norm" of a vector.

EXAMPLE 4
Let two vectors be defined by

|A〉 = ( 2 −7i 1 ), |B〉 = ( 1 + 3i 4 8 )

Find the norm of each vector.

SOLUTION
〈A|A〉 = ( 2 7i 1 ) ( 2 −7i 1 ) = 2(2) + 7i(−7i) + 1(1) = 4 + 49 + 1 = 54
⇒ ‖A‖ = √〈A|A〉 = √54 = 3√6
Now we compute the norm of B:
〈B|B〉 = ( 1 − 3i 4 8 ) ( 1 + 3i 4 8 ) = (1 − 3i)(1 + 3i) + 4(4) + 8(8) = 10 + 16 + 64 = 90
⇒ ‖B‖ = √〈B|B〉 = √90 = 3√10

Next we prove the triangle inequality, √〈ψ + φ|ψ + φ〉 ≤ √〈ψ|ψ〉 + √〈φ|φ〉.

SOLUTION
For any complex number z, it is true that |Re(z)| ≤ |z|. Since the inner product is a complex number, this tells us that Re(〈ψ|φ〉) ≤ |〈ψ|φ〉|. To derive the result, we use this fact together with the Cauchy-Schwarz inequality. First, we expand the inner product 〈ψ + φ|ψ + φ〉:
〈ψ + φ|ψ + φ〉 = 〈ψ|ψ〉 + 〈ψ|φ〉 + 〈φ|ψ〉 + 〈φ|φ〉
Once again, we note that 〈ψ|φ〉 + 〈φ|ψ〉 = 〈ψ|φ〉 + 〈ψ|φ〉∗ = 2Re(〈ψ|φ〉), and so this can be written as
〈ψ|ψ〉 + 〈φ|φ〉 + 2Re(〈φ|ψ〉)
At this point we can use |Re(z)| ≤ |z| to write
〈ψ|ψ〉 + 〈φ|φ〉 + 2Re(〈φ|ψ〉) ≤ 〈ψ|ψ〉 + 〈φ|φ〉 + 2|〈φ|ψ〉|
From Cauchy-Schwarz, we have
〈ψ|ψ〉 + 〈φ|φ〉 + 2|〈φ|ψ〉| ≤ 〈ψ|ψ〉 + 〈φ|φ〉 + 2√〈ψ|ψ〉√〈φ|φ〉 = (√〈ψ|ψ〉 + √〈φ|φ〉)²
Putting our results together allows us to conclude that
√〈ψ + φ|ψ + φ〉 ≤ √〈ψ|ψ〉 + √〈φ|φ〉

EXAMPLE 10
Show that the following vectors are linearly dependent:

|a〉 = ( 1 2 1 ), |b〉 = ( 0 1 0 ), |c〉 = ( −1 0 −1 )

SOLUTION
Notice that:
2|b〉 − |c〉 = 2 ( 0 1 0 ) − ( −1 0 −1 ) = ( 0 2 0 ) + ( 1 0 1 ) = ( 1 2 1 ) = |a〉
Since |a〉 can be expressed as a linear combination of the other two vectors in the set, the set is linearly dependent.

EXAMPLE 11
Is the following set of vectors linearly independent?

|a〉 = ( 2 0 0 ), |b〉 = ( 0 −1 0 ), |c〉 = ( 0 0 −4 )
SOLUTION Let a1, a2, a3 be three unknown constants. To check linear independence, we write a1 |a〉 + a2 |b〉 + a3 |c〉 = 0 With the given column vectors, we obtain a1 ( 2 0 0 ) + a2 ( 0 −1 0 ) + a3 ( 0 0 −4 ) = ( 2a1 0 0 ) + ( 0 −a2 0 ) + ( 0 0 −4a3 ) = ( 2a1 −a2 −4a3 ) = 0 This equation can only be true if a1 = a2 = a3 = 0. Therefore the set of vectors is linearly independent. 1〉 , |u2〉 , |u3〉 is an orthonormal basis. In this basis let, |ψ〉 = 2i |u1〉 − 3 |u2〉 + i |u3〉 |φ〉 = 3 |u1〉 − 2 |u2〉 + 4 |u3〉 ( a ) Find 〈ψ | and 〈φ|. ( b ) Compute the inner product 〈φ |ψ 〉 and show that 〈φ |ψ 〉 = 〈ψ |φ 〉∗ . ( c ) Let a = 3 + 3i and compute |aψ〉. ( d ) Find |ψ + φ〉 , |ψ − φ〉. SOLUTION ( a ) We find the bra corresponding to each ket by changing the base kets to bras and taking the complex conjugate of each coefficient. Therefore 〈ψ | = (2i)∗ 〈u1| − 3 〈u2| + (i) ∗ 〈u3| = −2i 〈u1| − 3 〈u2| − i 〈u3| 〈φ| = 3 〈u1| − 2 〈u2| + 4 〈u3| ( b ) To compute the inner product, we rely on the fact that the basis is orthonormal, i.e. 〈 ui ∣ ∣uj 〉 = δij . 
And so we obtain 〈φ |ψ 〉 = (3 〈u1| − 2 〈u2| + 4 〈u3|)(2i |u1〉 − 3 |u2〉 + i |u3〉) = (3)(2i) 〈u1 |u1 〉 + (3)(−3) 〈u1 |u2 〉 + (3)(i) 〈u1 |u3 〉 + (−2)(2i) 〈u2 |u1 〉 + (−2)(−3) 〈u2 |u2 〉 + (−2)(i) 〈u2 |u3 〉 + (4)(2i) 〈u3 |u1 〉 + (4)(−3) 〈u3 |u2 〉 + (4)(i) 〈u3 |u3 〉 = 6i 〈u1 |u1 〉 + 6 〈u2 |u2 〉 + 4i 〈u3 |u3 〉 = 6 + 10i Now the inner product 〈ψ |φ 〉 is 〈ψ |φ 〉 = (−2i 〈u1| − 3 〈u2| − i 〈u3|)(3 |u1〉 − 2 |u2〉 + 4 |u3〉) = −6i 〈u1 |u1 〉 + 6 〈u2 |u2 〉 − 4i 〈u3 |u3 〉 = 6 − 10i ⇒ 〈φ |ψ 〉 = 〈ψ |φ 〉∗ EXAMPLE 12 Suppose that |u ( c ) To compute |aψ〉, we multiply each coefficient in the expansion by a : |ψ〉 = (3 + 3i)(2i |u1〉 − 3 |u2〉 + i |u3〉) = (3 + 3i)2i |u1〉 − (3 + 3i)3 |u2〉 + (3 + 3i)i |u3〉 = (−6 + 6i) |u1〉 − (9 + 9i) |u2〉 + (−3 + 3i) |u3〉 ( d ) To compute |ψ ± φ〉, we add/subtract components: |ψ〉 = 2i |u1〉 − 3 |u2〉 + i |u3〉 |φ〉 = 3 |u1〉 − 2 |u2〉 + 4 |u3〉 ⇒ |ψ + φ〉 = (3 + 2i) |u1〉 − 6 |u2〉 + (4 + i) |u3〉 |ψ − φ〉 = (−3 + 2i) |u1〉 − |u2〉 + (−4 + i) |u3〉 Let |ψ〉 = 3i |φ1〉 + 2 |φ2〉 − 4i |φ3〉 where the |φi〉 are an orthonormal basis. Normalize |ψ〉. SOLUTION The first step is to write down the bra corresponding to |ψ〉. Remember we need to complex conjugate each expansion coefficient: 〈ψ | = −3i 〈φ1| + 2 〈φ2| + 4i 〈φ3| Now we can compute the norm of the vector: 〈ψ |ψ 〉 = (−3i 〈φ1| + 2 〈φ2| + 4i 〈φ3|)(3i |φ1〉 + 2 |φ2〉 − 4i |φ3〉) = (−3i)(3i) 〈φ1 |φ1 〉 + 4 〈φ2 |φ2 〉 + (4i)(−4i) 〈φ3 |φ3 〉 = 9 + 4 + 16 = 29 where we have used the fact that the basis is orthonormal. The norm is the square root of this quantity: ‖|ψ〉‖ = √ 〈ψ |ψ 〉 = √ 29 And so the normalized vector is found by dividing |ψ〉 by the norm to give ∣ ∣ ∣ ψ̃ 〉 = 3i √ 29 |φ1〉 + 2 √ 29 |φ2〉 − 4i √ 29 |φ3〉 1〉 , |u2〉 , |u3〉 be an orthonormal basis for a three-dimensional vector space. Suppose that |ψ〉 = 2i |u1〉 − 3 |u2〉 + i |u3〉 Write the column vector representing this vector in the given basis. Then write down the row vector that represents 〈ψ | in this basis. 
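Before writing the answer out, note that the column entries are just the expansion coefficients and the row entries their conjugates; a quick numerical sketch (my own illustration, not from the text):

```python
# |psi> = 2i|u1> - 3|u2> + i|u3>  ->  column of expansion coefficients
psi = [2j, -3 + 0j, 1j]

# row vector representing <psi|: conjugate each coefficient
bra_psi = [c.conjugate() for c in psi]

# consistency check: <psi|psi> is the sum of |coefficient|^2 = 4 + 9 + 1 = 14
norm_sq = sum(b * k for b, k in zip(bra_psi, psi))
```

The same pattern applies in any orthonormal basis: representing a ket numerically never requires more than listing its coefficients.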
SOLUTION
The components of the column vector representing |ψ〉 are found by taking the expansion coefficients of |ψ〉 = 2i|u1〉 − 3|u2〉 + i|u3〉 in the given basis, and the row vector representing 〈ψ| holds their complex conjugates:

|ψ〉 = ( 2i
        −3
        i ), 〈ψ| = ( −2i −3 −i )

Next, let |Ψ〉 = ( 2 3i 4 ) and |Φ〉 = ( −1 0 i ). Since 〈Φ|Φ〉 = 2, the outer product |Ψ〉〈Φ| acting on 3|Φ〉 gives (|Ψ〉〈Φ|)3|Φ〉 = 3〈Φ|Φ〉|Ψ〉 = 6|Ψ〉. Show this with matrix multiplication.

SOLUTION
First we write:
〈Φ| = ( −1 0 −i )
⇒ |Ψ〉〈Φ| = ( 2
              3i
              4 ) ( −1 0 −i ) = ( 2(−1) 2(0) 2(−i)
                                  3i(−1) 3i(0) 3i(−i)
                                  4(−1) 4(0) 4(−i) ) = ( −2 0 −2i
                                                         −3i 0 3
                                                         −4 0 −4i )
3|Φ〉 = 3 ( −1
            0
            i ) = ( −3
                    0
                    3i )
⇒ (|Ψ〉〈Φ|)3|Φ〉 = ( −2 0 −2i
                    −3i 0 3
                    −4 0 −4i ) ( −3
                                 0
                                 3i ) = ( −2(−3) + (−2i)(3i)
                                          −3i(−3) + 3(3i)
                                          −4(−3) + (−4i)(3i) ) = ( 12
                                                                   18i
                                                                   24 )

EXAMPLE 3
Consider a two-dimensional space in which a basis is given by
|0〉 = ( 1
        0 ), |1〉 = ( 0
                     1 )
and an operator Â is given by:
A = ( 1 −2i
      2i 0 )
Express A in outer product notation.

SOLUTION
First we write A in terms of outer products of {|0〉, |1〉} with unknown coefficients:
A = a|0〉〈0| + b|0〉〈1| + c|1〉〈0| + d|1〉〈1|
The matrix A is
( 1 −2i
  2i 0 ) = ( 〈0|A|0〉 〈0|A|1〉
             〈1|A|0〉 〈1|A|1〉 )
This gives us four equations for the unknowns a, b, c, d. We use the orthonormality of the basis to evaluate each term, i.e. 〈0|0〉 = 〈1|1〉 = 1, 〈1|0〉 = 〈0|1〉 = 0:
〈0|A|0〉 = 〈0|(a|0〉〈0| + b|0〉〈1| + c|1〉〈0| + d|1〉〈1|)|0〉 = a〈0|0〉〈0|0〉 + b〈0|0〉〈1|0〉 + c〈0|1〉〈0|0〉 + d〈0|1〉〈1|0〉 = a, ⇒ a = 1
〈0|A|1〉 = 〈0|(a|0〉〈0| + b|0〉〈1| + c|1〉〈0| + d|1〉〈1|)|1〉 = a〈0|0〉〈0|1〉 + b〈0|0〉〈1|1〉 + c〈0|1〉〈0|1〉 + d〈0|1〉〈1|1〉 = b, ⇒ b = −2i
The same procedure can be applied to the other two terms, 〈1|A|0〉 = 2i and 〈1|A|1〉 = 0, yielding c = 2i and d = 0. Therefore, we can write A as:
A = |0〉〈0| − 2i|0〉〈1| + 2i|1〉〈0|

DEFINITION: The Trace of an Operator
The trace of an operator T̂ is the sum of the diagonal elements of its matrix and is denoted tr(T̂). If

T̂ = (Tij) = ( T11 T12 . . . T1n
              T21 T22 . . . T2n
              ...
              Tn1 Tn2 . . . Tnn ) = ( 〈u1|T̂|u1〉 〈u1|T̂|u2〉 . . . 〈u1|T̂|un〉
                                      〈u2|T̂|u1〉 〈u2|T̂|u2〉 . . . 〈u2|T̂|un〉
                                      ...
                                      〈un|T̂|u1〉 〈un|T̂|u2〉 . . . 〈un|T̂|un〉 )

then T r(T̂ ) = T11 + T22 + . .
. + Tnn = ∑ni=1 Tii. Alternatively, we can write the trace as:

T r(T̂) = 〈u1|T̂|u1〉 + 〈u2|T̂|u2〉 + . . . + 〈un|T̂|un〉 = ∑ni=1 〈ui|T̂|ui〉.

EXAMPLE 4
The trace is cyclic; in other words,
T r(ABC) = T r(BCA) = T r(CAB)
Prove this for the case of two operators A and B, i.e. T r(AB) = T r(BA).

SOLUTION
We prove this using the closure relation with some basis |ui〉. Recall that the identity operator can be written as 1 = ∑ni=1 |ui〉〈ui|. Then we have:

T r(AB) = ∑ni=1 〈ui|AB|ui〉 = ∑ni=1 〈ui|A(1)B|ui〉 = ∑ni=1 〈ui|A ( ∑nj=1 |uj〉〈uj| ) B|ui〉
= ∑ni=1 ∑nj=1 〈uj|B|ui〉〈ui|A|uj〉 = ∑nj=1 〈uj|B ( ∑ni=1 |ui〉〈ui| ) A|uj〉 = ∑nj=1 〈uj|B(1)A|uj〉
= ∑nj=1 〈uj|BA|uj〉 = T r(BA)

Suppose that in some basis an operator A has the following matrix representation:

A = ( i 0 −2i
      1 4 6
      0 8 −1 )

Find the trace of A.

SOLUTION
The trace can be calculated from the matrix representation of an operator by summing its diagonal elements, so:
T r(A) = i + 4 − 1 = 3 + i

An operator
T = −|ψ1〉〈ψ1| + |ψ2〉〈ψ2| + 2|ψ3〉〈ψ3| − i|ψ1〉〈ψ2| + |ψ2〉〈ψ1|
in some basis |ψ1〉, |ψ2〉, |ψ3〉. Calculate T r(T ). The basis is orthonormal.
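A numerical cross-check is easy here: in an orthonormal basis each term c|ψi〉〈ψj| of T simply contributes c to matrix element (i, j), so the matrix and its trace can be tabulated directly. A sketch (my own, not from the text; zero-based indices):

```python
# terms of T = -|psi1><psi1| + |psi2><psi2| + 2|psi3><psi3| - i|psi1><psi2| + |psi2><psi1|,
# encoded as (coefficient, row i, column j) with zero-based indices
terms = [(-1, 0, 0), (1, 1, 1), (2, 2, 2), (-1j, 0, 1), (1, 1, 0)]

M = [[0j] * 3 for _ in range(3)]
for c, i, j in terms:
    M[i][j] += c                      # <psi_i| T |psi_j> picks up c

trace = sum(M[k][k] for k in range(3))   # -1 + 1 + 2 = 2
```

The off-diagonal terms (−i and +1 here) never touch the trace, which the worked solution below confirms.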
SOLUTION
T r(T ) = ∑ 〈ψi|T|ψi〉 = 〈ψ1|T|ψ1〉 + 〈ψ2|T|ψ2〉 + 〈ψ3|T|ψ3〉
Begin by finding the action of T on each of the basis vectors |ψ1〉, |ψ2〉, |ψ3〉:
T|ψ1〉 = (−|ψ1〉〈ψ1| + |ψ2〉〈ψ2| + 2|ψ3〉〈ψ3| − i|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ1〉
= −|ψ1〉〈ψ1|ψ1〉 + |ψ2〉〈ψ2|ψ1〉 + 2|ψ3〉〈ψ3|ψ1〉 − i|ψ1〉〈ψ2|ψ1〉 + |ψ2〉〈ψ1|ψ1〉 = −|ψ1〉 + |ψ2〉
⇒ 〈ψ1|T|ψ1〉 = 〈ψ1|(−|ψ1〉 + |ψ2〉) = −〈ψ1|ψ1〉 + 〈ψ1|ψ2〉 = −1
T|ψ2〉 = (−|ψ1〉〈ψ1| + |ψ2〉〈ψ2| + 2|ψ3〉〈ψ3| − i|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ2〉
= −|ψ1〉〈ψ1|ψ2〉 + |ψ2〉〈ψ2|ψ2〉 + 2|ψ3〉〈ψ3|ψ2〉 − i|ψ1〉〈ψ2|ψ2〉 + |ψ2〉〈ψ1|ψ2〉 = −i|ψ1〉 + |ψ2〉
⇒ 〈ψ2|T|ψ2〉 = 〈ψ2|(−i|ψ1〉 + |ψ2〉) = −i〈ψ2|ψ1〉 + 〈ψ2|ψ2〉 = +1
T|ψ3〉 = (−|ψ1〉〈ψ1| + |ψ2〉〈ψ2| + 2|ψ3〉〈ψ3| − i|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ3〉
= −|ψ1〉〈ψ1|ψ3〉 + |ψ2〉〈ψ2|ψ3〉 + 2|ψ3〉〈ψ3|ψ3〉 − i|ψ1〉〈ψ2|ψ3〉 + |ψ2〉〈ψ1|ψ3〉 = 2|ψ3〉
⇒ 〈ψ3|T|ψ3〉 = 2
Therefore T r(T ) = 〈ψ1|T|ψ1〉 + 〈ψ2|T|ψ2〉 + 〈ψ3|T|ψ3〉 = −1 + 1 + 2 = 2

Now let

A = ( 3 1 1
      0 4 −1
      2 −5 0 )  and  B = ( 1 0 4
                           i 7i 0
                           2 8 −1 )

Find det(A) and det(B).

SOLUTION
det(A) = det ( 3 1 1
               0 4 −1
               2 −5 0 ) = 3 det ( 4 −1
                                  −5 0 ) − det ( 0 −1
                                                 2 0 ) + det ( 0 4
                                                               2 −5 )
= 3[4(0) − (−5)(−1)] − [0(0) − 2(−1)] + [0(−5) − 2(4)]
= 3(−5) − 2 − 8 = −15 − 10 = −25
Repeating the procedure for B:
det(B) = det ( 1 0 4
               i 7i 0
               2 8 −1 ) = det ( 7i 0
                                8 −1 ) + 4 det ( i 7i
                                                 2 8 ) = −7i + 4[8i − 14i] = −31i
Now that we have reviewed how to calculate some basic determinants, we find the eigenvalues for some matrices.

Find the characteristic polynomial and eigenvalues for each of the following matrices:

A = ( 5 3
      2 10 ),  B = ( 7i −1
                     2 6i ),  C = ( 2 0 −1
                                    0 3 1
                                    1 0 4 )

SOLUTION
Starting with the matrix A, we have:
det(A − λI) = det ( ( 5 3
                      2 10 ) − λ ( 1 0
                                   0 1 ) ) = det ( 5 − λ 3
                                                   2 10 − λ ) = (5 − λ)(10 − λ) − 6 = λ² − 15λ + 44
This is the characteristic polynomial.
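For any 2 × 2 matrix, det(A − λI) = λ² − tr(A)λ + det(A), so the coefficients of the characteristic polynomial can be cross-checked from the trace and determinant alone. A quick sketch (my own, not from the text):

```python
A = [[5, 3], [2, 10]]

tr = A[0][0] + A[1][1]                         # 15, the coefficient of -lambda
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 50 - 6 = 44, the constant term

# characteristic polynomial: lambda^2 - 15*lambda + 44, matching the expansion above
```

The same trace/determinant shortcut works for any 2 × 2 matrix, including the complex matrix B.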
Setting it equal to zero and solving for λ, we find:
λ² − 15λ + 44 = 0 ⇒ (λ − 11)(λ − 4) = 0
and the eigenvalues are λ1 = 11, λ2 = 4.
Following the same procedure for B, we find:
det(B − λI) = det ( ( 7i −1
                      2 6i ) − ( λ 0
                                 0 λ ) ) = det ( 7i − λ −1
                                                 2 6i − λ ) = (7i − λ)(6i − λ) + 2
so the characteristic polynomial is λ² − 13iλ − 40. Now we set this equal to zero to obtain the eigenvalues:
λ² − 13iλ − 40 = 0
We proceed to solve this equation using the quadratic formula:
λ1,2 = [13i ± √((13i)² − 4(−40))]/2 = [13i ± √(−169 + 160)]/2 = [13i ± √(−9)]/2 = (13i ± 3i)/2
Therefore the eigenvalues of the matrix B are:
λ1 = 8i, λ2 = 5i
Now we obtain the characteristic polynomial and eigenvalues for C:
det(C − λI) = det ( ( 2 0 −1
                      0 3 1
                      1 0 4 ) − λ ( 1 0 0
                                    0 1 0
                                    0 0 1 ) ) = det ( 2 − λ 0 −1
                                                      0 3 − λ 1
                                                      1 0 4 − λ )
= (2 − λ) det ( 3 − λ 1
                0 4 − λ ) − det ( 0 3 − λ
                                  1 0 )
= (2 − λ)[(3 − λ)(4 − λ)] + (3 − λ) = (3 − λ)[(2 − λ)(4 − λ) + 1] = (3 − λ)[λ² − 6λ + 9]
This is the characteristic polynomial for C. We do not carry through the multiplication of (3 − λ) because the equation is in a form that will let us easily find the eigenvalues. Setting it equal to zero,
(3 − λ)[λ² − 6λ + 9] = 0
We see immediately that the first eigenvalue is λ1 = 3. We find the other two eigenvalues by solving:
λ² − 6λ + 9 = 0
This factors immediately into:
(λ − 3)² = 0
Therefore we find that λ2 = λ3 = 3 also. When a matrix or operator has repeated eigenvalues, as in this example, we say that the eigenvalue is degenerate. An eigenvalue that repeats n times is said to be n-fold degenerate.

EXAMPLE 13
In some orthonormal basis an operator T = |ψ1〉〈ψ1| + 2|ψ1〉〈ψ2| + |ψ2〉〈ψ1|. Find the matrix representing T and find its (normalized) eigenvectors and eigenvalues. This vector space is two-dimensional.

SOLUTION
The matrix representing T can be found by calculating

T = ( 〈ψ1|T|ψ1〉 〈ψ1|T|ψ2〉
      〈ψ2|T|ψ1〉 〈ψ2|T|ψ2〉 )

To perform the calculations, the fact that the basis is orthonormal tells us that
〈ψ2|ψ1〉 = 〈ψ1|ψ2〉 = 0, 〈ψ1|ψ1〉 = 〈ψ2|ψ2〉 = 1
Starting with 〈ψ1|T|ψ1〉, we have
〈ψ1|T|ψ1〉 = 〈ψ1|(|ψ1〉〈ψ1| + 2|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ1〉 = 〈ψ1|ψ1〉〈ψ1|ψ1〉 + 2〈ψ1|ψ1〉〈ψ2|ψ1〉 + 〈ψ1|ψ2〉〈ψ1|ψ1〉 = 1
〈ψ1|T|ψ2〉 = 〈ψ1|(|ψ1〉〈ψ1| + 2|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ2〉 = 〈ψ1|ψ1〉〈ψ1|ψ2〉 + 2〈ψ1|ψ1〉〈ψ2|ψ2〉 + 〈ψ1|ψ2〉〈ψ1|ψ2〉 = 2
〈ψ2|T|ψ1〉 = 〈ψ2|(|ψ1〉〈ψ1| + 2|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ1〉 = 〈ψ2|ψ1〉〈ψ1|ψ1〉 + 2〈ψ2|ψ1〉〈ψ2|ψ1〉 + 〈ψ2|ψ2〉〈ψ1|ψ1〉 = 1
〈ψ2|T|ψ2〉 = 〈ψ2|(|ψ1〉〈ψ1| + 2|ψ1〉〈ψ2| + |ψ2〉〈ψ1|)|ψ2〉 = 〈ψ2|ψ1〉〈ψ1|ψ2〉 + 2〈ψ2|ψ1〉〈ψ2|ψ2〉 + 〈ψ2|ψ2〉〈ψ1|ψ2〉 = 0

⇒ T = ( 1 2
        1 0 )

in the basis {|ψ1〉, |ψ2〉}. We now solve det(T − λI) = 0 to find the eigenvalues of T:
0 = det ( ( 1 2
            1 0 ) − λ ( 1 0
                        0 1 ) ) = det ( 1 − λ 2
                                        1 −λ ) = (1 − λ)(−λ) − 2, ⇒ λ² − λ − 2 = 0
This leads to λ1 = 2 and λ2 = −1. Starting with λ1, we find the eigenvectors, which we label {|a1〉, |a2〉}. Let
|a1〉 = ( a
         b )
where a, b are undetermined constants that may be complex.
T|a1〉 = λ1|a1〉 ⇒ ( 1 2
                    1 0 ) ( a
                            b ) = 2 ( a
                                      b )
⇒ a + 2b = 2a, or a = 2b. We can then write:
|a1〉 = ( 2b
         b )

Using the given matrix representation of the Hamiltonian, we have:
H|Ψ〉 = ( ω1 ω2
          ω2 ω1 ) ( α(t)
                    β(t) ) = ( ω1α(t) + ω2β(t)
                               ω2α(t) + ω1β(t) )
The other side of the equation is
ih̄ ∂|Ψ〉/∂t = ih̄ ( α̇
                    β̇ )
where the dot indicates a time derivative. Setting both sides equal to each other leads to the following system:
( ω1α(t) + ω2β(t)
  ω2α(t) + ω1β(t) ) = ih̄ ( α̇
                            β̇ )
which gives these two equations:
ω1α + ω2β = ih̄α̇
ω2α + ω1β = ih̄β̇
Adding these equations, we obtain
(ω1 + ω2)(α + β) = ih̄(α̇ + β̇)
It would seem we have a complicated mess. But we can simplify things quite a bit by defining a new function that we call γ = α + β. Then this is a simple differential equation:
(ω1 + ω2)γ = ih̄ dγ/dt
with solution
γ = C exp ( (ω1 + ω2)t / (ih̄) )
We now repeat the procedure.
This time we subtract, giving (ω1 − ω2)(α − β) = ih̄(α̇ − β̇) Now let δ = α − β . This gives (ω1 − ω2)δ = ih̄ dδ dt , ⇒ δ = D exp ( ω1 − ω2 ih̄ t ) Now, α = γ + δ 2 ⇒ α = 1 2 [C exp ( ω1 + ω2 ih̄ t) + D exp ( ω1 − ω2 ih̄ t)] Recalling that the initial condition is |%(0)〉 = |0〉 = ( 1 0 ) with |%(t)〉 = ( α(t) β(t) ) , this implies that α(0) = 1. Therefore: α(0) = 1 = C + D 2 The initial condition also tells us that β(0) = 0. Using β = γ − δ 2 , this leads to the equation 0 = C − D 2 , ⇒ C = D Putting this together with the condition 1 = C + D 2 , we obtain C = D = 1. Substitution of C, D into the equation for α gives α(t) = 1 2 [ C exp ( ω1 + ω2 ih̄ t ) + D exp ( ω1 − ω2 ih̄ t )] = 1 2 [ exp ( ω1 + ω2 ih̄ t ) + exp ( ω1 − ω2 ih̄ t )] Pulling out the common multiplier exp ( ω1 ih̄ t ) , we write this as: α = 1 2 [ exp ( ω1 + ω2 ih̄ t ) + exp ( ω1 − ω2 ih̄ t )] = exp ( ω1t ih̄ ) 1 2 [ exp ( ω2 ih̄ t ) + exp ( − ω2 ih̄ t )] = exp ( −i ω1t h̄ ) 1 2 [ exp ( i ω2 h̄ t ) + exp ( −i ω2 h̄ t )] = exp ( −i ω1t h̄ ) cos ( ω2t h̄ ) A similar procedure applied to β (work it out for yourself) leads to β = ie −i ω1t h̄ sin ( ω2t h̄ ) , and therefore the state at time t is: |" (t)〉 = ( α β ) = e −i ω1 h̄ t   cos ( ω2 h̄ t ) −i sin ( ω2 h̄ t )   T and verify that it is equal to AT + BT A =   6 −1 3 4i 5 −2   B =   7 2 8 1 0 3   SOLUTION First we form A + B , which we do by adding the individual elements of both matrices: A + B =   6 + 7 −1 + 2 3 + 8 4i + 1 5 + 0 −2 + 3     13 1 11 1 + 4i 5 1   Now we compute the transpose: (A + B)T = ( 13 11 5 1 1 + 4i 1 ) EXAMPLE 15 For the given matrices A and B , find (A + B) Next, let’s write the transpose of each individual matrix: AT =   6 −1 3 4i 5 −2   = ( 6 3 5 −1 4i −2 ) BT =   7 2 8 1 0 3   T = ( 7 8 0 2 1 3 ) Finally, we form their sum: AT +BT = ( 6 + 7 3 + 8 5 + 0 −1 + 2 4i + 1 −2 + 3 ) = ( 13 11 5 1 1 + 4i 1 ) = (A + B)T A =   0 i 2i −i 0 2i 2i 7i 0   SOLUTION First we apply Step 1, and write down the 
transpose by exchanging rows and columns: A T =   0 i 2i −i 0 2i 2i 7i 0   T Now apply Step 2, forming the complex conjugate of each element by letting i → −i : A † =   0 −i 2i i 0 7i 2i 2i 0   ∗ =   0 i −2i −i 0 −7i −2i −2i 0   The Hermitian conjugate of a column vector is a row of vectors with each component replaced by its complex conjugate, as this next example shows. ! = ( 2 −3i 6 ) SOLUTION Taking the transpose gives a row vector: !T = ( 2 −3i 6 )T = (2 − 3i 6) To find the Hermitian conjugate, take the complex conjugate of each element: !† = (2 − 3i 6)∗ = (2 3i 6) EXAMPLE 16 Find the Hermitian conjugate of the matrix: EXAMPLE 17 Find the Hermitian conjugate of: Now we subtract this equation from the first one. The left side is just zero: 〈a| A |a〉 − 〈a| A |a〉 = 0 On the right side we have λ 〈a| a〉 − λ∗ 〈a| a〉 = ( λ − λ∗ ) 〈a| a〉 Since 〈a| a〉 is not zero, we must have: λ − λ∗ = 0, ⇒ λ = λ∗ Therefore, the eigenvalues of a Hermitian operator are real. Show that B = ( i 0 −2 0 3i 8 2 −8 −7i ) is skew-Hermitian SOLUTION First let’s write out −B : −B = ( −i 0 2 0 −3i −8 −2 8 7i ) Now let’s find B† beginning in the usual way: B T = ( i 0 2 0 3i −8 −2 8 −7i ) Now we complex conjugate each element: B † = ( −i 0 2 0 −3i −8 −2 8 7i ) So we see that B† = −B , and B is skew-Hermitian. We now consider unitary operators/matrices. Show that U = ( 1 3 − 2 3 i 2 3 i − 2 3 i − 1 3 − 2 3 i ) is a unitary matrix and verify that the rows of U form an orthonormal set. 
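Checks like this are mechanical to automate. Below is a minimal pure-Python sketch (my own illustration, not from the text) that forms U† and verifies UU† = I, which is equivalent to the rows being orthonormal:

```python
# the 2x2 matrix U from the example above
U = [[1/3 - 2j/3, 2j/3],
     [-2j/3, -1/3 - 2j/3]]

def dagger(M):
    # Hermitian conjugate: transpose, then complex-conjugate each entry
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P = matmul(U, dagger(U))   # should equal the 2x2 identity, up to rounding
```

Entry (i, j) of UU† is exactly the inner product of row i with row j, so P = I says the rows form an orthonormal set.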
SOLUTION First we compute the Hermitian conjugate of U : U † = ( ( 1 3 − 2 3 i 2 3 i − 2 3 i − 1 3 − 2 3 i )T )∗ = ( 1 3 − 2 3 i − 2 3 i 2 3 i − 1 3 − 2 3 i )∗ = ( 1 3 + 2 3 i + 2 3 i − 2 3 i − 1 3 + 2 3 i ) EXAMPLE 22 EXAMPLE 23 Now we compute UU : UU † = ( 1 3 − 2 3 i 2 3 i − 2 3 i − 1 3 − 2 3 i ) ( 1 3 + 2 3 i + 2 3 i − 2 3 i − 1 3 + 2 3 i ) = ( [( 1 3 − 2 3 i ) ( 1 3 + 2 3 i ) + ( 2 3 i ) ( − 2 3 i )] [( 1 3 − 2 3 i ) ( 2 3 i ) + ( 2 3 i ) ( − 1 3 + 2 3 i )] [( − 2 3 i ) ( 1 3 + 2 3 i ) + ( − 1 3 − 2 3 i ) ( − 2 3 i )] [( − 2 3 i ) ( 2 3 i ) + ( − 1 3 − 2 3 i ) ( − 1 3 + 2 3 i )] ) = ( [ 5 9 + 4 9 ] [ 2 9 i + 4 9 − 2 9 i − 4 9 ] [ − 2 9 i + 4 9 + 2 9 i − 4 9 ] [ 4 9 + 5 9 ] ) = ( 1 0 0 1 ) = 1 ⇒ U is unitary Now let’s verify that the rows of U form an orthonormal set. First we compute the dot product of the first row with itself: ( 1 3 − 2 3 i, 2 3 i ) · ( 1 3 − 2 3 i, 2 3 i ) = ( 1 9 + 4 9 ) + 4 9 = 1 Now the first row with the second row: ( 1 3 − 2 3 i, 2 3 i ) · ( − 2 3 i, − 1 3 − 2 3 i ) = ( 2 9 i + 4 9 ) + ( − 2 9 i − 4 9 ) = 0 Finally, the second row with itself: ( − 2 3 i, − 1 3 − 2 3 i ) · ( − 2 3 −, − 1 3 − 2 3 i ) = 4 9 + ( 1 9 + 4 9 ) = 1 Note: on the Eigenvalues of a Unitary Matrix: The eigenvalues of a unitary matrix have unit magnitude, that is, |an| 2 = 1, where an is an eigenvalue of a unitary matrix. U =   1 √ 2 1 √ 2 i 0 − 1 √ 2 i 1 √ 2 i 0 0 0 i   is unitary and that its eigenvalues have unit magnitude. 
SOLUTION We compute U†: U T =   1 √ 2 1 √ 2 0 − 1 √ 2 i 1 √ 2 i 0 0 0 i   T =   1 √ 2 − 1 √ 2 i 0 1 √ 2 1 √ 2 i 0 0 0 i   Taking the complex conjugate of each element we find: U † =   1 √ 2 1 √ 2 0 1 √ 2 −1 √ 2 0 0 0 −i   EXAMPLE 24 Show that Therefore: UU† =   1√ 2 1√ 2 0 − 1√ 2 i 1√ 2 i 0 0 0 i     1√ 2 i√ 2 0 1√ 2 −i√ 2 0 0 0 −i   =    ( 1√ 2 ) ( 1√ 2 ) + ( 1√ 2 ) ( 1√ 2 ) ( 1√ 2 ) ( i√ 2 ) + ( −i√ 2 ) ( 1√ 2 ) 0 ( −1√ 2 ) ( 1√ 2 ) + ( 1√ 2 ) ( i√ 2 ) ( −1√ 2 ) ( 1√ 2 ) + ( 1√ 2 ) ( −1√ 2 ) 0 0 0 (i) (−i)    = ( 1 2 + 1 2 1 2 − 1 2 0 − i 2 + i 2 1 2 − 1 2 0 0 0 1 ) = ( 1 0 0 0 1 0 0 0 1 ) = 1 ⇒ U is unitary Now let’s find the eigenvalues: U − λI =   1√ 2 1√ 2 0 − 1√ 2 i 1√ 2 i 0 0 0 i   − λ ( 1 0 0 0 1 0 0 0 1 ) =   1√ 2 − λ 1√ 2 0 − 1√ 2 1√ 2 − λ 0 0 0 i − λ   det[U − λI ] = det ∣ ∣ ∣ ∣ ∣ ∣   1√ 2 − λ 1√ 2 0 − 1√ 2 1√ 2 − λ 0 0 0 i − λ   ∣ ∣ ∣ ∣ ∣ ∣ = ( 1 √ 2 − λ ) det ( 1√ 2 − λ 0 0 i − λ ) − 1 √ 2 det ( − 1√ 2 0 0 i − λ ) = ( 1 √ 2 − λ ) ( i √ 2 − λ ) (i − λ) − 1 √ 2 ( − 1 √ 2 ) (i − λ) Setting equal to zero, we find that: λ1 = i ⇒ |λ1|2 = (i) (−i) = 1 λ2 = √ 2 − √ 6 4 + i √ 2 + √ 6 4 ⇒ |λ2|2 = (√ 2 − √ 6 4 + i √ 2 + √ 6 4 ) (√ 2 − √ 6 4 − i √ 2 + √ 6 4 ) |!1〉 = ( a b ) = ( a (√ 2 − 1 ) a ) , 〈!1| = ( a∗ (√ 2 − 1 ) a∗ ) 1 = 〈!1 |!1 〉 = ( a∗ (√ 2 − 1 ) a∗ ) ( a (√ 2 − 1 ) a ) = a2 + (√ 2 − 1 )2 a2 = a2 ( 4 − 2 √ 2 ) This leads to: a = 1 √ 4 − 2 √ 2 , b = √ 2 − 1 √ 4 − 2 √ 2 = 1 √ 2 √ 2 |!1〉 =   1√ 4−2 √ 2 1√ 2 √ 2   We now consider the eigenvalue λ = −1 for |!2〉 = ( c d ) 1 √ 2 ( 1 1 1 −1 ) ( c c ) = − ( c d ) ⇒ 1 √ 2 (c + d) − −c, ⇒ d = − ( 1 + √ 2 ) c Normalizing, we find: 1 = 〈!2 |!2 〉 = ( c∗ − ( 1 + √ 2 ) c∗ ) ( c − ( 1 + √ 2 ) c ) = c2 + ( 1 + √ 2 )2 c2 = c2 ( 4 + 2 √ 2 ) c = 1 √ 4 + 2 √ 2 , b = −1 ( 1 + √ 2 ) √ 4 + 2 √ 2 = −1 √ 2 √ 2 |!2〉 =   1√ 4+2 √ 2 −1√ 2 √ 2   Prove that [X̂, P̂ ] = ih̄ SOLUTION We apply the commutator to a test wavefunction, ! 
(x) and recall that X̂ψ(x) = xψ(x) and P̂ψ(x) = −ih̄ ∂ψ/∂x:
[X̂, P̂]ψ(x) = (X̂P̂ − P̂X̂)ψ(x) = X̂P̂ψ(x) − P̂X̂ψ(x)
= X̂ ( −ih̄ ∂ψ/∂x ) + ih̄ ∂/∂x ( X̂ψ(x) )
= −ih̄x ∂ψ/∂x + ih̄ ∂/∂x ( xψ(x) )
= −ih̄x ∂ψ/∂x + ih̄ { (∂x/∂x)ψ(x) + x ∂ψ/∂x }
= −ih̄x ∂ψ/∂x + ih̄ { ψ(x) + x ∂ψ/∂x }
= ih̄ψ(x) − ih̄x ∂ψ/∂x + ih̄x ∂ψ/∂x = ih̄ψ(x)
So we conclude that
[X̂, P̂]ψ(x) = ih̄ψ(x), ⇒ [X̂, P̂] = ih̄

Next we show that [A, BC] = [A, B]C + B[A, C].

SOLUTION
We have: [A, BC] = A(BC) − (BC)A. Notice that B[A, C] = B(AC − CA) = BAC − BCA. We have the second term in this expression, but the first, BAC, is missing. So we use 0 = BAC − BAC to add in the missing piece and then rearrange terms:
[A, BC] = A(BC) − (BC)A = A(BC) − (BC)A + BAC − BAC
= ABC − BAC + BAC − BCA
= ABC − BAC + B(AC − CA)
= ABC − BAC + B[A, C]
= (AB − BA)C + B[A, C]
= [A, B]C + B[A, C]

Let A and B be two operators that commute. If A has non-degenerate eigenvalues, show that an eigenvector of A is also an eigenvector of B.

SOLUTION
Since A and B commute, [A, B] = AB − BA = 0 ⇒ AB = BA. Let |a〉 be an eigenvector of A such that A|a〉 = λ|a〉. Then:
AB|a〉 = BA|a〉 = Bλ|a〉 = λ(B|a〉)
Therefore B|a〉 is also an eigenvector of A with eigenvalue λ. Since the eigenvalue is non-degenerate, |a〉 is unique up to a proportionality factor. This implies that:
B|a〉 = ω|a〉
for some ω. Therefore |a〉 is also an eigenvector of B.

Let

A = ( −1 2i 0
      0 4 0
      1 0 1 )  and  B = ( 0 2 i
                          −i 2i 0
                          0 1 4 )

(a) Find tr(A) and tr(B). (b) Find det(A) and det(B). (c) Find the inverse of A. (d) Do A and B commute?
SOLUTION
(a) The trace is the sum of the diagonal elements:
T r(A) = T r ( −1 2i 0
               0 4 0
               1 0 1 ) = −1 + 4 + 1 = 4
T r(B) = T r ( 0 2 i
               −i 2i 0
               0 1 4 ) = 0 + 2i + 4 = 4 + 2i
(b) We begin with det(A):
det(A) = det ( −1 2i 0
               0 4 0
               1 0 1 ) = −det ( 4 0
                                0 1 ) − 2i det ( 0 0
                                                 1 1 ) = −4 − 0 = −4
det(B) = det ( 0 2 i
               −i 2i 0
               0 1 4 ) = −2 det ( −i 0
                                  0 4 ) + i det ( −i 2i
                                                  0 1 ) = −2(−4i) + i(−i) = 8i + 1
(c) Since det(A) = −4 ≠ 0, A has an inverse. We recall that
A⁻¹ = Cᵀ / det(A)
where C is the matrix of cofactors. First we compute C, recalling that cij = (−1)^(i+j) det(Mij), where Mij is the minor obtained by crossing out row i and column j:
M11 = ( 4 0
        0 1 ), det(M11) = 4, ⇒ c11 = (−1)²(4) = 4
M12 = ( 0 0
        1 1 ), det(M12) = 0, ⇒ c12 = 0
M13 = ( 0 4
        1 0 ), det(M13) = −4, ⇒ c13 = (−1)⁴(−4) = −4
M21 = ( 2i 0
        0 1 ), det(M21) = 2i, ⇒ c21 = (−1)³(2i) = −2i
M22 = ( −1 0
        1 1 ), det(M22) = −1, ⇒ c22 = (−1)⁴(−1) = −1
M23 = ( −1 2i
        1 0 ), det(M23) = −2i, ⇒ c23 = (−1)⁵(−2i) = 2i
M31 = ( 2i 0
        4 0 ), det(M31) = 0, ⇒ c31 = 0
M32 = ( −1 0
        0 0 ), det(M32) = 0, ⇒ c32 = 0
M33 = ( −1 2i
        0 4 ), det(M33) = −4, ⇒ c33 = (−1)⁶(−4) = −4
Putting all of this together, C is given by:
C = ( 4 0 −4
      −2i −1 2i
      0 0 −4 ), Cᵀ = ( 4 −2i 0
                       0 −1 0
                       −4 2i −4 )
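The remaining steps — assembling A⁻¹ = Cᵀ/det(A) and answering part (d) — can be verified numerically. A pure-Python sketch (my own completion, with a hypothetical `matmul` helper, not from the text):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[-1, 2j, 0], [0, 4, 0], [1, 0, 1]]
B = [[0, 2, 1j], [-1j, 2j, 0], [0, 1, 4]]

# C^T as computed above; A^{-1} = C^T / det(A) with det(A) = -4
CT = [[4, -2j, 0], [0, -1, 0], [-4, 2j, -4]]
Ainv = [[entry / -4 for entry in row] for row in CT]

check = matmul(A, Ainv)                   # should be the 3x3 identity
commute = matmul(A, B) == matmul(B, A)    # False: AB and BA already differ in entry (0, 0)
```

The check confirms A·A⁻¹ = I, and comparing AB with BA shows the two matrices do not commute.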
† 1 = (a∗ (ia)∗), we obtain: 1 = !†1 · !1 = ( a∗ − ia∗ ) ( a ia ) = a∗(a) + (−ia∗)(ia) = |a|2 + |a|2 = 2 |a|2 ⇒ |a|2 = 1 2 , or a = 1 √ 2 . Therefore b = ia = i 1 √ 2 and we obtain !1 = ( a b ) = ( 1√ 2 i√ 2 ) . We follow the same procedure to find !2 = ( c d ) for λ2 = e−iθ : R!2 = λ2!2 ⇒ ( cos θ sin θ − sin θ cos θ ) ( c d ) = e−iθ ( c d ) ⇒ c cos θ+d sin θ = e−iθc, −c sin θ+d cos θ = e−iθd . Focusing on the second equation, we have: −c sin θ = d ( e−iθ − cos θ ) = d (cos θ − i sin θ − cos θ) = −id sin θ Solving for d in terms of c , we obtain: d = −ic Inserting this into !2 = ( c d ) = ( c −ic ) and normalizing using the same proce- dure as we did before !2 = ( 1√ 2 −i√ 2 ) We form the matrix U from these two eigenvectors. The first column of U is the first eigenvector, and the second column of U is the second eigenvector: U = (!1!2) = ( 1√ 2 1√ 2 i√ 2 −i√ 2 ) = 1 √ 2 ( 1 1 i −i ) ⇒ U† = 1 √ 2 ( 1 −i 1 i ) We check to see that UU† = I : UU† = 1 √ 2 ( 1 1 i −i ) 1 √ 2 ( 1 −i 1 i ) = 1 2 ( 1 1 i −i ) ( 1 −i 1 i ) = 1 2 ( 1(1) + 1(1) i(1) + (−i)(1) ∣ ∣ ∣ ∣ 1(−i) + 1(i) i(−i) + (−i)(i) ) = 1 2 ( 2 0 0 2 ) = ( 1 0 0 1 ) = I Finally we apply the transformation to diagonalize the matrix R : U † RU = 1 √ 2 ( 1 −i 1 i ) ( cos θ sin θ − sin θ cos θ ) 1 √ 2 ( 1 1 i −i ) = 1 2 ( 1 −i 1 i ) ( cos θ sin θ − sin θ cos θ ) ( 1 1 i −i ) = 1 2 ( 1 −i 1 i ) ( cos θ + i sin θ cos θ − i sin θ − sin θ + i cos θ − sin θ − i cos θ ) = 1 2 ( 1 −i 1 i ) ( e iθ e −iθ ie iθ −ie−iθ ) = 1 2 ( e iθ + eiθ e−iθ − e−iθ e iθ − eiθ e−iθ + e−iθ ) = 1 2 ( 2eiθ 0 0 2e−iθ ) = ( e iθ 0 0 e−iθ ) and we see that the diagonal form of the matrix. The elements on the diagonal are the eigenvalues of R .  = 2 |!1〉 〈!1| − i |!1〉 〈!2| + i |!2〉 〈!1| + 2 |!2〉 〈!2| , where |!1〉 and |!2〉 form an orthonormal and complete basis. ( a ) Is  Hermitian? (b ) Find the eigenvalues and eigenvectors of  and show they satisfy the com- pleteness relation. ( c ) Find a unitary transformation that diagonalizes Â. 
SOLUTION ( a ) First we recall the general rule for finding the adjoint of an expression: λ 〈u| ÂB̂ |v〉 → λ∗ 〈v| B̂†Â† |u〉 The Hermitian conjugate operation is linear, so we examine each piece of Â, replacing any scalars by their complex conjugates, turning kets into bras, bras into kets, and then reversing the order of factors. Therefore: (2 |"1〉 〈"1|) † = 2 |"1〉 〈"1| (−i |"1〉 〈"2|) † = i |"2〉 〈"1| (i |"2〉 〈"1|) † = −i |"1〉 〈"2| (2 |"2〉 〈"2|) † = 2 |"2〉 〈"2| Therefore, we have: † = (2 |"1〉 〈"1| − i |"1〉 〈"2| + i |"2〉 〈"1| + 2 |"2〉 〈"2|) † = (2 |"1〉 〈"1|) † + (−i |"1〉 〈"2|) † + (i |"2〉 〈"1|) † + (2 |"2〉 〈"2|) † = 2 |"1〉 〈"1| − i |"1〉 〈"2| + i |"2〉 〈"1| + 2 |"2〉 〈"2| = Â, ⇒ the operator is Hermitian. EXAMPLE 2 Suppose that an operator ( b ) To find the eigenvalues and eigenvectors of Â, we first find the representation of the operator from the given orthonormal basis: 〈!1|Â|!1〉 = 〈!1| ( 2 |!1〉 〈!1| − i |!1〉 〈!2| + i |!2〉 〈!1| + 2 |!2〉 〈!2| ) |!1〉 = 2 〈!1| !1〉 〈!1 |!1 〉 − i 〈!1 |!1 〉 〈!2 |!1 〉 + i 〈!1 |!2 〉 〈!1 |!1 〉 + 2 〈!1 |!2 〉 〈!2 |!1 〉 = 2 〈!1|Â|!2〉 = 〈!1| ( 2 |!1〉 〈!1| − i |!1〉 〈!2| + i |!2〉 〈!1| + 2 |!2〉 〈!2| ) |!2〉 = 2 〈!1 |!1 〉 〈!1 |!2 〉 − i 〈!1 |!1 〉 〈!2 |!2 〉 + i 〈!1 |!2 〉 〈!1 |!2 〉 + 2 〈!1 |!2 〉 〈!2 |!2 〉 = −i 〈!2|Â|!1〉 = 〈!2| ( 2 |!1〉 〈!1| − i |!1〉 〈!2| + i |!2〉 〈!1| + 2 |!2〉 〈!2| ) |!1〉 = 2 〈!2 |!1 〉 〈!1 |!1 〉 − i 〈!2 |!1 〉 〈!2 |!1 〉 + i 〈!2 |!2 〉 〈!1 |!1 〉 + 2 〈!2 |!2 〉 〈!2 |!1 〉 = +i 〈!2|Â|!2〉 = 〈!2| ( 2 |!1〉 〈!1| − i |!1〉 〈!2| + i |!2〉 〈!1| + 2 |!2〉 〈!2| ) |!2〉 = 2 〈!2 |!1 〉 〈!1 |!2 〉 − i 〈!2 |!1 〉 〈!2 |!2 〉 + i 〈!2 |!2 〉 〈!1 |!2 〉 + 2 〈!2 |!2 〉 〈!2 |!2 〉 = 2 We can use these results to write teh matrix representation of A: ⇒  = ( 〈!1|Â|!1〉〈!1|Â|!2〉 〈!2|Â|!1〉〈!2|Â|!2〉 ) = ( 2 −i i 2 ) We find that the eigenvalues are: 0 = det ∣ ∣ ∣  − λI ∣ ∣ ∣ = det ∣ ∣ ∣ ( 2 −i i 2 ) − λ ( 1 0 0 1 ) ∣ ∣ ∣ = det ∣ ∣ ∣ ( 2 − λ −i i 2 − λ ) ∣ ∣ ∣ ⇒ λ2 − 4λ + 3 = 0 This leads to the eigenvalues: λ1,2 = {3, 1} We find the respective eigenvectors: ( 2 −i i 2 ) ( a b ) = 3 
( a b ) This leads to: 2a − ib = 3a ia + 2b = 3b ⇒ b = ia given by: A = |a〉〈a| − i|a〉〈b| + i|b〉〈a| − |b〉〈b| ( a ) Is A a projection operator? (b ) Find the matrix representation of A and Tr(A). ( c ) Find the eigenvalues and eigenvectors of A. SOLUTION ( a ) First we find A†: A† = (|a〉 〈a |−i| a〉 〈b |+i| b〉 〈a |−| b〉 〈b|)† = |a〉 〈a |+i| b〉 〈a |−i| a〉 〈b |−| b〉 〈b| = |a〉 〈a |−i| a〉 〈b |+i| b〉 〈a |−| b〉 〈b| = A ⇒ A is Hermitian. Now: A2 = (|a〉 〈a |−i| a〉 〈b |+i| b〉 〈a |−| b〉 〈b|) (|a〉 〈a |−i| a 〈b |+i| b〉 〈a |−| b〉 〈b|) = |a〉 〈a| (|a〉 〈a|) + |a〉 〈a| (−i |a〉 〈b|) − i |a〉 〈b| (i |b〉 〈a|) − i |a〉 〈b| (− |b〉 〈b|) + i |b〉 〈a| (|a〉 〈a|) + i |b〉 〈a| (−i |a〉 〈b|) − |b〉 〈b| (i |b〉 〈a|) − |b〉 〈b| (− |b〉 〈b|) = |a〉 〈a| − i |a〉 〈b| + |a〉 〈a| + i |a〉 〈b| + |b〉 〈b| + i |b〉 〈a| − i |b〉 〈a| + |b〉 〈b| = 2 |a〉 〈a| + 2 |b〉 〈b| Although A is Hermitian, since A = A2 , A is not a projection operator. ( b ) A = |a〉 〈a| − i |a〉 〈b| + i |b〉 〈a| − |b〉 〈b| and so 〈a |A| a〉 = 〈a| (|a〉 〈a| − i |a〉 〈b| + i |b〉 〈a| − |b〉 〈b|) |a〉 = 〈a |a 〉 〈a |a 〉 − i 〈a |a 〉 〈b |a 〉 + i 〈a |b 〉 〈a |a 〉 − 〈a |b 〉 〈b |a 〉 = 1 Similarly we find that: 〈a |A| b〉 = −i 〈b |A| a〉 = i 〈b |A| b〉 = −1 So the matrix representation of A is: A = ( 〈a |A| a〉 〈a |A| b〉 〈b |A| a〉 〈b |A| b〉 ) = ( 1 −i i −1 ) The trace is the sum of the diagonal elements, and so: T r(A) = 1 − 1 = 0 ( c ) Solving det |A − λI | = 0 we find: EXAMPLE 5 Let {|a〉, |b〉} be an orthonormal two-dimensional basis and let an operator A be 0 = det ∣ ∣ ∣ ( 1 −i i −1 ) − λ ( 1 0 0 1 ) ∣ ∣ ∣ = det ∣ ∣ ∣ ( 1 − λ −i i −1 − λ ) ∣ ∣ ∣ = (1 − λ) (−1 − λ) + i2 This leads to the characteristic equation: −2 + λ2 = 0 So the eigenvalues of A are: λ1,2 = ± √ 2 To find the first eigenvector corresponding to λ = √ 2 we solve: ( l −i i −l ) ( α β ) = √ 2 ( α β ) This leads to: α − iβ = √ 2α, ⇒ β = i (√ 2 − 1 ) α and so we can write: ( α β ) = ( α i (√ 2 − 1 ) α ) Normalizing we find: 1 = ( α∗ −i( √ 2 − 1)α∗ ) ( α −i (√ 2 − 1 ) α ) and so the first eigenvector of A is: |1〉 =   
 1 2 −i (√ 2 − 1 ) 2    We leave it as an exercise to show that the other eigenvector of A is: |2〉 =     1√ 12 −i (√ 2 + 1 ) √ 12     n] = n[A, B]Bn−1 , show that [A, F(B)] = [A, B]F ′(B), where F ′(B) is the ordinary derivative of F with respect to B . SOLUTION This is easy to show by expanding F(B) in a power series: [A, F(B)] = [A, ∞∑ n=0 bnB n] EXAMPLE 6 Given that [A,B Using the fact that [A, B + C] = [A, B] + [A, C], [ A, ∞ ∑ n=0 bnB n ] = ∞ ∑ n=0 bn[A, B n] = ∞ ∑ n=0 bnn[A, B]B n−1 = [A, B] ∞ ∑ n=0 bnnB b−1 Given a power series expansion g(x) = ∑ anx n , then g′(x) = ∑ annx n−1 , and so ∑ ∞ n=0 bnnB n−1 = F ′(B). Therefore we have: [A, F(B)] = [A, B]F ′(B) We can expand any ket in a continuous basis by application of the closure relation: |ψ〉 = 1 |ψ〉 = ∫ |α〉 〈α| ψ〉 dα = ∫ c (α) |α〉 dα Let us assume that this expansion is not unique, so that: |ψ〉 = ∫ c (α) |α〉 dα and |ψ〉 = ∫ d (α) |α〉 dα Subtraction yields: |ψ〉− |ψ〉 = 0 = ∫ c (α) |α〉 dα − ∫ d (α) |α〉 dα = ∫ [c (α) − d (α)] |α〉 dα Since the |α〉 form a basis, orthonormality tells us that 〈 α′ |α 〉 = δ ( α − α′ ) . Now we take the inner product with 〈 α′ ∣ ∣ : 0 = 〈α′| ∫ [c(α) − d(α)]|α〉dα = ∫ [c (α) − d (α)] 〈 α′ |α 〉 dα = ∫ [c (α) − d (α)] δ ( α − α′ ) dα = ∫ [c (α) − d (α)] δ ( ) The sampling property of the δ function tells that this is: ∫ [c (α) − d (α)] δ ( α − α′ ) dα = c ( α′ ) − d ( α′ ) But this is equal to zero. So, we have found that c(α′) = d(α′), therefore the expansion is unique. This generalization to continuous spaces allows us to connect wave and matrix mechanics. States, which in general are kets, can be represented as ordinary func- tions of position or momentum. Operators become mathematical actions on those functions. For example, momentum is represented by: p̂ → −ih̄ d dx EXAMPLE 7 Show that the expansion of a ket in a continuous orthonormal basis |α〉 is unique. 
EXAMPLE 8
Consider the operator Ô = −i d/dϕ acting on functions f(ϕ) defined for 0 ≤ ϕ ≤ 2π and satisfying the boundary condition f(2π) = f(0), with eigenvalue λ > 0.

( a ) Find the normalized eigenfunctions of Ô.
( b ) Find the eigenvalues of Ô, and compute [Ô, ϕ̂].
( c ) Is Ô Hermitian?

SOLUTION
( a ) In a continuous space the eigenvalue equation for an operator Ô can be written as:

Ôf = λf

In this case we have the familiar equation:

−i df/dϕ = λf

Rearranging terms and integrating:

df/f = iλ dϕ ⇒ ln f = iλϕ + C

where C is the constant of integration. Taking the exponential of both sides (and absorbing e^C into a new constant C):

f = Ce^{iλϕ}

To find C, we apply the normalization condition 1 = ∫ f*f dϕ:

1 = ∫₀^{2π} f*f dϕ = ∫₀^{2π} (Ce^{iλϕ})* Ce^{iλϕ} dϕ = ∫₀^{2π} |C|² dϕ = |C|² 2π ⇒ C = 1/√(2π)

A quick check shows that λ is in fact the eigenvalue for Ô:

Ôf = −i d/dϕ [ (1/√(2π)) e^{iλϕ} ] = −i(iλ)(1/√(2π)) e^{iλϕ} = λ (1/√(2π)) e^{iλϕ} = λf

( b ) To find the eigenvalues, we first apply Euler's formula:

f = (1/√(2π)) e^{iλϕ} = (1/√(2π)) [cos(λϕ) + i sin(λϕ)]

The boundary condition at ϕ = 0 is automatically satisfied:

f(0) = (1/√(2π)) [cos(0) + i sin(0)] = 1/√(2π)

At ϕ = 2π we have:

f(2π) = (1/√(2π)) [cos(2πλ) + i sin(2πλ)]

The function f is complex, with real and imaginary parts. For f(2π) = 1/√(2π) to be satisfied, the imaginary part of f, which is given by the sine term, must be zero. Therefore we must have:

sin(2πλ) = 0

But let us set that aside for the moment and focus on the cosine term. To have f(2π) = (1/√(2π)) cos(2πλ) = 1/√(2π), the condition cos(2πλ) = +1 must be met. Notice that:

cos(0) = +1
cos(π) = −1
cos(2π) = +1
cos(3π) = −1
cos(4π) = +1

So the argument of the cosine must be an even integer multiple of π. Since the argument here is 2πλ, this is automatically satisfied for any integer λ. For example:

λ = 0 ⇒ cos[(0)(2π)] = cos(0) = +1
λ = 1 ⇒ cos[(1)(2π)] = cos(2π) = +1
λ = 2 ⇒ cos[(2)(2π)] = cos(4π) = +1
λ = 3 ⇒ cos[(3)(2π)] = cos(6π) = +1

With the further requirement that λ be strictly positive, we omit λ = 0 and find that the eigenvalues of Ô are:

λ = 1, 2, 3, . . .
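These results are easy to verify numerically. The sketch below (assuming numpy is available) checks that f(ϕ) = e^{iλϕ}/√(2π) is normalized over one period, satisfies the periodic boundary condition, and obeys −i df/dϕ = λf, using the eigenvalue λ = 2 as a concrete case:

```python
import numpy as np

lam = 2  # one of the allowed eigenvalues 1, 2, 3, ...
phi = np.linspace(0.0, 2.0 * np.pi, 4001)
dphi = phi[1] - phi[0]
f = np.exp(1j * lam * phi) / np.sqrt(2.0 * np.pi)

# Normalization: the integral of |f|^2 over [0, 2*pi] should equal 1.
# (Left Riemann sum; exact here because |f|^2 is constant.)
norm = np.sum(np.abs(f[:-1]) ** 2) * dphi

# Eigenvalue relation -i df/dphi = lam * f, checked at interior points
# where the central-difference derivative is accurate.
df = np.gradient(f, dphi)
eigen_ok = np.allclose(-1j * df[1:-1], lam * f[1:-1], atol=1e-4)

# Periodic boundary condition f(0) = f(2*pi).
periodic_ok = np.isclose(f[0], f[-1])
print(round(norm, 6), eigen_ok, periodic_ok)
```

A non-integer λ would fail the periodicity check, which is exactly the quantization argument made above.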
We see that the condition sin(2πλ) = 0 is then automatically satisfied:

λ = 1 ⇒ sin[(1)2π] = sin(2π) = 0
λ = 2 ⇒ sin[(2)2π] = sin(4π) = 0
λ = 3 ⇒ sin[(3)2π] = sin(6π) = 0

and so on.

To find [Ô, ϕ̂], we apply the commutator to a test function g. Remember that the operator ϕ̂ acts by multiplication, so ϕ̂g = ϕg:

[Ô, ϕ̂]g = (Ôϕ̂ − ϕ̂Ô)g = Ô(ϕ̂g) − ϕ̂(Ôg)
= −i d/dϕ (ϕg) − ϕ(−i dg/dϕ)
= −i d/dϕ (ϕg) + iϕ dg/dϕ
= −ig − iϕ dg/dϕ + iϕ dg/dϕ = −ig

[Ô, ϕ̂]g = −ig, ⇒ [Ô, ϕ̂] = −i

( c ) To check whether Ô is Hermitian, we need to see if 〈g|Ôf〉 = 〈Ôg|f〉:

〈g|Ôf〉 = ∫₀^{2π} g*(−i df/dϕ) dϕ

Integrating by parts, we find that:

〈g|Ôf〉 = ∫₀^{2π} g*(−i df/dϕ) dϕ = −i g*f |₀^{2π} − ∫₀^{2π} (−i)(dg*/dϕ) f dϕ

Since f(2π) = f(0) and g(2π) = g(0), the boundary term vanishes and we have:

〈g|Ôf〉 = −∫₀^{2π} (−i)(dg*/dϕ) f dϕ = ∫₀^{2π} i (dg*/dϕ) f dϕ = ∫₀^{2π} (−i dg/dϕ)* f dϕ = 〈Ôg|f〉, ⇒ Ô is Hermitian

EXAMPLE 3
An orthonormal basis of eigenstates of a Hamiltonian operator in four dimensions is defined as follows:

H|1〉 = E|1〉, H|2〉 = 2E|2〉, H|3〉 = 3E|3〉, H|4〉 = 4E|4〉

A system is in the state:

|ψ〉 = 3|1〉 + |2〉 − |3〉 + 7|4〉

( a ) If a measurement of the energy is made, what results can be found, and with what probabilities?
( b ) Find the average energy of the system.

SOLUTION
First we check to see if the state is normalized. We have:

〈ψ|ψ〉 = |3|² + |1|² + |−1|² + |7|² = 9 + 1 + 1 + 49 = 60

Therefore it is necessary to normalize the state. The normalized state is found by dividing by √〈ψ|ψ〉. Calling the normalized state |χ〉, we obtain:

|χ〉 = (1/√〈ψ|ψ〉)|ψ〉 = (3/√60)|1〉 + (1/√60)|2〉 − (1/√60)|3〉 + (7/√60)|4〉

Since the state is expanded in the eigenbasis of the Hamiltonian, the only possible measurement values are the eigenvalues of the Hamiltonian, which have been given to us as (E, 2E, 3E, 4E). The probabilities of obtaining each measurement result are found by applying the Born rule, squaring the coefficients of the normalized state.
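The normalization just carried out, and the Born-rule probabilities that follow from it, can be reproduced numerically. A minimal sketch (taking E = 1 as the energy unit, a hypothetical concrete choice):

```python
import numpy as np

# Coefficients of |psi> = 3|1> + |2> - |3> + 7|4> in the eigenbasis of H,
# whose eigenvalues are E, 2E, 3E, 4E (with E = 1 as the energy unit).
coeffs = np.array([3.0, 1.0, -1.0, 7.0])
energies = np.array([1.0, 2.0, 3.0, 4.0])

norm_sq = np.sum(np.abs(coeffs) ** 2)   # <psi|psi> = 60
chi = coeffs / np.sqrt(norm_sq)         # normalized state |chi>

probs = np.abs(chi) ** 2                # Born rule: 9/60, 1/60, 1/60, 49/60
avg_E = np.sum(probs * energies)        # <H> in units of E
print(norm_sq, probs, avg_E)
```

The probabilities sum to one automatically, since dividing by √〈ψ|ψ〉 is exactly what makes the squared coefficients a probability distribution.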
( a ) The probability of obtaining E is:

|3/√60|² = 9/60

The probability of obtaining 2E is:

|1/√60|² = 1/60

The probability of obtaining 3E is:

|−1/√60|² = 1/60

The probability of obtaining 4E is:

|7/√60|² = 49/60

A quick check shows these probabilities add up to one:

9/60 + 1/60 + 1/60 + 49/60 = 60/60 = 1

( b ) We can find the average energy from:

〈H〉 = Σ p(Eᵢ) Eᵢ

where p(Eᵢ) is the probability of obtaining the measurement result Eᵢ. We find that the average energy is:

〈H〉 = E(9/60) + 2E(1/60) + 3E(1/60) + 4E(49/60) = (9 + 2 + 3 + 196)/60 E = 210/60 E = (7/2)E

EXAMPLE 4
Suppose a Hamiltonian operator has a basis |uᵢ〉 such that:

H|u1〉 = E|u1〉
H|u2〉 = E|u2〉
H|u3〉 = 2E|u3〉
H|u4〉 = 4E|u4〉

and the system is in the state:

|ψ〉 = (1/√6)|u1〉 + (1/√2)|u2〉 + (1/2)|u3〉 + (1/√12)|u4〉

( a ) Write the Hamiltonian operator in outer product notation.
( b ) Write the projection operator P_E that projects a state onto the subspace spanned by {|u1〉, |u2〉}.
( c ) The energy is measured. What values can be found, and with what probabilities?
( d ) Suppose the energy is measured and found to be E. What is the state of the system after measurement?

SOLUTION
( a ) Using the spectral decomposition of H, we find:

H = E|u1〉〈u1| + E|u2〉〈u2| + 2E|u3〉〈u3| + 4E|u4〉〈u4|

( b ) The projection operator for the subspace spanned by {|u1〉, |u2〉} is found by summing the individual projection operators for each state:

P_E = |u1〉〈u1| + |u2〉〈u2|

( c ) The reader should verify that the state is normalized. The possible results of measurement are E, 2E, and 4E, corresponding to the eigenvectors {|u1〉, |u2〉}, |u3〉, and |u4〉, respectively.
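The operators from parts (a) and (b) can be written out as explicit 4 × 4 matrices, which makes the projector property easy to confirm. A sketch (with E = 1 and |u1〉, …, |u4〉 taken as the standard basis vectors, both hypothetical concrete choices):

```python
import numpy as np

E = 1.0                 # energy scale, set to 1 for concreteness
u = np.eye(4)           # columns are |u1>, |u2>, |u3>, |u4>

# (a) H = E|u1><u1| + E|u2><u2| + 2E|u3><u3| + 4E|u4><u4|
weights = [E, E, 2 * E, 4 * E]
H = sum(w * np.outer(u[:, i], u[:, i]) for i, w in enumerate(weights))

# (b) Projector onto the degenerate-E subspace spanned by |u1>, |u2>.
P_E = np.outer(u[:, 0], u[:, 0]) + np.outer(u[:, 1], u[:, 1])

print(np.allclose(H, np.diag([1.0, 1.0, 2.0, 4.0])))  # True
print(np.allclose(P_E @ P_E, P_E))                    # True: P_E^2 = P_E
```

Unlike the operator A of Example 5, here P_E is both Hermitian and idempotent, which is exactly the definition of a projection operator.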
To calculate the probability of obtaining the measurement result E, which is degenerate, we sum over the degenerate subspace:

p_E = Σ_{i=1}^{2} |〈uᵢ|ψ〉|² = |〈u1|ψ〉|² + |〈u2|ψ〉|² = (1/√6)² + (1/√2)² = 1/6 + 1/2 = 2/3

The other probabilities are for non-degenerate eigenvalues and can be calculated immediately using the Born rule:

p_2E = |〈u3|ψ〉|² = (1/2)² = 1/4

p_4E = |〈u4|ψ〉|² = (1/√12)² = 1/12

( d ) If the energy is measured and is found to be E, the state after measurement is:

|ψ_after〉 = (1/√(〈ψ|P_E|ψ〉)) P_E|ψ〉

P_E|ψ〉 = (|u1〉〈u1| + |u2〉〈u2|)((1/√6)|u1〉 + (1/√2)|u2〉 + (1/2)|u3〉 + (1/√12)|u4〉)
= (1/√6)|u1〉 + (1/√2)|u2〉

It is easy to show that:

〈ψ|P_E|ψ〉 = (1/√6)² + (1/√2)² = 1/6 + 1/2 = 2/3

and so we have:

1/√(〈ψ|P_E|ψ〉) = √(3/2)

Therefore, the state after measurement is:

|ψ_after〉 = (1/√(〈ψ|P_E|ψ〉)) P_E|ψ〉 = √(3/2) ((1/√6)|u1〉 + (1/√2)|u2〉) = (1/2)|u1〉 + (√3/2)|u2〉

Note: The state of the system cannot be further distinguished; with the information given there is nothing we can do to distinguish |u1〉 from |u2〉.

EXAMPLE 5
Consider the two Pauli matrices Y and Z. Compute the tensor product Y ⊗ Z.

SOLUTION
Recalling that

Y = ( 0  −i )        Z = ( 1   0 )
    ( i   0 ),           ( 0  −1 )

we have

Y ⊗ Z = ( (0)Z   (−i)Z )   =   ( 0   0  −i   0 )
        ( (i)Z    (0)Z )       ( 0   0   0   i )
                               ( i   0   0   0 )
                               ( 0  −i   0   0 )

EXAMPLE 6
Let

|ψ〉 = √(2/3)|+〉 + (1/√3)|−〉

where |+〉 = ( 1 )  and |−〉 = ( 0 )
            ( 0 )            ( 1 ).

Find |ψ〉^⊗2 and |ψ〉^⊗3.
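Tensor products like these reduce, in component form, to the Kronecker product. The sketch below reproduces Y ⊗ Z and builds the two- and three-fold tensor products of the state |ψ〉 from Example 6, checking that they remain normalized (the use of numpy's `kron` is an implementation choice, not from the text):

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# Y (x) Z via the Kronecker product; matches the 4x4 matrix above.
YZ = np.kron(Y, Z)

# |psi> = sqrt(2/3)|+> + (1/sqrt(3))|-> as a column vector.
psi = np.array([np.sqrt(2.0 / 3.0), 1.0 / np.sqrt(3.0)])

psi2 = np.kron(psi, psi)    # |psi> (x) |psi>, 4 components
psi3 = np.kron(psi2, psi)   # |psi>^(x)3, 8 components

print(YZ)
print(np.linalg.norm(psi2), np.linalg.norm(psi3))  # both approximately 1.0
```

Normalization is preserved automatically because the norm of a tensor product of states is the product of the individual norms.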


