Quantum Mechanics: Uncertainty Principle and Operators in Hilbert Space

This document delves into the principles of quantum mechanics, focusing on the uncertainty principle and the role of operators in the physical Hilbert space. It covers the Hermitian operators X̂ and P̂, their matrix elements, and the concept of observables. The document also discusses eigenvalues and eigenvectors, and how measurements affect the state vector of a particle.

Handout 5 from PHYS 560, Northern Illinois University, Fall 2007 (Dated: version printed September 12, 2007)

Brief (well, fairly expanded) notes on concepts and topics from Chapter 4, sections 1, 2, and 3.

Shankar's Chapter Four: The postulates - a general discussion

4.1. THE POSTULATES

• The postulates (be aware of the comparison to the classical equivalents given in the book). Note that some texts have more than four postulates, but they are merely separating them differently.

• The state of a particle is represented by a ket |ψ〉 in physical Hilbert space.

• The variables x and p become Hermitian operators X̂ and P̂.
  – Matrix elements expressed in the eigenbasis of X̂ are 〈x′|X̂|x〉 = x δ(x − x′) and 〈x′|P̂|x〉 = −iℏ δ′(x − x′).
  – Matrix elements expressed in the eigenbasis of P̂ are 〈p′|X̂|p〉 = iℏ δ′(p − p′) and 〈p′|P̂|p〉 = p δ(p − p′).
  – Operators corresponding to classical functions ω(x, p) are the Hermitian operators Ω̂(X̂, P̂) ⇔ ω(x → X̂, p → P̂).
  – Observables are associated with Hermitian operators.

• If a particle is in state |ψ〉, then an observation (measurement) of Ω̂ will yield ONE of its eigenvalues ω. If another particle in state |ψ〉 is measured, that measurement will also yield ONE of the eigenvalues of Ω̂, but not necessarily the same one. The probability of measuring any particular eigenvalue ωi is P(ωi) ∝ |〈ωi|ψ〉|². And, upon the measurement, the state vector (i.e., wave function) describing the particle will no longer be |ψ〉 but will be the |ωi〉 that is Ω̂'s eigenket associated with the eigenvalue ωi.

• State vectors |Ψ〉 obey the Schrödinger equation, where Ĥ is the quantized Hamiltonian operator related to the classical H (with x and p changed to the corresponding operators X̂ and P̂):

  iℏ (d/dt) |Ψ(t)〉 = Ĥ|Ψ(t)〉
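The postulated matrix elements can be checked numerically on a finite grid. A minimal sketch, assuming a uniform grid where X̂ is diagonal and P̂ is built as −iℏ d/dx from a central difference (the grid size and spacing below are illustrative choices, not from the notes):

```python
import numpy as np

hbar = 1.0          # work in units where hbar = 1
N, L = 200, 10.0    # grid points, box size (illustrative values)
dx = L / N
x = np.linspace(-L/2, L/2 - dx, N)

# X operator: diagonal in the position basis, mirroring <x'|X|x> = x delta(x - x')
X = np.diag(x)

# P operator: -i*hbar d/dx via a central difference with periodic wraparound,
# the discrete stand-in for the -i*hbar*delta'(x - x') matrix element
P = np.zeros((N, N), dtype=complex)
for i in range(N):
    P[i, (i + 1) % N] = -1j * hbar / (2 * dx)
    P[i, (i - 1) % N] = +1j * hbar / (2 * dx)

# Both should be Hermitian, as the postulate demands
print(np.allclose(X, X.conj().T))   # True
print(np.allclose(P, P.conj().T))   # True
```

On such a grid [X̂, P̂] only approximates iℏÎ away from the wraparound rows, which is the discrete shadow of the continuum δ′ matrix element.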
4.2. DISCUSSION OF POSTULATES I, II, AND III

• Ponder the implication of the postulates above: Why was it so important from the first chapter to become fluent in expanding a ket in arbitrary basis sets? Also, how does it bear on the whole endeavour that the eigenkets of an operator (assuming non-degenerate eigenvalues) form a complete basis set (and those of a Hermitian operator, a complete orthonormal basis set)?
  – |ψ〉 = Σi |ωi〉〈ωi|ψ〉
  – P(ω) ∝ |〈ω|ψ〉|² = 〈ψ|ω〉〈ω|ψ〉

• An ideal measurement (designed not to disturb the system), by virtue of measuring the quantity described by Ω̂, still means that the final state vector of the particle will be an eigenket of Ω̂. The state changes! (Unless, of course, |ψ〉 is already an eigenket of Ω̂.)
  – Given |ψ〉 prepared in a particular state, the probability that a particular ωi will be measured is given above.
  – The implication that logically follows is that AFTER the first measurement is done, the system is in the state |ωi〉. All subsequent measurements of Ω̂ will give ωi with 100% probability! (Until we alter the state vector by measuring Λ̂.)

• Ponder the above statements to see that they provide a way to 'prepare' a system in a state that is a mixed state.
  – Assume that observables are associated with Λ̂ and Ω̂ and that these operators do not commute. Therefore they do NOT have the same eigenkets, and at least two terms will be non-zero in the following expansion: |λi〉 = Σj |ωj〉〈ωj|λi〉. And vice versa.
  – Start with state |ψ〉. Perhaps we do not know the probabilities; however, upon measuring the parameter associated with Ω̂ we measure a particular ωi.
  – Now, if we were to measure the parameter associated with Λ̂, the system is in a mixed state with respect to the eigenkets |λ〉.

• Ponder the above statements for measurements that correspond to observables with operators that commute. Let [Ω̂, Γ̂] = 0. Then the same eigenkets describe both operators.
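The expansion |ψ〉 = Σi |ωi〉〈ωi|ψ〉 and the measurement probabilities can be made concrete numerically. A minimal sketch, assuming a hypothetical 3-level observable represented by a random Hermitian matrix (not an operator from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical Hermitian observable on a 3-dimensional space
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Omega = (A + A.conj().T) / 2

# Its eigenkets form a complete orthonormal basis
w, V = np.linalg.eigh(Omega)        # columns of V are the |omega_i>

# A normalized state |psi>
psi = np.array([1.0, 1.0j, 0.5])
psi /= np.linalg.norm(psi)

# P(omega_i) = |<omega_i|psi>|^2; probabilities sum to 1 for a normalized |psi>
amps = V.conj().T @ psi
probs = np.abs(amps) ** 2
print(probs.sum())                  # 1.0 (up to rounding)

# Collapse: after measuring omega_0, the state IS the eigenket |omega_0>,
# so a repeat measurement of Omega gives omega_0 with probability 1
psi_after = V[:, 0]
print(np.abs(V[:, 0].conj() @ psi_after) ** 2)   # 1.0
```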
  – As a reminder, I will denote the eigenkets by |ω, γ〉 merely to emphasize that the same eigenket appears in the |ωi〉 and |γi〉 eigenket equations below: different values for the eigenvalues, but the same eigenkets.

    Ω̂|ωi〉 = ωi|ωi〉 ≡ ωi|ωi, γi〉 and Γ̂|γi〉 = γi|γi〉 ≡ γi|ωi, γi〉

  – When a measurement of a system gives a particular ωi, the system is in the ket |ωi〉 ≡ |ωi, γi〉. Whenever subsequent measurements of Ω̂ are performed, 100% of the time that particular ωi is measured. And if Γ̂ is measured, 100% of the time the associated γi is measured.
  – Until things get mixed up again by measuring a quantity associated with a Λ̂ which does not commute with Γ̂ or Ω̂.

• The points above are related to the idea of Incompatible and Compatible Variables: if the associated operators commute, the observables/variables are compatible, and if the operators do not commute, the variables are incompatible. Occasionally there may be some common eigenkets between two operators even though the operators do not commute. (If all eigenkets were the same, then obviously the two operators would commute.) However, this doesn't change the way we would use the postulates.

• The points above are also related to the idea that by measuring, we Collapse the State vector. More on this later.

• Normalization: typically we will want 'normalized' states. Then Σi P(ωi) = 1, and other things will automatically be a bit easier.

• Example: a normalized state |ψ〉 composed of two eigenkets |ω1〉 and |ω2〉:

  |ψ〉 = (α|ω1〉 + β|ω2〉) / (|α|² + |β|²)^(1/2)

Complications

• The operator is degenerate.
  – The projection operator includes those parts of the eigenspace with the degenerate eigenvalue ω. The eigenkets are distinct and different even though the eigenvalues are the same (or at least someone should have gone to the trouble to complete the eigenket basis set):

    Pω = Σ over degeneracies i |ω, i〉〈ω, i|
    P(ω) = Σ over degeneracies i |〈ω, i|ψ〉|²

• Example: consider a particle described by the normalized Gaussian state ψ(x) = (πΔ²)^(−1/4) e^(−(x−a)²/(2Δ²)), valid in the range from x = −∞ to x = +∞.
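The normalization of such a Gaussian can be confirmed by direct quadrature. A minimal sketch, assuming hypothetical values Δ = 1 and a = 0 for the width and center:

```python
import numpy as np

Delta, a = 1.0, 0.0                       # hypothetical width and center
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# psi(x) = (pi*Delta^2)^(-1/4) * exp(-(x - a)^2 / (2*Delta^2))
psi = (np.pi * Delta**2) ** (-0.25) * np.exp(-(x - a) ** 2 / (2 * Delta**2))

# Total probability: integral of |psi(x)|^2 dx over (effectively) the whole line
total = np.sum(psi**2) * dx
print(total)    # 1.0 (up to rounding): the state is normalized
```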
• As a ket |ψ〉, this is the x representation, 〈x|ψ〉. By the way, this is a good function to know along with its normalization; Gaussians pop up everywhere in life.
  – Note: the eigenkets of the X̂ operator are |x〉, and the eigenvalues are x. Thus the eigenvalue spectrum for X̂ is continuous, and P(x) = |〈x|ψ〉|² must be interpreted and used as a probability density. P(x)dx = |〈x|ψ〉|²dx is the probability of measuring the position of a particle between x and x + dx (when the initial state vector of the particle is described by the ket |ψ〉).

• Find ΔX̂ for the particle described by the |ψ〉 noted above. It should equal (〈X̂²〉 − 〈X̂〉²)^(1/2), as also noted earlier.

  〈X̂〉 = ∫ dx 〈ψ|x〉〈x|X̂|ψ〉 = ∫∫ dx dx′ 〈ψ|x〉〈x|X̂|x′〉〈x′|ψ〉
       = ∫∫ dx dx′ 〈ψ|x〉 (x δ(x − x′)) 〈x′|ψ〉 = ∫ dx 〈ψ|x〉 x 〈x|ψ〉
       = ∫ dx ψ*(x) x ψ(x) = ∫ dx |ψ(x)|² x

  (All integrals run from −∞ to +∞.) Do the same for 〈X̂²〉. If one wanted to go through each insertion of identity operators without taking the shortcuts that experienced practitioners eventually recognize, it would eventually give something that looks like this:

  〈ψ|X̂²|ψ〉 = ∫∫∫ dx dx′ dx′′ 〈ψ|x〉〈x|X̂|x′′〉〈x′′|X̂|x′〉〈x′|ψ〉 = ∫ dx |ψ(x)|² x²

The ΔP̂ construction

• Perform it in momentum space. Note that 〈p|ψ〉 = ψ̃(p). We only have a representation in x, ψ(x), but 〈p|ψ〉 = ∫ dx 〈p|x〉〈x|ψ〉. Note that p = ℏk, and thus we already know (Class Notes set 4 regarding 〈k|x〉) that

  〈p|x〉 = e^(−ipx/ℏ)/√(2πℏ) and 〈x|p〉 = e^(+ipx/ℏ)/√(2πℏ).

  So ψ̃(p) and its complex conjugate are related to the Fourier transforms of ψ(x) and ψ*(x). For example,

  〈p|ψ〉 = ∫ dx [e^(−ipx/ℏ)/√(2πℏ)] ψ(x) = ψ̃(p).

  Assuming that we can do those integrals, continue:

  〈ψ|P̂|ψ〉 = ∫∫ dp dp′ 〈ψ|p〉〈p|P̂|p′〉〈p′|ψ〉 = ∫∫ dp dp′ ψ̃*(p) p δ(p − p′) ψ̃(p′) = ∫ dp p |ψ̃(p)|²

• Perform it in position space. Then it would be

  〈ψ|P̂|ψ〉 = ∫∫ dx dx′ 〈ψ|x〉〈x|P̂|x′〉〈x′|ψ〉 = ∫∫ dx dx′ 〈ψ|x〉 (−iℏ δ′(x − x′)) 〈x′|ψ〉
           = −iℏ ∫ dx 〈ψ|x〉 [d/dx 〈x|ψ〉] = −iℏ ∫ dx ψ*(x) [dψ(x)/dx].

  Note: the order stays quite important here.
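The position-space formulas 〈X̂〉 = ∫dx |ψ|² x and 〈X̂²〉 = ∫dx |ψ|² x² reduce these expectation values to ordinary integrals. A minimal numerical sketch, assuming hypothetical values Δ = 1 and a = 2 for the Gaussian's width and center:

```python
import numpy as np

Delta, a = 1.0, 2.0                       # hypothetical width and center
x = np.linspace(a - 10, a + 10, 4001)
dx = x[1] - x[0]

# Normalized Gaussian psi(x) = (pi*Delta^2)^(-1/4) exp(-(x-a)^2 / (2*Delta^2))
psi = (np.pi * Delta**2) ** (-0.25) * np.exp(-(x - a) ** 2 / (2 * Delta**2))

rho = psi**2                              # probability density |psi(x)|^2
avg_x  = np.sum(rho * x) * dx             # <X>   = integral |psi|^2 x dx
avg_x2 = np.sum(rho * x**2) * dx          # <X^2> = integral |psi|^2 x^2 dx
dX = np.sqrt(avg_x2 - avg_x**2)

print(avg_x)      # ~ a = 2
print(dX)         # ~ Delta/sqrt(2) = 0.707...
```

Changing a shifts 〈X̂〉 but leaves ΔX̂ untouched, exactly as the closed-form results below assert.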
It is not equal to −iℏ ∫ dx d(|ψ(x)|²)/dx.

• The final evaluation of 〈P̂〉 = 0 for this particular Gaussian wavefunction ψ(x) is left as an exercise. (Exercise 4.2.2: it is zero for any real wavefunction, i.e., when ψ(x) = ψ*(x).)

• Then for 〈P̂²〉, similar shenanigans can be used, inserting identity operators until all the terms are ones that we know.
  – So 〈P̂²〉 in 'simplest' terms in the position representation:

    〈ψ|P̂²|ψ〉 = ∫∫∫ dx dx′ dx′′ 〈ψ|x〉〈x|P̂|x′′〉〈x′′|P̂|x′〉〈x′|ψ〉 = −ℏ² ∫ dx ψ*(x) d²ψ(x)/dx²

  – Or 〈P̂²〉 in 'simplest' terms in the momentum representation:

    〈ψ|P̂²|ψ〉 = ∫∫∫ dp dp′ dp′′ 〈ψ|p〉〈p|P̂|p′′〉〈p′′|P̂|p′〉〈p′|ψ〉 = ∫ dp |ψ̃(p)|² p²

• Apply these to the normalized ket given at the beginning (the Gaussian). The calculations (exercise for the reader) give the following results. For a different state vector |ψ〉, the final answer will be different! Do not get confused: there is a Δ in the Gaussian equation given. It is just a variable (scalar) denoting the width of the Gaussian. This makes the discussion somewhat confusing, as we are also calculating terms denoted by ΔΩ̂. Those do not mean Δ_gaussian × Ω̂, but just our usual root-mean-square deviation of measurements of Ω̂, i.e., the Uncertainty = ΔΩ̂.

  〈X̂²〉 = Δ²/2 + a² and 〈X̂〉² = a², so ΔX̂ = Δ/√2
  〈P̂²〉 = ℏ²/(2Δ²) and 〈P̂〉 = 0, so ΔP̂ = ℏ/(√2 Δ)

  Thus ΔX̂ · ΔP̂ = ℏ/2

• This is one way to get the Heisenberg Uncertainty principle. It is at its minimum for Gaussian wavepackets; that is, it is an equality and not a ≥. Any other ket will give a value that is larger. It can be proved that the Gaussian is the wave packet that gives the minimum ΔX̂ΔP̂.

Density Matrix

What if, instead of having an ensemble of N systems each prepared in the identical state |ψ〉, one has the much more realistic situation of an ensemble distributed among a set of states |ψi〉 (the distribution more-or-less known)?
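The minimum-uncertainty product ΔX̂·ΔP̂ = ℏ/2 can be checked numerically via 〈P̂²〉 = −ℏ² ∫dx ψ* ψ″, using a finite-difference second derivative. A minimal sketch in units where ℏ = 1, with the hypothetical choice Δ = 1, a = 0:

```python
import numpy as np

hbar, Delta, a = 1.0, 1.0, 0.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

psi = (np.pi * Delta**2) ** (-0.25) * np.exp(-(x - a) ** 2 / (2 * Delta**2))

# Delta-X from the probability density |psi|^2
rho = psi**2
avg_x  = np.sum(rho * x) * dx
avg_x2 = np.sum(rho * x**2) * dx
dX = np.sqrt(avg_x2 - avg_x**2)

# <P^2> = -hbar^2 * integral psi* psi'' dx, psi'' by finite differences
# (<P> = 0 automatically, since this psi is real)
d2psi = np.gradient(np.gradient(psi, dx), dx)
avg_p2 = -hbar**2 * np.sum(psi * d2psi) * dx
dP = np.sqrt(avg_p2)

print(dX * dP)    # ~ hbar/2 = 0.5, the Gaussian minimum
```

Replacing the Gaussian with any other normalized wavepacket should push the printed product above ℏ/2, in line with the inequality.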
How do we determine the expectation values, uncertainties, and general methods of dealing with calculations in this situation?

• Obviously, two kinds of 'averaging' must now happen: the quantum averages calculated as normal, 〈ψi|Ω̂|ψi〉, and then a classical average (mean) calculated weighted over the distribution.
  – The occupancy number ni refers to the number of systems in the distribution that are in the particular state |ψi〉. That will be useful for the classical average.

• For now, we restrict ourselves to those distributions that are ensembles over a set of orthonormal |i〉 states. (That is, each one of the N systems will be in one or another eigenket of a particular Λ̂, i.e., |ψi〉 = |i〉. They could still be 'mixed' with respect to some other observable, |ψi〉 = Σα |α〉〈α|ψi〉.) But they are a set of state kets that are orthonormal to each other.

• How do we describe the ensemble? With the density matrix operator ρ̂. Let ρi = ni/N, the probability of picking out of the N systems a system in state |i〉:

  ρ̂ = Σi ρi |i〉〈i|

  A pure (unmixed) ensemble would be a matrix with all entries 0 except for one (thus picking out the one identical state |ψi〉 for each member of the ensemble).

• How do we determine the quantum-mechanical ensemble average of Ω̂? Calculate each possible expectation value 〈i|Ω̂|i〉, and classically average over how many systems are in i, j, k, etc., since we know the weighting ni/N = ρi.
  – This is denoted by 〈Ω̄〉 (sort of a combination of the 〈 〉 notation denoting an expectation-value calculation and the overbar notation for taking the mean, i.e., classical averaging):

    〈Ω̄〉 = Σi ρi 〈i|Ω̂|i〉

• Ponder the derivation (Equation 4.2.22) for the expectation value of the distribution (and convince yourself that you can repeat the steps), proving

  〈Ω̄〉 = Tr(Ω̂ρ̂)

  This is the expected value that an experimenter would report for the observable Ω̂ if she performed the measurement on the ensemble.
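The two routes to the ensemble average, the explicit weighted sum Σi ρi 〈i|Ω̂|i〉 and the trace formula Tr(Ω̂ρ̂), can be compared directly. A minimal sketch with a hypothetical 2-state ensemble and observable (the occupancies and matrix entries are illustrative):

```python
import numpy as np

# Hypothetical Hermitian observable on a 2-dimensional space
Omega = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# Ensemble: occupancies n_i over the orthonormal states |0>, |1>
n = np.array([30, 70])
p = n / n.sum()                    # rho_i = n_i / N

# Density matrix rho = sum_i rho_i |i><i|  (diagonal in this basis)
rho = np.diag(p)

# Ensemble average two ways: weighted mean of <i|Omega|i>, and Tr(Omega rho)
basis = np.eye(2)
weighted = sum(p[i] * (basis[i] @ Omega @ basis[i]) for i in range(2))
trace_form = np.trace(Omega @ rho)

print(weighted, trace_form)        # both equal 0.3*1 + 0.7*2 = 1.7
```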
• Ponder the derivation (equations following 4.2.22) and convince yourself that you can repeat it. What is the probability, with a particular ensemble, of obtaining a particular measurement ωα (assuming that the experimenter could pull out one system at random and measure it independently)?

  P(ωα) = Tr(Pωα ρ̂)

  Caution: distinguish between the notation for the probability P and the projection operator P.
  – In the derivations, is it necessary that the |i〉 be eigenkets of Ω̂?
  – In any measurement of any particular one system, would the measurement leave that system in an eigenket of Ω̂?
  – When calculating P(ωα), is this the probability of measuring a particular eigenvalue of Ω̂, or the probability of measuring a particular eigenvalue of the Λ̂ for which the kets were an orthonormal basis?

• Very useful properties to memorize and derive:

  ρ̂† = ρ̂ , Tr ρ̂ = 1 , Tr ρ̂² ≤ 1
  ρ̂ = (1/k) Î : for a uniform distribution over k states (i.e., the same number in each state)
  ρ̂² = ρ̂ , Tr ρ̂² = 1 : for pure ensembles

Generalization to more degrees of freedom

• We come back to this and related questions in Chapter 10, Systems of N Degrees of Freedom.

• There is also some interesting related discussion of issues arising from 'quantizing' a system (converting H(x, p) → Ĥ(X̂, P̂)) when trying to work in spherical coordinates. This is in Chapter 7, The Harmonic Oscillator, in the last few pages of section 4 (on page 214 in the second edition, right after exercise 7.4.10 in the 1st and 2nd editions). No wonder we usually make the change when it is 'easy', in cartesian coordinates.

• More degrees of freedom can mean
  – more than one particle, confined in one dimension;
  – one particle, not confined to one dimension (e.g., 2D or 3D. . . );
  – or, of course, both.

• Postulate II changes as follows (note the cartesian-coordinate formulation): corresponding to the N cartesian coordinates x1, . . . , xN of the classical problem, there exist N mutually commuting operators X̂1, . . . , X̂N.
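The purity properties (Tr ρ̂ = 1 always, Tr ρ̂² = 1 only for pure ensembles) and the probability rule P(ωα) = Tr(Pωα ρ̂) can be verified on small matrices. A minimal sketch comparing a uniform two-state mixture ρ̂ = Î/2 with a pure ensemble:

```python
import numpy as np

# Mixed ensemble: uniform over k = 2 states -> rho = I/2
rho_mixed = np.eye(2) / 2
# Pure ensemble: every system in the same state |0>
rho_pure = np.diag([1.0, 0.0])

for rho in (rho_mixed, rho_pure):
    print(np.trace(rho), np.trace(rho @ rho))
# mixed: Tr rho = 1, Tr rho^2 = 0.5 < 1
# pure:  Tr rho = 1, Tr rho^2 = 1 (and rho @ rho equals rho)

# P(omega_alpha) = Tr(P_alpha rho): the projector onto |0> picks out
# the fraction of the ensemble found in that eigenstate
P0 = np.diag([1.0, 0.0])
print(np.trace(P0 @ rho_mixed))    # 0.5
```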
• The eigenbasis is the coordinate basis |x1, x2, · · · , xN〉.
  – Normalization: 〈x1, x2, · · · , xN|x′1, x′2, · · · , x′N〉 = δ(x1 − x′1) δ(x2 − x′2) · · · δ(xN − x′N)
  – Similarly, 〈x1, x2, · · · , xN|ψ〉 = ψ(x1, x2, · · · , xN)
  – 〈x1, x2, · · · , xN|X̂i|ψ〉 = xi ψ(x1, x2, · · · , xN)
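On a grid, the N-coordinate basis is the tensor (Kronecker) product of single-coordinate bases, and each X̂i acts on its own factor. A minimal sketch for N = 2 on a tiny illustrative grid, checking that the operators built this way mutually commute, as Postulate II requires:

```python
import numpy as np

# One-coordinate grid and its position operator
x = np.array([-1.0, 0.0, 1.0])
X = np.diag(x)
I = np.eye(3)

# Two-coordinate operators on the product basis |x1, x2>:
# X1 acts on the first factor, X2 on the second
X1 = np.kron(X, I)
X2 = np.kron(I, X)

# Mutually commuting, so |x1, x2> is a simultaneous eigenbasis
comm = X1 @ X2 - X2 @ X1
print(np.allclose(comm, 0))    # True
```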