Docsity
Introduction to Quantum Field Theory for Mathematicians, Lecture notes of Quantum Mechanics

A set of lecture notes for Math 273 at Stanford University in Fall 2018. The notes cover the basics of quantum field theory (QFT) and its mathematical foundations: the limitations of quantum mechanics, how QFT is supposed to describe phenomena that quantum mechanics cannot, the rigorous meaning of operator-valued distributions, and the relevant Hilbert space. The notes acknowledge that much of QFT lacks rigorous mathematical foundations, but concrete calculations can still be carried out.

Typology: Lecture notes

2017/2018

Uploaded on 05/11/2023

strawberry3
Introduction to Quantum Field Theory for Mathematicians

Lecture notes for Math 273, Stanford, Fall 2018

Sourav Chatterjee
(Based on a forthcoming textbook by Michel Talagrand)

LECTURE 1
Introduction

Date: 9/24/2018
Scribe: Andrea Ottolini

1.1. Preview

This course is intended to be an introduction to quantum field theory for mathematicians. Although quantum mechanics has been successful in explaining many microscopic phenomena which appear to be genuinely random (i.e., the randomness does not stem from a lack of information about the initial condition, but is inherent in the behavior of the particles), it is not a good theory for elementary particles, mainly for two reasons:

• It does not fit well with special relativity, in that the Schrödinger equation is not invariant under Lorentz transformations.
• It does not allow creation or annihilation of particles.

Since in lots of interesting phenomena (e.g., in colliders) particles travel at speeds comparable to the speed of light, and new particles appear after they collide, these aspects have to be taken into account. Quantum field theory (QFT) is supposed to describe these phenomena well, yet its mathematical foundations are shaky or non-existent.

The fundamental objects in quantum field theory are operator-valued distributions. An operator-valued distribution is an abstract object which, when integrated against a test function, yields a linear operator on a Hilbert space instead of a number. For example, we will define operator-valued distributions $a$ and $a^\dagger$ on $\mathbb{R}^3$ which satisfy, for all $p, p' \in \mathbb{R}^3$,

$$[a(p), a(p')] = 0, \qquad [a^\dagger(p), a^\dagger(p')] = 0, \qquad [a(p), a^\dagger(p')] = (2\pi)^3 \delta^{(3)}(p - p')\,1,$$

where $[A, B] = AB - BA$ is the commutator, $\delta^{(3)}$ is the Dirac $\delta$ on $\mathbb{R}^3$, and $1$ denotes the identity operator on an unspecified Hilbert space.
For someone with a traditional training in mathematics, it may not be clear what the above statement means. Yet, physics classes on QFT often begin by introducing these operator-valued distributions as if their meaning is self-evident. One of the first objectives of this course is to give rigorous meanings to $a$ and $a^\dagger$, and to define the relevant Hilbert space. It turns out that the correct Hilbert space is the so-called bosonic Fock space, which we will define.

Using $a$ and $a^\dagger$, physicists then define the massive scalar free field $\varphi$ with mass parameter $m$ as

$$\varphi(t, x) = \int_{\mathbb{R}^3} \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2\omega_p}} \left( e^{-it\omega_p + ix\cdot p}\, a(p) + e^{it\omega_p - ix\cdot p}\, a^\dagger(p) \right),$$

where $\omega_p = \sqrt{m^2 + |p|^2}$. Here $x \cdot p$ is the scalar product of $x$ and $p$, and $|p|$ is the Euclidean norm of $p$. This is an operator-valued distribution defined on spacetime. Again, it is not at all clear what this means, nor what the purpose is. We will give a rigorous meaning to all of these and understand where they come from.

We will then move on to discuss interacting quantum fields, where the Hilbert space is not clear at all, since the Fock space, which does the job for the free field, is not going to work. Still, computations can be carried out, scattering amplitudes can be obtained, and unrigorous QFT leads to remarkably correct predictions for a wide range of phenomena. We will talk about all this and more. In particular, we will talk about $\varphi^4$ theory, one-loop renormalization, and the basics of quantum electrodynamics.

1.2. A note on mathematical rigor

Much of quantum field theory is devoid of any rigorous mathematical foundation. Therefore we have no option but to abandon mathematical rigor for large parts of this course. There will be parts where we will not prove theorems with full rigor, but it will be clear that the proofs can be made mathematically complete if one wishes to do so. These are not the problematic parts.
However, there will be other parts where no one knows how to put things in a mathematically valid way, and they will appear as flights of fancy to a mathematician. Yet, concrete calculations yielding actual numbers can be carried out in these fanciful settings, and we will go ahead and do so. These situations will be pointed out clearly, and will sometimes be posed as open problems.

1.3. Notation

The following are some basic notations and conventions that we will follow. We will need more notations, which will be introduced in later lectures.

• Throughout these lectures, we will work in units where $\hbar = c = 1$, where $\hbar$ is Planck's constant divided by $2\pi$ and $c$ is the speed of light.
• $\mathcal{H}$ is a separable complex Hilbert space.
• If $a \in \mathbb{C}$, $a^*$ is its complex conjugate.
• The inner product of $f, g \in \mathcal{H}$, denoted by $(f, g)$, is assumed to be antilinear in the first variable and linear in the second. In particular, if $\{e_n\}_{n=1}^\infty$ is an orthonormal basis of $\mathcal{H}$, and if $f = \sum \alpha_n e_n$ and $g = \sum \beta_n e_n$, then $(f, g) = \sum_{n=1}^\infty \alpha_n^* \beta_n$.
• The norm of a state $f$ is denoted by $\|f\|$. A state $f$ is called normalized if $\|f\| = 1$.
• If $A$ is a bounded linear operator on $\mathcal{H}$, $A^\dagger$ denotes its adjoint.
• If $A$ is a bounded linear operator and $A = A^\dagger$, we will say that $A$ is Hermitian. We will later replace this with the more general notion of 'self-adjoint'.
• $\delta$ is the Dirac delta at $0$, and $\delta_x$ is the Dirac delta at $x$. Among the properties of the delta function, we will be interested in the following two:

$$\int_{-\infty}^{\infty} dz\, \delta(x - z)\,\delta(z - y)\,\xi(z) = \delta(x - y)\,\xi(y),$$

$$\delta(x) = \lim_{\varepsilon \to 0} \int_{\mathbb{R}} \frac{dy}{2\pi}\, e^{ixy - \varepsilon y^2} = \frac{1}{2\pi} \int_{\mathbb{R}} dy\, e^{ixy}.$$

• $\hat{f}(p) = \int_{\mathbb{R}} dx\, e^{-ixp} f(x)$ is the Fourier transform of $f$.

Note that some of the definitions are slightly different from the usual mathematical conventions, such as that of the Fourier transform. Usually it is just a difference of sign, but these differences are important to remember.

LECTURE 2
The postulates of quantum mechanics

… with eigenvalues $1, -1$.
If the state of the system is

$$\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} \in \mathbb{C}^2,$$

then

$$\mathrm{Prob}(O = 1) = \frac{|\alpha_1|^2}{|\alpha_1|^2 + |\alpha_2|^2} \qquad \text{and} \qquad \mathrm{Prob}(O = -1) = \frac{|\alpha_2|^2}{|\alpha_1|^2 + |\alpha_2|^2}.$$

2.3. Adjoints of unbounded operators

Definition 2.1. An unbounded operator $A$ on a Hilbert space $\mathcal{H}$ is a linear map from a dense subspace $\mathcal{D}(A)$ into $\mathcal{H}$.

Definition 2.2. An unbounded operator is called symmetric if $(x, Ay) = (Ax, y)$ for all $x, y \in \mathcal{D}(A)$.

Take any unbounded operator $A$ with domain $\mathcal{D}(A)$. We want to define the adjoint $A^\dagger$. We first define $\mathcal{D}(A^\dagger)$ to be the set of all $y \in \mathcal{H}$ such that

$$\sup_{x \in \mathcal{D}(A)} \frac{|(y, Ax)|}{\|x\|} < \infty.$$

Then for $y \in \mathcal{D}(A^\dagger)$ define $A^\dagger y$ as follows. Define a linear functional $\lambda : \mathcal{D}(A) \to \mathbb{C}$ as $\lambda(x) = (y, Ax)$. Since $y \in \mathcal{D}(A^\dagger)$,

$$c := \sup_{x \in \mathcal{D}(A)} \frac{|(y, Ax)|}{\|x\|} < \infty.$$

Thus for all $x, x' \in \mathcal{D}(A)$,

$$|\lambda(x) - \lambda(x')| = |(y, A(x - x'))| \le c\,\|x - x'\|.$$

This implies that $\lambda$ extends to a bounded linear functional on $\mathcal{H}$. Hence there exists a unique $z$ such that $\lambda(x) = (z, x)$. Let $A^\dagger y := z$.

Definition 2.3. A symmetric unbounded operator is called self-adjoint if $\mathcal{D}(A) = \mathcal{D}(A^\dagger)$ and $A^\dagger = A$ on this subspace.

(In practice we only need to verify $\mathcal{D}(A^\dagger) = \mathcal{D}(A)$, since for any symmetric operator, $\mathcal{D}(A) \subseteq \mathcal{D}(A^\dagger)$ and $A^\dagger = A$ on $\mathcal{D}(A)$.)

Definition 2.4. An operator $B$ is called an extension of $A$ if $\mathcal{D}(A) \subseteq \mathcal{D}(B)$ and $A = B$ on $\mathcal{D}(A)$. For example, if $A$ is symmetric then $A^\dagger$ is an extension of $A$.

Definition 2.5. A symmetric operator $A$ is called essentially self-adjoint if it has a unique self-adjoint extension.

2.4. Unitary groups of operators

Definition 2.6. A surjective linear operator $U : \mathcal{H} \to \mathcal{H}$ is called unitary if $\|Ux\| = \|x\|$ for all $x \in \mathcal{H}$.

Definition 2.7. A strongly continuous unitary group $(U(t))_{t \in \mathbb{R}}$ is a collection of unitary operators such that
• $U(s + t) = U(s)U(t)$ for all $s, t \in \mathbb{R}$, and
• for any $x \in \mathcal{H}$ the map $t \mapsto U(t)x$ is continuous.

2.5. Stone's Theorem

There is a one-to-one correspondence between one-parameter strongly continuous unitary groups of operators on $\mathcal{H}$ and self-adjoint operators on $\mathcal{H}$.
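Before stating the correspondence precisely, the easy direction can be sketched numerically in finite dimension (a sketch only, with a made-up Hermitian matrix; Stone's theorem itself concerns unbounded operators on infinite-dimensional $\mathcal{H}$): for a Hermitian matrix $A$, the matrices $e^{itA}$ form a unitary group, and $A$ can be read off from a difference quotient.

```python
import numpy as np

# Sketch (finite-dimensional toy model): for a Hermitian matrix A,
# U(t) = exp(itA) is a unitary group with U(s+t) = U(s)U(t), and the
# generator is recovered as A x = lim_{t->0} (U(t)x - x)/(it).
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, -1.0]])          # A equals its adjoint
w, V = np.linalg.eigh(A)                     # spectral decomposition

def U(t):
    # exp(itA) built from the eigendecomposition A = V diag(w) V^dagger
    return V @ np.diag(np.exp(1j * t * w)) @ V.conj().T

x = np.array([1.0, 2.0j])
h = 1e-6
difference_quotient = (U(h) @ x - x) / (1j * h)   # approximately A x
```

The difference quotient converges to $Ax$ at rate $O(h)$, which is all the finite-dimensional picture can show; the content of the theorem below is that the same correspondence holds for unbounded self-adjoint operators.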
Given $U$, the corresponding self-adjoint operator $A$ is defined as

$$Ax = \lim_{t \to 0} \frac{U(t)x - x}{it},$$

with $\mathcal{D}(A) = \{x : \text{the above limit exists}\}$. (It is conventional to write $U(t) = e^{itA}$.) Conversely, given any self-adjoint operator $A$, there is a strongly continuous unitary group $(U(t))_{t \in \mathbb{R}}$ such that the above relation between $A$ and $U$ is satisfied on the domain of $A$.

2.6. Postulate 5

P5 If the system is not affected by external influences, then its state evolves in time as $\psi_t = U(t)\psi$ for some strongly continuous unitary group $U$ that depends only on the system (and not on the state).

By Stone's theorem there exists a unique self-adjoint operator $H$ such that $U(t) = e^{-itH}$. This $H$ is called the 'Hamiltonian'. The Hamiltonian satisfies

$$\frac{d}{dt} U(t) = -iH\,U(t) = -iH e^{-itH} = -i\,U(t)\,H = -i e^{-itH} H.$$

We will use the above relations extensively in the sequel. Besides the five postulates stated above, there is also a sixth postulate, about collapse of wavefunctions, that we will not discuss (or need) in these lectures.

3.2. A non-relativistic particle in 1-D space

… Thus the probability density of the position is

$$\frac{|\psi(x)|^2}{\int |\psi(z)|^2\, dz}.$$

The 'position eigenstates' are $\delta_x$, $x \in \mathbb{R}$. To make this precise, let us approximate $\delta_x(y)$ by

$$\frac{1}{\sqrt{2\pi\varepsilon}}\, e^{-(y-x)^2/2\varepsilon}$$

as $\varepsilon \to 0$. This state has p.d.f. of the position proportional to $e^{-(y-x)^2/\varepsilon}$. This probability distribution converges to the point mass at $x$ as $\varepsilon \to 0$.

The second observable is the momentum observable. The momentum operator is given by

$$P\psi = -i\frac{d}{dx}\psi, \qquad \text{so notationally,} \quad P = -i\frac{d}{dx}.$$

We may take the domain of $P$ to be $\{\psi \in L^2(\mathbb{R}) : \psi' \in L^2(\mathbb{R})\}$, but then $P$ will not be self-adjoint. However, one can show that $P$ is essentially self-adjoint, that is, there is a unique extension of $P$ to a larger domain where it is self-adjoint.

Using a similar procedure as with the position, one may show that the probability density of the momentum is given by

$$\frac{|\hat\psi(p)|^2}{\|\hat\psi\|_{L^2}^2},$$

where $\hat\psi$ is the Fourier transform of $\psi$.
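As a quick numerical illustration of the Fourier convention $\hat f(p) = \int e^{-ixp} f(x)\,dx$ from Section 1.3 and of why $|\hat\psi(p)|^2$ is a legitimate (unnormalized) density, here is a sketch with an assumed Gaussian state; the Parseval-type identity $\int |\psi|^2\,dx = \frac{1}{2\pi}\int |\hat\psi|^2\,dp$ is checked on a grid.

```python
import numpy as np

# Sketch: the Fourier convention f^(p) = integral of e^{-ixp} f(x) dx,
# applied to the Gaussian psi(x) = e^{-x^2/2}, whose transform is
# sqrt(2 pi) e^{-p^2/2}.  We also check the identity
# integral |psi|^2 dx = (1/2 pi) integral |psi^|^2 dp.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)

def fourier_transform(p):
    # Riemann-sum approximation; psi decays fast, so this is very accurate
    return np.sum(np.exp(-1j * p * x) * psi) * dx

p_grid = np.linspace(-10.0, 10.0, 801)
dp = p_grid[1] - p_grid[0]
psi_hat = np.array([fourier_transform(p) for p in p_grid])

position_norm = np.sum(psi**2) * dx                          # ∫ |psi|^2 dx
momentum_norm = np.sum(np.abs(psi_hat)**2) * dp / (2 * np.pi)
```

Both norms come out equal (to $\sqrt{\pi}$ for this Gaussian), consistent with the momentum density above integrating to one after normalization.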
The (improper) momentum eigenstate for momentum $p$ is $\psi(x) = e^{ipx}$, because $P\psi = p\psi$. Note that $\psi$ is not in $L^2(\mathbb{R})$, but we may approximate $\psi(x) \approx e^{ipx - \varepsilon x^2}$ for $\varepsilon$ small. The Fourier transform of this function is

$$\hat\psi(p') = \int e^{-ip'x} e^{ipx}\, dx = 2\pi\,\delta(p - p'),$$

which is proportional to the Dirac delta at $p$.

LECTURE 4
Time evolution

Date: 10/1/2018
Scribe: Jack Lindsey

4.1. Probability density of momentum

Let us continue our discussion of the 1-D non-relativistic particle that we started in the previous lecture. If the system is in state $\psi$, then we claim that the probability density function of the momentum at $p \in \mathbb{R}$ is

$$\frac{|\hat\psi(p)|^2}{\|\hat\psi\|_{L^2}^2},$$

where $\hat\psi$ is the Fourier transform of $\psi$. Although a complete proof using our version of Postulate 4 takes some work, it is easy to see why this is true from the following sketch. First, observe that

$$P^k\psi = (-i)^k \frac{d^k}{dx^k}\psi.$$

From this it follows that $\widehat{P^k\psi}(p) = p^k \hat\psi(p)$. On the other hand, by Postulate 4, the expected value of the $k$th power of the momentum is

$$\frac{(\psi, P^k\psi)}{\|\psi\|_{L^2}^2}.$$

By Parseval's identity, this equals

$$\frac{(\hat\psi, \widehat{P^k\psi})}{\|\hat\psi\|_{L^2}^2} = \int_{-\infty}^{\infty} dp\, p^k\, \frac{|\hat\psi(p)|^2}{\|\hat\psi\|_{L^2}^2}.$$

This strongly indicates that the p.d.f. of the momentum is proportional to $|\hat\psi(p)|^2$. A complete proof would require that we work with characteristic functions instead of moments.

4.2. The uncertainty principle

Consider the improper state $\psi(x) = \delta_{x_0}(x)$. The position of a particle in this state is fully concentrated at $x_0$. However, $\hat\psi(p) = e^{-ipx_0}$, which means that the momentum of the particle is 'uniformly distributed on the real line' — which does not make mathematical sense, but can be thought of as an idealization of a very spread out probability distribution. On the other …

This result does not work for growing potentials. For instance, it does not cover the case of the simple harmonic oscillator, where $V(x)$ grows quadratically in $x$. For such cases the following result, due to Kato, is helpful.

Theorem 4.3.
The Hamiltonian $H$ is essentially self-adjoint if $V$ is locally $L^2$ and $V(x) \ge -V^*(|x|)$ for some $V^*$ such that $V^*(r) = o(r^2)$ as $r \to \infty$.

(Note that in particular, if $V(x)$ is locally $L^2$ and bounded below by a constant, then it satisfies the condition.)

4.7. Simple harmonic oscillator

Theorem 4.3 says, for example, that $H$ is essentially self-adjoint if $V(x) = \frac{1}{2}m\omega^2 x^2$. This is the potential for a simple harmonic oscillator with frequency $\omega$. Moreover, the corresponding Hamiltonian

$$H = -\frac{1}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2$$

has a complete orthonormal sequence of eigenvectors. For simplicity take $m = \omega = 1$. Then the (orthonormal) eigenvectors are

$$e_n(x) = C_n H_n(x)\, e^{-x^2/2},$$

where $C_n$ is the normalization constant and

$$H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}\left(e^{-x^2}\right)$$

is the '$n$th physicist's Hermite polynomial'.

4.8. Bound states

Note that if $H$ is a Hamiltonian and $\psi$ is an eigenfunction with eigenvalue $\lambda$, then the evolution of this eigenfunction under this Hamiltonian is given by

$$\psi_t = e^{-itH}\psi = e^{-it\lambda}\psi.$$

So the p.d.f. of the position does not change over time. Physicists call this a 'bound state'. This means that if you are in the state, you will not freely move out of it (e.g., the p.d.f. of the position will not become more and more flat). If you have a potential which allows particles to move out freely, then the Hamiltonian cannot have a complete orthonormal sequence of eigenstates as in the previous example.

4.9. Dirac notation

Vectors in a Hilbert space are denoted by $|x\rangle$ (these are called 'ket vectors'). Often, we will write vectors like $|0\rangle, |1\rangle, |2\rangle$, etc. Physicists will say, for example, that $|\lambda\rangle$ is an eigenvector with eigenvalue $\lambda$. Just like mathematicians would write $x_1, x_2, x_3, \ldots$, physicists write $|1\rangle, |2\rangle, |3\rangle, \ldots$. We also have 'bra vectors': $\langle x|$ is the linear functional taking $y \mapsto (x, y)$. With this notation, we have:

• $\langle x|y\rangle$ is the action of $\langle x|$ on $y$, equal to $(x, y)$.
• $\langle \alpha x|y\rangle = \alpha^* \langle x|y\rangle$ and $\langle x|\alpha y\rangle = \alpha \langle x|y\rangle$.
• $(x, Ay)$ is written as $\langle x|A|y\rangle$.
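These conventions map directly onto numerical linear algebra; as a small illustration (a sketch with arbitrary made-up vectors), numpy's `vdot` conjugates its first argument and therefore matches the antilinear-in-the-first-slot inner product used in these notes.

```python
import numpy as np

# Sketch of the bra-ket conventions above: (x, y) is antilinear in the
# first slot, and <x|A|y> = (x, A y).  numpy's vdot conjugates its first
# argument, matching this convention exactly.
x = np.array([1.0 + 1.0j, 2.0])
y = np.array([0.5j, 1.0 - 1.0j])
A = np.array([[1.0, 2.0j], [-2.0j, 3.0]])    # a Hermitian matrix

alpha = 2.0 - 1.0j
bracket = np.vdot(x, y)                      # <x|y> = (x, y)
matrix_element = np.vdot(x, A @ y)           # <x|A|y> = (x, A y)
```

For Hermitian $A$, $(Ax, y) = (x, Ay)$, so the matrix element can be computed from either side.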
Note that $A|y\rangle = Ay$. One of the great uses of this notation is the following. Let $|1\rangle, |2\rangle, |3\rangle, \ldots$ be an orthonormal basis of $\mathcal{H}$. Then $\sum_{i=1}^\infty |i\rangle\langle i| = 1$, meaning that

$$\left(\sum_{i=1}^\infty |i\rangle\langle i|\right)|x\rangle = \sum_{i=1}^\infty |i\rangle\,\langle i|x\rangle = |x\rangle.$$

This is very useful; often one replaces $1$ with such an expression in a derivation. Going even further, on $L^2(\mathbb{R})$ a physicist would write that

$$\frac{1}{2\pi}\int_{-\infty}^{\infty} dp\, |p\rangle\langle p| = 1,$$

where $|p\rangle(x) = e^{ipx}$, which is an improper element of $L^2$. The derivation is as follows:

$$\left(\frac{1}{2\pi}\int_{-\infty}^{\infty} dp\, |p\rangle\langle p|\right)\psi(x) = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty} dp\, |p\rangle\,\langle p|\psi\rangle\right)(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dp\, \hat\psi(p)\, e^{ipx} = \psi(x).$$

… What is $\psi_1 \otimes \cdots \otimes \psi_n$ for $\psi_1, \ldots, \psi_n \in \mathcal{H}$? Take any basis $e_1, e_2, \ldots$. Suppose that $\psi_i = \sum_{j=1}^\infty a_{ij} e_j$. Define

$$\psi_1 \otimes \cdots \otimes \psi_n := \sum_{j_1, \ldots, j_n} a_{1j_1} a_{2j_2} \cdots a_{nj_n}\, e_{j_1} \otimes \cdots \otimes e_{j_n}.$$

This is a basis-dependent map from $\mathcal{H}^n$ into $\mathcal{H}^{\otimes n}_e$. However, we can give a basis-independent definition of $\psi_1 \otimes \cdots \otimes \psi_n$ by observing that the triangle formed by these maps from $\mathcal{H}^n$ into $\mathcal{H}^{\otimes n}_e$ and $\mathcal{H}^{\otimes n}_f$, together with the canonical isomorphism $\mathcal{H}^{\otimes n}_e \to \mathcal{H}^{\otimes n}_f$, commutes. This implies that $(\psi_1, \ldots, \psi_n) \mapsto \psi_1 \otimes \cdots \otimes \psi_n \in \mathcal{H}^{\otimes n}$ is well-defined.

Example 5.2. Suppose $\mathcal{H} = L^2(\mathbb{R})$ and $e_1, e_2, \ldots$ is an orthonormal basis. Then an element of $\mathcal{H}^{\otimes n}_e$ is of the form $\sum \alpha_{i_1\cdots i_n} e_{i_1} \otimes \cdots \otimes e_{i_n}$. We can map this element into $L^2(\mathbb{R}^n)$ as

$$\psi(x_1, \ldots, x_n) = \sum \alpha_{i_1\cdots i_n}\, e_{i_1}(x_1)\, e_{i_2}(x_2) \cdots e_{i_n}(x_n).$$

It is straightforward to check that this map is an isomorphism. If we use a different orthonormal basis $f_1, f_2, \ldots$, then the isomorphic image of this element in $\mathcal{H}^{\otimes n}_f$ also maps to the same function $\psi \in L^2(\mathbb{R}^n)$. If $\psi_1, \ldots, \psi_n \in L^2(\mathbb{R})$ and $\psi = \psi_1 \otimes \cdots \otimes \psi_n$, then $\psi$, as an element of $L^2(\mathbb{R}^n)$, is given by

$$\psi(x_1, \ldots, x_n) = \psi_1(x_1) \cdots \psi_n(x_n).$$

5.4. Time evolution on a tensor product space

Suppose that the state of a single particle evolves according to the unitary group $(U(t))_{t \in \mathbb{R}}$. Then the time evolution on $\mathcal{H}^{\otimes n}$ of $n$ non-interacting particles, also denoted by $U(t)$, is defined as

$$U(t)(\psi_1 \otimes \cdots \otimes \psi_n) := (U(t)\psi_1) \otimes (U(t)\psi_2) \otimes \cdots \otimes (U(t)\psi_n)$$

and extended by linearity.
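In finite dimension this definition can be sketched with Kronecker products standing in for tensor products (a toy model with a made-up single-particle Hamiltonian): the product evolution is again a unitary group, and — as is standard for non-interacting particles — its generator acts as a sum of single-particle Hamiltonians.

```python
import numpy as np

# Toy sketch of time evolution on a tensor product (n = 2): with kron
# standing in for the tensor product, U(t) (x) U(t) is a unitary group
# whose generator is H (x) 1 + 1 (x) H (two non-interacting particles).
H1 = np.array([[1.0, 0.5], [0.5, -1.0]])     # single-particle Hamiltonian
w, V = np.linalg.eigh(H1)

def U(t):
    # single-particle evolution e^{-itH} via the spectral decomposition
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

def U2(t):
    return np.kron(U(t), U(t))               # evolution of the pair

H2 = np.kron(H1, np.eye(2)) + np.kron(np.eye(2), H1)

v = np.array([1.0, 2.0])
u = np.array([0.5, -1.0])
# the generator obeys a Leibniz-type rule on product states:
lhs = H2 @ np.kron(v, u)
rhs = np.kron(H1 @ v, u) + np.kron(v, H1 @ u)
```

The mixed-product property of `kron` is exactly what makes $U(t)\otimes U(t)$ a group and product states evolve as products.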
(It is easy to check that this is well-defined.) Consequently, the Hamiltonian is given by

$$\begin{aligned}
H(\psi_1 \otimes \cdots \otimes \psi_n)
&= -\lim_{t\to 0}\frac{1}{it}\bigl(U(t)(\psi_1 \otimes \cdots \otimes \psi_n) - \psi_1 \otimes \cdots \otimes \psi_n\bigr) \\
&= -\lim_{t\to 0}\frac{1}{it}\bigl(U(t)\psi_1 \otimes U(t)\psi_2 \otimes \cdots \otimes U(t)\psi_n - \psi_1 \otimes \psi_2 \otimes \cdots \otimes \psi_n\bigr) \\
&= -\lim_{t\to 0}\frac{1}{it}\sum_{j=1}^n U(t)\psi_1 \otimes \cdots \otimes U(t)\psi_{j-1} \otimes (U(t)\psi_j - \psi_j) \otimes \psi_{j+1} \otimes \cdots \otimes \psi_n \\
&= \sum_{j=1}^n \psi_1 \otimes \cdots \otimes \psi_{j-1} \otimes H\psi_j \otimes \psi_{j+1} \otimes \cdots \otimes \psi_n.
\end{aligned}$$

5.5. Example of time evolution on a product space

Take $\mathcal{H} = L^2(\mathbb{R})$ and let $U(t)$ be the free evolution group, generated by the Hamiltonian

$$H = -\frac{1}{2m}\frac{d^2}{dx^2}.$$

From the previous lecture we know that $\mathcal{H}^{\otimes n} = L^2(\mathbb{R}^n)$. Moreover, if $\psi = \psi_1 \otimes \cdots \otimes \psi_n$, then as a function, $\psi(x_1, \ldots, x_n) = \psi_1(x_1)\cdots\psi_n(x_n)$. Therefore,

$$H\psi = \sum_{i=1}^n \psi_1(x_1)\cdots\psi_{i-1}(x_{i-1})\left(-\frac{1}{2m}\psi_i''(x_i)\right)\psi_{i+1}(x_{i+1})\cdots\psi_n(x_n) = -\frac{1}{2m}\sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}\bigl(\psi_1(x_1)\cdots\psi_n(x_n)\bigr) = -\frac{1}{2m}\Delta\psi.$$

So by linearity, $H\psi = -\frac{1}{2m}\Delta\psi$ for each $\psi \in \mathcal{D}(\Delta)$, where $\mathcal{D}(\Delta)$ is the domain of the unique self-adjoint extension of the Laplacian.

LECTURE 6
Bosonic Fock space

Date: 10/5/2018
Scribe: Anav Sood

6.1. Bosons

There are two kinds of elementary particles — bosons and fermions. We will deal only with bosons for now. Let $\mathcal{H}$ be the single-particle state space for any given particle that is classified as a boson. The main postulate about bosons is that the state of a system of $n$ such particles is always a member of a certain subspace of $\mathcal{H}^{\otimes n}$, which we denote by $\mathcal{H}^{\otimes n}_{\mathrm{sym}}$ and define below.

Let $e_1, e_2, \ldots$ be an orthonormal basis of $\mathcal{H}$. Define $\mathcal{H}^{\otimes n}_{e,\mathrm{sym}}$ to be the set of all elements of the form $\sum \alpha_{i_1\cdots i_n} e_{i_1} \otimes \cdots \otimes e_{i_n}$ such that

$$\alpha_{i_1\cdots i_n} = \alpha_{i_{\sigma(1)}\cdots i_{\sigma(n)}}$$

for all $i_1, \ldots, i_n$ and all $\sigma \in S_n$, where $S_n$ is the group of all permutations of $1, \ldots, n$. It turns out that this is a basis-independent definition, in the sense that the natural isomorphism between $\mathcal{H}^{\otimes n}_e$ and $\mathcal{H}^{\otimes n}_f$ discussed earlier is also an isomorphism between the corresponding $\mathcal{H}^{\otimes n}_{e,\mathrm{sym}}$ and $\mathcal{H}^{\otimes n}_{f,\mathrm{sym}}$. Thus we can simply refer to $\mathcal{H}^{\otimes n}_{\mathrm{sym}}$.
Moreover, this is a closed subspace and hence a Hilbert space. For example, take $\mathcal{H} = L^2(\mathbb{R})$, so that $\mathcal{H}^{\otimes n} = L^2(\mathbb{R}^n)$. Then it is not hard to show that

$$\mathcal{H}^{\otimes n}_{\mathrm{sym}} = \{\psi \in L^2(\mathbb{R}^n) : \psi(x_1, \ldots, x_n) = \psi(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) \text{ for all } \sigma \in S_n\}.$$

Another important fact is that if $U(t)$ is any evolution on $\mathcal{H}$, then its extension to $\mathcal{H}^{\otimes n}$ maps $\mathcal{H}^{\otimes n}_{\mathrm{sym}}$ into itself.

Next, let us consider the problem of finding an orthonormal basis for $\mathcal{H}^{\otimes n}_{\mathrm{sym}}$, starting with an orthonormal basis $e_1, e_2, \ldots$ of $\mathcal{H}$. Take $m_1, \ldots, m_n$ and consider the vector

$$\sum_{\sigma \in S_n} e_{m_{\sigma(1)}} \otimes \cdots \otimes e_{m_{\sigma(n)}}.$$

This element does not have norm 1, so in an attempt to construct an orthonormal basis we will normalize it. First, for each $i \ge 1$ let

$$n_i = |\{j : m_j = i,\ 1 \le j \le n\}|.$$

LECTURE 7
Creation and annihilation operators

Date: 10/8/2018
Scribe: Casey Chu

7.1. Operator-valued distributions

Fix a basis of $\mathcal{H}$ and for each $k \ge 1$, let $a^\dagger_k : \mathcal{B}_0 \to \mathcal{B}$ be a linear operator defined on basis elements as

$$a^\dagger_k\, |n_1, n_2, \ldots\rangle = \sqrt{n_k + 1}\; |n_1, n_2, \ldots, n_{k-1}, n_k + 1, n_{k+1}, \ldots\rangle.$$

Note that $a^\dagger_k$ maps $\mathcal{H}^{\otimes n}_{\mathrm{sym}}$ into $\mathcal{H}^{\otimes(n+1)}_{\mathrm{sym}}$. Thus it 'creates' a particle, and is therefore called a 'creation operator'. Note, in particular, that

$$a^\dagger_k\, |0\rangle = a^\dagger_k\, |0, 0, \ldots\rangle = |\underbrace{0, \ldots, 0}_{k-1 \text{ zeros}}, 1, \ldots\rangle.$$

Next, for all $k$, let $a_k : \mathcal{B}_0 \to \mathcal{B}$ be a linear operator defined on basis elements as

$$a_k\, |n_1, n_2, \ldots\rangle = \begin{cases} \sqrt{n_k}\; |n_1, n_2, \ldots, n_{k-1}, n_k - 1, n_{k+1}, \ldots\rangle & \text{if } n_k \ge 1, \\ 0 & \text{if } n_k = 0. \end{cases}$$

Again, note that $a_k$ maps $\mathcal{H}^{\otimes n}_{\mathrm{sym}}$ into $\mathcal{H}^{\otimes(n-1)}_{\mathrm{sym}}$ for $n \ge 1$. Thus it 'destroys' a particle, and is therefore called an 'annihilation operator'. Note, in particular, that

$$a_k\, |\underbrace{0, \ldots, 0}_{k-1 \text{ zeros}}, 1, \ldots\rangle = 1 \cdot |0, 0, \ldots\rangle = |0\rangle.$$

Now using these operators we will define operator-valued distributions on $\mathcal{H}$, namely objects which take a function as input and return an operator as output. Take any $f \in \mathcal{H}$ and let $\sum_{k=1}^\infty \alpha_k e_k$ be its expansion in the chosen basis. Define

$$A(f) = \sum_{k=1}^\infty \alpha_k^*\, a_k, \qquad A^\dagger(f) = \sum_{k=1}^\infty \alpha_k\, a^\dagger_k.$$
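For a single mode $k$, the operators $a_k$ and $a^\dagger_k$ have a simple matrix sketch once the occupation number is truncated at some finite $N$ (an artificial cutoff introduced only for this illustration; the true Fock space has no cutoff):

```python
import numpy as np

# Sketch of a_k and a_k^dagger restricted to one mode, truncated at
# occupation number N: in the number basis |0>, |1>, ..., |N>,
# a^dagger |n> = sqrt(n+1) |n+1> and a |n> = sqrt(n) |n-1>.
N = 6
a = np.diag(np.sqrt(np.arange(1.0, N + 1)), k=1)   # annihilation matrix
adag = a.conj().T                                   # creation matrix

number_op = adag @ a          # the number operator diag(0, 1, ..., N)
comm = a @ adag - adag @ a    # identity, except at the truncation level
```

Below the cutoff, `comm` is the identity, which is the single-mode case of the relation $[a_k, a^\dagger_l] = \delta_{kl}\,1$; the defect in the last row and column is purely an artifact of the truncation.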
Note that $A$ and $A^\dagger$ map $\mathcal{H}$ into the set of linear operators from $\mathcal{B}_0$ into $\mathcal{B}$. Although they are defined in a basis-dependent way, we will now show that $A$ and $A^\dagger$ are actually basis-independent.

7.2. Basis-independence of $A$

First, we will show that $A$ is basis-independent. We will show this only for $\mathcal{H} = L^2(\mathbb{R})$, but the proof extends to general $\mathcal{H}$ using isometries between Hilbert spaces. Fix an orthonormal basis $e_1, e_2, \ldots \in \mathcal{H}$. Consider a basis element $|n_1, n_2, \ldots\rangle$ of $\mathcal{B}$, where $\sum_{i=1}^\infty n_i = n$, and hence $|n_1, n_2, \ldots\rangle \in \mathcal{H}^{\otimes n}_{\mathrm{sym}} = L^2(\mathbb{R}^n)_{\mathrm{sym}}$. Let $\varphi$ denote this function, which is a symmetric function of $n$ variables. More explicitly, choose a list of integers $m_1, m_2, \ldots, m_n$ such that for each $i$, $n_i$ counts the number of $i$'s listed. Then

$$\varphi(x_1, \ldots, x_n) = \frac{1}{\sqrt{n!\,\prod_i n_i!}} \sum_{\sigma \in S_n} e_{m_{\sigma(1)}}(x_1) \cdots e_{m_{\sigma(n)}}(x_n).$$

Let $\psi = a_k \varphi$. We know that $\psi = \sqrt{n_k}\, |n_1, n_2, \ldots, n_k - 1, \ldots\rangle \in L^2(\mathbb{R}^{n-1})_{\mathrm{sym}}$, so we may similarly obtain the following explicit expression for $\psi$. Let $m_1', m_2', \ldots, m_{n-1}'$ be integers obtained by removing one $k$ from $m_1, \ldots, m_n$. Then

$$\psi(x_1, \ldots, x_{n-1}) = \frac{\sqrt{n_k}}{\sqrt{(n-1)!\,(n_k - 1)!\,\prod_{i\ne k} n_i!}} \sum_{\sigma \in S_{n-1}} e_{m'_{\sigma(1)}}(x_1) \cdots e_{m'_{\sigma(n-1)}}(x_{n-1}).$$

Now note that

$$\begin{aligned}
\int_{-\infty}^{\infty} e_k(y)^* \sum_{\sigma \in S_n} e_{m_{\sigma(1)}}(x_1) \cdots e_{m_{\sigma(n-1)}}(x_{n-1})\, e_{m_{\sigma(n)}}(y)\, dy
&= \sum_{\sigma \in S_n} e_{m_{\sigma(1)}}(x_1) \cdots e_{m_{\sigma(n-1)}}(x_{n-1}) \int_{-\infty}^{\infty} e_k(y)^*\, e_{m_{\sigma(n)}}(y)\, dy \\
&= \sum_{\substack{\sigma \in S_n \\ m_{\sigma(n)} = k}} e_{m_{\sigma(1)}}(x_1) \cdots e_{m_{\sigma(n-1)}}(x_{n-1}) \\
&= n_k \sum_{\sigma \in S_{n-1}} e_{m'_{\sigma(1)}}(x_1) \cdots e_{m'_{\sigma(n-1)}}(x_{n-1}),
\end{aligned}$$

using the fact that $\int_{-\infty}^{\infty} e_j(y)^* e_k(y)\, dy = \delta_{jk}$ to go from the first line to the second. We recognize that the summation in this final expression is the same as the summation in our explicit expression …

7.4. Commutation relations

To summarize, we have the following two basis-independent expressions for $A(f)$ and $A^\dagger(f)$:

$$(A(f)\varphi)(x_1, \ldots, x_{n-1}) = \sqrt{n}\int_{-\infty}^{\infty} f(y)^*\, \varphi(x_1, \ldots, x_{n-1}, y)\, dy,$$

$$(A^\dagger(f)\varphi)(x_1, \ldots, x_{n+1}) = \frac{1}{\sqrt{n+1}} \sum_{j=1}^{n+1} f(x_j)\, \varphi(x_1, \ldots, \hat{x}_j, \ldots, x_{n+1}),$$

where $\varphi \in L^2(\mathbb{R}^n)_{\mathrm{sym}}$ and $\hat{x}_j$ denotes omission of the $j$th argument.
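As a finite sanity check on these operators (a sketch with two modes and an artificial occupation cutoff $N$; the true Fock space has infinitely many modes and no cutoff), one can assemble $A(f)$ and $A^\dagger(g)$ from mode matrices and watch the commutator act as multiplication by $(f, g)$ on states away from the cutoff:

```python
import numpy as np

# Two-mode truncated sketch: f = sum alpha_k e_k, g = sum beta_k e_k,
# A(f) = sum conj(alpha_k) a_k and A^dagger(g) = sum beta_k a_k^dagger.
# On states below the cutoff, [A(f), A^dagger(g)] acts as (f, g) times 1.
N = 4  # occupation cutoff per mode
a = np.diag(np.sqrt(np.arange(1.0, N + 1)), k=1)
I = np.eye(N + 1)
a1, a2 = np.kron(a, I), np.kron(I, a)        # mode operators on the pair

alpha = np.array([1.0 + 2.0j, 0.5])
beta = np.array([2.0, 1.0 - 1.0j])
A_f = alpha[0].conj() * a1 + alpha[1].conj() * a2
Adag_g = beta[0] * a1.conj().T + beta[1] * a2.conj().T
comm = A_f @ Adag_g - Adag_g @ A_f

inner_fg = np.vdot(alpha, beta)              # (f, g) = sum conj(alpha_k) beta_k
vac = np.zeros((N + 1) ** 2); vac[0] = 1.0   # the vacuum |0, 0>
one_zero = np.zeros((N + 1) ** 2); one_zero[N + 1] = 1.0   # the state |1, 0>
```

Applying `comm` to `vac` or `one_zero` returns the same state scaled by `inner_fg`, matching $[A(f), A^\dagger(g)] = (f, g)\,1$ away from the truncation boundary.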
These expressions allow us to define $a(x)$ and $a^\dagger(x)$, continuous analogues of $a_k$ and $a^\dagger_k$. Heuristically, for each $x \in \mathbb{R}$ we set

$$a(x) = A(\delta_x), \qquad a^\dagger(x) = A^\dagger(\delta_x),$$

meaning that

$$(a(x)\varphi)(x_1, \ldots, x_{n-1}) = \sqrt{n}\, \varphi(x_1, \ldots, x_{n-1}, x),$$

$$(a^\dagger(x)\varphi)(x_1, \ldots, x_{n+1}) = \frac{1}{\sqrt{n+1}} \sum_{j=1}^{n+1} \delta(x - x_j)\, \varphi(x_1, \ldots, \hat{x}_j, \ldots, x_{n+1}).$$

Under these definitions, we see that we may symbolically re-express $A(f)$ and $A^\dagger(f)$ as

$$A(f) = \int dx\, f(x)^*\, a(x), \qquad A^\dagger(f) = \int dx\, f(x)\, a^\dagger(x).$$

(Note that the definitions of $a$ and $a^\dagger$ do not make sense directly, since $L^2$ functions cannot be evaluated pointwise, and the delta function is not in $L^2$. Therefore, to be precise, we must integrate them against test functions, yielding $A$ and $A^\dagger$, respectively.)

With our original $a_k$ and $a^\dagger_k$, it is not hard to verify the commutation relation

$$[a_k, a^\dagger_l] = \delta_{kl}\, 1.$$

To derive the continuous analogue, first let $f = \sum \alpha_k e_k$ and $g = \sum \beta_k e_k$, and recall that

$$[A(f), A^\dagger(g)] = \left[\sum_{k=1}^\infty \alpha_k^* a_k,\ \sum_{l=1}^\infty \beta_l\, a^\dagger_l\right] = \sum_{k,l} \alpha_k^* \beta_l\, [a_k, a^\dagger_l] = \sum_{k,l} \alpha_k^* \beta_l\, \delta_{kl}\, 1 = (f, g)\, 1.$$

But using the symbolic expressions, we have

$$[A(f), A^\dagger(g)] = \int dx\, dy\, f(x)^*\, g(y)\, [a(x), a^\dagger(y)].$$

This gives us the commutation relation

$$[a(x), a^\dagger(y)] = \delta(x - y)\, 1. \tag{7.1}$$

Similarly, we may derive that $[a(x), a(y)] = [a^\dagger(x), a^\dagger(y)] = 0$ for all $x, y \in \mathbb{R}$. Jointly, these are all the commutation relations satisfied by the $a$ and $a^\dagger$ operators.

This is where physics classes start: with 'operators' $a(x)$ and $a^\dagger(x)$ defined at every point in space and satisfying the commutation relations. Instead, we have defined this concept rigorously, using operator-valued distributions.

7.5. Creation and annihilation on a general state space

Let us now extend the definitions of $a$ and $a^\dagger$ to general $\mathcal{H} = L^2(X, d\lambda)$, where $X$ is some measurable space and $\lambda$ is a measure on $X$. First, the notion of a delta function on such a space: it is a distribution that maps a function $f$ to its value at a specific point. That is,

$$\delta_x(f) = f(x) \quad \forall x \in X.$$
Our definitions of $A$ and $A^\dagger$ work for general $\mathcal{H}$, and in particular for $\mathcal{H} = L^2(X, d\lambda)$. We symbolically represent $A$ and $A^\dagger$ as

$$A(f) = \int d\lambda(x)\, f(x)^*\, a(x), \qquad A^\dagger(f) = \int d\lambda(x)\, f(x)\, a^\dagger(x).$$

This yields similar expressions for $a$ and $a^\dagger$:

$$(a(x)\varphi)(x_1, \ldots, x_{n-1}) = \sqrt{n}\, \varphi(x_1, \ldots, x_{n-1}, x),$$

$$(a^\dagger(x)\varphi)(x_1, \ldots, x_{n+1}) = \frac{1}{\sqrt{n+1}} \sum_{j=1}^{n+1} \delta_{x_j}(x)\, \varphi(x_1, \ldots, \hat{x}_j, \ldots, x_{n+1}).$$

LECTURE 8
Time evolution on Fock space

Date: 10/10/2018
Scribe: Andy Tsao

8.1. Defining $a^\dagger(x)a(x)$

Ordinarily, it does not make sense to talk about the product of two distributions. However, creation and annihilation operators can be multiplied in certain situations. We have already seen one example of that in the commutation relations for $a$ and $a^\dagger$. Let us now see one more example. Given any smooth test function $f$, we will show that $\int dx\, f(x)\, a^\dagger(x)\, a(x)$ is a well-defined operator on $\mathcal{B}_0$. Indeed, take any $\varphi \in L^2(\mathbb{R}^n)$, and let $\psi = a(x)\varphi$ and $\xi = a^\dagger(x)\psi$. Then

$$\psi(x_1, \ldots, x_{n-1}) = \sqrt{n}\, \varphi(x_1, \ldots, x_{n-1}, x),$$

and

$$\xi(x_1, \ldots, x_n) = \frac{1}{\sqrt{n}} \sum_{j=1}^n \delta(x - x_j)\, \psi(x_1, \ldots, \hat{x}_j, \ldots, x_n) = \sum_{j=1}^n \delta(x - x_j)\, \varphi(x_1, \ldots, \hat{x}_j, \ldots, x_n, x).$$

Integrating over $x$ gives us

$$\left(\int dx\, f(x)\, \xi\right)(x_1, \ldots, x_n) = \sum_{j=1}^n \int dx\, f(x)\, \delta(x - x_j)\, \varphi(x_1, \ldots, \hat{x}_j, \ldots, x_n, x) = \sum_{j=1}^n f(x_j)\, \varphi(x_1, \ldots, \hat{x}_j, \ldots, x_n, x_j) = \left(\sum_{j=1}^n f(x_j)\right)\varphi(x_1, \ldots, x_n),$$

where the last equality holds because $\varphi$ is symmetric.

8.2. Free evolution on Fock space

We have shown previously that the free evolution Hamiltonian $H$ acts as $H\varphi = -\frac{1}{2m}\Delta\varphi$ for $\varphi \in L^2(\mathbb{R}^n)_{\mathrm{sym}}$. From this it follows by linearity that …

The $n$-fold tensor product of $\mathcal{H}$ is $L^2(\mathbb{R}^n, (2\pi)^{-n}\, dp_1 \cdots dp_n)$. For $\varphi \in \mathcal{H}^{\otimes n}$, the Hamiltonian operator on momentum space acts as

$$H_p \varphi = \left(\frac{1}{2m}\sum_{j=1}^n p_j^2\right)\varphi.$$

Since $H_p$ acts on $\varphi$ by scalar multiplication, its representation using $a$ and $a^\dagger$ is simply

$$H_p = \int \frac{dp}{2\pi}\, \frac{p^2}{2m}\, a^\dagger(p)\, a(p).$$

One can check that $H_p$ and $H_x$ satisfy the relationship $H_p \hat\varphi = \widehat{H_x \varphi}$.
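The relationship $H_p\hat\varphi = \widehat{H_x\varphi}$ can be sketched numerically for one particle (a sketch using a discrete FFT and a finite-difference Laplacian on an assumed Gaussian state, with $m = 1$):

```python
import numpy as np

# Sketch of H_p psi^ = (H_x psi)^ for a single particle with m = 1:
# multiplying the discrete Fourier transform by p^2/2 should match
# transforming -psi''/2 (second derivative by central differences).
N = 2048
x = np.linspace(-20.0, 20.0, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # momentum grid of the DFT

Hx_psi = -0.5 * (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
lhs = (p**2 / 2) * np.fft.fft(psi)           # H_p applied in momentum space
rhs = np.fft.fft(Hx_psi)                     # (H_x psi) transformed
```

The two agree up to the $O(dx^2)$ error of the finite-difference Laplacian, which is the discrete shadow of the exact identity.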
In general we will not distinguish between $H_p$ and $H_x$, and will denote both by $H$ (although they are operators on different spaces). We will use the following notation throughout the rest of the lectures:

$$|p_1, p_2, \ldots, p_n\rangle := a^\dagger(p_1)\, a^\dagger(p_2) \cdots a^\dagger(p_n)\, |0\rangle.$$

Just as in position space, the above state is the state of $n$ non-relativistic bosons in one-dimensional space with momenta exactly equal to $p_1, \ldots, p_n$. Generally, we will be working in momentum space when we move on to developing QFT.

Consider a state $\psi = \sum_{n=0}^\infty \psi_n \in \mathcal{B}$, where $\mathcal{B}$ is the bosonic Fock space associated with the Hilbert space $L^2(\mathbb{R}, dp/2\pi)$. Then note that

$$\langle p_1, p_2, \ldots, p_n | \psi \rangle = \langle p_1, p_2, \ldots, p_n | \psi_n \rangle,$$

since inner products of the form $\langle p_1, p_2, \ldots, p_n | \psi_m \rangle$ are zero when $n \ne m$ (states with different particle numbers are orthogonal by definition in the Fock space). This shows that $|\langle p_1, p_2, \ldots, p_n | \psi \rangle|^2$ is proportional to the joint probability density of the $n$ momenta, conditional on the number of particles being $n$, if the state of the system is $\psi$. Recall that the system has $n$ particles with probability proportional to $\|\psi_n\|^2$.

LECTURE 9
Special relativity

Date: 10/12/2018
Scribe: George Hulsey

9.1. Special relativity notation

We have now finished our preliminary discussion of non-relativistic quantum field theory. We will now move on to incorporate special relativity. As always, we will be adopting units where $\hbar = c = 1$. The following notations and conventions are standard in special relativity, and carry over to quantum field theory.

In special relativity, our universe is represented by the vector space $\mathbb{R}^{1,3}$, which is just $\mathbb{R}^4$ but with a different inner product. We will denote a vector $x \in \mathbb{R}^{1,3}$ without an arrow or bolding. We will write the components as $x = (x^0, x^1, x^2, x^3)$, where $x^0$ is the time coordinate (note that we have used superscripts). Given $x \in \mathbb{R}^{1,3}$, we let $\mathbf{x} = (x^1, x^2, x^3)$ be the 3-tuple of spatial coordinates, which is a vector in $\mathbb{R}^3$. Then $x = (x^0, \mathbf{x})$.
The distinction between $x$ and $\mathbf{x}$ is very important. We now define a symmetric matrix $\eta$ as

$$\eta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}. \tag{9.1}$$

We will write $\eta_{\mu\nu}$ to denote the $(\mu, \nu)$th entry of $\eta$. Here we use subscripts, while we used superscripts to label the coordinates. This matrix serves as a quadratic form defining an inner product on $\mathbb{R}^{1,3}$. Given $x, y \in \mathbb{R}^{1,3}$, the inner product is defined as

$$(x, y) = x^0 y^0 - x^1 y^1 - x^2 y^2 - x^3 y^3 = \sum_{\mu,\nu} \eta_{\mu\nu}\, x^\mu y^\nu.$$

This is the Minkowski inner product. If instead we simply consider $\mathbf{x} \cdot \mathbf{y}$, we get the usual

$$\mathbf{x} \cdot \mathbf{y} = x^1 y^1 + x^2 y^2 + x^3 y^3.$$

This is the regular dot product. So the Minkowski inner product can be written more succinctly as

$$(x, y) = x^0 y^0 - \mathbf{x} \cdot \mathbf{y}.$$

In special relativity it is conventional to write $\mathbf{x}^2$ for $\mathbf{x} \cdot \mathbf{x}$, and

$$x^2 = (x, x) = (x^0)^2 - \mathbf{x} \cdot \mathbf{x}.$$

The Euclidean norm of a 3-vector $\mathbf{x}$ will be denoted by $|\mathbf{x}|$. It is vital to remember and be comfortable with all of the above notations for the rest of this lecture series.

9.2. Lorentz transformations

A Lorentz transformation $L$ on $\mathbb{R}^{1,3}$ is a linear map such that $(Lx, Ly) = (x, y)$ for all $x, y \in \mathbb{R}^{1,3}$ (using the Minkowski inner product). In the language of matrices, this means that

$$L^T \eta\, L = \eta.$$

This is very similar to the orthogonality condition under the usual inner product. Like orthogonal matrices, Lorentz transformations form a Lie group, $O(1,3)$. In analogy with the regular orthogonal groups, we can consider the map $L \mapsto \det(L)$, where the determinant serves as a homomorphism

$$\det : O(1,3) \to \mathbb{Z}_2 = \{-1, 1\}.$$

In the case of the orthogonal groups, quotienting by this homomorphism is sufficient to reduce $O(4)$ to $SO(4)$. However, the group $O(1,3)$ has another homomorphism, $L \mapsto \mathrm{sign}(L^0_{\ 0})$, where $L^0_{\ 0}$ is the top left entry of the matrix. We define the restricted Lorentz group $SO^\uparrow(1,3)$ to be the set of all $L \in O(1,3)$ such that $\det(L) = 1$ and $\mathrm{sign}(L^0_{\ 0}) = 1$. This is a subgroup of $O(1,3)$.

9.3. What is special relativity?
Classical physics is invariant under isometries of $\mathbb{R}^3$, that is, translations and rotations. What does that mean? Suppose you have a computer program simulating physics. You input the state of the system at time 0, and the program gives you the state at time $t$. Suppose a trickster enters the state of the system at time 0, but in a different coordinate system. If your program is properly built, the result it returns for the trickster will (after changing back coordinates) be exactly the same as the result you saw earlier. This is a property of classical physics.

Special relativity claims that the laws of physics remain invariant under restricted Lorentz transformations and spacetime translations of $\mathbb{R}^{1,3}$, in the same sense as above. Now space and time become mixed into a single entity called spacetime, whose coordinates are changed by Lorentz transformations and spacetime translations. The same analogy as before holds. …

LECTURE 10
The mass shell

Date: 10/15/2018
Scribe: Alec Lau

10.1. Time evolution of a relativistic particle

Recall the mass shell

$$X_m = \{p \in \mathbb{R}^{1,3} : p^2 = m^2,\ p^0 \ge 0\}$$

defined in the last lecture. In quantum field theory, we model the behavior of the four-momentum of a particle instead of the classical momentum. The Hilbert space is $\mathcal{H} = L^2(X_m, d\lambda_m)$, where $\lambda_m$ is the unique measure (up to a multiplicative constant) that is invariant under the action of the restricted Lorentz group on $X_m$. We will define $\lambda_m$ later in this lecture. Suppose that this invariant measure exists. How does the state of the free particle evolve? The main postulate is the following:

Postulate. If the four-momentum state of a freely evolving particle at time 0 is $\psi \in L^2(X_m, d\lambda_m)$, then its state at time $t$ is the function $\psi_t$ given by

$$\psi_t(p) = e^{-itp^0}\psi(p).$$

(Recall that $p^0$ is the first coordinate of $p$.) The reader may be wondering how time evolution makes sense in the above manner, when time itself is not a fixed notion in special relativity.
Indeed, the above concept of time evolution does not make sense in the relativistic setting. The above postulate is simply the convenient way to think about what is going on. What is really going on is a bit more complicated. We will talk about it in the next lecture. The above postulate is a direct generalization of free evolution in the non-relativistic setting in R3, since in that case ψt = e−itp2/2mψ, where p is the non-relativistic momentum and p2/2m is the non-relativistic energy. The postulate simply replaces the non-relativistic energy by the relativistic energy p0. Notice that the Hamiltonian for time evolution in the relativistic setting is therefore just Hψ(p) = p0ψ(p). Consequently, ∂/∂t ψt(p) = −ip0ψt(p). While this equation is actually correct, note that there is an obvious conceptual difficulty because the equation gives a special status to time, which is not acceptable in special relativity. We will resolve this difficulty in the next lecture.

10.2. The measure λm

We will now construct the invariant measure on our mass shell in analogy with the unique invariant measure on a sphere with respect to rotations. One way to construct the uniform measure on the sphere is to make a thin annulus around the sphere, take the Lebesgue measure on the annulus, normalize it, and take the width of the annulus to zero. The key to why this works is that a thin annulus is also rotationally invariant, giving a rotationally invariant measure in the limit. One way to define Lorentz invariant annuli is to set Xm,ε = {p : m2 < p2 < (m+ ε)2}, where the square is the Minkowski norm, hence making this annulus Lorentz invariant. Scaling Lebesgue measure on this annulus in a suitable way gives a nontrivial measure as ε → 0. To integrate functions with respect to this measure, we bring our annuli down to R3. Any point p ∈ R3 corresponds to a unique point p ∈ Xm where p = (ωp,p), with ωp = √(m2 + p2).
The thickness of Xm,ε at p is√ (m+ ε)2 + p2 − √ m2 + p2 ≈ εm ωp . From this it is easy to derive that for an appropriate scaling of Lebesgue measure on Xm,ε as ε→ 0, the scaling limit λm satisfies, for any integrable function f on Xm,∫ Xm dλm(p)f(p) = ∫ R3 d3p (2π)3 1 2ωp f(ωp,p). (10.1) Note that constants do not matter because we are free to define our measure up to a constant multiple. The factor 2 in the denominator is conventional. The above integration formula gives a convenient way to integrate on the mass shell. LECTURE 11 The postulates of quantum field theory Date: 10/17/2018 Scribe: Henry Froland 11.1. Changing the postulates We now arrive at a fundamental problem with our picture, which is, ‘what does it mean to say momentum state at a given time?’ This is because Lorentz transforms change the concept of time slices. That is, the notion of two events happening ‘at the same time’ need not remain the same under a change of coordinate system. As we have defined it, ψ evolving as ψt(p) = e−itp 0 ψ(p) is just a convenient way to think of time evolution, but the true picture is more complicated. To go from quantum mechanics to QFT, we need to fundamentally change the postulates to get a picture that is fully consistent with special relativity. Recall that we had five postulates of quantum mechanics that were introduced in the second lecture. The postulates need to be modified as follows. P1 For any quantum system, there exists a Hilbert space H such that the state of the system is described by vectors in H. This looks the same as the original P1, but there is one crucial difference. In the new version, a state is not for a given time, but for all spacetime. In other words, a state gives a complete spacetime description of the system. P2 This postulate is unchanged. P3 This postulate is unchanged. P4 This postulate is unchanged. P5 This postulate is completely different. There is no concept of time evolution of a state in QFT. 
We need some extra preparation before stating this postulate, which we do below. The Poincaré group P is the semi-direct product R1,3 o SO↑(1, 3), where R1,3 is the group of spacetime translations (for x ∈ R1,3, x(y) = x+y). This means P = {(a,A) : a ∈ R1,3, A ∈ SO↑(1, 3)} (11.1) with the group operation defined as (a,A)(b, B) = (a+Ab,AB). This is the group of isometries of the Minkowski spacetime. The action of (a,A) ∈ P on x ∈ R1,3 is defined as (a,A)(x) = a+Ax. 43 LECTURE 12 The massive scalar free field Date: 10/19/2018 Scribe: Manisha Patel 12.1. Creation and annihilation on the mass shell Let H = L2(Xm, dλm), and let B be the bosonic Fock space for this H. Recall the operator-valued distributions A, A† on the Fock space B, and the formal representations A(f) = ∫ dλmf ∗(p)a(p), A†(f) = ∫ dλmf(p)a†(p). These are the usual, basis-independent definitions we have been working with. Recall that (a(p)φ)(p1, . . . , pn) = √ nφ(p1, . . . , pn−1, p), (a†(p)φ)(p1, . . . , pn+1) = 1√ n+ 1 n+1∑ j=1 δpj (p)φ(p1, . . . , p̂j , . . . , pn+1), where δpj is the Dirac delta on Xm at the point pj . Note that we do not write δ(p − pj) because Xm is not a vector space, and so p − pj may not belong to Xm. We define two related operator-valued distributions on B: a(p) = 1√ 2ωp a(p), a†(p) = 1√ 2ωp a†(p). Note that because we are on Xm, the last three coordinates p define the first, so that p = (ωp,p). The following are the commutation relations for the operators defined above, easily derived using the commutation relations for a(p) and a†(p) that we already know from previous discussions: [a(p), a(p′)] = 0, [a†(p), a†(p′)] = 0, [a(p), a†(p′)] = (2π)3δ(3)(p− p′)1. 47 48 12. 
THE MASSIVE SCALAR FREE FIELD For example, to prove the last identity, notice that by (7.1) and (10.1),∫∫ d3p (2π)3 d3p′ (2π)3 f(p)∗g(p′)[a(p), a†(p′)] = ∫∫ d3p (2π)3 d3p′ (2π)3 1√ 4ωpωp′ f(p)∗g(p′)[a(p), a†(p′)] = ∫∫ dλm(p)dλm(p′) √ 4ωpωp′f(p)∗g(p′)[a(p), a†(p′)] = (∫ dλm(p)2ωpf(p)∗g(p) ) 1 = (∫ d3p (2π)3 f(p)∗g(p) ) 1, where 1 denotes the identity operator on L2(Xm, dλm). 12.2. The massive scalar free field The massive scalar free field ϕ is an operator-valued distribution acting on S (R1,3), the space of Schwartz functions on R1,3. The action of ϕ on a Schwartz function f is defined as ϕ(f) = A(f̂∗ ∣∣ Xm ) +A†(f̂ ∣∣ Xm ) where the hat notation denotes the Fourier transform of f . On Minkowski space, the Fourier transform is defined as f̂(p) = ∫ d4x ei(x,p)f(x). Note that there is no minus sign in the exponent because we have the Minkowski inner product, so the minus sign is contained in the space coor- dinates. The free field ϕ is the first real quantum field that we are seeing in this course. Quantum fields are operator-valued distributions. A field is an abstract ‘thing’ that doesn’t exist as an object, but has real, observable effects. For example consider the classical magnetic field. For this field, we assign to each point in space a vector that denotes the direction and strength of the field at that point. Similarly we can also consider fields that put a scalar at each point. These are called classical scalar fields. Quantum mechanics replaces observables with operators, so this is how we arrive at an operator at each point in our spacetime. These fields act on particles by Hamiltonians defined using the fields. 12.3. Multiparticle states We previously defined the state for a system with n particles with four- momenta exactly equal to p1, . . . , pn as |p1, . . . , pn〉 = a†(p1) · · · a†(pn)|0〉. 12.7. EXPRESSING THE HAMILTONIAN USING THE FREE FIELD 51 12.7. Expressing the Hamiltonian using the free field Write x = (t,x), where x = (x1, x2, x3). 
Returning to our expression for ϕ(x) derived previously, where ϕ(x) = ∫ R3 d3p (2π)3 1√ 2ωp ( e−i(x,p)a(p) + ei(x,p)a†(p) ) , we can formally differentiate ϕ(x) with respect to t, x1, x2, and x3, denoted by ∂tϕ, ∂1ϕ, ∂2ϕ, and ∂3ϕ. It can be shown by a simple but slightly tedious calculation that for any t, H0 = 1 2 ∫ R3 d3x : ( (∂tϕ(x))2 + 3∑ ν=1 (∂νϕ(x))2 +m2ϕ(x)2 ) :. LECTURE 13 Introduction to ϕ4 theory Date: 10/22/2018 Scribe: Youngtak Sohn 13.1. Evolution of the massive scalar free field In the last lecture, we have defined the free field ϕ(x) for x ∈ R1,3. Then ϕ(x) is like an operator on B for any x. (More precisely it is an operator when one averages over a test function.) The following proposition describes how the massive free field evolves. Proposition 13.1. For any x ∈ R3 and any t ∈ R, ϕ(t,x) = eitH0ϕ(0,x)e−itH0 , where H0 is the free evolution Hamiltonian. Recall the Heisenberg picture where a Hamiltonian H makes an operator B evolve as Bt = eitHBe−itH . The above proposition says that for any x, ϕ(t,x) evolves according to H0. To prove the proposition, we start with the following lemma. Lemma 13.1. If U is a unitary operator on H, extended to B, then for all f ∈ H, UA(f)U−1 = A(Uf) UA†(f)U−1 = A†(Uf). Proof. For H = L2(R), write down (UA(f)U−1)(g) explicitly using the formula for A(f) when g = g1 ⊗ · · · ⊗ gn, and check that it is the same as A(Uf)(g). Similarly, check for A†. Extend to general H by isometries or a direct rewriting of the proof.  Proof of Proposition 13.1. The proof given below is formal, but can be made completely rigorous. Notice that a(p) = 1√ 2ωp a(p) = 1√ 2ωp A(δp). 53 LECTURE 14 Scattering 14.1. Classical scattering Let V be a potential which is strong near 0 ∈ R3, but very weak as you go away from 0. Let us first try to make sense of the following question in the setting of classical Newtonian mechanics: Suppose a particle moves towards the origin with velocity v under the influence of the potential V . 
What is the outgoing velocity? The trajectory of a free particle is always of the form (x + tv)t∈R, where x,v ∈ R3 and x denotes the position at time 0. Denote such a trajectory by (x,v). Given some trajectory (x,v), and some t < 0, consider a particle that is on this trajectory at time t. Let x′ be its location at time 0 if it is moving under the influence of the potential V from time t onwards, and let v′ be its velocity at time 0. Let (x,v)t := (x′,v′) and define Ω+(x,v) = lim t→−∞ (x,v)t, assuming that the limit exists. Then Ω+(x,v) can be interpreted as the (location, velocity) at time 0 of a particle coming in ‘along the trajectory (x,v)’ from the far past and moving under the influence of V . Next, take t > 0 and look at a particle on the trajectory (x,v) at time t. Find (x′,v′) such that if a particle were at (x′,v′) at time 0, and the potential is turned on, it would be at x + tv at time t. Here we assume that such a pair (x′,v′) exists. Let (x,v)t := (x′,v′) and define Ω−(x,v) = lim t→∞ (x,v)t, again assuming that the limit exists. Then Ω−(x,v) is the (location,velocity) of a particle at time 0, which when moving in the potential, assumes the trajectory (x,v) in the far future. Finally, the scattering operator is defined as S := Ω−1 − Ω+, if it makes sense. To understand what it means, let (y,u) = S(x,v). Then Ω−(y,u) = Ω+(x,v). The right hand side gives the (location, velocity) at time 0 if (x,v) is the trajectory in the far past. The left hand side gives the (location, velocity) at time 0 if (y,u) is the trajectory in the far future. This implies (y,u) is the trajectory in the far future if (x,v) is the trajectory in the far past. 57 58 14. SCATTERING 14.2. Scattering in non-relativistic QM We now know what scattering means in the classical case, although it is a bit complicated even in that context. Now, consider the setting of non- relativistic quantum mechanics in the context of a single particle in three dimensional space. 
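Before turning to the quantum case, the classical construction of Section 14.1 can be illustrated with a short simulation. The potential, the integrator, and all numerical values below are illustrative choices, not taken from the lectures: we integrate Newton's equations through a rapidly decaying Gaussian bump using the velocity Verlet scheme. Since V vanishes at infinity, conservation of energy forces the outgoing speed to equal the incoming speed, while the direction is deflected:

```python
import numpy as np

g = 0.2  # coupling: V(x) = g * exp(-|x|^2), a smooth short-range potential

def force(x):
    # F = -grad V = 2 g x exp(-|x|^2)
    return 2.0 * g * x * np.exp(-(x @ x))

def integrate(x, v, dt=1e-3, steps=40_000):
    # Velocity Verlet integration of Newton's equations (unit mass).
    a = force(x)
    for _ in range(steps):
        x = x + dt * v + 0.5 * dt**2 * a
        a_new = force(x)
        v = v + 0.5 * dt * (a + a_new)
        a = a_new
    return x, v

# Come in from far away, aimed slightly off-center at the origin.
x0 = np.array([-20.0, 0.3, 0.0])
v0 = np.array([1.0, 0.0, 0.0])
x1, v1 = integrate(x0, v0)

# Far from the origin V ~ 0, so the outgoing speed matches the incoming one,
assert abs(np.linalg.norm(v1) - np.linalg.norm(v0)) < 1e-3
# but the direction has changed: the scattering is nontrivial.
assert abs(v1[1]) > 0.01
```

Here v1 plays the role of the outgoing velocity asked about at the start of the lecture; Ω± themselves would be read off by fitting straight-line trajectories (x, v) to the distant past and future of the interacting trajectory.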
Let H = H0 + gHI, where H0 is the free Hamiltonian, HI is some interaction Hamiltonian (e.g. HI = V for some potential V ), and g is a coupling parameter. We want to understand evolution under H using the scattering approach. Let U(t) = e−itH and U0(t) = e−itH0. If |ψ〉 is a state of the system at time 0, then its free evolution is the collection of states (U0(t)|ψ〉)t∈R, which is the analog of a straight line in the classical case. Let us identify |ψ〉 with this trajectory and call it 'the trajectory |ψ〉'.

Now suppose that the state of the particle is evolving according to U instead of U0. Also, suppose that it is 'in the trajectory |ψ〉' at some time t < 0. That is, its state at time t is U0(t)|ψ〉. Then its state at time 0 is U(−t)U0(t)|ψ〉. Define

Ω+|ψ〉 ≡ lim t→−∞ U(−t)U0(t)|ψ〉,

which is the 'state of a particle at time 0 if it is on the trajectory |ψ〉 in the far past'. Similarly, Ω−|ψ〉 is the state at time 0 of a particle that is on the trajectory |ψ〉 in the far future:

Ω−|ψ〉 ≡ lim t→∞ U(−t)U0(t)|ψ〉.

As in the classical case, define the scattering operator S = (Ω−)−1Ω+. That is, S|ψ〉 is the 'trajectory of a particle in the far future if it is on the trajectory |ψ〉 in the far past'. If |ϕ〉 = S|ψ〉, then Ω−|ϕ〉 = Ω+|ψ〉. This means that 'if the particle looked like it was evolving as U0(t)|ψ〉 for t ≪ 0, then it will evolve as U0(t)|ϕ〉 for t ≫ 0'. More compactly,

S = lim t2→∞, t1→−∞ U0(−t2)U(t2 − t1)U0(t1).

But there are two main problems in this set-up:
• Limits may not exist in the definitions of Ω+ and Ω−.
• We need Range(Ω+) ⊆ Range(Ω−) to define S = (Ω−)−1Ω+.
The condition Range(Ω+) = Range(Ω−) is called 'asymptotic completeness'. If this is not valid, a particle can get 'trapped by the potential', and will not look like it is in a free state at late times. It is a complicated technical condition, and we will not bother to verify it because our main goal is

LECTURE 15
The Born approximation

Date: 10/26/2018
Scribe: Julien Boussard

15.1.
Derivation of the Born approximation Suppose that we have a Hamiltonian H = H0 + gHI . Then we derived the Dyson series expansion for the scattering operator S = 1 + ∞∑ n=1 (−ig)n n! ∫ ∞ −∞ ∫ ∞ −∞ · · · ∫ ∞ −∞ dt1 · · · dtnT HI(t1) · · ·HI(tn), where HI(t) = eitH0HIe −itH0 and T HI(t1) · · ·HI(tn) = HI(tσ(1)) · · ·HI(tσ(n)) for σ ∈ Sn such that tσ(1) ≥ · · · ≥ tσ(n). Now let H0 = − 1 2m∆ and HI = V ∈ L2(R3). For this problem, we will now work out the first order approximation of S: S = 1 + (−ig) ∫ ∞ −∞ dteitH0V e−itH0 +O(g2). Let us try to compute 〈p2|S|p1〉 for some p2,p1 ∈ R3. It is equal to 〈p2|1|p1〉+ (−ig) ∫ ∞ −∞ dt〈p2|eitH0V e−itH0 |p1〉+O(g2). Recall that 〈p2|1|p1〉 = 〈p2|p1〉 = (2π)3δ(3)(p2 − p1). Next, recall that as a function on momentum space, the state |p1〉 is the function ψ(p) = (2π)3δ(3)(p− p1). Since H0 on momentum space is just multiplication by p2/2m, (e−itH0ψ)(p) = e−itp 2/2mψ(p) = e−itp 2/2m(2π)3δ(3)(p− p1) = e−itp 2 1/2m(2π)3δ(3)(p− p1) = e−itp 2 1/2mψ(p). In short, e−itH0 |p1〉 = e−itp 2 1/2m|p1〉. 61 62 15. THE BORN APPROXIMATION Carrying out a similar calculation for p2, we get 〈p2|eitH0V e−itH0 |p1〉 = eit(p 2 2−p2 1)/2m〈p2|V |p1〉. Next, recall that in position space, |p〉 is represented by the function ϕ(x) = eix·p. Therefore a position space calculation gives 〈p2|V |p1〉 = ∫ d3xe−ix·(p2−p1)V (x) = V̂ (p2 − p1). Combining, we get 〈p2|S|p1〉 = (2π)3δ(3)(p2 − p1) + (−ig) ∫ ∞ −∞ dteit(p 2 2−p2 1)/2mV̂ (p2 − p1) +O(g2) = (2π)3δ(3)(p2 − p1) + (−ig)(2π)V̂ (p2 − p1)δ((p2 2 − p2 1)/2m) +O(g2). This is called the Born approximation. Note that the first delta function is a delta function in R3 and the second one is a delta function in R. 15.2. What does it mean? The meaning of the Born approximation is not transparent from the representation in terms of delta functions. Let us now try to understand it better. Suppose that the incoming state was a proper state |ψ〉 instead of the improper state |p1〉. 
Then f(p2) = |〈p2|S|ψ〉|2 would be proportional to the probability density of the momentum of the outgoing state. Let us approximate the improper state |p1〉 by the proper state |ψε〉, represented in momentum space by the Gaussian density ψε(p) = 1 (2π)3/2ε3/2 e−(p−p1)2/2ε, where ε is some small number. With this approximation, the 3D delta function δ(3)(p2−p1) in the Born approximation gets replaced by something like C1 ε3/2 exp ( −(p2 − p1)2 2ε ) , where C1 is a constant. In a similar way, the one-dimensional delta function δ((p2 2 − p2 1)/2m) gets replaced by something like C2√ ε exp ( −(p2 2 − p2 1)2 8m2ε ) . Then, we have f(p2) = |〈p2|S|ψε〉|2 = |A(p2) +B(p2) +O(g2)|2 with: A(p2) = C1(2π)3 ε3/2 exp ( −(p2 − p1)2 2ε ) and B(p2) = C2(−ig)(2π)V̂ (p2 − p1) 1√ ε exp ( −(p2 2 − p2 1)2 8m2ε ) . 15.2. WHAT DOES IT MEAN? 63 Let S be the sphere of center 0 and radius |p1| in R3. Let Bε denote the ball of radius √ ε around p1. Let Aε denote the annulus of width √ ε around the sphere S. Then, roughly speaking, • A(p2) is of order ε−3/2 in Bε, and very small outside. • B(p2) is of order ε−1/2 in Aε, and very small outside. This shows that, again roughly speaking, • f(p2) is of order ε−3 in Bε and very small outside. • f(p2) is of order ε−1 in Aε and very small outside. Moreover, the volume of Bε is of order ε3/2, and the volume of Aε is of order ε1/2. Thus, the integral of f(p2) in Bε is of order ε−3 · ε3/2 = ε−3/2, and the integral of f(p2) in Aε is of order ε−1 · ε1/2 = ε−1/2. Everywhere else, the integral is negligible. Since ε−3/2  ε−1/2, this shows that as ε → 0, the probability density fully concentrates near p1. Thus, if a particle comes in with momentum ≈ p1, it also goes out with momentum ≈ p1. This looks uninteresting, but the interesting thing happens if we con- dition on the event that the particle scatters. 
When the incoming state is |ψε〉, the above calculations show that the conditional density of the outgo- ing momentum given |p2−p1| > η for η small is approximately proportional to |V̂ (p2 − p1)|2 +O(g) in Aε. If we fix η and send ε→ 0, and then send η → 0, we get the probability distribution of the outgoing momentum conditional on it being different than the incoming momentum. This probability distribution is supported on S, with probability density proportional to |V̂ (p2 − p1)|2 +O(g). To summarize, the Born approximation says that if a particle comes in with momentum exactly equal to p1, it also goes out with momentum exactly equal to p1, but conditional on the probability zero event that the outgoing momentum 6= p1, the outgoing momentum has a probability distribution supported on the sphere of radius |p1| and center 0 in R3, with density proportional to f(p2) = |V̂ (p2 − p1)|2 +O(g). It may seem strange to condition on a probability zero event, but this is routinely done in probability theory, for example conditioning on the end- point of Brownian motion being zero to get the Brownian bridge. Moreover, since it is impossible to generate quantum particles with momentum exactly equal to a given value, the above statement is only an idealization. In prac- tical terms, it means that if the momentum of an incoming particle is very close to a given value with high probability, then the outgoing momentum is also close to same value with high probability, but if by chance it scatters, then the Born approximation gives a first order approximation for the prob- ability amplitude of the outgoing momentum. This probability distribution is supported on the set of momenta that have the same Euclidean norm as the incoming momentum, in agreement with conservation of energy. 66 16. HAMILTONIAN DENSITIES scattering processes for general Hamiltonians, which reduces to understand- ing the scattering operator S. 
In QFT, however, we must allow for particle number to change: For p1, . . . ,pk incoming momenta and q1, . . . ,qj outgo- ing momenta, we wish to compute the amplitudes〈 q1, . . . ,qj |S|p1, . . . ,pk 〉 . The Dyson series expansion remains valid in the QFT setting, since the for- mal derivation is exactly the same. When HI is constructed using objects known as Hamiltonian densities, and the Hamiltonian densities satisfy cer- tain properties, the Dyson series expansion takes a particularly nice form. We will work this out in this lecture. 16.2. Construction of interaction Hamiltonians We will now see a general prescription for constructing interaction Hamil- tonians. Due to the difficulties associated with particle creation and annihi- lation, we will be vague about the underlying Hilbert space on which these operators will act. Definition 16.1. A Hamiltonian density is an operator-valued dis- tribution, with kernel denoted both asH(x) andH(t,x) for x = (t,x) ∈ R1,3. This distribution must satisfy the following: (1) (Time evolution.) H(t,x) = eitH0H(0,x)e−itH0 . (2) (Equal time commutation.) For any t ∈ R and any x,y ∈ R3, [H(t,x),H(t,y)] = 0. The associated interaction Hamiltonian HI is the operator-valued dis- tribution HI = ∫ R3 d3xH(0,x). Remark 16.1. In a physical setting, one imposes the following additional constraints: (1) The distribution H(x) should be Lorentz invariant. (2) H(x),H(y) should commute whenever x and y are spacelike sepa- rated, i.e. (x− y)2 < 0. We do not need to verify these conditions for our main purpose, which is to get a certain form of the Dyson expansion. We now turn to an example of a Hamiltonian constructed via a density, namely the ϕ4 theory. Here, the Hamiltonian density is H(x) = 1 4! :ϕ(x)4:. In the remainder of this section, we prove H(x) satisfies conditions (1) and (2) of Definition 16.1. Lemma 16.1. For all k ∈ N, :ϕ(x)k: is a formal polynomial in ϕ(x). 16.2. CONSTRUCTION OF INTERACTION HAMILTONIANS 67 Proof. 
The proof is by induction on k. The k = 1 case follows immediately, as Wick ordering has no impact on the expression for ϕ(x). So suppose that the result is given for k ≤ n. First, writing ϕ(x) = ϕ−(x) + ϕ+(x), we observe that after Wick ordering, the product (ϕ−(x) + ϕ+(x))^n collapses as though the ϕ± commute:

:ϕ(x)^n: = :(ϕ−(x) + ϕ+(x))^n: = ∑_{j=0}^{n} (n j) ϕ−(x)^j ϕ+(x)^{n−j}.

Next, we turn to the commutator of ϕ+(x) and ϕ−(x):

[ϕ+(x), ϕ−(x)] = ∫∫ (d3p/(2π)3) (d3q/(2π)3) (1/√(2ωp)) (1/√(2ωq)) e^{i(x, q−p)} [a(p), a†(q)].

But we have seen earlier that the commutator in the integrand is just (2π)3δ(3)(q − p)1, so integration over d3q fixes the value of q. Since q = (ωq, q), we see q is only a function of q, and so integrating the delta function in fact sets q = p. This procedure thus yields

[ϕ+(x), ϕ−(x)] = (∫ (d3p/(2π)3) (1/(2ωp))) 1.

Let us denote the term within the brackets by C. Note that C is not a finite quantity, but just a symbol in our formal calculations. We now employ the commutation relation repeatedly to compute

ϕ−(x)^j ϕ+(x)^{n−j} ϕ−(x)
= ϕ−(x)^j ϕ+(x)^{n−j−1} ϕ−(x) ϕ+(x) + C ϕ−(x)^j ϕ+(x)^{n−j−1}
= ϕ−(x)^j ϕ+(x)^{n−j−2} ϕ−(x) ϕ+(x)^2 + 2C ϕ−(x)^j ϕ+(x)^{n−j−1}
= · · ·
= ϕ−(x)^{j+1} ϕ+(x)^{n−j} + (n − j) C ϕ−(x)^j ϕ+(x)^{n−j−1}.

Therefore,

:ϕ(x)^n: ϕ(x) = (∑_{j=0}^{n} (n j) ϕ−(x)^j ϕ+(x)^{n−j}) (ϕ+(x) + ϕ−(x))
= ∑_{j=0}^{n} (n j) ϕ−(x)^j ϕ+(x)^{n−j+1} + ∑_{j=0}^{n} (n j) ϕ−(x)^j ϕ+(x)^{n−j} ϕ−(x)
= ∑_{j=0}^{n+1} (n+1 j) ϕ−(x)^j ϕ+(x)^{n+1−j} + ∑_{j=0}^{n−1} C (n − j) (n j) ϕ−(x)^j ϕ+(x)^{n−j−1}
= ∑_{j=0}^{n+1} (n+1 j) ϕ−(x)^j ϕ+(x)^{n+1−j} + ∑_{j=0}^{n−1} C n (n−1 j) ϕ−(x)^j ϕ+(x)^{n−1−j},

where we have employed the previous display and the identity (n j) + (n j−1) = (n+1 j) in the second-to-last equality, and the identity (n − j)(n j) = n(n−1 j) in the final step. Thus, we get

:ϕ(x)^n: ϕ(x) = :ϕ(x)^{n+1}: + Cn :ϕ(x)^{n−1}:,

which clearly completes the induction step. □

Proposition 16.1. The Hamiltonian density H(x) = :ϕ(x)^k: satisfies conditions (1) and (2) of Definition 16.1.

Proof.
In addition to the lemma above, we will need the following facts: (1) For all x,y ∈ R3, [ϕ(0,x), ϕ(0,y)] = 0. This follows directly from a computation analogous to the one given above. (2) The time evolution of the free field satisfies ϕ(t,x) = eitH0ϕ(0,x)e−itH0 , which was observed in an earlier lecture. We may generalize the second fact to arbitrary polynomials of ϕ, using eitH0ϕ(0,x)ke−itH0 = (eitH0ϕ(0,x)e−itH0)k = ϕ(t,x)k. Thus, using that :ϕ(t,x)k: = f(ϕ(t,x)) for f a formal polynomial, we con- clude :ϕ(t,x)k: = f(ϕ(t,x)) = eitH0f(ϕ(0,x))e−itH0 = eitH0 :ϕ(0,x)k:e−itH0 , which proves the time evolution property. Similarly, using the first fact above, and the observation that A,B commute implies Am, Bn commute, we get [ϕ(t,x)m, ϕ(t,y)n] = [eitH0ϕ(0,x)me−itH0 , eitH0ϕ(0,y)ne−itH0 ] = eitH0 [ϕ(0,x)m, ϕ(0,y)n]e−itH0 = 0. Thus, again writing :ϕ(t,x)k: = f(ϕ(t,x)) for some formal polynomial f , and exploiting the bilinearity of the commutator, we see that [:ϕ(t,x)k:, :ϕ(t,y)k:] is a sum of commutators of the form [ϕ(t,x)m, ϕ(t,y)n], all of which vanish by the above argument. Thus the equal time commutation property holds for H.  LECTURE 17 Wick’s theorem Date: 10/31/2018 Scribe: Sungyeon Yang 17.1. Calculating the Dyson series: First steps In this lecture we begin the process of learning how to compute the terms in the Dyson series expansion for ϕ4 theory. Recall that the Hilbert space of interest is H = L2(Xm, dλm) and we have operator-valued distributions A,A† acting on this space. Let C be the class of all operators on B0 of the form A(ξ) +A†(η) for some ξ, η ∈ H. Lemma 17.1. If B1 = A(ξ1) +A†(η1) and B2 = A(ξ2) +A†(η2), then we have 〈0|B1B2|0〉 = (ξ1, η2). Proof. It is easy to see from the definitions of A and A† that for any ξ, A(ξ)|0〉 = 0 and 〈0|A†(ξ) = 0. Thus, 〈0|B1B2|0〉 = 〈0|(A(ξ1) +A†(η1))(A(ξ2) +A†(η2))|0〉 = 〈0|A(ξ1)A†(η2)|0〉. The proof is now easily completed using the commutation relation [A(ξ), A†(η)] = (ξ, η)1 that we derived earlier.  
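Lemma 17.1 can also be checked in a finite-dimensional caricature: truncate a single-mode Fock space at N particles, so that a and a† become N × N matrices, A(ξ) and A†(η) reduce to the scalar multiples ξ∗a and ηa†, and the inner product (ξ1, η2) is just ξ∗1η2. This toy model (an illustration only, not the space L2(Xm, dλm) of the lectures) can be set up as follows:

```python
import numpy as np

N = 8  # truncation level of a single-mode Fock space

# Annihilation operator on span{|0>, ..., |N-1>}: a|n> = sqrt(n) |n-1>.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T
vac = np.zeros(N)
vac[0] = 1.0  # the vacuum |0>

# [a, a+] = 1 holds exactly below the truncation level.
comm = a @ adag - adag @ a
assert np.allclose(comm[:N - 1, :N - 1], np.eye(N - 1))

# Single-mode analog of Lemma 17.1: with B_j = conj(xi_j) a + eta_j a+,
# <0| B1 B2 |0> = conj(xi_1) * eta_2, the analog of (xi_1, eta_2).
xi1, eta1 = 0.7 - 0.2j, 1.1 + 0.5j
xi2, eta2 = -0.3 + 0.9j, 0.4 - 1.3j
B1 = np.conj(xi1) * a + eta1 * adag
B2 = np.conj(xi2) * a + eta2 * adag
assert np.isclose(vac @ B1 @ B2 @ vac, np.conj(xi1) * eta2)
```

Only the (ξ1, η2) term survives, because a kills the vacuum on the right and a† kills it on the left, exactly as in the proof above.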
Note that a(p), a†(p) and ϕ(x) are elements of C, since

a(p) = a(p)/√(2ωp) = (1/√(2ωp)) A(δp), a†(p) = (1/√(2ωp)) A†(δp),

and

ϕ(x) = A(fx) + A†(fx),

where fx(p) = e^{i(x,p)}. Here, as usual, p = (ωp,p). For the last claim, note that

ϕ(x) = ∫ (d3p/(2π)3) (1/√(2ωp)) (e^{−i(x,p)} a(p) + e^{i(x,p)} a†(p))
= ∫ (d3p/(2π)3) (1/(2ωp)) (e^{−i(x,p)} a(p) + e^{i(x,p)} a†(p))
= ∫ dλm(p) (e^{−i(x,p)} a(p) + e^{i(x,p)} a†(p))
= A(fx) + A†(fx).

Consider the amplitude 〈0|a(p′)a†(p)|0〉. By the commutation relation for a(p) and a†(p),

a(p′)a†(p) = a†(p)a(p′) + (2π)3δ(3)(p − p′)1.

On the other hand, a(p′)|0〉 = 0 and 〈0|a†(p) = 0. Combining, we get

〈0|a(p′)a†(p)|0〉 = (2π)3δ(3)(p − p′).

We also have

〈0|a†(p′)a(p)|0〉 = 0, 〈0|a(p′)a(p)|0〉 = 0, 〈0|a†(p′)a†(p)|0〉 = 0.

By Lemma 17.1 and the observations made following the proof of the lemma, we have

〈0|a(p)ϕ(x)|0〉 = (1/√(2ωp)) (δp, fx) = (1/√(2ωp)) e^{i(x,p)},
〈0|ϕ(x)a†(p)|0〉 = (1/√(2ωp)) (fx, δp) = (1/√(2ωp)) e^{−i(x,p)}.

We will need these computations later.

17.2. Wick's theorem

Let us now introduce the main tool for computing the terms in Dyson's expansion in QFT. If k is an even number, a pairing l of k is a permutation (l1, l′1, l2, l′2, . . . , lk/2, l′k/2) of (1, 2, . . . , k) such that lj < l′j for all j and l1 < l2 < · · · < lk/2. The following result is known as Wick's theorem.

Theorem 17.1. If B1, . . . , Bk ∈ C, then we have

〈0|B1B2 · · ·Bk|0〉 = ∑_{pairings l} ∏_{j=1}^{k/2} 〈0|Blj Bl′j |0〉

for k even, and 〈0|B1B2 · · ·Bk|0〉 = 0 for k odd.

Proof. We prove the theorem by induction on k. Let Bj = A(ξj) + A†(ηj). Then,

〈0|B1 · · ·Bk−1Bk|0〉 = 〈0|B1 · · ·Bk−1(A(ξk) + A†(ηk))|0〉 = 〈0|B1 · · ·Bk−1A†(ηk)|0〉,

since A(ξk)|0〉 = 0. Since [A†(ξ), A†(η)] = 0 for any ξ and η, the commutation relation [A(ξ), A†(η)] = (ξ, η)1 gives

[Bk−1, A†(ηk)] = [A(ξk−1), A†(ηk)] = (ξk−1, ηk)1.

Thus,

〈0|B1 · · ·Bk−1A†(ηk)|0〉 = 〈0|B1 · · ·Bk−2A†(ηk)Bk−1|0〉 + (ξk−1, ηk)〈0|B1 · · ·Bk−2|0〉.

We can iterate this step until A†(ηk) moves all the way to the left, which gives zero since 〈0|A†(ηk) = 0.
After the final step, we get 〈0|B1B2 · · ·Bk|0〉 = k−1∑ j=1 (ξj , ηk)〈0|B1 · · · B̂j · · ·Bk−1|0〉 where the hatted term is omitted. The induction step is now easily com- pleted by recalling that (ξj , ηk) = 〈0|BjBk|0〉.  The number of terms in the sum in Wick’s theorem is a well-known quantity. It is equal to (k − 1)!! := (k − 1)(k − 3) · · · 5 · 3 · 1. When applying Wick’s theorem, verifying that the total number of terms considered is indeed (k−1)!! is one way of ensuring that we have not missed out anything. 17.3. Contraction diagrams Each 〈0|BjBk|0〉 in Wick’s theorem is called a contraction of Bj and Bk. The sum in Wick’s theorem is convenient to handle using contraction diagrams. Diagramatically, we represent each Bj by a vertex, with an edge hanging out. Then we tie up each tail with one other, so that there is no untied tail. Each such diagram contributes one term to the sum in Wick’s theorem. Consider, for example, the calculation of 〈0|B1B2B3B4|0〉. The vertices with freely hanging edges, and the three diagrams obtained by tying up the edges, are shown in Figure 17.1. When some Bi occurs as a power Bk i , then it is represented as a single vertex with k distinct tails hanging out. Consider 〈0|B1B 2 2B3|0〉 for ex- ample. There are three diagrams in this calculation, but one is repeated twice. So the pictorial representation shows two diagrams. This is shown in Figure 17.2. 76 18. A FIRST-ORDER CALCULATION IN ϕ4 THEORY • Any contraction like 〈0|B−i B+ i |0〉 is the same in both diagrams and is equal to 0 due to the same reason as above. • Any contraction like 〈0|B+ i B − i |0〉 (which is nonzero) in a diagram for X has to be replaced by 〈0|B−i B+ i |0〉 (which is 0) in the corre- sponding diagram for Y . This shows that the diagram for Y can be computed by taking the diagram for X and replacing it by 0 if there exists any contraction like 〈0|B+ i B + i |0〉, 〈0|B−i B−i |0〉, 〈0|B−i B+ i |0〉, or 〈0|B+ i B − i |0〉 in the diagram. 
(The first three above are automatically 0 and the last one, by Wick’s ordering, can be replaced by 〈0|B−i B+ i |0〉.) But note that such terms arise from terms like 〈0|BiBi|0〉 in the original Wick expansion for 〈0|B1 · · ·Bk i · · ·Bm|0〉. This tells us that if we take the Wick expansion for 〈0|B1 · · ·Bk i · · ·Bm|0〉 and remove any diagram that has a contraction of the form 〈0|BiBi|0〉, then we will get the Wick expansion for 〈0|B1 · · · :Bk i : · · ·Bm|0〉.  18.2. A first-order calculation in ϕ4 theory Consider ϕ4 theory. Suppose we have distinct p1,p2,p3,p4, and we want to compute 〈p3,p4|S|p1,p2〉 to the first order in perturbation theory. Recall the Dyson series expansion S = 1 + ∞∑ n=1 (−ig)n n! ∫ · · · ∫ d4x1 · · · d4xnT ( 1 4! :ϕ(x1)4: · · · 1 4! :ϕ(xn)4: ) = 1− ig 4! ∫ d4x :ϕ(x)4: +O(g2). Since pi’s are distinct, we have 〈p3,p4|p1,p2〉 = 0. Therefore, 〈p3,p4|S|p1,p2〉 = − ig 4! ∫ d4x〈p3,p4|:ϕ(x)4:|p1,p2〉+O(g2). Note that 〈p3,p4|:ϕ(x)4:|p1,p2〉 = 〈0|a(p3)a(p4):ϕ(x)4:a†(p1)a†(p2)|0〉. (18.2) The set of contraction diagrams for the above quantity consists of 4! dia- grams like the one displayed in Figure 18.1. Feynman diagrams are contraction diagrams but without labels and with arrows denoting incoming and outgoing particles. The contraction diagram of Figure 18.1 becomes the Feynman diagram of Figure 18.2. The usual convention for Feynman diagrams is that particles are shown to be coming in from the left and exiting on the right. 18.2. A FIRST-ORDER CALCULATION IN ϕ4 THEORY 77 Figure 18.1. The diagrams for (18.2) are 4! repetitions of the above diagram. Figure 18.2. An example of a Feynman diagram, corre- sponding to the contraction diagram of Figure 18.1. Note that incoming particles enter from the left and exit on the right. Now recall that for any p ∈ R3, p denotes the vector (ωp,p) ∈ Xm. Any contraction diagram of the type shown in Figure 18.1 contributes 〈0|a(p3)ϕ(x)|0〉〈0|a(p4)ϕ(x)|0〉〈0|ϕ(x)a†(p1)|0〉〈0|ϕ(x)a†(p2)|0〉 = ei(x,p3+p4−p1−p2)√ 16ωp1ωp2ωp3ωp4 . 
Multiplying the above by 4! and integrating over x, we get

〈p3,p4|S|p1,p2〉 = −ig ∫ d4x e^{i(x, p3+p4−p1−p2)}/√(16ωp1ωp2ωp3ωp4) + O(g2)
= −ig (2π)4 δ(4)(p3 + p4 − p1 − p2)/√(16ωp1ωp2ωp3ωp4) + O(g2). (18.3)

What does (18.3) mean? Like in the Born approximation, we can conclude that (up to first order) the probability distribution of (p3,p4), given that the scattering has resulted in two outgoing particles, is supported on the manifold

{(p3,p4) : p3 + p4 = p1 + p2}.

Note that this is a manifold in R6 described by 4 constraints. Therefore we expect this to be a 2D manifold. You can define a notion of 'Lebesgue measure' on this manifold as the limit of a sequence of measures with densities proportional to

exp(−‖p3 + p4 − p1 − p2‖2/ε)

as ε → 0, where the multiplicative factor is taken in such a way as to give a nontrivial limit. The scattering amplitude implies that the conditional p.d.f. of (p3,p4) with respect to this 'Lebesgue measure' on the manifold is proportional to 1/(ωp3ωp4). The constraint p03 + p04 = p01 + p02 shows that the manifold is bounded. Since the above density is also bounded, we can conclude that the density is integrable on the manifold and gives a legitimate probability measure. The above reasoning can be made completely rigorous by replacing the improper incoming state |p1,p2〉 by a proper state which approximates it (for example, a Gaussian density), and then taking a sequence of approximations converging to the improper state.

Note that if we are in the non-relativistic limit, where |p1| ≪ m and |p2| ≪ m, the constraint p03 + p04 = p01 + p02 approximately says

p3^2/2m + p4^2/2m = p1^2/2m + p2^2/2m,

which is conservation of classical kinetic energy.

18.3. Words of caution

One should be aware of two things about the above calculation. Both have been mentioned before but are worth repeating here.
First, the calculation is not rigorous because we do not know how to rigorously define ϕ4 theory, or justify the Dyson expansion for this theory. However, if we ignore these two (severe) problems, the rest of the calculation can easily be made fully rigorous.

The second thing to be aware of is that ϕ4 theory does not describe any known particle. It is purely a hypothetical theory that exhibits many of the complexities of quantum field theories that describe real particles, and is therefore useful for introducing various tools and techniques.

19.3. An alternative expression for the Feynman propagator

The form of the Feynman propagator given in (19.2) is hard to work with, because of the presence of ω_p in the denominator and in the exponent. Fortunately, it has a much friendlier form.

Lemma 19.1. As a tempered distribution,

∆_F(x) = lim_{ε→0⁺} ∫_{R^{1,3}} (d⁴p/(2π)⁴) e^{−i(x,p)} / (−p² + m² − iε).

Proof. Let x = (t, x) and let p = (p⁰, p). We first integrate the right side in p⁰. Recall that p² = (p⁰)² − p². So we want to compute

∫_{−∞}^{∞} (dp⁰/2π) e^{−itp⁰} / (−(p⁰)² + p² + m² − iε).

Let us write z = p⁰ and f(z) = −z² + p² + m² − iε, so that we have to compute

∫_{−∞}^{∞} (dz/2π) e^{−itz} / f(z).

We will calculate this integral using contour integration. For that, it is important to understand the behavior of the quadratic polynomial f near its roots. If ε = 0, the roots of f are ±ω_p. For ε > 0, the roots are ±ω_{p,ε}, where ω_{p,ε} lies slightly below ω_p in the complex plane, and −ω_{p,ε} lies slightly above −ω_p.

Figure 19.2. Contours for t ≥ 0 and t < 0.

Suppose that t ≥ 0. Then we take a contour going from −R to R along the real axis and then back to −R along a semicircle below the real axis (the left side of Figure 19.2). Since t ≥ 0, we can show that the contribution of the semicircular part approaches 0 as R → ∞.
If t < 0, we take the flipped contour going above the real axis (the right side of Figure 19.2). There is only one pole that we have to consider in each case, and the residues at the two poles are

−e^{−itω_{p,ε}} / (2ω_{p,ε}) at ω_{p,ε},   and   e^{itω_{p,ε}} / (2ω_{p,ε}) at −ω_{p,ε}.

So using Cauchy's theorem gives

∫_{−∞}^{∞} (dz/2π) e^{−itz}/f(z) = i(2ω_{p,ε})⁻¹ e^{−itω_{p,ε}} if t ≥ 0, and i(2ω_{p,ε})⁻¹ e^{itω_{p,ε}} if t < 0.

This completes the proof. □

19.4. Putting it all together

Using Lemma 19.1, we get

〈0|Tϕ(x1)ϕ(x2)|0〉² = (−i∆_F(x1 − x2))²
 = −lim_{ε→0⁺} ∫∫ (d⁴p d⁴p′/(2π)⁸) e^{−i(x1−x2, p+p′)} / ((−p² + m² − iε)(−p′² + m² − iε)).

Putting this together with (19.1), we obtain

〈0|ϕ(x1)a†(p1)|0〉 〈0|ϕ(x1)a†(p2)|0〉 (〈0|Tϕ(x1)ϕ(x2)|0〉)² 〈0|a(p3)ϕ(x2)|0〉 〈0|a(p4)ϕ(x2)|0〉
 = (−1/√(16 ω_p1 ω_p2 ω_p3 ω_p4)) lim_{ε→0⁺} ∫∫ (d⁴p d⁴p′/(2π)⁸) e^{i(x2, p3+p4+p+p′)} e^{−i(x1, p1+p2+p+p′)} / ((−p² + m² − iε)(−p′² + m² − iε)).

We will continue from here in the next lecture.

LECTURE 20

The problem of infinities

Date: 11/7/2018
Scribe: Sohom Bhattacharya

20.1. Completing the second order calculation in ϕ4 theory

Let us continue from where we stopped in the previous lecture. Recall that we were trying to calculate the second order term in the perturbative expansion for a scattering amplitude in ϕ4 theory, and we ended up with a term containing the integral

lim_{ε→0⁺} ∫∫ (d⁴p d⁴p′/(2π)⁸) e^{i(x2, p3+p4+p+p′)} e^{−i(x1, p1+p2+p+p′)} / ((−p² + m² − iε)(−p′² + m² − iε)).

To get the second order term in the Dyson series, we have to integrate this with respect to x1 and x2. Note that

∫∫ d⁴x1 d⁴x2 e^{i(x2, p3+p4+p+p′) − i(x1, p1+p2+p+p′)} = (2π)⁸ δ⁽⁴⁾(p3 + p4 + p + p′) δ⁽⁴⁾(p1 + p2 + p + p′).

Recall the identity

∫_{−∞}^{∞} δ(x − z) δ(y − z) ξ(z) dz = δ(x − y) ξ(x),

which holds true in higher dimensions also.
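The mechanism behind this identity can be seen concretely by replacing each δ with a Gaussian mollifier of width s: the convolution of two width-s Gaussians is exactly a width-s√2 Gaussian evaluated at x − y, which concentrates to δ(x − y) as s → 0. A short numerical sketch (all parameter values below are arbitrary choices for illustration, not from the text):

```python
import numpy as np

def delta_approx(u, s):
    # Gaussian mollifier of width s, approximating the Dirac delta as s -> 0.
    return np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

s = 0.05                       # mollifier width (arbitrary)
x, y = 0.30, 0.27              # nearby test points (arbitrary)
z = np.linspace(-5.0, 5.0, 400_001)
dz = z[1] - z[0]

# Left side: integral of delta_s(x - z) * delta_s(y - z) over z, by quadrature.
lhs = np.sum(delta_approx(x - z, s) * delta_approx(y - z, s)) * dz

# Right side: the convolution of two width-s Gaussians is a width-s*sqrt(2)
# Gaussian evaluated at x - y.
rhs = delta_approx(x - y, s * np.sqrt(2))

print(lhs, rhs)  # the two values agree closely
```

Multiplying the integrand by a smooth ξ(z) and letting s → 0 then yields the stated identity, since the Gaussian product concentrates near z = x.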
Using this identity, and exchanging integrals and limits at will, we get

lim_{ε→0⁺} ∫∫ d⁴x1 d⁴x2 ∫ (d⁴p/(2π)⁴) (d⁴p′/(2π)⁴) e^{i(x2, p3+p4+p+p′) − i(x1, p1+p2+p+p′)} / ((−p² + m² − iε)(−p′² + m² − iε))
 = lim_{ε→0⁺} ∫∫ d⁴p d⁴p′ δ⁽⁴⁾(p3 + p4 + p + p′) δ⁽⁴⁾(p1 + p2 + p + p′) / ((−p² + m² − iε)(−p′² + m² − iε))
 = lim_{ε→0⁺} ∫ d⁴p δ⁽⁴⁾(p3 + p4 − p1 − p2) / ((−p² + m² − iε)(−(−p1 − p2 − p)² + m² − iε)),

which finally, by a change of variable p ↦ −p, yields

lim_{ε→0⁺} ∫ d⁴p δ⁽⁴⁾(p3 + p4 − p1 − p2) / ((−p² + m² − iε)(−(p1 + p2 − p)² + m² − iε)).

… renormalization, which becomes harder to manage for higher orders of perturbation theory. If, however, it can be done for all orders, the theory is called 'perturbatively renormalizable'.

Note that the above calculation was slightly complicated because we wanted to get an approximation for A solely in terms of A∗, and not using any information about θ or g. If we know g, then the problem becomes easier, because we can simply approximate M̃ by L + M̃∗ in (20.3).

LECTURE 21

One-loop renormalization in ϕ4 theory

Date: 11/9/2018
Scribe: Laura Lyman

21.1. A toy example

Recall the basic idea of renormalization from the previous lecture: Suppose that we want to carry out a calculation for a physical system, where we input some quantity a, where a can be a vector or a scalar (e.g. the 4-tuple of incoming and outgoing momenta p1, p2, p3, p4 considered in the previous lecture), and the output is a scalar f(a) (e.g. the probability amplitude 〈p3, p4|S|p1, p2〉). As seen in the previous lecture, sometimes the theory will yield a prediction for f(a) in terms of divergent integrals. However, the observed value of f(a) is finite. The optimistic viewpoint is that the theory is approximately correct, in the sense that the divergent integrals should be replaced with integrals with some cutoffs (regularized versions). Nature provides the cutoff function θ, but it is unknown to us.
The solution to this obstacle is that we can still approximately recover f(a) if we know f(a′) for any single a′, even if θ is unknown. When this can be done for all orders of perturbation theory, the theory is called perturbatively renormalizable.

To understand the situation, consider the following toy example. Suppose that the input quantity is some number a > 0, and the output predicted by theory is

f_theory(a) = ∫_0^∞ dx/(x + a) = ∞.

However, suppose that the experimentally observed output f(a) is always finite. To resolve this discrepancy, assume that the theory is approximately correct, in the sense that the true f(a) is given by

f(a) = ∫_0^∞ θ(x) dx/(x + a),

where θ is a function such that θ(x) ∈ [0, 1] for all x, θ(x) = 1 when x ≤ R for some large R ∈ R, and θ decays sufficiently fast in the region x > R so that the integral converges. Then

f(a) − f(a′) = ∫_0^∞ θ(x)(a′ − a) dx / ((x + a)(x + a′)) = ∫_0^∞ (a′ − a) dx / ((x + a)(x + a′)) + O(1/R).

Thus, we can approximately recover f(a) for any a if we observe the value of f(a′) for some a′, even if we do not know θ.

21.2. Main result

Recall that we were trying to calculate the second order term in the Dyson expansion of 〈p3, p4|S|p1, p2〉. There were no divergent integrals in the first order term. However, in the second order, each Feynman diagram had a single loop, which gave divergent integrals like

lim_{ε→0⁺} ∫ (d⁴p/(2π)⁴) 1 / ((−p² + m² − iε)(−(w − p)² + m² − iε)),

where w = p1 + p2. Let us therefore assume that the 'true' value is

U(w, θ) := lim_{ε→0⁺} ∫ (d⁴p/(2π)⁴) θ(p) / ((−p² + m² − iε)(−(w − p)² + m² − iε)),

where θ : R³ → [0, 1] is a function such that θ(p) = 1 when |p| ≤ R, and θ decays sufficiently fast in the region |p| > R. The following theorem shows that the above integral can be approximately computed for any w if we know its value at a single w′, even if we do not know θ.

Theorem 21.1 (One-loop renormalization).
For any w, w′ ∈ R^{1,3},

lim_{R→∞} (U(w, θ) − U(w′, θ))

exists, is finite, and depends only on (w, w′). Here we assume that θ varies with R in such a way that we always have θ(p) = 1 for |p| ≤ R.

Why is the situation of Theorem 21.1 harder to analyze than the toy example discussed above? The difference is that terms like −p² + m² in the denominator introduce infinite manifolds of singularities as ε → 0. To get rid of such singularities, we need two technical tools.

Lemma 21.1 (Feynman parameter). Suppose that A, B ∈ C are such that the line segment joining A and B in the complex plane does not pass through 0. Then

1/(AB) = ∫_0^1 du / (Au + B(1 − u))².

(Here u is called a Feynman parameter.)

Proof. Note that

d/du [ 1 / ((B − A)(Au + B(1 − u))) ] = 1 / (Au + B(1 − u))²,

and substitute above. □

21.3. Towards the proof of the main result

Proof. If R is sufficiently large, note that θ(p + uw) = 1 for any p such that ‖p‖ ≤ R/2, and any u ∈ [0, 1]. Thus, letting C = −u(1 − u)w² + m² − iε and using polar coordinates, we get

U1(w, θ) = i lim_{ε→0⁺} ∫_0^1 du ∫_{‖p‖≤R/2} (d⁴p/(2π)⁴) 1/(‖p‖² + C)²
 = lim_{ε→0⁺} (i/16π⁴) ∫_0^1 du (2π²) ∫_0^{R/2} r³ dr/(r² + C)².

Now

∫_0^{R/2} r³ dr/(r² + C)² = ∫_0^{R/2} r dr/(r² + C) − C ∫_0^{R/2} r dr/(r² + C)²
 = (1/2)[log((R/2)² + C) − log C] + (C/2)[1/((R/2)² + C) − 1/C]
 = (1/2)[log((R/2)² + C) − log C − (R/2)²/((R/2)² + C)],

where we use the branch of the logarithm that is defined on C \ (−∞, 0]. Now sending ε → 0⁺ in the definition of C completes the proof. □

LECTURE 22

A glimpse at two-loop renormalization

Date: 11/12/2018
Scribe: Lingxiao Li

22.1. Finishing the proof for one-loop renormalization

In this lecture we will first complete the proof of Theorem 21.1. Recall the quantity U(w, θ) from the previous lecture and the decomposition U = U1 + U2. From the expression we obtained for U1(w, θ), it is easy to see that

lim_{R→∞} (U1(w, θ) − U1(w′, θ)) = L(w) − L(w′),

where

L(w) = (−i/16π²) ∫_0^1 du log(m² − u(1 − u)w²).

As before, here we use the convention that log(−x) = log x − iπ when x > 0.
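As an aside, the Feynman-parameter identity of Lemma 21.1, which drove the manipulations above, is easy to sanity-check numerically for concrete complex A and B. The values below are arbitrary illustrations (any pair whose connecting segment avoids 0 works), not taken from the text:

```python
import numpy as np

# Arbitrary test values; the segment from A to B stays away from 0.
A = 2.0 + 1.0j
B = 3.0 - 0.5j

# Right side of the identity 1/(AB) = integral_0^1 du / (A u + B (1 - u))^2,
# evaluated with the trapezoidal rule on a fine grid in u.
u = np.linspace(0.0, 1.0, 200_001)
du = u[1] - u[0]
vals = 1.0 / (A * u + B * (1.0 - u))**2
rhs = (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * du

lhs = 1.0 / (A * B)
print(abs(lhs - rhs))  # tiny
```

The segment condition matters: if the segment from A to B crosses 0, the integrand blows up at some u and the identity can fail.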
Our next step is to prove the following lemma.

Lemma 22.1.

lim_{R→∞} (U2(w, θ) − U2(w′, θ)) = 0.

This will imply Theorem 21.1, and will moreover prove that

lim_{R→∞} (U(w, θ) − U(w′, θ)) = L(w) − L(w′).

Proof of Lemma 22.1. By definition, we have

U2(w, θ) = i lim_{ε→0⁺} ∫_0^1 du ∫_{‖p‖≥R/2} (d⁴p/(2π)⁴) θ(p + uw) / (‖p‖² − u(1 − u)w² + m² − iε)².

If ‖p‖ is sufficiently large (which we can ensure by letting R be large), the denominator in the integrand will be far away from 0, so we can interchange limits and integrals and send ε → 0 to get

U2(w, θ) = i ∫_0^1 du ∫_{‖p‖≥R/2} (d⁴p/(2π)⁴) θ(p + uw) / (‖p‖² − u(1 − u)w² + m²)²
 = i ∫_0^1 du ∫_{‖p−uw‖≥R/2} (d⁴p/(2π)⁴) θ(p) / (‖p − uw‖² − u(1 − u)w² + m²)²,