Modern Electronic Structure Theory in Physical Chemistry (Lecture Notes)

These lecture notes discuss the limitations of solving electronic structure problems by hand and how fast personal computers allow chemists to use more accurate models of molecular electronic structure. They explain the basic idea of electronic structure theory, its role in interpreting experimental results, and how AO basis sets are expanded, including polarization and diffuse basis functions. They are intended for students studying physical chemistry and electronic structure theory.

5.61 Physical Chemistry, Lecture #32

MODERN ELECTRONIC STRUCTURE THEORY

At this point, we have more or less exhausted the list of electronic structure problems we can solve by hand. If we were limited to solving problems manually, there would be a lot of chemistry we wouldn't be able to explain! Fortunately, the advent of fast personal computers allows chemists to routinely use more accurate models of molecular electronic structure. These types of calculations typically play a significant role in interpreting experimental results: calculations can be used to assign spectra, evaluate reaction mechanisms and predict structures of molecules. In this way computation is complementary to experiment: when the two agree, we have confidence that our interpretation is correct.

The basic idea of electronic structure theory is that, within the Born-Oppenheimer approximation, we can fix the M nuclei in our molecule at some positions RI. Then we are left with the Hamiltonian for the electrons moving in the effective field created by the nuclei:

$$\hat{H} \equiv -\frac{1}{2}\sum_{i=1}^{N}\nabla_i^2 \;-\; \sum_{i=1}^{N}\sum_{I=1}^{M}\frac{Z_I}{\left|\hat{\mathbf{r}}_i-\mathbf{R}_I\right|} \;+\; \sum_{i<j}\frac{1}{\left|\hat{\mathbf{r}}_i-\hat{\mathbf{r}}_j\right|} \qquad \text{Eq. 1}$$

where the first term is the kinetic energy of all N electrons, the second term is the attraction between the electrons and nuclei, and the third is the pairwise repulsion between all the electrons. The central aim of electronic structure theory is to find all the eigenfunctions of this Hamiltonian. As we have seen, the eigenvalues we get will depend on our choice of the positions of the nuclei, Eel(R1,R2,R3,…,RM). As was the case with diatomics, these energies tell us how stable the molecule is with a given configuration of the nuclei {RI}: if Eel is very low, the molecule will be very stable, while if Eel is high, the molecule will be unstable in that configuration.

[Figure: a model potential energy surface Eel(R1,R2), marking the equilibrium conformation, an unstable intermediate, and the reaction barrier.]

The energy Eel(R1,R2,R3,…,RM) is called the potential energy surface, and it contains a wealth of information, as illustrated in the figure above. We can determine the equilibrium configuration of the molecule by looking for the minimum energy point on the potential energy surface. We can find metastable intermediate states by looking for local minima, i.e. minima that are not the lowest possible energy states, but which are separated from all other minima by energy barriers. In both of these cases, we are interested in points where ∇Eel = 0. Further, the potential surface can tell us about the activation energies between different minima and the pathways that are required to get from the "reactant" state to the "product" state.

Solving the electronic Schrödinger equation also gives us the electronic wavefunctions Ψel(r1,r2,r3,…,rN), which allow us to compute all kinds of electronic properties (average positions, momenta, uncertainties, etc.) as we have already seen for atoms. We note that while the Hamiltonian above will have many, many eigenstates, in most cases we will only be interested in the lowest eigenstate: the electronic ground state. The basic reason for this is that in stable molecules the lowest excited states are usually several eV above the ground state, and therefore not very important in chemical reactions, where the available energy is usually only tenths of an eV.
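To make the potential energy surface idea concrete, here is a minimal sketch that scans Eel for H2 as a function of bond length and picks out the grid point of lowest energy, i.e. the point where the gradient is approximately zero. The PySCF package, the RHF/STO-3G level of theory and the scan range are my own choices for illustration; they are not specified in the notes.

```python
# Minimal sketch (illustration only): scan a 1-D potential energy surface E(R)
# for H2 and locate its minimum, the point where dE/dR is approximately zero.
import numpy as np
from pyscf import gto, scf  # assumed package choice; any quantum chemistry code would do

bond_lengths = np.linspace(0.5, 2.0, 16)  # bond lengths in Angstrom
energies = []
for R in bond_lengths:
    mol = gto.M(atom=f"H 0 0 0; H 0 0 {R}", basis="sto-3g", unit="Angstrom", verbose=0)
    energies.append(scf.RHF(mol).kernel())  # total energy at this fixed nuclear geometry

energies = np.array(energies)
R_eq = bond_lengths[energies.argmin()]  # approximate equilibrium bond length on this grid
print(f"Approximate equilibrium bond length: {R_eq:.2f} Angstrom, E = {energies.min():.6f} Hartree")
```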
In cases where multiple electronic states are important, the Hamiltonian above will give us separate potential surfaces E1el, E2el, E3el, … and separate wavefunctions Ψ1el, Ψ2el, Ψ3el, …. The different potential surfaces will tell us about the favored conformations of the molecules in the different electronic states.

[Figure: the bonding (σ) and antibonding (σ*) potential surfaces of H2+ as a function of bond length.]

We have already seen this for H2+. When we solved for the electronic states, we got two eigenstates: σ and σ*. If we put the electron in the σ orbital, the molecule was bound and had a potential surface like the lower (σ) curve in the figure. However, if we put the electron in the σ* orbital, the molecule was not bound and we got the upper (σ*) curve.

[…]

… while using Gaussians may mean we have to use a few extra AOs, if we use enough of them we should be able to get the same answer. So we plan to use relatively large Gaussian basis sets for our calculations. How exactly do we choose those basis sets? Thankfully, a significant amount of trial-and-error research has distilled the choices down to a few key basis set features. Depending on the problem at hand and the accuracy desired, we only need to consider three aspects of the AO basis.

Single, Double, Triple, Quadruple Zeta Basis Sets

As we have already discussed for MO theory of diatomics, the smallest basis we can think of for describing bonding would include all the valence orbitals of each atom involved. Thus, for H we had 1 s-function, for C there were 2 s-functions and one set of p's. Similarly, for sulfur we would have needed 3 s-functions and 2 p's. A basis of this size is called a minimal or single zeta basis. The term "single zeta" refers to the fact that we have only a single set of the valence functions (Note: single valence might seem like a more appropriate name, but history made a different choice).

The most important way to expand the basis is to include more than a single set of valence functions. Thus, in a double zeta (DZ) basis set, one would include 2 s-functions for H, 3 s- and 2 p-functions for C, and 4 s- and 3 p-functions for S. Qualitatively, we think of these basis functions as coming from increasing the n quantum number: the first s-function on each atom is something like 1s, the second something like 2s, the third like 3s, and so on. Of course, since we are using Gaussians, they're not exactly 1s, 2s, 3s, but that's the basic idea. Going one step further, a triple zeta (TZ) basis would have: H = 3s, C = 4s3p, S = 5s4p. For quadruple zeta (QZ): H = 4s, C = 5s4p, S = 6s5p, and so on for 5Z, 6Z, 7Z. Thus, one has:

             H,He    Li-Ne    Na-Ar    Names
  Minimal    1s      2s1p     3s2p     STO-3G
  DZ         2s      3s2p     4s3p     3-21G, 6-31G, D95V
  TZ         3s      4s3p     5s4p     6-311G, TZV

Unfortunately, the commonly used names for basis sets follow somewhat uneven conventions. The basic problem is that many different folks develop basis sets, and each group has its own naming conventions. At the end of the table above, we've listed a few names of commonly used SZ, DZ and TZ basis sets. There aren't any commonly used QZ basis sets, because once your basis is that large, it is best to start including polarization functions (see below).
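Before moving on to polarization functions, here is a short sketch showing how the size of the AO basis grows along the SZ/DZ/TZ hierarchy for a small molecule. The PySCF package and the water geometry are assumptions made for illustration; the basis set names are taken from the "Names" column of the table above.

```python
# Minimal sketch (illustration only): count AO basis functions for water
# at the minimal (SZ), DZ and TZ levels using named basis sets from the table.
from pyscf import gto  # assumed package choice

geometry = "O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587"  # assumed water geometry (Angstrom)
for label, basis in [("minimal (SZ)", "sto-3g"), ("DZ", "6-31g"), ("TZ", "6-311g")]:
    mol = gto.M(atom=geometry, basis=basis, unit="Angstrom", verbose=0)
    print(f"{label:12s} {basis:8s} -> {mol.nao_nr()} AO basis functions")
```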
Polarization Basis Functions

Note that no matter how high you go in the DZ, TZ, QZ hierarchy, you will never, for example, get a p-function on hydrogen or a d-function on carbon. These functions tend to be important for describing polarization of the electrons; at a qualitative level, the p-functions aren't as flexible in their angular parts, and it's hard to get them to "point" in as many directions as d-functions. Thus, particularly when dealing with directional bonding in molecules, it can be important to include some of these higher angular momentum functions in your AO basis. In this situation the basis set is said to contain some "polarization" functions. The general nomenclature is to add the letter "P" to a basis set with a single set of polarization functions, and "2P" to a basis with two sets. Thus, a DZP basis would have 2s1p on hydrogen, 3s2p1d on carbon and 4s3p1d on sulfur. A TZP basis set would have 3s1p on hydrogen, 4s3p1d on carbon and 5s4p1d on sulfur.

          H,He    Li-Ne     Na-Ar     Names
  DZP     2s1p    3s2p1d    4s3p1d    6-31G(d,p), D95V
  TZP     3s1p    4s3p1d    5s4p1d    6-311G(d,p), TZVP

We note that in practice it is possible to mix-and-match different numbers of polarization functions with different levels of zeta basis sets. The nomenclature here is to put (xxx,yyy) after the name of the basis set: "xxx" specifies the number and type of polarization functions to be placed on hydrogen atoms, and "yyy" specifies the number and type of polarization functions to be placed on non-hydrogen atoms. Thus, we would have, for example:

                   H,He    Li-Ne       Na-Ar
  6-311G(2df,p)    3s1p    4s3p2d1f    5s4p2d1f

Diffuse Functions

Occasionally, and particularly when dealing with anions, the SZ/DZ/TZ/… hierarchy converges very slowly. For anions, this is because the extra electron is only very weakly bound, and therefore spends a lot of time far from the nucleus. It is therefore best to include a few basis functions that decay very slowly to describe this extra electron. Functions of this type are called "diffuse" functions. They are still Gaussians ($e^{-\alpha r^2}$), but the value of α is very, very small, causing the atomic orbital to decay slowly. Similar to the situation for polarization functions, diffuse functions can be added in a mix-and-match way to standard basis sets. Here, the notation "+" or "aug-" is added to a basis set to show that diffuse functions have been added. Thus, we have basis sets like 3-21++G, 6-31+G(d,p), aug-TZP.
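As a quick illustration of why diffuse functions matter for anions, the sketch below computes the hydroxide anion with and without an augmented ("aug-") basis. The PySCF package, the OH bond length, and the use of the correlation-consistent cc-pVDZ / aug-cc-pVDZ pair (a different family from the Pople-style sets in the tables above) are assumptions made purely for illustration.

```python
# Minimal sketch (illustration only): for an anion such as OH-, adding diffuse
# functions (the "aug-" prefix) noticeably changes the basis size and the energy,
# because the extra electron spends a lot of time far from the nuclei.
from pyscf import gto, scf  # assumed package choice

for basis in ["cc-pvdz", "aug-cc-pvdz"]:
    mol = gto.M(atom="O 0 0 0; H 0 0 0.97",  # assumed OH- geometry (Angstrom)
                basis=basis, charge=-1, spin=0, unit="Angstrom", verbose=0)
    e = scf.RHF(mol).kernel()
    print(f"{basis:12s}: {mol.nao_nr():3d} basis functions, E(RHF) = {e:.6f} Hartree")
```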
Aside: Transition Metals

Those of you interested in inorganic chemistry will note that no transition metals appear in the tables above. This is not because there aren't basis sets for transition metals; it is just more complicated to compare different transition metal basis sets. First, we note that many of the basis sets above are defined for transition metals. Thus, for example, a 6-31G(d,p) basis on iron is 5s4p2d1f, while a TZV basis for iron is 6s5p3d. The reason we didn't include this above is that the idea of "valence" for a transition metal is a subject of debate: is the valence an s- and a d-function? An s, a p and a d? Hence, depending on who put the basis set together, there will be some variation in the number of functions. However, one still expects the same ordering in terms of quality: TZ will be better than DZ, DZP will be better than a minimal basis, etc. Thus, you can freely use the above basis sets for all the elements between K and Kr without significant modification. Extending the above table for specific basis sets gives:

                 K-Ca      Sc-Zn       Ga-Kr
  3-21G          5s4p      5s4p2d      5s4p1d
  6-31G(d,p)     5s4p1d    5s4p2d1f    N/A
  6-311G(d,p)    8s7p2d    N/A         8s7p3d
  TZV            6s3p      6s3p2d      6s5p2d

Things also become more complicated when dealing with second-row transition metals. Here, relativistic effects become important, because the Schrödinger equation predicts that the 1s electrons in these atoms are actually moving at a fair fraction of the speed of light. Under these circumstances, the Schrödinger equation is not strictly correct and we need …

[…]

… change the ordering of different orbitals (e.g. σ might shift below π once interactions are included). Now, the molecular orbitals (and hence the energy) are determined by their coefficients. Finding the best orbitals is thus equivalent to finding the best coefficients. Mathematically, then, we want to find the orbitals that make the derivative of the IPM (independent particle model) energy zero:

$$\frac{\partial E_{\mathrm{IPM}}}{\partial c_i^{\alpha}} = \frac{\partial}{\partial c_i^{\alpha}}\left[\sum_{i=1}^{N} E_i + \sum_{i<j}\left(\tilde{J}_{ij} - \tilde{K}_{ij}\right)\right] = 0$$

In order to satisfy this condition, one typically resorts to an iterative procedure, where steps 2-5 of our MO procedure are performed repeatedly:

  1)  Choose an AO basis.
  1') Guess an IPM Hamiltonian Heff.
  2)  Build the Heff and S matrices.
  3)  Solve the eigenvalue problem.
  4)  Occupy the lowest orbitals.
  5)  Compute E and dE/dc. If dE/dc = 0, we are done; if not, choose a better Heff and return to step 2.

Here, Hartree-Fock (HF) makes use of the fact that defining an IPM Hamiltonian, Heff, completely determines the molecular orbital coefficients, c. Thus, the most convenient way to change the orbitals is actually to change the Hamiltonian that generates the orbitals. The calculation converges when we find the molecular orbitals that give us the lowest possible energy, because then dE/dc = 0. These iterations are called self-consistent field (SCF) iterations, and the effective Hamiltonian Heff is often called the Fock operator, in honor of one of the developers of the Hartree-Fock approximation. Generally, Hartree-Fock is not very accurate, but it is quite fast. On a decent computer, you can run a Hartree-Fock calculation on several hundred atoms quite easily, and the results are at least reasonable.
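To show what those SCF iterations look like in practice, here is a bare-bones restricted Hartree-Fock loop that follows the steps above. This is my own sketch, not code from the course: it assumes the PySCF package only to supply the AO integrals (S, the core Hamiltonian and the two-electron integrals), and it uses the standard closed-shell Fock matrix F = h + J - K/2, which goes slightly beyond the expressions quoted in this excerpt.

```python
# Minimal sketch (illustration only): an explicit restricted Hartree-Fock SCF loop
# mirroring the flowchart above. PySCF (assumed) supplies the integrals; the
# iteration itself is written out by hand.
import numpy as np
from scipy.linalg import eigh
from pyscf import gto

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g", unit="Angstrom", verbose=0)
S = mol.intor("int1e_ovlp")                           # overlap matrix
h = mol.intor("int1e_kin") + mol.intor("int1e_nuc")   # core (one-electron) Hamiltonian
eri = mol.intor("int2e")                              # two-electron integrals (pq|rs)
nocc = mol.nelectron // 2                             # number of doubly occupied orbitals

F = h.copy()                                          # step 1': initial guess for Heff
E_old = 0.0
for iteration in range(50):
    eps, C = eigh(F, S)                               # step 3: solve the eigenvalue problem
    C_occ = C[:, :nocc]                               # step 4: occupy the lowest orbitals
    D = 2.0 * C_occ @ C_occ.T                         # closed-shell density matrix
    J = np.einsum("pqrs,rs->pq", eri, D)              # Coulomb matrix
    K = np.einsum("prqs,rs->pq", eri, D)              # exchange matrix
    F = h + J - 0.5 * K                               # step 2: rebuild Heff (the Fock matrix)
    E = 0.5 * np.einsum("pq,pq->", D, h + F) + mol.energy_nuc()  # step 5: total energy
    if abs(E - E_old) < 1e-8:                         # dE ~ 0: self-consistency reached
        break
    E_old = E

print(f"E(RHF) = {E:.6f} Hartree after {iteration + 1} iterations")
```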
Density Functional Theory (DFT)

Here, we still use a Slater determinant to describe the electrons. Hence, the things we want to optimize are still the MO coefficients c. However, we use a different prescription for the energy, one that is entirely based on the electron density. For a single determinant, the electron density ρ(r) is just the probability of finding an electron at the point r. In terms of the occupied orbitals, the electron density for a Slater determinant is:

$$\rho(\mathbf{r}) = \sum_{\alpha=1}^{N} \left|\psi_{\alpha}(\mathbf{r})\right|^2 \qquad \text{Eq. 2}$$

This has a nice interpretation: |ψα(r)|² is the probability of finding an electron in orbital α at the point r. So the formula above tells us that for a determinant, the probability of finding an electron at a point r is just the sum of the probabilities of finding it in one of the orbitals at that point.

There is a deep theorem (the Hohenberg-Kohn theorem) that states: there exists a functional Ev[ρ] such that, given the ground state density ρ0, Ev[ρ0] = E0, where E0 is the exact ground state energy. Further, for any density ρ' that is not the ground state density, Ev[ρ'] > E0.

This result is rather remarkable. While solving the Schrödinger equation required a very complicated 3N-dimensional wavefunction Ψel(r1,r2,…,rN), this theorem tells us we only need to know the density, which is a 3D function, and we can get the exact ground state energy. Further, if we don't know the density, the second part of this theorem gives us a simple way to find it: just look for the density that minimizes the functional Ev.

The unfortunate point is that we don't know the form of the functional Ev. We can prove it exists, but we can't construct it. However, from a pragmatic point of view, we do have very good approximations to Ev, and the basic idea is to choose an approximate (but perhaps very, very good) form for Ev and then minimize the energy as a function of the density. That is, we look for the point where dEv/dρ = 0. Based on Eq. 2 above, we see that ρ just depends on the MOs and hence on the MO coefficients, so once again we are looking for the set of MO coefficients such that dEv/dc = 0. Given the similarity between DFT and HF, it is not surprising that DFT is also solved by self-consistent field iterations. In fact, in a standard electronic structure code, DFT and HF are performed in exactly the same manner (see the flowchart above). The only change is the way one computes the energy and dE/dc.

Now, as alluded to above, there exist good approximations (note the plural) to Ev. Just as was the case with approximate AO basis sets, these approximate energy expressions have strange abbreviations. We won't go into the fine differences between different DFT energy expressions here. I'll simply note that, roughly, the quality of the different functionals is expected to follow:

LSDA < PBE ≈ BLYP < PBE0 ≈ B3LYP

Thus, LSDA is typically the worst DFT approximation and B3LYP is typically among the best. I should mention that this is just a rule of thumb; unlike the case for basis sets, where we were approaching a well-defined limit, here we are trying various uncontrolled approximations to an unknown functional. Experience shows us that B3LYP is usually the best, but this need not always be the case. Finally, we note that the speed of a DFT calculation is about the same as Hartree-Fock: both involve self-consistent field iterations to determine the best set of orbitals, and so both take about the same amount of computer time. However, for the same amount of effort, you can get quite accurate results. As a rule of thumb, with B3LYP you can get energies correct to within 3 kcal/mol and distances correct to within 0.01 Å.
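Because DFT and HF are run in exactly the same way, trying several of the functionals named above is a one-line change in most codes. The sketch below is my own illustration: it assumes the PySCF package, an arbitrary water geometry and a 6-31G** basis, and uses PySCF's keywords for the LSDA, PBE, BLYP, PBE0 and B3LYP functionals mentioned in the rule of thumb above.

```python
# Minimal sketch (illustration only): a DFT calculation is set up exactly like
# Hartree-Fock; only the exchange-correlation functional (the energy expression) changes.
from pyscf import gto, scf, dft  # assumed package choice

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",  # assumed geometry
            basis="6-31g**", unit="Angstrom", verbose=0)

print(f"HF      : {scf.RHF(mol).kernel():.6f} Hartree")
for xc in ["lda,vwn", "pbe", "blyp", "pbe0", "b3lyp"]:
    mf = dft.RKS(mol)      # same SCF machinery as Hartree-Fock
    mf.xc = xc             # choose the approximate functional Ev
    print(f"{xc:8s}: {mf.kernel():.6f} Hartree")
```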
Post-Hartree-Fock Calculations

Here, the idea is to employ wavefunctions that are more flexible than a Slater determinant. This can be done by adding up various combinations of Slater determinants, by adding terms that explicitly correlate pairs of electrons (e.g. functions that depend on r1 and r2 simultaneously) and a variety of other creative techniques. These approaches are all aimed at …