Perturbation theory (quantum mechanics)

In quantum mechanics, perturbation theory is a set of approximation schemes, directly related to mathematical perturbation theory, for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system and gradually turn on an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system (e.g. its energy levels and eigenstates) will be continuously generated from those of the simple system. We can therefore study the former based on our knowledge of the latter.
Applications of perturbation theory
Perturbation theory is an extremely important tool for describing real quantum systems, as it turns out to be very difficult to find exact solutions to the Schrödinger equation for Hamiltonians of even moderate complexity. The Hamiltonians to which we know exact solutions, such as the hydrogen atom, the quantum harmonic oscillator and the particle in a box, are too idealized to adequately describe most systems. Using perturbation theory, we can use the known solutions of these simple Hamiltonians to generate solutions for a range of more complicated systems. For example, by adding a perturbative electric potential to the quantum mechanical model of the hydrogen atom, we can calculate the tiny shifts in the spectral lines of hydrogen caused by the presence of an electric field (the Stark effect). This is only approximate because the sum of a Coulomb potential with a linear potential is unstable, although the tunneling time (decay rate) is very long. This shows up as a broadening of the energy spectrum lines, something which perturbation theory fails to notice entirely.
The expressions produced by perturbation theory are not exact, but they can lead to accurate results as long as the expansion parameter, say <math>\alpha</math>, is very small. Typically, the results are expressed in terms of finite power series in <math>\alpha</math> that seem to converge to the exact values when summed to higher order. After a certain order <math>n\sim 1/\alpha</math>, however, the results become increasingly worse since the series are usually divergent (being asymptotic series). There exist ways to convert them into convergent series, which can be evaluated for large expansion parameters, most efficiently by variational perturbation theory.
In the theory of quantum electrodynamics (QED), in which the electron-photon interaction is treated perturbatively, the calculation of the electron's magnetic moment has been found to agree with experiment to eleven decimal places. In QED and other quantum field theories, special calculation techniques known as Feynman diagrams are used to systematically sum the power series terms.
Under some circumstances, perturbation theory is an invalid approach to take. This happens when the system we wish to describe cannot be described by a small perturbation imposed on some simple system. In quantum chromodynamics, for instance, the interaction of quarks with the gluon field cannot be treated perturbatively at low energies because the coupling constant (the expansion parameter) becomes too large. Perturbation theory also fails to describe states that are not generated adiabatically from the "free model", including bound states and various collective phenomena such as solitons. Imagine, for example, that we have a system of free (i.e. non-interacting) particles, to which an attractive interaction is introduced. Depending on the form of the interaction, this may create an entirely new set of eigenstates corresponding to groups of particles bound to one another. An example of this phenomenon may be found in conventional superconductivity, in which the phonon-mediated attraction between conduction electrons leads to the formation of correlated electron pairs known as Cooper pairs. Perturbation theory fails here because there is no analogue of a bound particle in the unperturbed model; moreover, the energy of a soliton typically goes as the inverse of the expansion parameter, so perturbation theory can only detect solutions "close" to the unperturbed solution and misses solutions whose energy blows up as the expansion parameter goes to zero. When faced with such systems, one usually turns to other approximation schemes, such as the variational method and the WKB approximation.
The problem of non-perturbative systems has been somewhat alleviated by the advent of modern computers. It has become practical to obtain numerical non-perturbative solutions for certain problems, using methods such as density functional theory. These advances have been of particular benefit to the field of quantum chemistry. Computers have also been used to carry out perturbation theory calculations to extraordinarily high levels of precision, which has proven important in particle physics for generating theoretical results that can be compared with experiment.
Time-independent perturbation theory
There are two categories of perturbation theory: time-independent and time-dependent. In this section, we discuss time-independent perturbation theory, in which the perturbation Hamiltonian is static (i.e., possesses no time dependence). Time-independent perturbation theory was invented by Erwin Schrödinger in 1926, shortly after he invented wave mechanics.
We begin with an unperturbed Hamiltonian H_{0}, which is also assumed to have no time dependence. It has known energy levels and eigenstates, arising from the time-independent Schrödinger equation:
 <math> H_0 |n^{(0)}\rang = E_n^{(0)} |n^{(0)}\rang \quad,\quad n = 1, 2, 3, \cdots </math>
For simplicity, we have assumed that the energies are discrete. The (0) superscripts denote that these quantities are associated with the unperturbed system.
We now introduce a perturbation to the Hamiltonian. Let V be a Hamiltonian representing a weak physical disturbance, such as a potential energy produced by an external field. (Thus, V is formally a Hermitian operator.) Let λ be a dimensionless parameter that can take on values ranging continuously from 0 (no perturbation) to 1 (the full perturbation). The perturbed Hamiltonian is
 <math> H = H_0 + \lambda V </math>
The energy levels and eigenstates of the perturbed Hamiltonian are again given by the Schrödinger equation:
 <math> \left(H_0 + \lambda V \right) |n\rang = E_n |n\rang </math>
Our goal is to express E_{n} and |n> in terms of the energy levels and eigenstates of the old Hamiltonian. If the perturbation is sufficiently weak, we can write them as power series in λ:
 <math> E_n = E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \cdots </math>
 <math> |n\rang = |n^{(0)}\rang + \lambda |n^{(1)}\rang + \lambda^2 |n^{(2)}\rang + \cdots </math>
When λ = 0, these reduce to the unperturbed values, which are the first term in each series. Since the perturbation is weak, the energy levels and eigenstates should not deviate too much from their unperturbed values, and the terms should rapidly become smaller as we go to higher order.
Plugging the power series into the Schrödinger equation, we obtain
 <math>\begin{matrix}
\left(H_0 + \lambda V \right) \left(|n^{(0)}\rang + \lambda |n^{(1)}\rang + \cdots \right) \qquad\qquad\qquad\qquad\\ \qquad\qquad= \left(E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \cdots \right) \left(|n^{(0)}\rang + \lambda |n^{(1)}\rang + \cdots \right) \end{matrix}</math>
Expanding this equation and comparing coefficients of each power of λ results in an infinite series of simultaneous equations. The zeroth-order equation is simply the Schrödinger equation for the unperturbed system. The first-order equation is
 <math> H_0 |n^{(1)}\rang + V |n^{(0)}\rang = E_n^{(0)} |n^{(1)}\rang + E_n^{(1)} |n^{(0)}\rang </math>
Taking the inner product of both sides with <n^{(0)}|, this leads to the first-order energy shift:
 <math> E_n^{(1)} = \langle n^{(0)} | V | n^{(0)} \rangle </math>
This is simply the expected value of the perturbation Hamiltonian while the system is in the unperturbed state. This result can be interpreted in the following way: suppose the perturbation is applied, but we keep the system in the quantum state |n^{(0)}>, which is a valid quantum state though no longer an energy eigenstate. The perturbation causes the average energy of this state to increase by <n^{(0)}|V|n^{(0)}>. However, the true energy shift is slightly different, because the perturbed eigenstate is not exactly the same as |n^{(0)}>. These further shifts are given by the second- and higher-order deviations.
To obtain the first-order deviation in the energy eigenstate, we insert our expression for the first-order energy shift back into the equation above relating the first-order coefficients of λ. We then make use of the resolution of the identity,
 <math> V|n^{(0)}\rangle = \left( \sum_{k} |k^{(0)}\rangle\langle k^{(0)}| \right) V|n^{(0)}\rangle </math>
The result is
 <math> \left(E_n^{(0)} - H_0 \right) |n^{(1)}\rang = \sum_{k \ne n} \left(\langle k^{(0)}|V|n^{(0)} \rangle \right) |k^{(0)}\rang </math>
For the moment, suppose that this energy level is not degenerate, i.e. there is no other eigenstate with the same energy. The operator on the left-hand side therefore has a well-defined inverse, and we get
 <math> |n^{(1)}\rang = \sum_{k \ne n} \frac{\langle k^{(0)}|V|n^{(0)} \rangle}{E_n^{(0)} - E_k^{(0)}} |k^{(0)}\rang </math>
The first-order change in the nth energy eigenket has a contribution from each of the energy eigenstates k ≠ n. Each term is proportional to the matrix element <k^{(0)}|V|n^{(0)}>, which is a measure of how much the perturbation mixes eigenstate n with eigenstate k; it is also inversely proportional to the energy difference between eigenstates k and n, which means that the perturbation deforms the eigenstate to a greater extent if there are many eigenstates at nearby energies. We see also that the expression is singular if any of these states has the same energy as state n, which is why we assumed that there is no degeneracy.
We can find the higher-order deviations by a similar procedure, though the calculations become quite tedious with our current formulation. For example, the second-order energy shift is
 <math>E_n^{(2)} = \sum_{k \ne n} \frac{\left|\langle k^{(0)}|V|n^{(0)} \rangle\right|^2} {E_n^{(0)} - E_k^{(0)}} </math>
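As a quick numerical sanity check (not part of the original derivation; the small Hermitian matrix below is an arbitrary stand-in for a Hamiltonian), the first- and second-order formulas above can be compared against exact diagonalization:

```python
import numpy as np

# Toy model: diagonal H0 with nondegenerate levels, plus a random
# Hermitian perturbation V and a small expansion parameter lam.
rng = np.random.default_rng(0)
N, lam, n = 6, 0.01, 2
E0 = np.arange(N, dtype=float)              # unperturbed energies E_n^(0)
H0 = np.diag(E0)
A = rng.standard_normal((N, N))
V = (A + A.T) / 2                           # Hermitian (real symmetric) V

E1 = V[n, n]                                # <n|V|n>
E2 = sum(V[k, n] ** 2 / (E0[n] - E0[k])     # sum_{k!=n} |<k|V|n>|^2 / (E_n - E_k)
         for k in range(N) if k != n)

# First-order state correction: sum_{k!=n} <k|V|n> / (E_n - E_k) |k>
n1 = np.array([V[k, n] / (E0[n] - E0[k]) if k != n else 0.0
               for k in range(N)])

E_pert = E0[n] + lam * E1 + lam ** 2 * E2
vec_pert = np.eye(N)[:, n] + lam * n1       # |n> + lam |n^(1)>

w, U = np.linalg.eigh(H0 + lam * V)         # exact diagonalization
E_exact = w[n]                              # ordering preserved for small lam
vec_exact = U[:, n] * np.sign(U[n, n])      # fix the arbitrary overall sign
print(E_pert - E_exact)                     # residual of order lam^3
```

Halving λ should shrink the energy residual by roughly a factor of eight, which is a convenient check on the signs in the denominators.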
Effects of degeneracy
Suppose that two or more energy eigenstates are degenerate. Our above calculation for the first-order energy shift is unaffected, but the calculation of the change in the eigenstate becomes problematic, because the operator
 <math> E_n^{(0)} - H_0 </math>
does not have a well-defined inverse.
This is actually a conceptual, rather than mathematical, problem. Imagine that we have two or more perturbed eigenstates with different energies, which are continuously generated from an equal number of unperturbed eigenstates that are degenerate. Let D denote the subspace spanned by these degenerate eigenstates. The problem lies in the fact that there is no unique way to choose a basis of energy eigenstates for the unperturbed system. In particular, we could construct a different basis for D by choosing different linear combinations of the spanning eigenstates. In such a basis, the unperturbed eigenstates would not continuously generate the perturbed eigenstates.
We thus see that, in the presence of degeneracy, perturbation theory does not work with an arbitrary choice of energy basis. We must instead choose a basis so that the perturbation Hamiltonian is diagonal in the degenerate subspace D. In other words,
 <math>V |k^{(0)}\rangle = \epsilon_k |k^{(0)}\rangle + \mbox{(terms not in D)} \qquad \forall \; |k^{(0)}\rangle \in D </math>
In that case, our equation for the first-order deviation in the energy eigenstate reduces to
 <math> \left(E_n^{(0)} - H_0 \right) |n^{(1)}\rang = \sum_{k \not\in D} \left(\langle k^{(0)}|V|n^{(0)} \rangle \right) |k^{(0)}\rang </math>
The operator on the left hand side is not singular when applied to eigenstates outside D, so we can write
 <math> |n^{(1)}\rang = \sum_{k \not\in D} \frac{\langle k^{(0)}|V|n^{(0)} \rangle}{E_n^{(0)} - E_k^{(0)}} |k^{(0)}\rang </math>
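To make the prescription concrete, here is a small numerical sketch (an invented four-level toy model, not from the article): the eigenvalues of V restricted to the degenerate subspace D give the first-order splittings of the degenerate level.

```python
import numpy as np

# Toy model: four levels, the two at indices 1 and 2 degenerate.
rng = np.random.default_rng(1)
lam = 1e-3
E0 = np.array([0.0, 1.0, 1.0, 2.0])
H0 = np.diag(E0)
A = rng.standard_normal((4, 4))
V = (A + A.T) / 2                            # Hermitian perturbation

D = [1, 2]                                   # indices spanning the degenerate subspace D
eps = np.linalg.eigvalsh(V[np.ix_(D, D)])    # eigenvalues of V restricted to D

exact = np.linalg.eigvalsh(H0 + lam * V)[D]  # the two perturbed levels
shifts = exact - E0[D]
print(shifts, lam * eps)                     # agree up to O(lam^2)
```

Both arrays come out sorted in ascending order, so they can be compared entry by entry; the residual is second order in λ.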
Time-dependent perturbation theory
Time-dependent perturbation theory, developed by Paul Dirac, studies the effect of a time-dependent perturbation V(t) applied to a time-independent Hamiltonian H_{0}. Since the perturbed Hamiltonian is time-dependent, so are its energy levels and eigenstates. Therefore, the goals of time-dependent perturbation theory are slightly different from those of time-independent perturbation theory. We are interested in the following quantities:
 The time-dependent expected value of some observable A, for a given initial state.
 The time-dependent amplitudes of those quantum states that are energy eigenkets in the unperturbed system.
The first quantity is important because it gives rise to the classical result of an A measurement performed on a macroscopic number of copies of the perturbed system. For example, we could take A to be the displacement in the x-direction of the electron in a hydrogen atom, in which case the expected value, when multiplied by an appropriate coefficient, gives the time-dependent electrical polarization of a hydrogen gas. With an appropriate choice of perturbation (i.e. an oscillating electric potential), this allows us to calculate the AC permittivity of the gas.
The second quantity looks at the time-dependent probability of occupation for each eigenstate. This is particularly useful in laser physics, where one is interested in the populations of different atomic states in a gas when a time-dependent electric field is applied. These probabilities are also useful for calculating the "quantum broadening" of spectral lines (see line broadening).
We will briefly examine the ideas behind Dirac's formulation of time-dependent perturbation theory. Choose an energy basis {|n>} for the unperturbed system. (We will drop the (0) superscripts for the eigenstates, because it is not meaningful to speak of energy levels and eigenstates for the perturbed system.)
If the unperturbed system is in eigenstate |j> at time t = 0, its state at subsequent times varies only by a phase (we are following the Schrödinger picture, where state vectors evolve in time and operators are constant):
 <math> |j(t)\rang = e^{-iE_j t /\hbar} |j\rang </math>
We now introduce a time-dependent perturbing Hamiltonian V(t). The Hamiltonian of the perturbed system is
 <math> H = H_0 + V(t) </math>
Let |ψ(t)> denote the quantum state of the perturbed system at time t. It obeys the time-dependent Schrödinger equation,
 <math> H |\psi(t)\rang = i\hbar \frac{\partial}{\partial t} |\psi(t)\rang</math>
The quantum state at each instant can be expressed as a linear combination of the basis {|n>}. We can write the linear combination as
 <math> |\psi(t)\rang = \sum_n c_n(t) e^{- i E_n t / \hbar} |n\rang </math>
where the c_{n}(t)s are undetermined complex functions of t which we will refer to as amplitudes (strictly speaking, they are the amplitudes in the Dirac picture). We have explicitly extracted the exponential phase factors exp(−iE_{n}t/ħ) on the right hand side. This is only a matter of convention, and may be done without loss of generality. The reason we go to this trouble is that when the system starts in the state |j> and no perturbation is present, the amplitudes have the convenient property that, for all t, c_{j}(t) = 1 and c_{n}(t) = 0 if n ≠ j.
The absolute square of the amplitude c_{n}(t) is the probability that the system is in state n at time t, since
 <math> \left|c_n(t)\right|^2 = \left|\lang n|\psi(t)\rang\right|^2</math>
Plugging into the Schrödinger equation and using the product rule for differentiation, we obtain
 <math> \sum_n \left( i\hbar \frac{\partial c_n}{\partial t} - c_n(t) V(t) \right) e^{- i E_n t / \hbar} |n\rang = 0</math>
By resolving the identity in front of V, this can be reduced to a set of coupled differential equations for the amplitudes:
 <math> \frac{\partial c_n}{\partial t} = \frac{-i}{\hbar} \sum_k \lang n|V(t)|k\rang \,c_k(t)\, e^{-i(E_k - E_n)t/\hbar} </math>
The matrix elements of V play a similar role as in time-independent perturbation theory, being proportional to the rate at which amplitudes are shifted between states. Note, however, that the direction of the shift is modified by the exponential phase factor. Over times much longer than ħ/(E_{k} − E_{n}), the phase winds through many cycles. If the time-dependence of V is sufficiently slow, this may cause the state amplitudes to oscillate. Such oscillations are useful for managing radiative transitions in a laser.
Up to this point, we have made no approximations, so this set of differential equations is exact. By supplying appropriate initial values c_{n}(0), we could in principle find an exact (i.e. nonperturbative) solution. This is easily done when there are only two energy levels (n = 1, 2), and the solution is useful for modelling systems like the ammonia molecule. However, exact solutions are difficult to find when there are many energy levels, and one instead looks for perturbative solutions, which may be obtained by putting the equations in an integral form:
 <math> c_n(t) = c_n(0) + \frac{-i}{\hbar} \sum_k \int_0^t dt' \;\lang n|V(t')|k\rang \,c_k(t')\, e^{-i(E_k - E_n)t'/\hbar} </math>
By repeatedly substituting this expression for c_{n} back into the right-hand side, we get an iterative solution
 <math>c_n(t) = c_n^{(0)} + c_n^{(1)} + c_n^{(2)} + \cdots</math>
where, for example, the first-order term is
 <math>c_n^{(1)}(t) = \frac{-i}{\hbar} \sum_k \int_0^t dt' \;\lang n|V(t')|k\rang \, c_k(0) \, e^{-i(E_k - E_n)t'/\hbar} </math>
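As a numerical illustration (a two-level toy model with ħ = 1 and a constant V switched on at t = 0; these choices are assumptions, not taken from the article), one can integrate the exact amplitude equations directly and compare with the first-order term:

```python
import numpy as np

# Two-level toy model, hbar = 1.  The system starts in state 1, and a
# weak, constant, off-diagonal perturbation is switched on at t = 0.
E = np.array([0.0, 1.0])
V = np.array([[0.0, 0.05],
              [0.05, 0.0]])
c = np.array([1.0, 0.0], dtype=complex)      # amplitudes c_n(0)
T, nsteps = 2.0, 20000
dt = T / nsteps

# Euler integration of dc_n/dt = -i sum_k V_nk c_k e^{-i(E_k - E_n) t}
for step in range(nsteps):
    t = step * dt
    phase = np.exp(-1j * (E[None, :] - E[:, None]) * t)  # (n, k) entry: e^{-i(E_k-E_n)t}
    c = c + dt * (-1j) * (V * phase) @ c

# First-order formula with c_1(0) = 1 and time-independent V:
# c_2^(1)(T) = -i V_21 * (e^{-i(E_1-E_2)T} - 1) / (-i (E_1 - E_2))
w = E[0] - E[1]
c2_first = (-1j) * V[1, 0] * (np.exp(-1j * w * T) - 1) / (-1j * w)
print(abs(c[1]), abs(c2_first))              # close, since V is weak
```

Making V[0, 1] larger (say 0.5) makes the two numbers visibly disagree, showing where the first-order approximation breaks down.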
Many further results may be obtained, such as Fermi's golden rule, which relates the rate of transitions between quantum states to the density of states at particular energies, and the Dyson series, obtained by applying the iterative method to the time evolution operator, which is one of the starting points for the method of Feynman diagrams.