Nyquist–Shannon sampling theorem

The Nyquist–Shannon sampling theorem is a fundamental theorem of information theory, in particular of telecommunications. It is also known as the Whittaker–Nyquist–Kotelnikov–Shannon sampling theorem, or simply the sampling theorem.
The theorem states that:
 when sampling a signal (e.g., converting an analog signal to digital form), the sampling frequency must be greater than twice the bandwidth of the input signal in order to be able to reconstruct the original perfectly from the sampled version.
If B is the bandwidth and <math>F_s<math> is the sampling rate, then the theorem can be stated mathematically (called the "sampling condition" from here on) as
 <math>2 B < F_s<math>
IMPORTANT NOTE: This theorem is commonly misstated or misunderstood (and even mistaught). The sampling rate must be greater than twice the signal bandwidth, not twice the maximum/highest frequency. A signal is a baseband signal if its maximum frequency coincides with its bandwidth, which means its spectrum extends down to zero hertz. Not all signals are baseband signals (e.g., FM radio). This principle finds practical application in the "IF-sampling" techniques used in some digital receivers.
Aliasing
If the sampling condition is not satisfied, then frequencies will overlap (see the proof). This overlap is called aliasing.
To prevent aliasing, two things can readily be done:
 Increase the sampling rate
 Introduce an antialiasing filter, or make an existing antialiasing filter more stringent
The purpose of the antialiasing filter is to restrict the bandwidth of the signal so that it satisfies the sampling condition. This holds in theory, but is not fully achievable in practice, because a real signal always has some energy outside its nominal bandwidth. However, that energy can be made small enough that the aliasing effects are negligible.
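As a minimal numerical sketch of aliasing (plain Python, with an assumed 10 Hz sampling rate), two sinusoids whose frequencies sum to the sampling rate produce exactly the same samples and are therefore indistinguishable after sampling:

```python
import math

fs = 10.0                   # assumed sampling rate in Hz
f_low, f_high = 3.0, 7.0    # 7 Hz aliases to 3 Hz, since 7 = 10 - 3

# Sample both cosines at t = n / fs.
samples_low = [math.cos(2 * math.pi * f_low * n / fs) for n in range(20)]
samples_high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(20)]

# The two sets of samples coincide: the 7 Hz tone "wears the alias" of 3 Hz.
for a, b in zip(samples_low, samples_high):
    assert abs(a - b) < 1e-9
```

An antialiasing filter applied before sampling would remove the 7 Hz component, so that only the unambiguous 3 Hz component reaches the sampler.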
Downsampling
When a signal is downsampled, the theorem must still be satisfied. This is achieved by first filtering the signal with an appropriate antialiasing filter, reducing its bandwidth to match the new, lower sampling rate.
Critical frequency
The critical frequency is defined as exactly twice the bandwidth, i.e., the sampling rate at which the sampling condition becomes an equality instead of a strict inequality.
If the sampling frequency is exactly twice the highest frequency of the input signal, then phase mismatches between the sampler and the signal will distort the signal. For example, sampling <math>\cos(\pi t)<math> at <math>t=0,1,2\dots<math> will give you the discrete signal <math>\cos(\pi n)<math>, as desired. However, sampling the same signal at <math>t=0.5,1.5,2.5\dots<math> will give you a constant zero signal. These two sets of samples, which differ only in phase and not frequency, give dramatically different results because they sample at exactly the critical frequency.
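This phase sensitivity at the critical frequency can be checked directly; the following is a small Python sketch of the example above (the number of samples is arbitrary):

```python
import math

# cos(pi * t) has frequency 0.5 Hz; sampling at integer t is exactly 1 Hz,
# i.e., exactly the critical frequency.
on_peaks = [math.cos(math.pi * t) for t in range(6)]           # t = 0, 1, 2, ...
off_peaks = [math.cos(math.pi * (t + 0.5)) for t in range(6)]  # t = 0.5, 1.5, ...

# In-phase sampling captures the alternating signal +1, -1, +1, ...
assert all(abs(abs(v) - 1.0) < 1e-9 for v in on_peaks)
# ... while the quarter-period-shifted sampling sees only zeros.
assert all(abs(v) < 1e-9 for v in off_peaks)
```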
Historical background
The theorem was first formulated by Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), but was only formally proven by Claude E. Shannon in 1949 ("Communication in the presence of noise"). Kotelnikov published in 1933, Whittaker in 1935, and Gabor in 1946.
Mathematically, the theorem is formulated as a statement about the Fourier transform.
If a function <math>s(t) \ <math> has a Fourier transform <math>\mathcal{F} \{s(t) \} = S(f) = 0 \ <math> for <math>|f| \ge W \ <math>, then it is completely determined by its values at a series of points spaced <math>\frac{1}{2 W}<math> apart. The values <math>s_n = s(\frac{n}{2 W})<math> are called the samples of s(t).
The minimum sampling frequency that allows reconstruction of the original signal, that is <math>2 W \ <math> samples per unit time, is known as the Nyquist rate (the term Nyquist frequency usually denotes half the sampling rate). The time in between samples is called the Nyquist interval.
If <math>S(f) = 0 \ <math> for <math>|f| \ge W \ <math>, then <math>s(t) \ <math> can be recovered from its samples by the Nyquist–Shannon interpolation formula.
A well-known consequence of the sampling theorem is that a signal cannot be both bandlimited and timelimited. To see why, assume that such a signal exists, and sample it faster than the Nyquist rate. A timelimited signal yields only finitely many nonzero samples, and these finitely many time-domain coefficients should define the entire signal. Equivalently, the entire spectrum of the bandlimited signal should be expressible in terms of these finitely many coefficients, making it a (trigonometric) polynomial of finite order. But since the signal is bandlimited, this polynomial must vanish on an entire interval of frequencies beyond the critical frequency, which contains infinitely many points, and so the polynomial must have infinitely many zeros. However, a nonzero polynomial cannot have more zeros than its order, by the fundamental theorem of algebra. This contradiction shows that our original assumption, that a timelimited and bandlimited signal exists, is incorrect.
Undersampling
When sampling a non-baseband signal, the theorem states that the sampling rate need only be greater than twice the bandwidth W if the frequency band is some interval [NW,(N+1)W] for an integer N. Doing this results in a sampling rate less than the center frequency (N+1/2)W of the signal. In other cases of non-baseband signals the intuitive sampling-by-taking-values has to be replaced by sampling-by-taking-scalar-products, as is (implicitly) the case in frequency-division multiplexing.
Consider FM radio to illustrate the idea of undersampling. In the US, FM radio operates on the frequency band from 88 MHz to 108 MHz. To satisfy the sampling condition, the covering bandwidth W must be at least the 20 MHz occupied by the band, and the band must fit inside an interval [NW,(N+1)W]; it is easy to verify that N=4 and W=22 MHz give the interval from 88 MHz to 110 MHz, which contains the FM band. So the sampling rate needs to be greater than 44 MHz. Clearly 44 MHz is less than 88 or 108 MHz, so this is a scenario of undersampling.
Note that when undersampling a real-world signal, the sampling circuit must be fast enough to capture the highest signal frequency of interest. Theoretically, each sample should be taken during an infinitesimally short interval, but this is not practically feasible. Instead, the sampling of the signal should be made in an interval short enough to represent the instantaneous value of the signal's highest-frequency component. This means that in the FM radio example above, the sampling circuit must be able to capture a signal with a frequency of 110 MHz, not 44 MHz. Thus, the sampling frequency may be only a little greater than 44 MHz, but the input bandwidth of the system must be at least 110 MHz.
If the theorem is misunderstood to require twice the highest frequency, then the sampling rate would be assumed to need to be greater than 216 MHz. While such a rate does satisfy the correctly applied sampling condition (<math>40 \mbox{ MHz} < F_s<math>), the signal is grossly oversampled.
Note that if the FM radio band is sampled at >40 MHz, then a bandpass filter, rather than a lowpass filter, is required as the antialiasing filter.
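The numbers in this example can be verified with a short sketch (plain Python; the values N=4 and W=22 MHz are the ones chosen above):

```python
# Verify the undersampling numbers for the US FM band (88-108 MHz).
f_lo, f_hi = 88e6, 108e6   # band edges in Hz
N, W = 4, 22e6             # choice of integer N and covering bandwidth W

# The band [88, 108] MHz must fit inside [N*W, (N+1)*W] = [88, 110] MHz.
assert N * W <= f_lo and f_hi <= (N + 1) * W

min_rate = 2 * W           # sampling condition: F_s > 2 W
assert min_rate == 44e6

# Undersampling: the required rate is well below the band itself.
assert min_rate < f_lo
```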
In certain problems, the frequencies of interest are not an interval of frequencies, but perhaps some more general set F of frequencies. Again, the sampling frequency must be at least twice the total size (measure) of F. For instance, certain domain decomposition methods fail to converge for the 0th frequency (the constant mode) and some medium frequencies. The set of interesting frequencies might then be something like 10 Hz to 100 Hz and 110 Hz to 200 Hz. In this case, one would need to sample at a total data rate of 360 Hz (e.g., at a sampling rate of 20 Hz with 18 real values in each sample), not 400 Hz, to fully capture these signals.
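A quick sketch of this arithmetic (plain Python; the band edges are the hypothetical ones above):

```python
# Total measure of the interesting frequency set: 10-100 Hz and 110-200 Hz.
bands = [(10.0, 100.0), (110.0, 200.0)]
measure = sum(hi - lo for lo, hi in bands)   # 90 + 90 = 180 Hz

# Sampling condition applied to the set: rate proportional to its size.
required_rate = 2 * measure
assert required_rate == 360.0   # not 2 * 200 = 400 Hz

# One way to realize it, as in the text: 20 Hz sampling, 18 values per sample.
assert 20 * 18 == 360
```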
Proof
To prove the theorem, consider two continuously defined signals: an arbitrary continuous signal <math>f(t)<math> with fast decay at infinity, and a Dirac comb <math>\delta_T(t)<math> with period <math>T<math>.
Let the result of the multiplication be
 <math>
f^{*}(t) = f(t) \delta_T(t) = f(t) \sum_{n=-\infty}^{\infty} \delta(t - n T) =\sum_{n=-\infty}^{\infty} f(nT)\delta(t - n T)<math>
Taking the Fourier transform and applying the multiplication/convolution property gives
 <math>F^{*}(\omega) = \mathcal{F} \{f^{*}(t)\}<math>
 <math>= \frac{1}{\sqrt{2 \pi}} F(\omega) * \mathcal{F}\{\delta_T(t)\}<math>
 <math>= \frac{1}{\sqrt{2 \pi}} F(\omega) * \left\{ \frac{\sqrt{2 \pi}}{T} \sum_{n = -\infty}^{\infty} \delta (\omega - n \omega_s) \right\}<math>,
and by the shifting property of the Dirac delta distribution under convolution,
 <math>F^{*}(\omega) = \frac{1}{T} \sum_{n = -\infty}^{\infty} F(\omega - n \omega_s)<math>
where <math>\omega_s = \frac{2 \pi}{T}<math> is the (angular) sampling rate.
Looking at the combed function in a second way, we see that
 <math>F^{*}(\omega)
= \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^\infty \sum_{n=-\infty}^{\infty} f(nT) \delta(t - n T) e^{-i\omega t}\,dt = \frac{1}{\sqrt{2 \pi}}\sum_{n=-\infty}^{\infty} f(nT) e^{-i\omega n T} <math>. The integral is to be interpreted as the application of the compactly supported distribution <math>\delta<math> to the bounded, continuous function <math>e^{-i\omega t}<math>.
The end result is a summation of shifted copies of <math>F(\omega)<math>, which is equal to a Fourier series whose coefficients are the values of f at equally spaced points. This is just the Poisson summation formula.
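This identity can be sanity-checked numerically. The sketch below (plain Python, using the unitary-transform convention of this article and arbitrarily chosen values of <math>T<math> and <math>\omega<math>) takes a Gaussian, whose Fourier transform is known in closed form, and compares the two expressions for <math>F^*(\omega)<math> derived above:

```python
import cmath
import math

T = 0.5                     # sampling interval (assumed)
w_s = 2 * math.pi / T       # angular sampling rate
w = 0.3                     # arbitrary test frequency

f = lambda t: math.exp(-t * t / 2)      # Gaussian signal
F = lambda u: math.exp(-u * u / 2)      # its unitary Fourier transform

# One side: Fourier series built from the samples f(nT).
lhs = sum(f(n * T) * cmath.exp(-1j * w * n * T) for n in range(-40, 41))
lhs /= math.sqrt(2 * math.pi)

# Other side: sum of shifted copies of the spectrum, scaled by 1/T.
rhs = sum(F(w - k * w_s) for k in range(-4, 5)) / T

# Poisson summation: the two agree (up to truncation of the rapidly
# decaying Gaussian tails).
assert abs(lhs - rhs) < 1e-9
```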
Let <math>\omega_{\max}<math> be the maximum frequency of <math>F(\omega)<math>; then the support of <math>F(\omega)<math> is bounded to <math>\left[ -\omega_{\max}, \omega_{\max} \right]<math>. The bandwidth of <math>F(\omega)<math> is then <math>2 \omega_{\max}<math>. In order for the replicated copies of <math>F(\omega)<math>, shifted by multiples of <math>\omega_s<math>, not to overlap, the condition <math>2 \omega_{\max} < \omega_s<math> must hold. In this case <math>T F^*(\omega)=F(\omega)<math> on <math>\omega\in[-\omega_s/2,\omega_s/2]<math> (note the factor <math>\frac{1}{T}<math> in the expression for <math>F^*(\omega)<math> above) and
 <math>
f(t) = \frac{T}{\sqrt{2\pi}} \int_{-\omega_s / 2}^{\omega_s / 2} F^*(\omega) e^{i\omega t} \, d\omega =\frac{T}{2\pi}\sum_{n=-\infty}^{\infty} f(nT) \int_{-\omega_s/2}^{\omega_s/2} e^{i\omega (t-nT)}\,d\omega <math>
 <math>=\sum_{n=-\infty}^{\infty} f(nT) \, \frac{\sin(\pi(t/T-n))}{\pi(t/T-n)}
=\sum_{n=-\infty}^{\infty} f(nT) \, \mbox{sinc}(t/T-n) <math>, the last expression being a cardinal series with the (normalized) cardinal sine function as interpolation kernel.
Otherwise, if <math>\omega_s<math> is not sufficiently large then the terms of the summation will overlap and aliasing will be introduced.
Although the theorem states the sampling condition in terms of the bandwidth rather than the highest frequency, the proof above (written for a baseband signal) still holds. One has just to make sure that <math>F^*(\omega)<math> still determines <math>F(\omega)<math>, i.e., that of all the shifted copies <math>F(\omega+n\omega_s)<math>, <math>n\in\mathbb Z<math>, at most one is nonzero at each <math>\omega<math>. For real bandlimited signals the frequency support is symmetric about zero, and both halves of the spectrum must be kept free of overlap.
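The interpolation formula from the proof can be exercised numerically. The sketch below (plain Python, with an assumed sampling interval) reconstructs a Gaussian, which is effectively bandlimited for the chosen <math>T<math>, from its samples via a truncated cardinal series:

```python
import math

def sinc(x):
    # Normalized cardinal sine sin(pi x) / (pi x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

T = 0.5                              # sampling interval; w_s = 4*pi rad/s
f = lambda t: math.exp(-t * t / 2)   # effectively bandlimited: its spectrum
                                     # is ~1e-9 at w_s/2 = 2*pi rad/s

def reconstruct(t, n_terms=40):
    # Truncated cardinal series built from the samples f(nT).
    return sum(f(n * T) * sinc(t / T - n) for n in range(-n_terms, n_terms + 1))

# Reconstruction between the sample points matches the original signal
# (up to the tiny out-of-band energy and series truncation).
for t in (0.3, 0.77, 1.9):
    assert abs(reconstruct(t) - f(t)) < 1e-6
```

The small residual error comes from the Gaussian not being exactly bandlimited and from truncating the infinite series; for a truly bandlimited signal the cardinal series is exact.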
See also
 Aliasing
 Antialiasing filter: lowpass filter, bandpass filter
 Dirac comb
 Nyquist–Shannon interpolation formula
 Sampling (information theory)
 Signal (information theory)
 Reconstruction from Zero Crossings
References
 E. T. Whittaker, "On the Functions Which are Represented by the Expansions of the Interpolation Theory," Proc. Royal Soc. Edinburgh, Sec. A, vol. 35, pp. 181–194, 1915.
 H. Nyquist, "Certain topics in telegraph transmission theory," Trans. AIEE, vol. 47, pp. 617–644, Apr. 1928.
 V. A. Kotelnikov, "On the carrying capacity of the ether and wire in telecommunications," Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA, Moscow, 1933 (in Russian).
 C. E. Shannon, "Communication in the presence of noise," Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, Jan. 1949.
External links
 Learning by Simulations (http://www.vias.org/simulations/simusoft_nykvist.html) Interactive simulation of the effects of inadequate sampling