4.5  Power spectral density (PSD)

In most cases, the signal under analysis has infinite energy, such as a deterministic power signal (e.g., a periodic signal) or a realization of a stationary random process. Therefore, the main interest in spectral analysis lies not in the ESD but in the PSD.

4.5.1  Main property of a PSD

A PSD describes the distribution of power over frequency. The frequency can be linear f in Hz, angular frequency ω in radians/s or discrete-time angular frequency Ω, which is an angle in radians.

Before detailing PSD definitions, this subsection presents their main property: the average power P can be obtained from the PSD. In continuous-time,

P_c = ∫_{−∞}^{∞} S(f) df,
(4.20)

with S(f) in watts/Hz.

Similar to the reasoning associated with Eq. (4.19), the version of Eq. (4.20) when the continuous-time PSD S(ω) is a function of ω = 2πf in rad/s is

P_c = (1/(2π)) ∫_{−∞}^{∞} S(ω) dω,
(4.21)

and S(ω)/(2π) is interpreted in units of watts per rad/s.

The corresponding property in discrete-time is

P_d = (1/(2π)) ∫_{⟨2π⟩} S(e^{jΩ}) dΩ,
(4.22)

where S(e^{jΩ})/(2π) is interpreted in units of watts per radian.

Table 4.3 summarizes the discussed PSD functions.

Table 4.3: PSD functions. P is the average power and the column “Ind. var.” indicates the units and symbols used for the independent variable of each function. F_f and F_ω denote the Fourier transform in Hertz and rad/s, respectively. The units of S(f), S(ω)/(2π) and S(e^{jΩ})/(2π) are W/Hz, W/(rad/s) and W/rad, respectively.

Time            | Ind. var.  | Definition            | Main property
Continuous-time | f (Hz)     | S(f) = F_f{R(τ)}      | P = ∫_{−∞}^{∞} S(f) df
Continuous-time | ω (rad/s)  | S(ω) = F_ω{R(τ)}      | P = (1/(2π)) ∫_{−∞}^{∞} S(ω) dω
Discrete-time   | Ω (rad)    | S(e^{jΩ}) = F{R[ℓ]}   | P = (1/(2π)) ∫_{⟨2π⟩} S(e^{jΩ}) dΩ
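As a quick numerical illustration of this main property for sampled signals (with frequencies in Hz through a sampling rate), the following Python sketch estimates the PSD of white noise with scipy.signal.periodogram and compares the integral of the estimate with the time-domain average power. The signal, sampling rate and seed are illustrative choices, not taken from the text.

import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
fs = 8000.0                                # assumed sampling rate in Hz
x = rng.normal(scale=2.0, size=100_000)    # white noise with average power close to 4 W

# Two-sided PSD estimate in W/Hz; summing the estimate times the bin
# spacing approximates the integral of S(f) over all frequencies
f, Sxx = periodogram(x, fs=fs, return_onesided=False)
print(np.mean(x ** 2))                     # time-domain average power
print(np.sum(Sxx) * (f[1] - f[0]))         # the same power obtained from the PSD

Both printed values are close to 4 W, in agreement with Eq. (4.20).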

4.5.2  PSD definitions

As discussed in Section 1.6, there are many categories of signals. Figure 4.24 illustrates some of them.


Figure 4.24: Important categories of signals. Some of these features need to be taken into account when defining the PSD function.

Similar to the definitions of correlation in Table 1.9, different categories of signals require distinct definitions of PSD. To simplify the discussion, some spectral analysis textbooks4 choose to emphasize discrete-time random signals and define the PSD for this kind of signal. Moreover, the spectral estimation problem is defined in these textbooks5 in terms of a finite record of a stationary data sequence, from which one estimates how the total power is distributed over frequency.

In other words, the context of this specific problem is restricted to discrete-time power signals with a duration of N samples. Here we also discuss other cases, such as: a) continuous-time PSDs, b) deterministic signals, and c) theoretical PSD expressions for infinite-duration signals. This requires care in adopting the correct definitions.

For instance, the PSD S(f) for a continuous-time power signal can be defined as

S(f) = lim_{T→∞} 𝔼[|X_T(f)|²] / T,
(4.23)

where X_T(f) is the Fourier transform of a signal obtained by multiplying the original signal by a window of duration T. Alternatively, the PSD definition for continuous-time deterministic signals is

S(f) = lim_{T→∞} |X_T(f)|² / T,

which is basically the ESD normalized by the time interval T.

An important method for obtaining the PSD of a wide-sense stationary (WSS) random process is given by the Wiener–Khinchin theorem,6 which states that the PSD is the Fourier transform of the autocorrelation function.

For instance, assuming R(τ) is the autocorrelation of a WSS random process X(t), the PSD can be obtained via

S(f) = F{R(τ)}.
(4.24)

Analogous results hold in the discrete-time case, where the PSD is the DTFT of the autocorrelation sequence:

S(e^{jΩ}) = ∑_{ℓ=−∞}^{∞} R[ℓ] e^{−jΩℓ},
(4.25)

where ℓ is the lag, and the inverse DTFT gives

R[ℓ] = (1/(2π)) ∫_{−π}^{π} S(e^{jΩ}) e^{jΩℓ} dΩ.
(4.26)
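To relate Eq. (4.25) to finite records, the following Python sketch (with an illustrative white-noise realization) computes the biased autocorrelation estimate of a length-N record and evaluates its DTFT on an FFT grid; the result coincides with the periodogram |X_N(e^{jΩ})|²/N of the same record, which is the finite-record counterpart of this Fourier-transform pair.

import numpy as np

rng = np.random.default_rng(1)
N = 1024
x = rng.normal(size=N)                     # one realization of a white WSS process (illustrative)

# Biased autocorrelation estimate R[l] for lags -(N-1), ..., N-1
R = np.correlate(x, x, mode='full') / N

# DTFT of the autocorrelation estimate on an FFT grid (lag 0 moved to index 0)
S_from_R = np.fft.fft(np.fft.ifftshift(R)).real

# Periodogram |X_N(e^{jΩ})|^2 / N on the same frequency grid
S_per = np.abs(np.fft.fft(x, n=2 * N - 1)) ** 2 / N

print(np.allclose(S_from_R, S_per))        # True: the two computations coincide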

Similar constructions apply to deterministic finite-energy signals, where the ESD is the Fourier transform of the signal’s autocorrelation function. The next subsections provide more details about distinct PSD definitions.

4.5.3  Advanced: PSD of random signals

It makes complete sense to focus the study of PSD estimation on random signals, because they are important in many applications, such as in digital communications. One useful model for these signals is the wide-sense stationary (WSS) random process with autocorrelation R(τ) (see definitions in Appendix A.20). In many spectral analysis problems,7 besides being WSS, the random process is assumed to be autocorrelation ergodic.8

The PSD S(f) for a continuous-time power signal x(t) corresponding to a realization of a WSS stochastic process X(t) is defined as

S(f) ≝ lim_{T→∞} 𝔼[|X_T(f)|²] / T,
(4.27)

where X_T(f) is the Fourier transform of a truncated (windowed) version of x(t) with duration T. In other words, X_T(f) = F{x_T(t)}, where x_T(t) = x(t) for −T/2 ≤ t ≤ T/2 and zero otherwise.

Similar to Eq. (4.27), the PSD S(e^{jΩ}) for a discrete-time power signal x[n] corresponding to a realization of a WSS stochastic process X[n] is defined as

S(e^{jΩ}) ≝ lim_{N→∞} (1/N) 𝔼[|∑_{n=0}^{N−1} x[n] e^{−jΩn}|²] = lim_{N→∞} (1/N) 𝔼[|X_N(e^{jΩ})|²],
(4.28)

where X_N(e^{jΩ}) is the DTFT of x_N[n], a truncated version of x[n] obtained via a rectangular window of N non-zero samples. The unit of S(e^{jΩ})/(2π) is watts per radian.
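A rough way to visualize Eq. (4.28) is to approximate the expectation by an average over many independent realizations. The Python sketch below does so for discrete-time white noise of (assumed) power σ² = 1.5, whose theoretical PSD is flat and equal to σ²; the record length and number of realizations are arbitrary illustrative values.

import numpy as np

rng = np.random.default_rng(2)
N, M, sigma2 = 256, 5000, 1.5              # samples per realization, realizations, noise power

# Average of |X_N(e^{jΩ})|^2 / N over M realizations approximates the expectation
X = np.fft.fft(rng.normal(scale=np.sqrt(sigma2), size=(M, N)), axis=1)
S_hat = np.mean(np.abs(X) ** 2, axis=0) / N

print(S_hat.mean(), S_hat.std())           # close to sigma2 = 1.5, with a small spread over Ω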

In practice, there is a finite number of realizations of X[n] and often only one realization x[n] is available. Fortunately, ergodicity of the autocorrelation can be assumed in many cases and the ensemble averages can be replaced by averages taken over time (see Appendix A.19). Besides, the number of samples (the duration of x[n]) is often limited to a value that can be determined, for example, by the time over which the process can be considered stationary. For example, in speech analysis applications, it is typically assumed that the process of vowel production is quasi-stationary over segments with durations from 40 to 80 ms. With limited-duration signals, the challenge for spectral analysis is to obtain accurate estimates, as discussed in this chapter.
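As an illustration of trading the ensemble average for a time average over short segments, the sketch below applies scipy.signal.welch to a single simulated one-second record, using segments of roughly 40 ms; the sampling rate and the white-noise stand-in are arbitrary choices.

import numpy as np
from scipy.signal import welch

fs = 16000                                 # assumed sampling rate in Hz
rng = np.random.default_rng(3)
x = rng.normal(size=fs)                    # a single 1-second realization (white-noise stand-in)

# Averaged periodograms over ~40 ms segments (Welch's method)
f, S_hat = welch(x, fs=fs, nperseg=int(0.040 * fs))

print(np.mean(x ** 2))                     # average power of the record
print(np.sum(S_hat) * (f[1] - f[0]))       # approximately the same value, from the PSD estimate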

For both Eq. (4.27) and Eq. (4.28), windows other than the rectangular one can be used when the signal under analysis has a short duration. However, the window adopted in the PSD definitions themselves is the rectangular one, with duration growing to infinity.

4.5.4  Advanced: PSD of deterministic and periodic signals

Noticing from Eqs. (4.15) and (4.18) that the squared magnitude of X(f) provides the energy distribution over frequency, one can intuitively obtain the power distribution by dividing the ESD by “time”, that is, by a normalization factor that converts energy into power.

The PSD definition that is adopted for continuous-time deterministic signals (and does not require the expected value used in Eq. (4.27)) is

S(f) = lim_{T→∞} |X_T(f)|² / T,

which is basically the ESD normalized by the time interval T.

Periodic signals are a special case of deterministic signals. Assuming a continuous-time signal x(t) with period T_0, its PSD is

S(f) = ∑_{k=−∞}^{∞} |c_k|² δ(f − kF_0),
(4.29)

where c_k are the Fourier series coefficients and F_0 = 1/T_0 is the fundamental frequency. Similarly, for S(ω), the expression is

S(ω) = 2π ∑_{k=−∞}^{∞} |c_k|² δ(ω − kω_0),
(4.30)

where ω_0 = 2π/T_0 rad/s.

In summary, the PSD S(f) of a deterministic (non-random) periodic signal is composed of impulses with areas determined by the squared magnitudes of the Fourier series coefficients.

Example 4.9. PSD of a continuous-time sinusoid. If x(t) = A cos(2πf_c t), then S(f) = (A²/4) [δ(f + f_c) + δ(f − f_c)] and the average power is ∫_{−∞}^{∞} S(f) df = A²/2.
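A sampled-signal check of this example, using arbitrary values for the amplitude, frequency and sampling rate: estimating the PSD of a sampled version of x(t) with scipy.signal.periodogram concentrates the power at f_c, and integrating the estimate recovers approximately A²/2.

import numpy as np
from scipy.signal import periodogram

A, fc, fs = 2.0, 100.0, 1000.0             # assumed amplitude, frequency (Hz) and sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of samples
x = A * np.cos(2 * np.pi * fc * t)

f, S = periodogram(x, fs=fs)               # one-sided PSD estimate of the sampled sinusoid
print(f[np.argmax(S)])                     # 100.0: the power is concentrated at fc
print(np.sum(S) * (f[1] - f[0]))           # close to A**2 / 2 = 2.0, the average power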

When considering a discrete-time periodic signal x[n], its PSD S(e^{jΩ}) can be obtained by first considering an expression S̄(e^{jΩ}) for the frequency range [0, 2π[:

S̄(e^{jΩ}) = 2π ∑_{k=0}^{N_0−1} |X̃[k]|² δ(Ω − kΩ_0),
(4.31)

where N_0 is the period, Ω_0 = 2π/N_0 is the fundamental frequency, and X̃[k] is the DTFS of x[n]. Finally, the PSD S(e^{jΩ}) is simply the periodic repetition of S̄(e^{jΩ}):

S(e^{jΩ}) = ∑_{p=−∞}^{∞} S̄(e^{j(Ω+2πp)}).
(4.32)

Example 4.10. PSD of a discrete-time sinusoid. If x[n] = A cos(Ω_1 n) (assume Ω_1 obeys Eq. (1.21) to have x[n] periodic), then S̄(e^{jΩ}) = (πA²/2) [δ(Ω + Ω_1) + δ(Ω − Ω_1)] and

S(e^{jΩ}) = ∑_{p=−∞}^{∞} (πA²/2) [δ(Ω + Ω_1 + 2πp) + δ(Ω − Ω_1 + 2πp)]

provides its PSD, which has period 2π, as expected.   
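For a concrete check of Example 4.10, the sketch below computes the DTFS coefficients of one period of x[n] = A cos(Ω_1 n) via the FFT, using arbitrary values for A and the period N_0; only two coefficients are non-zero, each with magnitude A/2, and the corresponding impulse areas in Eq. (4.31) account for an average power of A²/2.

import numpy as np

A, N0 = 3.0, 16                            # assumed amplitude and period in samples
Omega1 = 2 * np.pi / N0                    # Ω_1 chosen so that x[n] has period N0
n = np.arange(N0)
x = A * np.cos(Omega1 * n)                 # one period of the sinusoid

Xk = np.fft.fft(x) / N0                    # DTFS coefficients X~[k]
areas = 2 * np.pi * np.abs(Xk) ** 2        # impulse areas in Eq. (4.31)

print(np.round(np.abs(Xk), 3))             # only k = 1 and k = N0 - 1 are non-zero, each A/2 = 1.5
print(np.sum(areas) / (2 * np.pi))         # = sum of |X~[k]|^2 = A**2 / 2 = 4.5, the average power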

4.5.5  Advanced: Fourier modulation theorem applied to PSDs

If S_x(f) is the PSD of a WSS random process X(t), the PSD S_y(f) of a new process Y(t) = X(t) e^{j2πf_c t} is

S_y(f) = S_x(f − f_c).
(4.33)

Similarly, if Y(t) = X(t) cos(2πf_c t), then

S_y(f) = (1/4) [S_x(f + f_c) + S_x(f − f_c)].
(4.34)

To see why Eq. (4.33) (and, consequently, Eq. (4.34)) is true, recall the Wiener–Khinchin theorem of Eq. (4.24) and the autocorrelation definition of Eq. (1.50). Generally speaking, when modifying a WSS process X(t), the effect on its PSD can be obtained by checking how the modification affects its autocorrelation, and then relating this to the frequency domain using Eq. (4.24). For example, multiplying X(t) by a scalar α corresponds to scaling its autocorrelation R_x(τ) by α², and leads to a PSD α²S_x(f) given the linearity of the Fourier transform.

According to this reasoning, a proof sketch of Eq. (4.33) follows:

S_y(f) = F{R_y(τ)}
       = F{𝔼[Y(t + τ) Y*(t)]}
       = F{𝔼[X(t + τ) e^{j2πf_c(t+τ)} X*(t) e^{−j2πf_c t}]}
       = F{e^{j2πf_c τ} 𝔼[X(t + τ) X*(t)]}
       = F{e^{j2πf_c τ} R_x(τ)}
       = S_x(f − f_c).

Eq. (4.34) can be obtained by decomposing the cosine cos(2πf_c t) = (1/2)(e^{j2πf_c t} + e^{−j2πf_c t}) into two complex exponentials and taking into account that the factor α = 1/2 leads to the 1/4 in the PSD expression. Eq. (4.34) also allows one to observe that a signal multiplied by a cosine of unit amplitude has half of the original signal's power.
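The consequences of Eq. (4.34) can also be checked numerically. The Python sketch below uses low-pass filtered noise as an illustrative stand-in for a realization of X(t) (the sampling rate, bandwidth and carrier frequency are arbitrary): multiplying by a unit-amplitude cosine halves the average power, and a Welch PSD estimate shows that the baseband spectrum moves to the vicinity of the carrier.

import numpy as np
from scipy.signal import firwin, lfilter, welch

fs, fc = 48000.0, 10000.0                  # assumed sampling rate and carrier frequency in Hz
rng = np.random.default_rng(4)

# Low-pass noise (bandwidth ~2 kHz) as a stand-in for a baseband WSS process X(t)
x = lfilter(firwin(101, 2000, fs=fs), 1, rng.normal(size=200_000))
y = x * np.cos(2 * np.pi * fc * np.arange(x.size) / fs)

print(np.mean(y ** 2) / np.mean(x ** 2))   # close to 0.5: half of the power, as Eq. (4.34) predicts

f, Sy = welch(y, fs=fs, nperseg=4096)
band = np.abs(f - fc) < 3000               # frequencies around the carrier
print(np.sum(Sy[band]) / np.sum(Sy))       # close to 1: the power of y is concentrated around fc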