2.11  Comments and Further Reading

Some authors recognize the advantages of presenting digital signal processing concepts before their analog counterparts. This chapter adopts that approach: for example, it presents block transforms before the continuous-time Fourier transform.

A commonly adopted notation is to call the matrix multiplication x = AX the inverse transformation, while the forward (or direct) transformation is denoted as X = A⁻¹x, so that the basis vectors are the columns of A [Mal92]. Experience has shown that this notation confuses beginners, and it was not adopted here.
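To make the two conventions concrete, the following minimal sketch in Python (using a hypothetical 2-point orthonormal matrix, not one taken from the chapter) illustrates the convention of [Mal92]: the basis vectors are the columns of A, the inverse transform is x = AX, and the forward transform is X = A⁻¹x, which reduces to X = Aᵀx when A is orthonormal.

import numpy as np

# Hypothetical 2-point orthonormal transform; its columns are the basis vectors.
A = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

x = np.array([3.0, 1.0])        # signal to be analyzed
X = np.linalg.inv(A) @ x        # forward transform, X = A^(-1) x
x_rec = A @ X                   # inverse transform, x = A X

assert np.allclose(np.linalg.inv(A), A.T)   # A^(-1) = A^T since A is orthonormal
assert np.allclose(x_rec, x)                # perfect reconstruction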

Most textbooks introduce the Z and Laplace transforms using the fact that their basis functions are eigenfunctions of linear time-invariant (LTI) systems. In this chapter, both transforms were presented as extensions of their Fourier counterparts.
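For reference, the eigenfunction argument can be summarized in one line: applying the input z^n to a discrete-time LTI system with impulse response h[n] gives

\[
y[n] \;=\; \sum_{k=-\infty}^{\infty} h[k]\, z^{\,n-k}
      \;=\; z^{n} \sum_{k=-\infty}^{\infty} h[k]\, z^{-k}
      \;=\; H(z)\, z^{n},
\]

so z^n is an eigenfunction whose eigenvalue H(z) is the Z transform of h[n]; the continuous-time analogue replaces z^n by e^{st} and H(z) by the Laplace transform H(s).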

All four Fourier representations are interrelated, and there are many interesting properties connecting them, which are explored in, e.g., [OS09].

With respect to block transforms, attention is restricted to the ones represented by square matrices, and the emphasis is on their use and interpretation; fast algorithms are out of scope. A more advanced treatment of transforms, including lapped transforms (which correspond to non-square matrices) and fast algorithms, can be found in [Mal92].

There are four types of DCT [RY90]. The DCT-II was assumed in this text because it is the most popular for coding applications.
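For reference, one common (unnormalized) form of the DCT-II of a length-N signal x_n is

\[
X_k \;=\; \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right],
\qquad k = 0, 1, \ldots, N-1;
\]

orthonormal variants additionally scale the k = 0 coefficient by 1/\sqrt{2} and the whole sum by \sqrt{2/N}, so that the corresponding transform matrix is orthonormal (the exact normalization adopted in the text may differ).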

The bilateral Laplace and Z transforms are assumed by default. The unilateral versions are useful for solving differential and difference equations, a task that is not emphasized in this text.

Unitary matrices are sometimes called orthogonal matrices (instead of orthonormal), which is confusing. For vectors, the jargon is more consistent.
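For concreteness, the defining relations are

\[
A^{\mathrm{T}} A = I \;\; \text{(real case, ``orthogonal'')}
\qquad \text{and} \qquad
A^{\mathrm{H}} A = I \;\; \text{(complex case, ``unitary'')},
\]

and in both cases the columns of A form an orthonormal set, which is what makes the usual jargon confusing.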

A nice geometrical explanation of linearly independent, orthogonal, and uncorrelated variables is provided in [RNT84].