2.2  Linear Transform

The goal in this section is to analyze a signal (x(t) or x[n]) using a linear transform. The following steps will be discussed:

1. We will choose the linear transform.
2. The linear transform will represent the signal using its basis functions, which are often mutually orthogonal.
3. The transform operation corresponds to finding the values called transform coefficients, which, multiplied by the corresponding basis functions (one coefficient per basis function), reconstruct the original signal.

To fully understand linear transforms, we will discuss these concepts using vectors and linear algebra, and later generalize from vectors to signals. We start this study by associating a linear transform with a simple matrix multiplication.

2.2.1  Matrix multiplication corresponds to a linear transform

In linear algebra, any linear transformation (or transform) can be represented by a matrix A. The linear transform operation is given by

y = Ax,
(2.1)

where x and y are the input and output column vectors, respectively.

Example 2.1. Example of linear transform. The matrix

A = \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{bmatrix}
(2.2)

implements a transform that corresponds to a clockwise rotation of the input vector by an angle 𝜃.

Figure 2.1: Rotation of a vector x by an angle 𝜃 = π/2 radians using y = Ax with A given by Eq. (2.2).

Figure 2.1 illustrates the rotation of a vector x = [4, 8]^T by an angle 𝜃 = π/2 radians, i.e., A = [0, 1; −1, 0], resulting in y = Ax = [8, −4]^T.
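As an illustrative sketch (not code from the book), the rotation of Example 2.1 can be reproduced with NumPy; the helper name rotate_clockwise is our own choice:

```python
import numpy as np

def rotate_clockwise(x, theta):
    # Rotation matrix of Eq. (2.2): clockwise rotation by theta radians
    A = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return A @ x

x = np.array([4.0, 8.0])
y = rotate_clockwise(x, np.pi / 2)
print(np.round(y))  # [ 8. -4.], matching Figure 2.1
```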

2.2.2  Basis: standard, orthogonal and orthonormal

Another important concept in linear transforms, which has its origin in linear algebra, is that of a basis. Linear combinations of the basis vectors can create any vector in the corresponding vector space. Many bases are orthogonal or orthonormal. Figure 2.1 indicates a pair of orthonormal vectors ī = [1, 0] and j̄ = [0, 1] that span ℝ². The vectors ī and j̄ form a standard basis and allow us to easily represent any vector y ∈ ℝ², such as y = 8ī − 4j̄. It is useful to get a geometric interpretation by studying basis vectors, and later generalize the concepts to basis functions that represent discrete- or continuous-time signals.
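A quick numerical check (our sketch, not from the text) confirms these orthonormality claims for both the standard basis and the columns of the rotation matrix of Eq. (2.2):

```python
import numpy as np

theta = np.pi / 3  # an arbitrary angle
A = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Orthonormal columns imply A^T A = I (identity matrix)
print(np.allclose(A.T @ A, np.eye(2)))  # True

# Standard basis: zero dot product (orthogonal) and unit norms
i_bar, j_bar = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.dot(i_bar, j_bar), np.linalg.norm(i_bar), np.linalg.norm(j_bar))  # 0.0 1.0 1.0
```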

When the basis vectors are organized as the columns of a matrix A, the elements of the input vector x are the coefficients of the linear combination of basis vectors that leads to y. For example, in the case of the standard basis:

y = \begin{bmatrix} 8 \\ -4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 8 \\ -4 \end{bmatrix} = Ax,

which is a trivial relation because A is the identity matrix. More interesting transforms are used in practice.
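To make the linear-combination reading concrete, here is a small sketch of ours under the same standard-basis setup: y is rebuilt by scaling each basis column of A by the corresponding coefficient in x:

```python
import numpy as np

A = np.eye(2)              # standard basis vectors as columns
x = np.array([8.0, -4.0])  # coefficients of the linear combination
# Sum of each basis column scaled by its coefficient
y = x[0] * A[:, 0] + x[1] * A[:, 1]
print(y, np.allclose(y, A @ x))  # [ 8. -4.] True
```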

Example 2.2. Interpreting the given example as a linear transform. Assume one is dealing with computer graphics and wants to rotate vectors. Eq. (2.2) of Example 2.1 with 𝜃 = π/2 radians leads to (see Figure 2.1):

y = \begin{bmatrix} 8 \\ -4 \end{bmatrix} = \begin{bmatrix} \cos(\pi/2) & \sin(\pi/2) \\ -\sin(\pi/2) & \cos(\pi/2) \end{bmatrix} \begin{bmatrix} 4 \\ 8 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 4 \\ 8 \end{bmatrix} = Ax.

This can be interpreted as a transform as follows:

1. We chose A with 𝜃 = π/2 as the linear transform.
2. In this case, the elements of y are interpreted as the transform coefficients, while x is interpreted as the original vector.

The forward transform operation corresponds to finding the coefficients that compose y. In the inverse operation, the coefficients y would be the input values, and x could be found using x = A^{-1}y. In practice, we avoid the task of inverting a matrix and often choose A with special properties. For instance, when the columns of A are orthonormal, its inverse A^{-1} = A^H is equal to its Hermitian (conjugate transpose), and the basis vectors appear as the rows of A^{-1}. Tricks to avoid inverting matrices will be further explored, along with the adoption of inner products.
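This round trip can be sketched as follows (our example, assuming a real-valued A with orthonormal columns, so the Hermitian reduces to the transpose):

```python
import numpy as np

theta = np.pi / 2
A = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

x = np.array([4.0, 8.0])
y = A @ x        # forward transform: find the coefficients
x_rec = A.T @ y  # inverse via the transpose; no matrix inversion needed
print(np.allclose(x, x_rec))  # True
```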