2.3  Advanced: Inner Products to Obtain the Transform Coefficients

In the general case, calculating the forward or inverse transforms corresponds to performing a matrix multiplication. However, most linear transforms are designed such that “transforming” the original signal corresponds to the calculation of inner products. Hence, this section will elaborate on inner products and their use in transforms.

The main motivation is that we can calculate an inner product between a pair of vectors, but also between a pair of functions, and a pair of signals. We start by discussing the inner product between vectors, to make useful analogies with the inner product between signals in the context of linear transforms.

An inner product is a generalization of the dot product $\mathbf{x} \cdot \mathbf{y}$ between two vectors $\mathbf{x}$ and $\mathbf{y}$ of $N$ elements or, equivalently, equal-length sequences (with $N$ samples). The dot product is defined for vectors with real-valued elements as follows (the inner product notation $\langle \mathbf{x}, \mathbf{y} \rangle$ will be used hereafter instead of $\mathbf{x} \cdot \mathbf{y}$):

$\langle \mathbf{x}, \mathbf{y} \rangle = \|\mathbf{x}\| \, \|\mathbf{y}\| \cos(\theta),$
(2.3)

where $\theta$ is the angle between $\mathbf{y}$ and $\mathbf{x}$, which is restricted to the range $0 \le \theta \le 180$ degrees. Alternatively, this inner product can also be calculated as

$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{i=1}^{N} x_i y_i = x_1 y_1 + x_2 y_2 + \cdots + x_N y_N,$
(2.4)

where $x_i$ and $y_i$ are the $i$-th elements of $\mathbf{x}$ and $\mathbf{y}$, respectively.

Another aspect of the inner product $\langle \mathbf{x}, \mathbf{y} \rangle$ is that it is proportional to the norm of the projection of $\mathbf{x}$ on $\mathbf{y}$ and vice versa. Repeating Eq. (A.36) below for convenience,

$p_{xy} = \frac{|\langle \mathbf{x}, \mathbf{y} \rangle|}{\|\mathbf{y}\|},$
(2.5)

and interpreting $\mathbf{y}$ as a basis vector of unit norm ($\|\mathbf{y}\| = 1$), one has $p_{xy} = |\langle \mathbf{x}, \mathbf{y} \rangle|$. A large magnitude of $\langle \mathbf{x}, \mathbf{y} \rangle$ corresponds to a basis vector $\mathbf{y}$ that represents well (i.e., captures a reasonable part of the energy of) the vector $\mathbf{x}$ (later interpreted as a signal $x[n]$). On the other hand, $\langle \mathbf{x}, \mathbf{y} \rangle = 0$ means that the vectors are orthogonal.
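As a numerical sketch of Eq. (2.5), the snippet below (Python with NumPy; the two vectors are assumptions chosen for illustration) computes the projection norm $p_{xy}$ and confirms that, for a unit-norm basis vector, it reduces to the magnitude of the inner product:

```python
import numpy as np

# Sketch of Eq. (2.5); the vectors are assumptions chosen for illustration.
x = np.array([1.0, 1.0, 1.0])
y = np.array([1.0, 3.0, 2.0])

# Projection norm p_xy = |<x, y>| / ||y||
p_xy = abs(np.dot(x, y)) / np.linalg.norm(y)

# With a unit-norm basis vector, p_xy reduces to |<x, y_unit>|
y_unit = y / np.linalg.norm(y)
assert np.isclose(p_xy, abs(np.dot(x, y_unit)))
```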

Example 2.3. Calculating the angle between vectors using their inner product. For example, consider $\mathbf{x} = [1, 1, 1]$ and $\mathbf{y} = [1, 3, 2]$. Their inner product is calculated from Eq. (2.4) as $\langle \mathbf{x}, \mathbf{y} \rangle = 6$, such that $\cos(\theta) = \langle \mathbf{x}, \mathbf{y} \rangle / (\|\mathbf{x}\| \, \|\mathbf{y}\|) \approx 6 / (1.732 \times 3.742) \approx 0.926$ and, consequently, $\theta \approx 0.387$ radians.

As another example, if $\mathbf{x} = [3, -1, 0]$ and $\mathbf{y} = [1, 3, 2]$, their inner product is $\langle \mathbf{x}, \mathbf{y} \rangle = 0$. In this case, from Eq. (2.3) and observing that the norms of both vectors are non-zero, the inner product is zero necessarily because $\cos(\theta) = 0$ and, consequently, $\theta = \pi/2$. Hence, when vectors are perpendicular ($\theta = \pi/2$), their inner product is zero.
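The calculations in Example 2.3 can be checked numerically. The sketch below defines a hypothetical helper `angle_between` (not from the text) that implements Eq. (2.3) solved for $\theta$:

```python
import numpy as np

def angle_between(x, y):
    """Angle theta (radians) between two real vectors, from Eq. (2.3)."""
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

theta = angle_between(np.array([1.0, 1.0, 1.0]), np.array([1.0, 3.0, 2.0]))
theta_perp = angle_between(np.array([3.0, -1.0, 0.0]), np.array([1.0, 3.0, 2.0]))
# theta is approximately 0.387 rad; theta_perp equals pi/2 (orthogonal vectors)
```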

Orthogonality is a generalization of perpendicularity. In general, when two vectors are orthogonal, their inner product is zero.

There are many distinct inner products, but an operation must obey specific properties, such as linearity, to be valid as an inner product and, consequently, to define an inner product space, where concepts such as norm and orthogonality are natural extensions of the ones with geometric interpretations provided by Eq. (2.3). This geometric interpretation is highly beneficial when interpreting the inner products used in transforms. Table 2.2 lists alternative definitions of inner products that are discussed in the sequel.

Table 2.2: Examples of inner product definitions.

    Equation                                      Used for                                                    Number
    $\sum_{i=1}^{N} x_i y_i$                      finite-length real-valued vectors or sequences              (2.4)
    $\sum_{n=-\infty}^{\infty} x[n] y^*[n]$       infinite-length complex-valued vectors or sequences         (2.6)
    $\int_{-\infty}^{\infty} x(t) y^*(t)\,dt$     continuous-time complex-valued signals                      (2.7)
    $\int_{T} x(t) y^*(t)\,dt$                    continuous-time complex-valued signals with duration $T$    (2.8)

As indicated in Table 2.2, it is also possible to define an inner product for infinite-duration signals. For example, consider the complex-valued signal $y[n] = e^{j\Omega n}$, where $n = \ldots, -1, 0, 1, \ldots$. It is a common operation in Fourier analysis to calculate inner products among such signals using the definition

$\langle x[n], y[n] \rangle \triangleq \sum_{n=-\infty}^{\infty} x[n] y^*[n].$
(2.6)

Note that, when applied to complex-valued signals, the inner product is defined using a complex conjugation of the second signal.
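A finite-length sketch of Eq. (2.6) follows; the value of $N$ and the two frequencies are assumptions, chosen so that each complex exponential completes an integer number of cycles over the $N$ samples, which makes the pair orthogonal:

```python
import numpy as np

# Finite-length sketch of Eq. (2.6) for complex exponentials e^{j*Omega*n}.
N = 64
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * 3 * n / N)  # Omega = 2*pi*3/N
y = np.exp(1j * 2 * np.pi * 5 * n / N)  # Omega = 2*pi*5/N

inner = np.sum(x * np.conj(y))  # conjugate the second signal
# <x, x> recovers the energy N, while <x, y> is (numerically) zero
```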

When dealing with continuous-time signals, a convenient definition is obtained by replacing the summation with an integral:

$\langle x(t), y(t) \rangle = \int_{-\infty}^{\infty} x(t) y^*(t)\,dt.$
(2.7)

The inner product of finite-duration signals with support $T$ simplifies to

$\langle x(t), y(t) \rangle = \int_{T} x(t) y^*(t)\,dt,$
(2.8)

where $T$ under the integral denotes a range of duration $T$, such as $[0, T]$ or $[-T/2, T/2]$.
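Eq. (2.8) can be checked numerically with a Riemann sum. In the sketch below, the pair $\cos(2\pi t)$ and $\sin(2\pi t)$ over $T = [0, 1]$ is an assumption chosen for illustration; the two signals are orthogonal over this interval:

```python
import numpy as np

# Numerical check of Eq. (2.8) over T = [0, 1).
t = np.linspace(0.0, 1.0, 100000, endpoint=False)
dt = t[1] - t[0]
x = np.cos(2 * np.pi * t)
y = np.sin(2 * np.pi * t)

# Real-valued signals, so the conjugation in Eq. (2.8) has no effect
inner = np.sum(x * y) * dt   # approximately 0: orthogonal signals
energy = np.sum(x * x) * dt  # squared norm <x, x>, approximately 1/2
```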

When using inner products between continuous-time signals, it is possible to make useful analogies to vectors in the Euclidean space and benefit from geometrical interpretations. For example, similar to vectors, the squared norm $\langle x(t), x(t) \rangle = \|x(t)\|^2$ is the signal energy.3 But the most important lesson in this context is that orthogonal signals share similar properties with orthogonal vectors.4

Example 2.4. Example of a pair of orthogonal signals. The signals $x(t) = 1$ and $y(t) = 2u(t) - 1$ can be interpreted as orthogonal because $\langle x(t), y(t) \rangle = 0$. This inner product can be obtained by noting that $x(t)y(t)$ is $-1$ for $t < 0$ and $1$ for $t \ge 0$, leading to a zero integral when using Eq. (2.7).

At this point, the reader may benefit from Appendices A.14.2 and A.14.3, which provide a review of linear algebra applied to transforms. The goal of these appendices is to interpret, for example, the Fourier transform

$X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t}\,dt = \langle x(t), e^{j\omega t} \rangle$

as the inner product $\langle x(t), e^{j\omega t} \rangle$ between the signal $x(t)$ and the basis function $e^{j\omega t}$.
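This inner-product view of the Fourier transform can be sketched numerically. The rectangular pulse and the time grid below are assumptions for illustration, and the Riemann sum plays the role of the integral:

```python
import numpy as np

# Numerical sketch of X(omega) = <x(t), e^{j*omega*t}>, using a
# rectangular pulse x(t) = 1 for |t| <= 0.5 as the example signal.
t = np.linspace(-5.0, 5.0, 200001)
dt = t[1] - t[0]
x = np.where(np.abs(t) <= 0.5, 1.0, 0.0)

def X(w):
    # Conjugating the basis function e^{jwt} yields the kernel e^{-jwt}
    return np.sum(x * np.exp(-1j * w * t)) * dt

# X(0) is the pulse area (1); X(2*pi) is a zero of the pulse's spectrum
```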

As discussed in Appendix A.14.3, if a signal $x(t)$ can be represented as a linear combination $x(t) = \sum_{d=1}^{D} m_d \varphi_d(t)$, where the $D$ functions $\varphi_d(t)$ compose a set $\{\varphi_d(t)\}$ of orthonormal basis functions, then the values (or coefficients) $m_d$ can be recovered using inner products:

$m_d = \langle x(t), \varphi_d(t) \rangle.$
(2.9)

This result is discussed in Appendix A.14.3 via examples with vectors.
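The recovery in Eq. (2.9) can be sketched numerically. The orthonormal basis functions ($\sqrt{2}\cos$ and $\sqrt{2}\sin$ over $[0, T)$) and the coefficient values below are assumptions chosen for illustration:

```python
import numpy as np

# Sketch of Eq. (2.9): recovering coefficients on an orthonormal basis.
T, num = 1.0, 100000
t = np.linspace(0.0, T, num, endpoint=False)
dt = t[1] - t[0]

# Two orthonormal basis functions over [0, T)
phi = [np.sqrt(2) * np.cos(2 * np.pi * t / T),
       np.sqrt(2) * np.sin(2 * np.pi * t / T)]
m_true = [0.7, -1.3]
x = m_true[0] * phi[0] + m_true[1] * phi[1]

# m_d = <x(t), phi_d(t)>, with the integral approximated by a Riemann sum
m_rec = [np.sum(x * p) * dt for p in phi]
```

Because the basis is orthonormal, each inner product isolates exactly one coefficient, which is why no matrix inversion is needed.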

Before dealing with infinite duration signals, the next section discusses transforms that operate on blocks of samples.