2.3 Advanced: Inner Products to Obtain the Transform Coefficients
In the general case, calculating the forward or inverse transforms corresponds to performing a matrix multiplication. However, most linear transforms are designed such that “transforming” the original signal corresponds to the calculation of inner products. Hence, this section will elaborate on inner products and their use in transforms.
The main motivation is that we can calculate an inner product between a pair of vectors, but also between a pair of functions, and a pair of signals. We start by discussing the inner product between vectors, to make useful analogies with the inner product between signals in the context of linear transforms.
An inner product is a generalization of the dot product between two vectors $\mathbf{x}$ and $\mathbf{y}$ of $N$ elements or, equivalently, equal-length sequences (with $N$ samples). The dot product is defined for vectors with real-valued elements as (the notation $\langle \mathbf{x}, \mathbf{y} \rangle$ of an inner product will be used hereafter instead of $\mathbf{x} \cdot \mathbf{y}$):
$\langle \mathbf{x}, \mathbf{y} \rangle = \|\mathbf{x}\| \, \|\mathbf{y}\| \cos\theta$ | (2.3) |
where $\theta$ is the angle between $\mathbf{x}$ and $\mathbf{y}$, which is restricted to the range $[0, 180]$ degrees. Alternatively, this inner product can also be calculated as
$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{n=0}^{N-1} x[n] \, y[n]$ | (2.4) |
where $x[n]$ and $y[n]$ are the $n$-th elements of $\mathbf{x}$ and $\mathbf{y}$, respectively.
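As a quick numerical sketch (using NumPy; the specific vectors are arbitrary choices for illustration, not from the text), Eq. (2.4) can be evaluated either element by element or with a library routine:

```python
import numpy as np

# Hypothetical example vectors (arbitrary values, for illustration only)
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])

# Eq. (2.4): sum of element-wise products
manual = sum(x[n] * y[n] for n in range(len(x)))

# The same value via the library routine
library = np.dot(x, y)

print(manual, library)  # both are 1*4 + 2*(-1) + 3*2 = 8
```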
Another aspect of the inner product is that it is proportional to the norm of the projection of $\mathbf{y}$ on $\mathbf{x}$, and vice versa. Repeating Eq. (A.36) below, for convenience
$\mathrm{proj}_{\mathbf{x}} \mathbf{y} = \frac{\langle \mathbf{y}, \mathbf{x} \rangle}{\|\mathbf{x}\|^2} \, \mathbf{x}$ | (2.5) |
and interpreting $\mathbf{x}$ as a basis vector of norm equal to 1 ($\|\mathbf{x}\| = 1$), one has $\mathrm{proj}_{\mathbf{x}} \mathbf{y} = \langle \mathbf{y}, \mathbf{x} \rangle \, \mathbf{x}$. A large magnitude of $\langle \mathbf{y}, \mathbf{x} \rangle$ corresponds to a basis vector $\mathbf{x}$ that represents well (a reasonable part of the energy of) the vector $\mathbf{y}$ (later interpreted as a signal). On the other hand, $\langle \mathbf{y}, \mathbf{x} \rangle = 0$ means that the vectors are orthogonal.
Example 2.3. Calculating the angle between vectors using their inner product. For example, consider $\mathbf{x} = [1, 0]^T$ and $\mathbf{y} = [1, 1]^T$. Their inner product is calculated from Eq. (2.4) as $\langle \mathbf{x}, \mathbf{y} \rangle = (1)(1) + (0)(1) = 1$, such that $\cos\theta = 1 / (1 \cdot \sqrt{2}) = 1/\sqrt{2}$ and, consequently, $\theta = \pi/4$ radians.
As another example, if $\mathbf{x} = [1, 1]^T$ and $\mathbf{y} = [1, -1]^T$, their inner product is $\langle \mathbf{x}, \mathbf{y} \rangle = (1)(1) + (1)(-1) = 0$. In this case, from Eq. (2.3) and observing that the norms of both vectors are non-zero, the inner product is zero necessarily because $\cos\theta = 0$ and, consequently, $\theta = 90°$. Hence, when vectors are perpendicular ($\theta = 90°$), their inner product is zero.
Orthogonality is a generalization of perpendicularity. In general, when two vectors are orthogonal, their inner product is zero.
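The angle-recovery procedure can be sketched as follows (a NumPy illustration, not part of the original text; the vectors are illustrative choices):

```python
import numpy as np

def angle_between(x, y):
    # Solve Eq. (2.3) for theta: cos(theta) = <x, y> / (||x|| ||y||)
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # Clipping guards against rounding pushing cos_theta slightly outside [-1, 1]
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

theta1 = angle_between(np.array([1.0, 0.0]), np.array([1.0, 1.0]))
theta2 = angle_between(np.array([1.0, 1.0]), np.array([1.0, -1.0]))
print(theta1, theta2)  # pi/4 and pi/2 radians (the second pair is orthogonal)
```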
There are many distinct inner products. But an operation must obey specific properties such as linearity to be “valid” as an inner product and, consequently, define an inner product space, where concepts such as norm and orthogonality are natural extensions of the ones with geometric interpretations provided by Eq. (2.3). This geometric interpretation is highly beneficial when interpreting the inner products used in transforms. Table 2.2 illustrates alternative definitions of inner products that are discussed in the sequel.
| Equation | Used for | Number |
| --- | --- | --- |
| $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{n=0}^{N-1} x[n] \, y[n]$ | finite-length real-valued vectors or sequences | (2.4) |
| $\langle x, y \rangle = \sum_{n=-\infty}^{\infty} x[n] \, y^*[n]$ | infinite-length complex-valued vectors or sequences | (2.6) |
| $\langle x, y \rangle = \int_{-\infty}^{\infty} x(t) \, y^*(t) \, dt$ | continuous-time complex-valued signals | (2.7) |
| $\langle x, y \rangle = \int_{T} x(t) \, y^*(t) \, dt$ | continuous-time complex-valued signals with duration $T$ | (2.8) |
As indicated in Table 2.2, it is also possible to define an inner product for infinite-duration signals. For example, consider the complex-valued signal $x[n] = e^{j\Omega n}$, where $n \in \mathbb{Z}$. It is a common operation in Fourier analysis to calculate inner products among such signals using the definition
$\langle x, y \rangle = \sum_{n=-\infty}^{\infty} x[n] \, y^*[n]$ | (2.6) |
Note that when applied to complex-valued signals, the inner product is defined using a complex conjugation, denoted by $^*$.
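The role of the conjugation can be verified numerically (a NumPy sketch, not part of the original text; the finite-length complex exponential is an illustrative choice): with the conjugate, the self inner product yields the signal energy, whereas without it the rotating phasors cancel.

```python
import numpy as np

N = 8
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * n / N)  # one period of a complex exponential

# Self inner product with conjugation (Eq. (2.6), truncated to N samples):
# x[n] * conj(x[n]) = |x[n]|^2, so the result is the energy, here N = 8
energy = np.sum(x * np.conj(x))

# Omitting the conjugation does not yield the energy: the terms
# e^{j 4 pi n / 8} rotate around the unit circle and sum to 0
no_conj = np.sum(x * x)

print(energy.real, abs(no_conj))  # 8.0 and (approximately) 0.0
```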
When dealing with continuous-time signals, a convenient definition is obtained by replacing the summation with an integral:
$\langle x, y \rangle = \int_{-\infty}^{\infty} x(t) \, y^*(t) \, dt$ | (2.7) |
The inner product of finite-duration signals with support $T$ is simplified to
$\langle x, y \rangle = \int_{T} x(t) \, y^*(t) \, dt$ | (2.8) |
where $T$ denotes a range of $t$ such as $[0, T]$ or $[-T/2, T/2]$.
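A finite-support inner product such as Eq. (2.8) can be approximated numerically (a NumPy sketch, not from the original text; the sinusoids and the support $[0, 2\pi]$ are illustrative choices):

```python
import numpy as np

T = 2 * np.pi
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]

x = np.cos(t)
y = np.sin(t)

# Riemann-sum approximation of Eq. (2.8) over the support [0, T]
inner_xy = np.sum(x * y) * dt  # ~0: cos and sin are orthogonal over one period
inner_xx = np.sum(x * x) * dt  # ~pi: the energy of cos(t) over one period
print(inner_xy, inner_xx)
```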
When using inner products between continuous-time signals, it is possible to make useful analogies to vectors in the Euclidean space and benefit from geometrical interpretations. For example, similar to vectors, the squared norm $\|x\|^2 = \langle x, x \rangle$ is the signal energy.3 But the most important lesson in this context is that orthogonal signals share similar properties with orthogonal vectors.4
Example 2.4. Example of a pair of orthogonal signals. The signals $x(t) = e^{-t^2}$ and $y(t) = t \, e^{-t^2}$ can be interpreted as orthogonal because $\langle x, y \rangle = 0$. This inner product can be obtained by noting that the product $x(t) y(t) = t \, e^{-2t^2}$ is positive for $t > 0$ and negative (with the same magnitude) for $t < 0$, leading to a zero integral when using Eq. (2.7).
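The cancellation of positive and negative areas for an even-odd pair of signals can be checked numerically (a NumPy sketch, not part of the original text; the Gaussian pair and the truncation of the infinite support are illustrative choices):

```python
import numpy as np

# Dense grid truncating the infinite support of Eq. (2.7); the Gaussians
# decay fast, so the tails beyond |t| = 6 are negligible
t = np.linspace(-6.0, 6.0, 200001)
dt = t[1] - t[0]

x = np.exp(-t**2)       # an even signal
y = t * np.exp(-t**2)   # an odd signal

# Riemann-sum approximation of Eq. (2.7): the integrand x(t)y(t) is odd,
# so positive and negative areas cancel
inner = np.sum(x * y) * dt
print(inner)  # approximately 0
```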
At this point, the reader may benefit from Appendices A.14.2 and A.14.3, which provide a review of linear algebra applied to transforms. The goal of these appendices is to interpret, for example, the Fourier transform
$X(\omega) = \int_{-\infty}^{\infty} x(t) \, e^{-j\omega t} \, dt$
as the inner product $\langle x(t), e^{j\omega t} \rangle$ between the signal $x(t)$ and the basis function $e^{j\omega t}$.
As discussed in Appendix A.14.3, if a signal can be represented as a linear combination $x(t) = \sum_k \alpha_k \phi_k(t)$, where the functions $\phi_k(t)$ compose a set of orthonormal basis functions, then the values (or coefficients) $\alpha_k$ can be recovered using inner products:
$\alpha_k = \langle x(t), \phi_k(t) \rangle$ | (2.9) |
This result is discussed in Appendix A.14.3 via examples with vectors.
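The analysis step of Eq. (2.9) can be sketched with sampled signals (a NumPy illustration, not part of the original text; the normalized DFT-like basis and the coefficient values are illustrative choices):

```python
import numpy as np

N = 8
n = np.arange(N)
# An orthonormal set of sampled complex exponentials (a normalized DFT basis)
basis = [np.exp(1j * 2 * np.pi * k * n / N) / np.sqrt(N) for k in range(N)]

# Synthesize x as a known linear combination of the basis functions
alpha = np.zeros(N, dtype=complex)
alpha[1], alpha[4] = 2.0, 1.5
x = sum(a * phi for a, phi in zip(alpha, basis))

# Eq. (2.9): each coefficient is recovered by an inner product with the
# corresponding basis function (conjugated, since the basis is complex)
recovered = np.array([np.sum(x * np.conj(phi)) for phi in basis])
print(np.allclose(recovered, alpha))  # True
```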
Before dealing with infinite-duration signals, the next section discusses transforms that operate on blocks of samples.