List of Figures
List of Tables
Listings
Preface
1 Analog and Digital Signals
1.1 To Learn in This Chapter
1.2 Analog, Digital and Discrete-Time Signals
1.2.1 Advanced: Ambiguous notation: whole signal or single sample
1.2.2 Digitizing Signals
1.2.3 Discrete-time signals
1.3 Basic Signal Manipulation and Representation
1.3.1 Manipulating the independent variable
1.3.2 When the independent variable is not an integer
1.3.3 Frequently used manipulations of the independent variable
1.3.4 Using impulses to represent signals
1.3.5 Using step functions to help represent signals
1.3.6 The rect function
1.4 Block or Window Processing
1.4.1 Advanced: Block processing with overlapped windows
1.5 Advanced: Complex-Valued and Sampled Signals
1.5.1 Complex-valued signals
1.5.2 Sampled signals
1.6 Signal Categorization
1.6.1 Even and odd signals
1.6.2 Random signals and their generation
1.6.3 Periodic and aperiodic signals
1.6.4 Power and energy signals
1.7 Modeling the Stages in A/D and D/A Processes
1.7.1 Modeling the sampling stage in A/D
1.7.2 Oversampling
1.7.3 Mathematically modeling the whole A/D process
1.7.4 Sampled to discrete-time (S/D) conversion
1.7.5 Continuous-time to discrete-time (C/D) conversion
1.7.6 Discrete-time to sampled (D/S) conversion
1.7.7 Reconstruction
1.7.8 Discrete-time to continuous-time (D/C) conversion
1.7.9 Analog to digital (A/D) and digital to analog (D/A) conversions
1.7.10 Sampling theorem
1.7.11 Different notations for S/D conversion
1.8 Relating Frequencies of Continuous and Discrete-Time Signals
1.8.1 Units of continuous-time and discrete-time angular frequencies
1.8.2 Mapping frequencies in continuous and discrete-time domains
1.8.3 Nyquist frequency
1.8.4 Frequency normalization in Python and Matlab/Octave
1.9 An Introduction to Quantization
1.9.1 Quantization definitions
1.9.2 Implementation of a generic quantizer
1.9.3 Uniform quantization
1.9.4 Granular and overload regions
1.9.5 Design of uniform quantizers
1.9.6 Design of optimum non-uniform quantizers
1.9.7 Quantization stages: classification and decoding
1.9.8 Binary numbering schemes for quantization decoding
1.9.9 Quantization examples
1.10 Correlation: Finding Trends
1.10.1 Autocorrelation function
1.10.2 Cross-correlation
1.11 Advanced: A Linear Model for Quantization
1.12 Advanced: Power and Energy in Discrete-Time
1.12.1 Power and energy of discrete-time signals
1.12.2 Power and energy of signals represented as vectors
1.12.3 Advanced: Power and energy of vectors whose elements are not time-ordered
1.12.4 Power and energy of discrete-time random signals
1.12.5 Advanced: Relating Power in Continuous and Discrete-Time
1.13 Applications
1.14 Comments and Further Reading
1.15 Review Exercises
1.16 Exercises
2 Transforms and Signal Representation
2.1 To Learn in This Chapter
2.2 Linear Transform
2.2.1 Matrix multiplication corresponds to a linear transform
2.2.2 Basis: standard, orthogonal and orthonormal
2.3 Advanced: Inner Products to Obtain the Transform Coefficients
2.4 Block Transforms
2.4.1 Advanced: Unitary or orthonormal transforms
2.4.2 DCT transform
2.4.3 DFT transform
2.4.4 Haar transform
2.4.5 Advanced: Properties of orthogonal and unitary transforms
2.5 Fourier Transforms and Series
2.5.1 Fourier series for continuous-time signals
2.5.2 Discrete-time Fourier series (DTFS)
2.5.3 Continuous-time Fourier transform using frequency in Hertz
2.5.4 Continuous-time Fourier transform using frequency in rad/s
2.5.5 Discrete-time Fourier transform (DTFT)
2.6 Relating spectra of digital and analog frequencies
2.7 Advanced: Summary of equations for DFT / FFT Usage
2.7.1 Advanced: Three normalization options for DFT / FFT pairs
2.8 Laplace Transform
2.8.1 Motivation for the Laplace transform
2.8.2 Advanced: Laplace transform basis functions
2.8.3 Laplace transform of one-sided exponentials
2.8.4 Region of convergence for a Laplace transform
2.8.5 Inverse Laplace of rational functions via partial fractions
2.8.6 Calculating the Fourier transform from a Laplace transform
2.9 Z Transform
2.9.1 Relation between Laplace and Z transforms
2.9.2 Advanced: Z transform basis functions
2.9.3 Some pairs and properties of the Z-transform
2.9.4 Region of convergence for a Z transform
2.9.5 Inverse Z of rational functions via partial fractions
2.9.6 Calculating the DTFT from a Z transform
2.10 Applications
2.11 Comments and Further Reading
2.12 Review Exercises
2.13 Exercises
3 Analog and Digital Systems
3.1 To Learn in This Chapter
3.2 Contrasting Signals and Systems
3.3 A Quick Discussion About Filters
3.3.1 Cutoff and natural frequencies
3.4 Linear Time-Invariant Systems
3.4.1 Impulse response and convolution for LTI systems
3.4.2 Advanced: Convolution properties
3.4.3 Advanced: Convolution via correlation and vice-versa
3.4.4 Advanced: Discrete-time convolution in matrix notation
3.4.5 Approximating continuous-time via discrete-time convolution
3.4.6 Frequency response: Fourier transform of the impulse response
3.4.7 Fourier convolution property
3.4.8 Circular and fast convolutions using FFT
3.5 Advanced: Sampling and Signal Reconstruction Revisited
3.5.1 A proof sketch of the sampling theorem
3.5.2 Energy and power of a sampled signal
3.5.3 Energy / power conservation after sampling and reconstruction
3.5.4 Sampling theorem uses a strict inequality
3.5.5 Undersampling or passband sampling
3.5.6 Sampling a complex-valued signal
3.5.7 Signal reconstruction and D/S conversion revisited
3.6 Advanced: First and Second-Order Analog Systems
3.6.1 First-order systems
3.6.2 Second-order systems
3.7 Advanced: Bandwidth and Quality Factor
3.7.1 Bandwidth and Quality Factor of Poles
3.7.2 Bandwidth and Quality Factor of Filters
3.8 Importance of Linear Phase (or Constant Group Delay)
3.9 Advanced: Filtering technologies: Surface acoustic wave (SAW) and others
3.10 Introduction to Digital Filters
3.10.1 Designing simple filters using specialized software
3.10.2 Distinct ways of specifying the “ripple” / deviation in filter design
3.10.3 LCCDE digital filters
3.10.4 FIR, IIR, AR, MA and ARMA systems
3.10.5 Filter frequency scaling
3.10.6 Filter bandform transformation: Lowpass to highpass, etc.
3.11 IIR Filter Design
3.11.1 Direct IIR filter design
3.11.2 Indirect IIR filter design
3.11.3 Methods to convert continuous into discrete-time system functions
3.11.4 Summary of methods to convert continuous-time system function into discrete-time
3.12 Bilinear Transformation: Definition and Properties
3.12.1 Bilinear mapping between s and z planes and vice-versa
3.12.2 Non-linear frequency warping imposed by bilinear
3.12.3 Tracking the frequency warping provoked by bilinear
3.12.4 Advanced: Properties of the bilinear transformation
3.13 System Design with Bilinear Transformation
3.13.1 Bilinear for IIR filter design
3.13.2 Bilinear for matching a single frequency
3.13.3 Bilinear for mimicking G(s)
3.14 FIR Filter Design
3.14.1 A FIR filter does not have finite poles
3.14.2 The coefficients of a FIR coincide with its impulse response
3.14.3 Algorithms for FIR filter design
3.14.4 FIR design via least-squares
3.14.5 FIR design via windowing
3.14.6 Two important characteristics: FIRs are always stable and can have linear phase
3.14.7 Examples of linear and non-linear phase filters
3.14.8 Zeros close to the unit circle may impact the phase linearity
3.14.9 Four types of symmetric FIR filters
3.15 Realization of Digital Filters
3.15.1 Structures for FIR filters
3.15.2 Structures for IIR filters
3.15.3 Running a digital filter using filter or conv
3.15.4 Advanced: Effects of finite precision
3.16 Advanced: Minimum phase systems
3.17 Advanced: Multirate Processing
3.17.1 Upsampler and interpolator
3.17.2 Downsampler and decimator
3.18 Applications
3.19 Comments and Further Reading
3.20 Exercises
3.21 Extra Exercises
4 Spectral Estimation Techniques
4.1 To Learn in This Chapter
4.2 Introduction
4.3 Windows for spectral analysis
4.3.1 Popular windows
4.3.2 Figures of merit applied to windows
4.3.3 Leakage
4.3.4 Picket-fence effect
4.3.5 Advanced: Only a bin-centered sinusoid leads to an FFT without visible leakage
4.3.6 Summarizing leakage and picket-fence effect
4.3.7 Example of using windows in spectral analysis
4.4 The ESD, PSD and MS Spectrum functions
4.4.1 Energy spectral density (ESD)
4.4.2 Advanced: Units of ESD when angular frequencies are adopted
4.4.3 Power spectral density (PSD)
4.4.4 Advanced: Fourier modulation theorem applied to PSDs
4.4.5 Mean-square (MS) spectrum
4.5 Filtering Random Signals and the Impact on PSDs
4.5.1 Response of LTI systems to random inputs
4.5.2 Filtering continuous-time signals that have a white PSD
4.5.3 Advanced: Filtering discrete-time signals that have a white PSD
4.6 Nonparametric PSD Estimation via Periodogram
4.6.1 Periodogram of periodic signals and energy signals
4.6.2 Examples of continuous-time PSD estimation using periodograms
4.6.3 Relation between MS spectrum and periodogram
4.6.4 Estimation of discrete-time PSDs using the periodogram
4.6.5 Examples of discrete-time PSD estimation
4.6.6 Estimating the PSD from Autocorrelation
4.7 Nonparametric PSD Estimation via Welch’s method
4.7.1 The periodogram variance does not decrease with N
4.7.2 Welch’s method for PSD estimation
4.8 Parametric PSD Estimation via Autoregressive (AR) Modeling
4.8.1 Advanced: Spectral factorization
4.8.2 AR modeling of a discrete-time PSD
4.8.3 AR modeling of a continuous-time PSD
4.8.4 Advanced: Yule-Walker equations and LPC
4.8.5 Examples of autoregressive PSD estimation
4.9 Time-frequency Analysis using the Spectrogram
4.9.1 Definitions of STFT and spectrogram
4.9.2 Advanced: Wide and narrowband spectrograms
4.10 Applications
4.11 Comments and Further Reading
4.12 Exercises
4.13 Extra Exercises
A Useful Mathematics
A.1 Euler’s equation
A.2 Trigonometry
A.3 Manipulating complex numbers and rational functions
A.4 Manipulating complex exponentials
A.5 Q function
A.6 Matched filter and Cauchy-Schwarz’s inequality
A.7 Geometric series
A.8 Sum of squares
A.9 Summations and integrals
A.10 Partial fraction decomposition
A.11 Calculus
A.12 Sinc Function
A.13 Rectangular Integration to Define Normalization Factors for Functions
A.13.1 Two normalizations for the histogram
A.13.2 Two normalizations for power distribution using FFT
A.14 Linear Algebra
A.14.1 Inner products and norms
A.14.2 Projection of a vector using inner product
A.14.3 Orthogonal basis allows inner products to transform signals
A.14.4 Moore-Penrose pseudoinverse
A.15 Gram-Schmidt orthonormalization procedure
A.16 Principal component analysis (PCA)
A.17 Fourier Analysis: Properties
A.18 Fourier Analysis: Pairs
A.19 Probability and Stochastic Processes
A.19.1 Joint and Conditional probability
A.19.2 Random variables
A.19.3 Expected value
A.19.4 Orthogonal versus uncorrelated
A.19.5 PDF of a sum of two independent random variables
A.20 Stochastic Processes
A.20.1 Cyclostationary random processes
A.20.2 Two cyclostationary signals: sampled and discrete-time upsampled
A.20.3 Converting a WSC into WSS by randomizing the phase
A.21 Estimation Theory
A.21.1 Probabilistic estimation theory
A.21.2 Minimum mean square error (MMSE) estimators
A.21.3 Orthogonality principle
A.22 One-dimensional linear prediction over time
A.22.1 The innovations process
A.23 Vector prediction exploring spatial correlation
A.24 Decibel (dB) and Related Definitions
A.25 Insertion loss and insertion frequency response
A.26 Discrete and Continuous-Time Impulses
A.26.1 Discrete-time impulse function
A.26.2 Why define the continuous-time impulse? Some motivation
A.26.3 Definition of the continuous-time impulse as a limit
A.26.4 Continuous-time impulse is a distribution, not a function
A.26.5 Mathematical properties of the continuous-time impulse
A.26.6 Convolution with an impulse
A.26.7 Applications of the impulse
A.27 System Properties
A.27.1 Linearity (additivity and homogeneity)
A.27.2 Time-invariance (or shift-invariance)
A.27.3 Memory
A.27.4 Causality
A.27.5 Invertibility
A.27.6 Stability
A.27.7 Properties of Linear and time-invariant (LTI) systems
A.28 Fixed and Floating-Point Number Representations
A.28.1 Representing numbers in fixed-point
A.28.2 IEEE 754 floating-point standard
B Useful Software and Programming Tricks
B.1 Matlab and Octave
B.1.1 Octave Installation
B.2 Manipulating signals stored in files
B.2.1 Hex / Binary File Editors
B.2.2 ASCII Text Files: Unix/Linux versus Windows
B.2.3 Binary Files: Big versus Little-endian
B.2.4 Some Useful Code to Manipulate Files
B.2.5 Interpreting binary files with complex headers
Glossary
1 Text Conventions
2 Main Abbreviations
3 Main Symbols
Bibliography
Index
Digital Signal Processing with Python, Matlab or Octave
List of Figures
1.1 Example of an analog signal. Note that the abscissa is continuous and the amplitude is not quantized, assuming an infinite number of possible values.
1.2 Example of a digital signal obtained by digitizing the analog signal in Figure 1.1.
1.3 Example of a discrete-time signal.
1.4 Example of a discrete-time cosine generated with Listing 1.1.
1.5 Representation of signals related by y[n] = x[−n].
1.6 Examples of manipulating the independent variable: time-shift, contraction and dilation.
1.7 Three examples of simultaneously scaling and shifting a signal x(t).
1.8 Examples of manipulating the independent variable (in this case x) of a sinc function.
1.9 Graph of a continuous-time sinc function centered at t = 0.5 s, obtained with Listing 1.3.
1.10 Graphs of continuous and discrete-time signals obtained with the rect function in Listing 1.4.
1.11 Graph of the signal described in Eq. (1.7).
1.12 The top representation shows non-overlapping windows of L = 4 samples, with both non-windowed indexing x[n] and windowed indexing x[k, m]. The bottom representation shows overlapping windows with L = 4 and shift S = 1 sample using non-windowed indexing.
1.13 Interpreting Eq. (1.10) to obtain M = 3 windows of L = 5 samples from a total of N = 13 samples using a shift of S = 3 samples.
1.14 Classification of signals, including the sampled signals.
1.15 Example of a sampled signal.
1.16 Even and odd components of a signal x[n] representing a finite-duration segment of the step function u[n]. Note the symmetry properties: xe[n] = xe[−n] and xo[n] = −xo[−n].
1.17 Even and odd components of a signal x[n] = n^2 u[n] representing a parabolic function.
1.18 Even and odd components of a signal x[n] representing a triangle that starts at n = 21 and has its peak with an amplitude x[60] = 40 at n = 60. Note that the peak amplitude of the two components is 20.
1.19 Waveform representation of a random signal with 100 samples drawn from a Gaussian distribution N(0, 1).
1.20 Histogram of the signal in Figure 1.19 with 10 bins.
1.21 Example of a histogram with B = 3 bins. The centers are 1.5, 2.5 and 3.5, all marked with ’×’. The bin edges are 1, 2, 3 and 4.
1.22 PDF estimate from the histogram in Figure 1.20.
1.23 Comparison of the normalized histogram and the correct Gaussian N(4, 0.09) when using 10,000 samples and 100 bins.
1.24 Graph of the signal x[n] = sin(0.2n).
1.25 Graph of the signal x[n] = sin((3π/17)n).
1.26 An impulse train with unitary areas and Ts = 125 μs.
1.27 Example of a sampled signal obtained with a sampling frequency smaller than required to accurately represent the original signal (shown in dotted lines).
1.28 Example of discrete-time cosines generated with sampling intervals of 125 μs (top) and 625 ns (bottom) to illustrate the better representation achieved by using a smaller sampling interval Ts, which corresponds to adopting a larger oversampling factor.
1.29 Complete process of A/D conversion with intermediate stages and four signals: analog, sampled, discrete-time and digital.
1.30 Example of S/D conversion assuming Ts = 0.2 s.
1.31 Example of D/S conversion assuming Ts = 0.2 s. It implements the inverse operation of Figure 1.30.
1.32 Example of ZOH reconstruction using the signals of Example 1.24 with Ts = 0.2 s. In this case, x[n] = 0.5δ[n] − 2.8δ[n−1] + 1.3δ[n−2] + 3.5δ[n−3] − 1.7δ[n−4] + 1.1δ[n−5] + 4δ[n−6].
1.33 Example of D/A of the signal xq[n] = δ[n] − 3δ[n−1] + 3δ[n−2] (with quantized amplitudes) with D/S using Ts = 0.2 s and reconstruction using sinc functions.
1.34 Identification of the individual scaled sinc functions (dashed lines) after D/S with Ts = 0.2 s and signal reconstruction of xq[n] = δ[n] − 3δ[n−1] + 3δ[n−2] in Figure 1.33.
1.35 Reconstruction of a signal with quantized amplitudes using ZOH.
1.36 Complete processing chain of an input analog signal x(t) to generate an output y(t) using DSP.
1.37 Sampling and sinc-based perfect reconstruction of a cosine as implemented in the function ak_sinc_reconstruction.m.
1.38 Single sinc parcel from Figure 1.39 corresponding to the sinc centered at t = −0.8 s (dashed line) and the reconstructed signal (solid line).
1.39 All individual sinc parcels (dashed lines) and their summation as the reconstructed signal (solid line) of Listing 1.10.
1.40 Sampling and reconstruction of x(t) = sinc(t/0.2) − 3 sinc((t − 0.2)/0.2) + 3 sinc((t − 0.4)/0.2) using Ts = 0.1 s.
1.41 Sinc parcels used in the reconstruction of x(t) of Figure 1.40.
1.42 Example of failing to reconstruct the signal x(t) = cos(2π2.5t − 0.5π) using Fs = 2fmax = 5 Hz.
1.43 Reconstruction of the signal given by Eq. (1.7) fails because the sampling theorem is not obeyed.
1.44 Sinc parcels used in the signal reconstruction depicted in Figure 1.43.
1.45 Versions of the Nyquist frequency in continuous and discrete-time.
1.46 Input/output mapping for the non-uniform quantizer specified by ℳ = {−4, −1, 0, 3}.
1.47 Input/output relation of a 3-bit quantizer with Δ = 1.
1.48 Input/output relation of a 3-bit quantizer with Δ = 0.5.
1.49 Theoretical and estimated Gaussian probability density functions with thresholds represented by dashed lines and the output levels indicated with circles on the abscissa.
1.50 Input/output mapping for the quantizer designed with a Gaussian input and outputs given by ℳ = [−6.8, −4.2, −2.4, −0.8, 0.8, 2.4, 4.2, 6.8].
1.51 Results for the quantizer designed with the mixture of Eq. (1.45) as input.
1.52 A quantizer Q is composed of the classification and decoding stages, denoted as Q̃c and Q̃d, respectively.
1.53 Example of conversion using proportion when the dynamic ranges of both the analog and digital signals are available.
1.54 A sinusoid of period N = 8 samples and its autocorrelation, which is also periodic with a period of 8 lags.
1.55 The a) unbiased and b) raw (unscaled) autocorrelations for the sinusoid of Figure 1.54 with a new period of N = 15 samples.
1.56 Sinusoid of amplitude 4 V immersed in AWGN of power 25 W. The bottom graph is a zoom showing the first 100 samples.
1.57 Autocorrelations of sine plus noise.
1.58 Continuous-time version of the AWGN channel model.
1.59 Example of sound recorded at Fs = 44.1 kHz with the Audacity sound editor.
1.60 Some Audacity options for saving an uncompressed WAVE file. The two non-linear PCMs are indicated.
1.61 Cosine obtained with Listing 1.26 and a loopback cable connecting the soundboard DAC and ADC.
1.62 Setup for loopback of the sound system using an audio cable.
1.63 Example of options provided by Windows and the sound board. All the enhancements for both recording and playback devices should be disabled.
1.64 Audacity window after reading in the ’impulses.wav’ file.
1.65 Audacity window after simultaneously recording and playing ’impulses.wav’ with a loopback.
1.66 Zoom of the response to the second impulse in Figure 1.64.
1.67 Scatter plot of customer age versus purchased units for three products.
1.68 Autocorrelation of the sunspot data.
1.69 Autocorrelation of a cosine of 300 Hz.
1.70 The first graph shows signals x(t) and y(t − 0.25) contaminated by AWGN at an SNR of 10 dB.
1.71 Example of a continuous-time signal.
2.1 Rotation of a vector x by an angle 𝜃 = π/2 radians using y = Ax with A given by Eq. (2.2).
2.2 The first three (k = 0, 1, 2) and the last (k = 31) basis functions for a 32-point DCT.
2.3 Angles Ω = 0, π/2, π and 3π/2 rad (or, equivalently, discrete-time angular frequencies) used by a DFT of N = 4 points when k varies from k = 0 to 3, respectively.
2.4 Angles Ω = 0, 2π/5, 4π/5, 6π/5 and 8π/5 rad (corresponding in degrees to 0, 72∘, 144∘, −144∘, −72∘) used by a DFT of N = 5 points when k varies from k = 0 to 4, respectively.
2.5 Mapping between angular frequencies Ω and the corresponding DFT coefficient X[k] for N = 6.
2.6 Mapping between angular frequencies Ω and the corresponding DFT coefficient X[k] for N = 5.
2.7 The angles corresponding to WN on the unit circle, for N = 3, 4, 5, 6.
2.8 Computational cost of the DFT calculated via matrix multiplication versus an FFT algorithm. Note that N = 4096 is used in standards such as VDSL2, for example, and it is clearly unreasonable to use matrix multiplication.
2.9 The first four (k = 0, 1, 2, 3) and the last (k = 31) basis functions for a 32-point Haar transform.
2.10 Basis functions k = 0, 17 and the last four (k = 28, 29, 30, 31) for a 32-point Haar transform.
2.11 Fourier series basis functions for analyzing signals with period T0 = 1/50 seconds. Because the basis functions are complex-valued signals, the plots show their real (top) and imaginary (bottom) parts.
2.12 Spectrum of x(t) = 4 + 10 cos(2π50t + 2) + 4 sin(2π150t − 1).
2.13 Unilateral spectrum of the (real) signal x(t) = 4 + 10 cos(2π50t + 2) + 4 sin(2π150t − 1).
2.14 DTFS / DFT of x[n] = 10 cos((π/6)n + π/3) calculated with N = 12.
2.15 Complete representation of the DTFS / DFT of x[n] = 10 cos((π/6)n + π/3) indicating the periodicity X[k] = X[k + N].
2.16 Example of a Fourier transform X(f) and its equivalent representation X(ω) with ω in rad/s.
2.17 Fourier transform of x(t) = 6 cos(10πt) represented in Hertz and radians per second, indicating the scaling of the impulse areas by the factor 2π.
2.18 Example of bins when using an N-point DFT with N = 4. The bin centers are 0, π/2, π and 3π/2, all marked with ’×’.
2.19 Spectrum X(f) (top) and X(e^jΩ) when Fs = 60 Hz.
2.20 Real part of e^((−σ+j10π)t). The values of σ are −0.3 and 0.3 for the first (left) and second graphs, respectively.
2.21 Three poles (marked with ‘x’) and the zero (marked with ‘o’) of Eq. (2.54).
2.22 Magnitude (in dB) of X(s) = (s − 1)/(s^3 + 4s^2 + 9s + 10) = (s − 1)/[(s + 2)(s + 1 − j2)(s + 1 + j2)] (Eq. (2.54)).
2.23 Phase (in rad) of Eq. (2.54).
2.24 Graph of the magnitude (in dB) of X(s) = (s − 1)/(s^3 + 4s^2 + 9s + 10) = (s − 1)/[(s + 2)(s + 1 − j2)(s + 1 + j2)] (Figure 2.22) with the identification of the corresponding values of the Fourier transform (magnitude).
2.25 The values of the magnitude of the Fourier transform corresponding to Figure 2.24.
2.26 Two-dimensional representation of Figure 2.25 obtained with the command freqs in Matlab/Octave, showing the peak at ω = 2 rad/s due to the respective pole.
2.27 Magnitude (in dB) of X(z) = (z + 0.9)/[(z − 0.8)(z − 0.5 − j0.6)(z − 0.5 + j0.6)] (Eq. (2.63)).
2.28 Phase (in rad) of Eq. (2.63).
2.29 Pole / zero diagram for Eq. (2.63).
2.30 Graph of the magnitude (in dB) of X(z) = (z + 0.9)/[(z − 0.8)(z − 0.5 − j0.6)(z − 0.5 + j0.6)] (Figure 2.27) with the identification of the corresponding values of the DTFT (unit circle |z| = 1).
2.31 The values of the magnitude of the DTFT corresponding to Figure 2.30.
2.32 Magnitude (top) and phase (bottom) of the DTFT corresponding to Eq. (2.63). These plots can be obtained with the Matlab/Octave command freqz and are a more convenient representation than, e.g., Figure 2.31.
2.33 Signal x[n] = δ[n − 11] analyzed by 32-point DCT and Haar transforms.
2.34 A segment of one channel of the original ECG data.
2.35 Original and reconstructed ECG signals with a DCT of N = 32 points and discarding 26 (high-frequency) coefficients.
2.36 Performance of five DCT-based ECG coding schemes. The number of points is varied as N ∈ {4, 8, 32, 64, 128} and K = 1, 2, …, M − 1.
2.37 A zoom of the eye region of the Lenna image.
2.38 Alternative representation of the DTFS / DFT of x[n] = 10 cos((π/6)n + π/3) using fftshift.
2.39 Five cosine signals xi[n] = 10 cos(Ωn) with frequencies Ω = 0, 2π/32, 4π/32, π, 31π/16 for i = 0, 1, 2, 16, 32, and the real part of their DTFS using N = 32 points.
2.40 Analysis with a DFT of 32 points of x[n] composed of three sinusoids and a DC level.
2.41 Spectrum of a signal x[n] = 4 cos((2π/6)n) with a period of 6 samples obtained with a 16-point DFT, which created spurious components.
2.42 Explicitly repeating the block of N cosine samples from Figure 2.41 to indicate that the spurious components are a manifestation of the lack of a perfect cosine in the time domain.
2.43 Three periods of each signal: pulse train x[n] with N = 10 and N1 = 5 and amplitude assumed to be in volts (a), and the magnitude (b) and phase (c) of its DTFS.
2.44 Behavior when N1 of Figure 2.43 is decreased from N1 = 4 to 1.
2.45 DTFT of an aperiodic pulse with N1 = 5 non-zero samples and DTFT estimates obtained via a DFT of N = 20 points.
2.46 A version of Figure 2.45 using a DFT of N = 256 points.
2.47 A version of Figure 2.45 using freqz with 512 points representing only the positive part of the spectrum.
2.48 Reproducing the graphs generated by freqz in Figure 2.47.
2.49 DFT with N = 8 points of a signal sampled at Fs = 100 kHz.
3.1 Ideal magnitude specifications for lowpass and highpass filters.
3.2 Magnitude of the frequency response of a practical analog filter in linear scale (|H(f)|, top) and in dB (20 log10 |H(f)|), which should be compared to the ideal case of Figure 3.1.
3.3 Example of specification masks for designing low and highpass filters.
3.4 Example of a specification mask for designing bandpass filters.
3.5 Diagram of systems, emphasizing the linear and time-invariant (LTI) systems and the systems described by linear, constant-coefficient differential (or difference) equations (LCCDE).
3.6 Example of convolution between x[n] = 2δ[n] − 3δ[n−1] and h[n] = δ[n] − 2δ[n−1] + δ[n−2].
3.7 Convolution of a pulse p(t) = 4 rect(5t − 0.1) with itself, obtained with Listing 3.5.
3.8 Frequency response of H(ω) = 1/(jω + 2) represented in polar form: magnitude (top) and phase (bottom).
3.9 Frequency response represented in polar form: magnitude (top) and phase (bottom).
3.10 Version of Figure 3.9 obtained with the command freqz(1,[1 -0.7]).
3.11 Spectrum Xs(f) of a sampled signal xs(t) obtained by the convolution between X(f) and P(f) as indicated in Eq. (3.16).
3.12 Passband signal with BW = 25 Hz and center frequency fc = 70 Hz.
3.13 Result of sampling X(f) in Figure 3.12 with Fs = 56 Hz.
3.14 Sampling with Fs = 450 Hz a complex-valued signal with a non-symmetrical spectrum.
3.15 Result of converting x[n] with spectrum X(e^jΩ) into xs(t) with Xs(ω) = X(e^(jωTs)) via a D/S conversion using Fs = 10 Hz.
3.16 Extended version of Figure 1.36 using an arbitrary reconstruction filter h(t) and incorporating the filters A(s) and R(s).
3.17 Reconstruction of a digital signal with BW = 25 kHz and Fs = 200 kHz using an analog filter with cutoff frequency fc = 25 kHz.
3.18 Same as Figure 3.17, but for a signal with BW = fc = 80 kHz.
3.19 Relations between natural frequency ωn, pole center frequency ω0, decay rate α and damping ratio ζ for a pair of complex conjugate poles.
3.20 Magnitude of the frequency response for the SOS expressed by Eq. (3.28) and Eq. (3.31).
3.21 Time-domain performance parameters for an underdamped system based on its step response.
3.22 Bandwidth BW = f2 − f1 defined by the cutoff frequencies where the gain falls 3 dB below the reference value at f0.
3.23 DTFT magnitude in dB of a 4th-order Butterworth filter with cutoff frequency of 50 Hz and the equivalent ideal filter with absolute bandwidth of 102.4 Hz.
3.24 Effect of adding a linear phase e^(−j2πN0k/N) (N0 = 4 and N = 50) to the DTFS of x[n], resulting in a delayed version y[n] = x[n − 4].
3.25 Effect of adding the specified nonlinear phase (top) to the pulse in Figure 3.24, which leads to a distorted signal (bottom).
3.26 Performance of a commercial SAW filter. The insertion gain IG(f) is shown at two resolutions in the left plot and superimposed on the group delay at the right.
3.27 Performance of a commercial ceramic filter.
3.28 Lowpass active analog filter designed with Texas Instruments’ FilterPro software.
3.29 Frequency response of the filters designed in Listing 3.11. The magnitude specification masks are indicated. Note that the phase was unwrapped with the command unwrap for better visualization.
3.30 The canonical interface of a digital filter H(z) with the analog world via A/D and D/A processes.
3.31 Comparison of two analog filters with their digital counterpart of Eq. (3.49). The “ideal” analog filter corresponds to Eq. (3.48), while the “10% tolerance” one corresponds to its realization using the schematic of Figure 3.28.
3.32 Comparison of the analog filter specification (a) and two corresponding digital versions assuming Fs = 2000 Hz: (b) was obtained with ω = ΩFs and (c) uses the convention adopted in Matlab/Octave.
3.33 Examples (a) and (b) of bilinear mappings from the unit circle in the z plane to the s plane. Mappings (c) and (d) of chosen points in the s plane to the z plane. Each example shows the points identified by numbers in their original and mapped planes.
3.34 Version of Figure 1.45 in which the mapping between ω and Ω uses the bilinear transformation instead of the fundamental equation ω = ΩFs of Eq. (1.36).
3.35 The bilinear transformation leads to a nonlinear warping between ω (rad/s) and Ω (rad). This example uses Fs = 0.5 Hz such that ω = tan(Ω/2).
3.36 Relation imposed by the bilinear transformation between ω (rad/s) and Ω (rad) for Fs = 100 Hz. In this case, the frequency ω0 = 540.4 rad/s is mapped to Ω = 2.433 ± k2π rad.
3.37 |H(f)| corresponding to H(s) = 101(s − 1)(s − 1)/[(s + 1)(s + 1 − j10)(s + 1 + j10)] of Eq. (3.76), to illustrate the bilinear transformation.
3.38 Frequency responses from H(z) obtained via the bilinear transformation of H(s) in Eq. (3.76) using Fs = 1, 3, 5 and 7 Hz.
3.39 Magnitude (in dB) for bilinear transformations of H(s).
3.40 Frequency responses of Hs(ωa) given by Eq. (3.78), Hz(e^jΩ) obtained via bilinear, and Hz(e^(jωd Ts)) (from top to bottom).
3.41 Three categories of bilinear applications and the respective choices of the sampling frequency Fs. The value of Fs in the bilinear transformation depends on the application and can be arbitrary in the case of digital filter design.
3.42
Steps of IIR filter design using the bilinear transformation when the filter requirements are in discrete-time (frequencies in radians).
3.43
Version of Figure 1.45 and Figure 3.34 that considers both the bilinear transformation and the conversion of continuous to discrete-time via
ω
=
Ω
s
.
3.44
Steps for designing an IIR filter
H
(
z
)
using the bilinear transformation, assuming a lowpass filter. The set of requirements can be in discrete (
Φ
) or continuous-time (
ϕ
) domain.
3.45
Steps for matching a single frequency
ω
m
using the bilinear transformation when a continuous-time system function
G
(
s
)
and
F
s
are provided.
3.46
Version of Figure 3.40 for which bilinear used pre-warping for obtaining ω_a = ω_d = 20 rad/s.
3.47
Mask for a differentiator filter specified with the syntax for arbitrary filter magnitudes.
3.48
Frequency response of filters obtained with firls.
3.49
FIR design described in both time and frequency domains.
3.50
Group delay and phase for a channel represented by a symmetric FIR with linear phase and constant group delay of 3 samples.
3.51
Group delay and phase for a channel represented by a non-symmetric FIR h=[0.3 -0.4 0.5 0.8 -0.2 0.1 0.5] with non-linear phase.
3.52
Impulse and frequency responses for the four types of symmetric FIR filters exemplified in Listing 3.31.
3.53
Two distinct realizations of y[n] = 5x[n] − 5x[n − 1].
3.54
Two structures for FIR realizations.
3.55
Two alternatives for implementing the digital filter of Eq. (3.97).
3.56
IIR of Eq. (3.97) implemented with the transposed direct form II. The intermediate diagram in (a) is obtained by transposing Figure 3.55(c), while (b) simply reorganizes it.
3.57
Realization of Eq. (3.97) with transposed direct form II second order sections.
3.58
The original magnitude response of the 8th-order filter and its Q7.8 quantized version with 16 bits per coefficient (b_i = 7 and b_f = 8).
3.59
Zeros and poles for the original and quantized filters discussed in Figure 3.58.
3.60
Magnitude of the quantized filter with Q10.15 using 26 bits (b_i = 10 and b_f = 15) and the original filter of order 8 in Figure 3.58.
3.61
Magnitude responses for the original filter of order 14 and its quantized version with b = 26 bits (b_i = 10 and b_f = 15).
3.62
Zeros and poles for the corresponding filter in Figure 3.61. Note the occurrence of poles outside the unit circle, which make the quantized filter unstable.
3.63
Example of filter outputs with floating-point double precision and fixed-point using Q2.3 generated with Listing 3.38.
3.64
Frequency responses of H_non(z) and its minimum-phase counterpart H_min(z).
3.65
Group delay for the non-minimum and minimum phase systems of Figure 3.64.
3.66
Original spectrum X(e^{jΩ}) (top) and its upsampled version Q(e^{jΩ}) = X(e^{jLΩ}) with L = 4 (bottom).
3.67
Zoom of the bottom plot of Figure 3.66: due to the upsampling by L = 4, Q(e^{jΩ}) has four replicas of the original lowpass spectrum.
3.68
Original spectrum Z(e^{jΩ}) (top) and the result Y(e^{jΩ}) (bottom) of downsampling it by M = 3.
3.69
Screenshot of the DigitalFilter GUI after the user entered the coefficients of the filter obtained with the Matlab/Octave command [B,A]=butter(4,0.5).
3.70
Result of an experiment with Listing 3.43 and the sound boards of two computers: an analog bandpass filter implemented with the canonical interface of Figure 3.30.
3.71
Result similar to Figure 3.70 but without silence intervals in the acquired signal (top plot).
3.72
Magnitudes of the frequency responses for two resonators with H(s) as in Eq. (3.27).
3.73
Poles (left) and magnitude |H(ω)| (right) for the sixth-order Butterworth filter of Listing 3.45.
3.74
Similar to Figure 3.73, but with a sixth-order Chebyshev Type 1 filter.
3.75
Zero-pole plot (top) and magnitude in dB (bottom) for the sixth-order elliptic filter of Listing 3.45.
3.76
Sound system phase frequency response ∠H(f) estimated from an impulse response.
3.77
DSL line topology and corresponding impulse response.
3.78
Impulse response and group delay of the IIR filter obtained with [B,A]=butter(8,0.3).
3.79
Input signal x[n] and its FFT.
3.80
Input x[n] and output y[n] signals obtained via convolution with truncated h[n].
3.81
Input x[n] and output y[n] signals obtained via filter.
3.82
Output y[n] aligned with the input x[n] and corresponding error x[n] − y[n].
3.83
Filtering in blocks of N = 5 samples but not updating the filter’s memory.
3.84
Linear phase FIR channel obtained with h=fir1(10,0.8).
3.85
IIR channel obtained with [B,A]=butter(5,0.8).
3.86
Spectrum of a band-limited, even, and real-valued signal x[n].
4.1
Selected windows w[n] of duration N = 32 samples in the time domain.
4.2
The DTFTs W(e^{jΩ}) of the windows in Figure 4.1.
4.3
The DTFT of the windows in Figure 4.1 with their values normalized such that |W(e^{jΩ})| = 1 for Ω = 0 rad.
4.4
Comparison of DTFT and DFT/FFT of a bin-centered signal x[n] = 6cos((α2π∕N)n), with α = 8 and N = 32, using a rectangular window.
4.5
Comparison of DTFT and DFT/FFT of a bin-centered signal x[n] = 6cos((α2π∕N)n), with α = 8 and N = 32, using a flattop window.
4.6
Comparison of DTFT and DFT/FFT of a non-bin-centered signal x[n] = 6cos((α2π∕N)n), with α = 8.5 and N = 32, using a rectangular window.
4.7
Comparison of DTFT and DFT/FFT of a non-bin-centered signal x[n] = 6cos((α2π∕N)n), with α = 8.5 and N = 32, using a flattop window.
4.8
Comparison of DTFT and DFT/FFT of a non-bin-centered signal x[n] = 6cos((α2π∕N)n), with α = 8.5 and N = 32, using a Hann window.
4.9
DFT filter bank with a rectangular window of N = 8 samples. The circles mark the DFT bin centers. The filter for k = 3 is emphasized.
4.10
DFT filter bank with a Kaiser window of N = 8 samples. The filter for k = 6 is emphasized.
4.11
Relationship between root locations and spectrum for a rectangular window with N = 32 samples.
4.12
Relationship between root locations and spectrum for a Kaiser window with N = 32 samples.
4.13
Spectrum leakage when a cosine x[n] = 3cos(Ωn), with Ω = 1 rad, is windowed with a rectangular window w[n] of duration N = 14 samples.
4.14
Spectrum leakage when the cosine x[n] = 3cos(n) of Figure 4.13 is now windowed with a rectangular window w[n] of duration N = 50 samples.
4.15
Spectrum leakage when the cosine x[n] = 3cos(n) of Figure 4.13 is now windowed with a rectangular window w[n] of duration N = 1000 samples.
4.16
Comparison of DTFT and DFT/FFT for the bin-centered x[n] = 6cos((α2π∕N)n), with α = 2 and N = 8, using a rectangular window.
4.17
Version of Figure 4.16 for a (worst-case) non-bin-centered x[n] = 6cos((α2π∕N)n), with α = 2.5 and N = 8, using a rectangular window.
4.18
Comparison of DTFT and DFT/FFT for a constant signal x[n] = 6cos((α2π∕N)n) = A, with α = 0 (centered at DC) and N = 8, using a rectangular window.
4.19
Comparing FFT results for a non-periodic signal and a periodic but non-bin-centered signal.
4.20
Comparison of DTFT and DFT for x[n] = 6cos((α2π∕N)n) with α = 50 (left) and α = 30.5 (right), and N = 256, using a rectangular window.
4.21
Comparison of spectra obtained with four windows in case both sinusoids are bin-centered (left plots) and not (right plots).
4.22
Individual spectra of the two sinusoids superimposed, obtained using the Kaiser window.
4.23
Individual spectra of the two sinusoids superimposed, obtained using the rectangular window.
4.24
Important categories of signals. Some of these features need to be taken into account when defining the PSD function.
4.25
PSD S_x(f) of a continuous-time white noise with N_0∕2 = 3 W/Hz (top) and its discrete-time counterpart S_x(e^{jΩ}) obtained with F_s = 200 Hz (bottom).
4.26
ESD, PSD and MS spectrum with respective units, and two methods for estimating PSDs that can also be used to estimate the ESD and MS spectrum.
4.27
Periodogram of x[n] = 10cos((2π∕64)n) in dBW/Hz estimated by periodogram.m with a 1024-point FFT and assuming F_s = 8 kHz.
4.28
Periodogram of two bin-centered sinusoids at 250 and 500 Hz, calculated both via Matlab and from its definition.
4.29
Periodograms of two bin-centered sinusoids at 250 and 500 Hz, calculated with the rectangular and Hamming windows.
4.30
Periodograms of two non-bin-centered sinusoids at f1=507.8125 and f2=257.8125 Hz, calculated with the rectangular and Hamming windows.
4.31
Periodograms of a cosine contaminated by AWGN at an SNR of −3 dB, with an FFT of N = 1024 points (top plot) and 16384 (bottom). In this case the SNR cannot be inferred directly from the noise level.
4.32
Discrete-time PSD of x[n] = 10cos((2π∕8)n) in linear scale estimated with periodograms.
4.33
Periodogram and MS spectrum for a sum of sinusoids. Both are in dB scale.
4.34
Periodograms of a white noise with power equal to 600 W estimated with N = 300 (top) and N = 3000 (bottom) samples.
4.35
PSD of a filtered white noise x[n] estimated via the autocorrelation.
4.36
PSD of a white noise x[n] with power equal to 600 W estimated by Welch’s method with M = 32 (top) and M = 256 (bottom) samples per segment.
4.37
PSD of a filtered white noise x[n] estimated by Welch’s method.
4.38
The prediction-error filter is A(z) = 1 − Ã(z), where Ã(z) provides a prediction ỹ[n] of the current n-th signal sample y[n], based on previous samples y[n − 1], …, y[n − P].
4.39
PSDs estimated from a realization y[n] of an autoregressive process. The model adopted for the AR-based estimation matches the one used to generate y[n].
4.40
PSDs estimated from a realization y[n] of a moving average process, which does not match the model adopted for the AR-based estimation.
4.41
PSD (top) and spectrogram (bottom) of a cosine that has its frequency increased from Ω = 2π∕30 to 2π∕7 and its power decreased by 20 dB at half of its duration.
4.42
All twelve DTMF symbols: 1-9, *, 0, #, each one composed of the sum of a low [697, 770, 852, 941] Hz and a high [1209, 1336, 1477] Hz frequency.
4.43
Example of a narrowband spectrogram of a speech signal.
4.44
Example of a wideband spectrogram of a speech signal.
4.45
DTFT magnitude of a cosine of frequency Ω = 1.7279 rad.
4.46
DTFT and FFT magnitudes of a cosine of frequency Ω = 2.1206 rad.
4.47
Sound system magnitude frequency response |H(f)| estimated from an impulse response.
4.48
Sound system magnitude frequency response |H(f)| estimated from a white noise input.
4.49
Spectrogram and tracks of the first four formant frequencies estimated via LPC for a speech sentence “We were away”.
4.50
Eight DTMF symbols, each one with a 100 ms duration.
A.1
Q function for three different SNR ranges.
A.2
The perpendicular line for obtaining the projection p_xy of a vector x onto y in ℝ².
A.3
Projections of vectors x and y onto each other.
A.4
Scatter plot of the input data and the basis functions obtained via PCA and Gram-Schmidt orthonormalization.
A.5
Scatter plots of a two-dimensional Gaussian vector x (represented by x) and PCA-transformed vectors y (represented by +).
A.6
Scatter plots of a two-dimensional Gaussian vector x (x) and Gram-Schmidt-transformed vectors y (+).
A.7
PMF for a dice result.
A.8
Obtaining probability from a pdf (density function) requires integrating over a range.
A.9
Example of a finite-duration random signal.
A.10
Example of five realizations of a discrete-time random process.
A.11
Example of evaluating a random process at time instants n = 4 and n = 6, which correspond to the values of two random variables X[4] and X[6].
A.12
Example of a joint pdf of the continuous random variables X[4] and X[6].
A.13
Correlation for the data in matrix victories.
A.14
Version of Figure A.13 using an image.
A.15
Comparison between the 3-d representation of the autocorrelation matrix in Figure A.13 and the one using lags as in Eq. (A.73).
A.16
Comparison between autocorrelation representations using images instead of 3-d graphs as in Figure A.15.
A.17
Representation of a WSS autocorrelation matrix that depends only on the lag l.
A.18
Suggested taxonomy of random processes.
A.19
Correlation for random sequences with two equiprobable values:
A.20
Alternative representation of the correlation values for the polar case of Figure A.19.
A.21
One-dimensional ACF R_X[l] for the data corresponding to Figure A.19 (unipolar and polar codes).
A.22
ACF estimated using ergodicity and waveforms with 1,000 samples for each process.
A.23
Functions used to characterize cyclostationary processes.
A.24
Single realization x[n] of Eq. (A.82) (top) and the ensemble variance over time (bottom plot), which has a period of P∕2 = 15 samples.
A.25
Cyclostationary analysis of the modulated white Gaussian noise.
A.26
Realizations of the polar signal m_u[n] after upsampling by L = 4.
A.27
Autocorrelation of the cyclostationary polar signal m_u[n] upsampled by L = 4.
A.28
Autocorrelation of the cyclostationary unipolar signal m_u[n] obtained with upsampling by L = 4.
A.29
Realizations of an upsampled polar signal (L = 4) with a random initial sample.
A.30
ACF for the same polar process that generated the realizations in Figure A.29.
A.31
Three realizations of the random process corresponding to upsampling by 2 and randomly shifting a sinusoid of period N = 4 and amplitude A = 4.
.
A.32
Correlation of the WSS process corresponding to Figure A.31.
A.33
Autocorrelation matrices for the (a) WSS process obtained by phase randomization of the (b) WSC process.
A.34
Basic setup for measuring the insertion loss of a device under test (DUT).
A.35
PDF for a dice roll result when the random variable is assumed to be continuous.
A.36
A 3-d representation of the system H(z) = 3 − 2z^{−1}.
A.37
Fixed-point representation Q3.4 of the real number 5.0625.
A.38
Comparison of step sizes for IEEE 754 floating points with single and double precision.
B.1
Screenshot of the FileViewer software.
B.2
Screenshot of the FileViewer dialog window that allows converting between Linux and Windows text files.
B.3
Screenshot of the FileViewer software showing the result of converting the Windows file of Figure B.1 to the Linux format.
B.4
Interpretation of the file in Figure B.3 as short (2-byte) big-endian elements.
B.5
Contents in hexadecimal of the file floatsamples.bin generated with Listing B.1.
B.6
Interpretation as big-endian floats of the file floatsamples.bin generated with Listing B.1.
B.7
Contents interpreted as floats of the big-endian file htk_file.bin generated with HTK.
Digital Signal Processing with Python, Matlab or Octave