
1.2. Signals: categories, representations and characterizations

1.2.1. Definition of continuous-time and discrete-time signals

The function of a signal is to serve as a medium for information. It is a representation of the variations of a physical variable.

A signal can be measured by a sensor, then analyzed to describe a physical phenomenon. This is the case of a voltage measured across the terminals of a resistor to verify the correct functioning of an electronic board, or, to cite another example, of speech signals that describe the air pressure fluctuations perceived by the human ear.

Generally, a signal is a function of time. There are two kinds of signals: continuous-time and discrete-time signals.

A continuous-time or analog signal can be measured at any instant. Most physical phenomena produce, for the most part, continuous-time signals.

Figure 1.1. Example of the sleep spindles of an electroencephalogram (EEG) signal

The advancement of computer-based techniques at the end of the 20th century led to the development of digital methods for information processing. The ability to convert analog signals into digital ones has brought continual improvement to processing devices in many application fields. The most significant example is the field of telecommunications, especially cell phones and digital television. The digital representation of signals has led to an explosion of new techniques in fields as varied as speech processing, audio frequency signal analysis, biomedical disciplines, seismic measurements, multimedia, radar and measurement instrumentation, among others.

The signal is said to be a discrete-time signal when it can only be measured at certain instants; it corresponds to a sequence of numerical values. Sampled signals are the result of sampling, uniform or not, of a continuous-time signal. In this work, we are especially interested in signals taken at regular intervals of time, called the sampling period, which we write Ts = 1/fs, where fs is called the sampling rate or the sampling frequency. This is the situation for a temperature reading taken during an experiment, or for a speech signal (see Figure 1.2). This discrete signal can be written either as x(k) or x(kTs); generally, we will use the first notation for its simplicity. In addition, a digital signal is a discrete-time, discrete-valued signal: each sample value belongs to a finite set of possible values.

Figure 1.2. Example of a digital voiced speech signal (the sampling frequency fs is 16 kHz)
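As an illustration of uniform sampling, here is a minimal Python sketch (not from the original text); the 50 Hz sine standing in for the analog signal and all numerical values are assumptions chosen for the example.

```python
import numpy as np

fs = 1000                  # assumed sampling frequency fs, in Hz
Ts = 1.0 / fs              # sampling period Ts = 1/fs
k = np.arange(100)         # sample indices k

# Discrete-time signal x(k) = x_a(k*Ts), sampling an assumed 50 Hz sine.
x = np.sin(2 * np.pi * 50 * k * Ts)
```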

The choice of a sampling frequency depends on the applications being used and the frequency range of the signal to be sampled. Table 1.1 gives several examples of sampling frequencies, according to different applications.

Table 1.1. Sampling frequencies according to processed signals

In Figure 1.3, we show an acquisition chain, a processing chain and a signal reconstruction chain.

The adaptation amplifier makes the input signal compatible with the measurement chain.

A pre-filter, either band-pass or low-pass, is chosen to limit the width of the input signal spectrum; this avoids undesirable spectral overlap and hence the loss of spectral information (aliasing). We will return to this point when we discuss the sampling theorem in section 3.2.2.9. This kind of anti-aliasing filter also makes it possible to reject out-of-band noise and, when it is a band-pass filter, it helps suppress the continuous (DC) component of the signal.

The analog-to-digital (A/D) converter carries out sampling at the sampling frequency fs, followed by quantization; that is, it assigns a code, on a certain number of bits, to each sample.

The digital input signal is then processed in order to give the digital output signal. The reconversion into an analog signal is made possible by using a D/A converter and a smoothing filter.

Many parameters influence sampling, notably the quantization step and the response time of the digital system, both during acquisition and reconstruction. However, by improving the precision of the A/D converter and the speed of the processors, we can get around these problems. The choice of the sampling frequency also plays an important role.

Figure 1.3. Complete acquisition chain and digital processing of a signal
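As a rough illustration of the quantization step performed by the A/D converter, here is a sketch of a uniform quantizer in Python; the mid-rise characteristic, the input range and the bit depth are assumptions for the example, not a description of a specific converter.

```python
import numpy as np

def quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Uniform (mid-rise) quantization of samples in [x_min, x_max] on n_bits."""
    levels = 2 ** n_bits                          # number of quantization levels
    q = (x_max - x_min) / levels                  # quantization step
    codes = np.clip(np.floor((x - x_min) / q), 0, levels - 1)  # integer codes
    return x_min + (codes + 0.5) * q              # quantized amplitudes

# Example: quantize one period of a sine on 8 bits.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
xq = quantize(np.sin(2 * np.pi * t), n_bits=8)
```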

Different types of digital signal representation are possible, such as functional representations, tabulated representations, sequential representations, and graphic representations (as in bar diagrams).

Looking at examples of basic digital signals, let us recall the unit sample sequence, represented by the Kronecker symbol δ(k), the unit step signal u(k), and the unit ramp signal r(k). This gives us:

Unit sample sequence:

δ(k) = 1 for k = 0, and δ(k) = 0 for k ≠ 0

Unit step signal:

u(k) = 1 for k ≥ 0, and u(k) = 0 for k < 0

Unit ramp signal:

r(k) = k for k ≥ 0, and r(k) = 0 for k < 0

Figure 1.4. Unit sample sequence δ(k) and unit step signal u(k)
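These three basic sequences can be written directly in Python; the short sketch below (an illustration, not from the text) evaluates them on a range of indices around the origin.

```python
import numpy as np

def delta(k):
    """Unit sample sequence: delta(k) = 1 for k = 0, 0 otherwise."""
    return np.where(k == 0, 1.0, 0.0)

def u(k):
    """Unit step signal: u(k) = 1 for k >= 0, 0 otherwise."""
    return np.where(k >= 0, 1.0, 0.0)

def r(k):
    """Unit ramp signal: r(k) = k for k >= 0, 0 otherwise."""
    return np.where(k >= 0, k, 0).astype(float)

k = np.arange(-5, 6)
print(delta(k))
print(u(k))
print(r(k))
```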

1.2.2. Deterministic and random signals

We classify signals as being deterministic or random. Random signals can be defined according to the domain in which they are observed. Sometimes, even when all the experimental conditions for obtaining the physical variable have been specified, we see that it fluctuates; its values are not completely determined, but they can be evaluated in terms of probability. In this case, we are dealing with a random experiment and the signal is called random. In the opposite situation, the signal is called deterministic.

Figure 1.5. Several realizations of a 1-D random signal

EXAMPLE 1.1.– Let us look at a continuous signal modeled by a sinusoidal function of the following type:

x(t) = a×sin(2πft)

This kind of model is deterministic. However, in other situations, the signal amplitude and the signal frequency can be subject to variations. Moreover, the signal can be disturbed by an additive noise b(t); then it is written in the following form:

x(t) = a(t)×sin(2πf(t)t) + b(t)

where a(t), f(t) and b(t) are random variables for each value of t. We then say that x(t) is a random signal. The properties of the received signal x(t) then depend on the statistical properties of these random variables.
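To show what such a random signal might look like, here is a sketch that draws one realization of x(t) = a(t)×sin(2πf(t)t) + b(t); the nominal values and the sizes of the random fluctuations are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

fs = 1000                                     # assumed sampling frequency, in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)             # time axis for one realization

a = 1.0 + 0.1 * rng.standard_normal(t.size)   # random amplitude a(t)
f = 50.0 + 1.0 * rng.standard_normal(t.size)  # random frequency f(t), around 50 Hz
b = 0.05 * rng.standard_normal(t.size)        # additive noise b(t)

x = a * np.sin(2 * np.pi * f * t) + b         # one realization of the random signal
```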

Figure 1.6. Several examples of a discrete random 2-D process

1.2.3. Periodic signals

The class of signals termed periodic plays an important role in signal and image processing. In the case of a continuous-time signal, x(t) is called periodic of period T0 if T0 is the smallest value verifying the relation:

x(t + T0) = x(t), ∀t.

And, for a discrete-time signal, the period of which is N0, we have:

x(k + N0) = x(k), ∀k.

EXAMPLE 1.2.– A classic periodic signal is the sinusoid x(t) = a×sin(2πt/T0), of period T0; in discrete time, x(k) = cos(2πk/N0) is periodic of period N0 for integer N0.
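A quick numerical check of the discrete-time periodicity relation x(k + N0) = x(k), using an assumed sinusoid of period N0 = 20 samples:

```python
import numpy as np

N0 = 20                                # assumed period, in samples
k = np.arange(100)
x = np.cos(2 * np.pi * k / N0)         # discrete sinusoid of period N0

# Verify x(k + N0) = x(k) for every index available in the record.
assert np.allclose(x[N0:], x[:-N0])
```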

1.2.4. Mean, energy and power

We can characterize a signal by its mean value. This value represents the continuous (DC) component of the signal.

When the signal is deterministic, it equals:

x̄ = lim_{T1→∞} (1/T1) ∫_0^{T1} x(t) dt

When a continuous-time signal is periodic with period T0, the expression of the mean value reduces to:

x̄ = (1/T0) ∫_0^{T0} x(t) dt

PROOF.– We can always express the integration time T1 according to the period of the signal in the following way:

T1 = kT0 + ξ where k is an integer and ξ is chosen so that 0 < ξ ≤ T0.

From there, x̄ = lim_{k→∞} (1/(kT0 + ξ)) ∫_0^{kT0+ξ} x(t) dt = lim_{k→∞} (1/(kT0)) ∫_0^{kT0} x(t) dt, since ξ becomes negligible compared to kT0.

By using the periodicity property of the continuous signal x(t), we deduce that:

x̄ = lim_{k→∞} (1/(kT0)) k ∫_0^{T0} x(t) dt = (1/T0) ∫_0^{T0} x(t) dt

When the signal is random, the statistical mean is defined, for a fixed value of t, as follows:

E[x(t)] = ∫_{−∞}^{+∞} x p(x, t) dx

where E[.] indicates the mathematical expectation and p(x, t) represents the probability density of the random signal at the instant t. We can obtain the mean value if we know p(x, t); in other situations, we can only obtain an estimated value.

For the class of signals called ergodic in the sense of the mean, we can identify the statistical mean with the temporal mean, which brings us back to the expression seen previously:

E[x(t)] = lim_{T1→∞} (1/T1) ∫_0^{T1} x(t) dt

Often, we are interested in the energy of the processed signal. For a continuous-time signal x(t), we have:

E = ∫_{−∞}^{+∞} |x(t)|² dt

In the case of a discrete-time signal, the energy is defined as the sum of the magnitude-squared values of the signal x(k):

E = Σ_{k=−∞}^{+∞} |x(k)|²

For a continuous-time signal x(t), its mean power P is expressed as follows:

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt

For a discrete-time signal x(k), its mean power is represented as:

P = lim_{N→∞} (1/(2N+1)) Σ_{k=−N}^{N} |x(k)|²
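For a finite record of N samples, the temporal mean, the energy and the mean power reduce to simple sums; the Python sketch below (an illustration under that finite-record assumption) computes all three.

```python
import numpy as np

def temporal_mean(x):
    """Temporal mean of a finite record: (1/N) * sum of x(k)."""
    return np.mean(x)

def energy(x):
    """Energy: sum of the magnitude-squared sample values |x(k)|^2."""
    return np.sum(np.abs(x) ** 2)

def mean_power(x):
    """Mean power over the record: energy divided by the number of samples."""
    return energy(x) / x.size

# Example: a unit-amplitude sinusoid has mean ~0 and mean power ~1/2.
k = np.arange(1000)
x = np.cos(2 * np.pi * k / 20)
print(temporal_mean(x), energy(x), mean_power(x))
```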

In signal processing, we often introduce the concept of signal-to-noise ratio (SNR) to characterize the noise that can affect signals. This quantity, expressed in decibels (dB), corresponds to the ratio of the powers of the signal and the noise. It is represented as:

SNR_dB = 10 log10(Psignal / Pnoise)

where Psignal and Pnoise indicate, respectively, the powers of the sequences of the signal and the noise.

EXAMPLE 1.3.– Let us consider the example of a 300 Hz periodic signal perturbed by zero-mean additive Gaussian noise, with a signal-to-noise ratio varying from 20 dB to 0 dB in 10 dB steps. Figures 1.7 and 1.8 show these different situations.

Figure 1.7. Temporal representation of the original signal and of the signal with additive noise, with a signal-to-noise ratio equal to 20 dB

Figure 1.8. Temporal representation of signals with additive noise, with signal-to-noise ratios equal to 10 dB and 0 dB
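Here is a sketch of how signals such as those of Example 1.3 can be produced: white Gaussian noise is scaled so that the ratio of signal power to noise power matches a prescribed SNR in dB. The sampling frequency and the duration are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_noise(x, snr_db):
    """Add zero-mean white Gaussian noise at a prescribed SNR (in dB)."""
    p_signal = np.mean(np.abs(x) ** 2)             # power of the signal
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)   # noise power for the target SNR
    return x + np.sqrt(p_noise) * rng.standard_normal(x.shape)

fs = 8000                                          # assumed sampling frequency
t = np.arange(0.0, 0.05, 1.0 / fs)
x = np.sin(2 * np.pi * 300 * t)                    # 300 Hz signal, as in Example 1.3
noisy = {snr: add_noise(x, snr) for snr in (20, 10, 0)}
```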

1.2.5. Autocorrelation function

Let us take the example of a deterministic continuous signal x(t) of finite energy. We can carry out a signal analysis from its autocorrelation function, which is represented as:

Rxx(τ) = ∫_{−∞}^{+∞} x(t) x*(t − τ) dt

The autocorrelation function allows us to measure the degree of resemblance between x(t) and x(t − τ). Several of its properties can then be derived from properties of the scalar product.

From the relations shown in equations (1.4) and (1.9), we see that Rxx(0) corresponds to the energy of the signal. We can easily demonstrate the following properties:

|Rxx(τ)| ≤ Rxx(0), ∀τ and Rxx(−τ) = Rxx*(τ)

When the signal is periodic with period T0, the autocorrelation function is also periodic with period T0. It can be obtained as follows:

Rxx(τ) = (1/T0) ∫_0^{T0} x(t) x*(t − τ) dt

We should remember that the autocorrelation function is a specific instance of the intercorrelation function of two deterministic signals x(t) and y(t), represented as:

Rxy(τ) = ∫_{−∞}^{+∞} x(t) y*(t − τ) dt
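For a finite-energy discrete sequence, the autocorrelation can be estimated with numpy.correlate; the sketch below (illustrative, for a real-valued sequence) also checks that the zero-lag value equals the energy and that the symmetry property holds.

```python
import numpy as np

def autocorrelation(x):
    """Autocorrelation Rxx(tau) of a real finite-energy sequence, all lags."""
    return np.correlate(x, x, mode="full")   # lags run from -(N-1) to N-1

x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
rxx = autocorrelation(x)

# The central value, Rxx(0), equals the energy of the signal.
assert np.isclose(rxx[x.size - 1], np.sum(x ** 2))
# Symmetry for a real signal: Rxx(-tau) = Rxx(tau).
assert np.allclose(rxx, rxx[::-1])
```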

Now, let us look at a discrete-time random process {x(k)}. We can describe this process from its autocorrelation function, taken at the instants k1 and k2, written Rxx(k1, k2) and expressed as:

Rxx(k1, k2) = E[x(k1) x*(k2)]

where x*(k2) denotes the conjugate of x(k2) in the case of complex processes.

The covariance (or autocovariance) function Cxx taken at instants k1 and k2 of the process is given by:

Cxx(k1, k2) = E[(x(k1) − E[x(k1)]) (x(k2) − E[x(k2)])*]

where E[x(k1)] indicates the statistical mean of x(k1).

We should keep in mind that, for zero-mean random processes, the autocovariance and autocorrelation functions are equal.

The correlation coefficient is as follows:

ρxx(k1, k2) = Cxx(k1, k2) / √(Cxx(k1, k1) Cxx(k2, k2))

It verifies:

|ρxx(k1, k2)| ≤ 1

When the correlation coefficient ρxx(k1, k2) takes a high, positive value, the values of the random process at instants k1 and k2 have similar behaviors: high values of x(k1) correspond to high values of x(k2), and low values of x(k1) correspond to low values of x(k2). The more ρxx(k1, k2) tends toward zero, the lower the correlation. When ρxx(k1, k2) equals zero for all distinct values of k1 and k2, the values of the process are said to be decorrelated. If ρxx(k1, k2) is negative, x(k1) and x(k2) tend to vary in opposite directions.
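In practice, the correlation coefficient is estimated from sample sequences rather than from ensemble averages; the sketch below (under that stationarity-and-ergodicity assumption) illustrates the three typical regimes.

```python
import numpy as np

def correlation_coefficient(x, y):
    """Sample correlation coefficient between two real sequences."""
    cx = x - np.mean(x)                    # centered sequences
    cy = y - np.mean(y)
    return np.sum(cx * cy) / np.sqrt(np.sum(cx ** 2) * np.sum(cy ** 2))

rng = np.random.default_rng(seed=0)
x = rng.standard_normal(1000)
print(correlation_coefficient(x, x))                           # ~ +1: identical behavior
print(correlation_coefficient(x, -x))                          # ~ -1: opposite behavior
print(correlation_coefficient(x, rng.standard_normal(1000)))   # ~ 0: decorrelated
```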

In a more general situation, if we look at two random processes x(k) and y(k), their intercorrelation function is written as:

Rxy(k1, k2) = E[x(k1) y*(k2)]

As for the intercovariance function, it is given by:

Cxy(k1, k2) = E[(x(k1) − E[x(k1)]) (y(k2) − E[y(k2)])*]

The two random processes are not correlated if:

Cxy(k1, k2) = 0, ∀k1, k2

A process is called stationary to the second order, or in the wide sense, if its statistical mean μ = E[x(k)] is a constant and if its autocorrelation function depends only on the gap between k1 and k2; that is, if:

Rxx(k1, k2) = Rxx(k1 − k2)

From this, for stationary processes, the autocorrelation function verifies two conditions.

The first condition relates to symmetry. Given that:

Rxx(m) = E[x(k) x*(k − m)]

we can easily show that:

Rxx(−m) = Rxx*(m)

For the second condition, we introduce the random vector consisting of M + 1 samples of the process {x(k)}:

x(k) = [x(k) x(k−1) … x(k−M)]^T

The autocorrelation matrix RM is represented by RM = E[x(k) x(k)^H], where x(k)^H indicates the Hermitian (conjugate) transpose of the vector x(k). This is a Toeplitz matrix that is expressed in the following form:

RM = [ Rxx(0)     Rxx(1)      …  Rxx(M)
       Rxx(−1)    Rxx(0)      …  Rxx(M−1)
       ⋮          ⋮           ⋱  ⋮
       Rxx(−M)    Rxx(−M+1)   …  Rxx(0)   ]
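For a real wide-sense stationary sequence, the matrix RM can be estimated from a single realization; here is a sketch using scipy.linalg.toeplitz, where the biased lag estimator is an assumption made for the example.

```python
import numpy as np
from scipy.linalg import toeplitz

def autocorrelation_matrix(x, M):
    """Estimated (M+1)x(M+1) Toeplitz autocorrelation matrix of a real
    wide-sense stationary sequence, from one realization x(k)."""
    N = x.size
    # Biased estimates of Rxx(m) for lags m = 0, ..., M.
    r = np.array([np.dot(x[m:], x[:N - m]) / N for m in range(M + 1)])
    return toeplitz(r)   # symmetric Toeplitz matrix for a real process

rng = np.random.default_rng(seed=0)
x = rng.standard_normal(10_000)   # white noise: RM should be close to the identity
print(autocorrelation_matrix(x, M=3))
```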

NOTE.– Vector and matrix approaches are often employed in signal processing, and using autocorrelation matrices and, more generally, intercorrelation matrices can be effective. This type of matrix plays a role in the development of optimal filters, notably those of Wiener and Kalman. It is also important in the decomposition techniques into signal and noise subspaces used for spectral analysis, speech enhancement, and determining the number of users in a telecommunications cell, to mention a few applications.