
Eric Clay's Signals and Systems Homepage

Hi Everyone,

I'm enrolled in Signals and Systems for Fall 2008. I'm a senior this year (not graduating) in electrical engineering.

Orthogonal Functions & Fourier Series

Orthogonal Functions

If we think of functions as vectors, then the concept of orthogonality between functions and vectors should be the same. Mathematically, vectors are orthogonal if their inner ("dot") product is 0, and this can be extended to functions using the inner product <math>\langle f, g \rangle = \int_a^b f(t)\,g(t)\,dt</math>; the two functions are orthogonal on the interval [a, b] if this integral is 0.

Another way to think of this is that vectors are orthogonal if neither has a component along the other. For functions, this roughly means that the product of the two functions integrates to zero over the interval: wherever the product is positive, it is cancelled by regions where the product is negative.
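
As a quick numerical check of this idea (a minimal Octave sketch; the choice of sin and cos over one period is just an example), the inner product of two orthogonal functions comes out essentially zero:

 % Approximate the inner product integral of sin(t) and cos(t) over one period
 t  = linspace(0, 2*pi, 10000);        % fine grid over [0, 2*pi]
 f  = sin(t);
 g  = cos(t);
 ip = trapz(t, f.*g);                  % numerical inner product <f,g>
 disp(ip)                              % prints a value very close to 0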

Fourier Series

When thinking in terms of orthogonal functions, it is helpful to take the idea of basis vectors and apply it to functions. The sinusoids used in the Fourier series form such a set of orthogonal basis functions: the index n runs over the harmonics of the fundamental frequency, any periodic signal can be written as a weighted sum of them, and this is what lets us use Fourier techniques to simplify calculations.
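
For example, a square wave can be built up from this basis as a weighted sum of sinusoids. A minimal Octave sketch (the square wave and the number of harmonics are just an illustration):

 % Partial Fourier series of a unit-amplitude square wave with period 1
 t  = linspace(0, 2, 2000);            % two periods
 w0 = 2*pi;                            % fundamental frequency for period 1
 s  = zeros(size(t));
 for n = 1:2:19                        % the series only has odd harmonics
   s = s + (4/pi)*sin(n*w0*t)/n;       % standard square-wave coefficients 4/(pi*n)
 end
 plot(t, s)                            % approaches the +/-1 square wave as terms are added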

Oversampling in a CD Player

CD Player Overview

A CD stores audio data as a series of amplitudes measured at discrete time intervals. The sampling frequency for an audio CD is 44,100 Hz. This frequency was chosen because the upper limit of human hearing is about 20,000 Hz, and according to Nyquist's theorem we need to sample at least twice this frequency to avoid alias signals (see the section on signal aliasing). The margin above 40,000 Hz gives the anti-aliasing filter a transition band in which to roll off before high frequencies can fold back into the audible range. When this standard was adopted, data storage was still expensive, so this was close to the minimum sample rate that would prevent aliasing.

The first generation of CD players didn't do any digital processing; they converted the digital signal straight to analog, where a high-quality filter and amplifier would then clean it up and send it to the speakers. As digital electronics improved, digital signal processing in a device such as a CD player became feasible. The expensive analog circuits were replaced by digital signal processors and cheap RC filters. One technique for improving the sound quality was oversampling.

Oversampling

The basic idea behind oversampling is that audio curves follow a reasonably predictable path between the data points. When the CD is being played, the player essentially connects the data points the way you would connect dots in a paper game. When you start oversampling, you are estimating the intermediate values between the data points on the CD to produce a smoother curve that will sound better. The more points you add, the smoother the curve. For example, 8x oversampling raises the output rate to eight times the CD rate, inserting seven new points between each pair of original samples. This is possible because most signal processors run at MHz clock rates, compared to the CD sample rate of 44,100 Hz.

The simplest schemes are nearest neighbor, which just repeats the closest original sample, and linear interpolation, where you take the average of the amplitudes of two adjacent points and put a new point halfway between them with this average value. The averaging idea can also be extended to more than two points if desired. Since this is a digital process implemented in code on a microprocessor, much more complex algorithms can easily be implemented, provided the processor is fast enough.
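
As a rough sketch of the averaging idea in Octave (the 1 kHz test tone, sample count, and rates below are just an illustration, not a player's actual code), 2x oversampling by inserting midpoint averages looks like this:

 % 2x oversampling: keep the CD samples and insert the average of each neighboring pair
 fs = 44100;                           % CD sample rate
 n  = 0:99;
 x  = sin(2*pi*1000*n/fs);             % 100 samples of a 1 kHz test tone
 y  = zeros(1, 2*length(x)-1);
 y(1:2:end) = x;                       % original samples stay put
 y(2:2:end) = (x(1:end-1) + x(2:end))/2;   % new points: average of the two neighbors
 % y now represents the same tone at 88.2 kHz with a smoother path between the CD samples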

The Mathematics Behind Oversampling

We can think of the process of making a digital recording of an analog signal for a CD as the multiplication of the audio wave with a string of identical impulse functions spaced one sample period apart (one impulse every 1/44,100 of a second). This produces a string of impulse functions of varying height that approximates the audio wave if you connect the data points with lines. Based on what we did in class, we saw that we could implement the interpolating kind of oversampling described above by convolving this impulse-train approximation of the audio wave with a triangle pulse one sample period wide on each side,

<math>h(t) = \begin{cases} 1 - \frac{|t|}{T}, & |t| \le T \\ 0, & \text{otherwise} \end{cases} \qquad T = \frac{1}{44100}\ \text{s},</math>

which draws straight-line segments between the samples; halfway between two samples the result is exactly the average of those two samples.
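
The same idea can be sketched numerically in Octave by stuffing zeros between the CD samples and convolving with a sampled triangle kernel. This is a minimal sketch, not the code from class; the waveform and the oversampling factor are made up for illustration.

 % Linear interpolation viewed as a convolution: insert zeros between samples,
 % then convolve with a triangle kernel (illustrative signal and rate)
 L  = 4;                               % 4x oversampling
 x  = sin(2*pi*(0:49)/20);             % some sampled waveform
 xu = zeros(1, L*length(x));
 xu(1:L:end) = x;                      % impulse train: original samples with zeros in between
 h  = [1:L, L-1:-1:1]/L;               % sampled triangle, 2*L-1 taps, peak value 1
 y  = conv(xu, h);                     % draws straight lines between the original samples
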
Discrete Fourier Transform

The purpose of the Discrete Fourier Transform (DFT) is to let a discrete, finite range of data (both in time and in frequency) be transformed and handled by a finite system such as a computer. With a regular Fourier transform you need to know the function for all time (an infinite amount of data), and if the function is only known over a finite time, the result extends over an infinite range of frequencies. The DFT gets around both of these problems by using a finite sum rather than an integral, and it provides an approximation of the Fourier transform of the function based on the finite data set you do have. The result of a DFT is a periodic function, which can easily be reduced to a discrete, finite set of values by taking a single period.

The Math of a DFT

A regular FT is of the form <math>X(f) = \int_{-\infty}^{\infty} x(t)\,e^{-j 2\pi f t}\,dt</math>.

The DFT is of the form <math>X[k] = \sum_{n=0}^{N-1} x[n]\,e^{-j 2\pi k n / N}, \quad k = 0, 1, \ldots, N-1</math>.

The main difference here is that the DFT is a discrete numerical approximation of the FT over a finite data set. This allows it to be computed from real-world data on a computer and makes the transform useful for more than just theoretical applications. In real-world use, the data set is transformed using the DFT (in practice the FFT), processed in the frequency domain, and then put back into the time domain using the inverse DFT, since for long filters this is faster than convolving directly in the time domain.
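
As a minimal Octave sketch of the DFT sum above (the 64-sample test signal is arbitrary), the direct sum matches the built-in fft, and the inverse transform brings the data back to the time domain:

 % Direct evaluation of the DFT sum, checked against the built-in fft
 N = 64;
 n = 0:N-1;
 x = cos(2*pi*5*n/N) + 0.5*randn(1,N); % a tone plus some noise
 X = zeros(1,N);
 for k = 0:N-1
   X(k+1) = sum(x .* exp(-j*2*pi*k*n/N));   % the DFT sum for bin k
 end
 max(abs(X - fft(x)))                  % essentially zero: both give the same result
 xr = real(ifft(fft(x)));              % IDFT returns the data to the time domain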


Adaptive FIR Filter

Analog filters have their coefficients set beforehand by the physical component values. In a digital system, the filter is implemented in code running on a microprocessor and does not have these same limitations. One of the developments to take advantage of this new technology was the Adaptive FIR Filter. This filter periodically updates the coefficients to try to knock out a certain signal characteristic. One of the most common uses of this is noise cancellation.

Noise Cancellation With an Adaptive FIR Filter

One common type of noise is a periodic signal. This may be generated by someone testing a radio transmitter with a tone, or it could come from something like a jet engine. In this case, the filter is designed to lock onto periodic waveforms and notch them out. It does this by trying to drive its output to 0, but the coefficient adjustment is slow enough that it can only lock onto periodic signals; mostly random signals such as voice change too quickly for the filter to lock onto and eliminate. An example Octave implementation is shown below:

 N  = 32;                              % number of filter coefficients (value here is just an example)
 mu = 0.1;                             % step size of the coefficient update
 Ls = length(x);                       % x is a row vector holding the noisy signal
 h  = ones(N,1);                       % initial filter coefficients
 y  = zeros(1,Ls);
 e  = zeros(1,Ls);
 for k = N:Ls
   xk   = x(k:-1:(k-N+1));             % most recent N samples, newest first
   y(k) = h'*xk';                      % filter output
   e(k) = y(k);                        % the update drives this toward 0; what remains is the cleaned signal
   h    = h - mu*e(k)*xk'/(xk*xk');    % normalized LMS coefficient update
 end

where N is the number of filter coefficients, mu is the step size that controls how far the coefficients move on each update, Ls is the number of samples, x is a vector containing the noisy signal, and e is a vector containing the cleaned signal.

Another way to do noise cancellation is to use two microphones: one picking up just the noise, and the other picking up the noise plus the signal. In this case, the filter adapts so that, applied to the noise-only reference, its output matches the noise picked up by the other microphone; subtracting that output drives the noise in the noisy channel toward 0 while leaving the signal. This approach can be much more aggressive than the one used for periodic signals and can eliminate many more types of noise, since the filter works from an actual measurement of the noise rather than having to guess what the original signal was.
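
A minimal Octave sketch of this two-microphone idea, reusing the same normalized LMS update as the code above (the signals, filter length, and step size are invented for illustration):

 % Two-microphone noise cancellation: adapt h so the filtered reference matches
 % the noise in the primary channel; e is the cleaned signal
 Ls = 5000;  N = 32;  mu = 0.1;        % illustrative signal length, taps, step size
 s  = sin(2*pi*0.01*(1:Ls));           % desired signal
 r  = randn(1, Ls);                    % reference mic: noise only
 d  = s + filter([0.5 0.3 0.2], 1, r); % primary mic: signal plus a filtered copy of the noise
 h  = zeros(N,1);
 e  = zeros(1,Ls);
 for k = N:Ls
   rk   = r(k:-1:(k-N+1))';            % most recent N reference samples
   e(k) = d(k) - h'*rk;                % subtract the noise estimate
   h    = h + mu*e(k)*rk/(rk'*rk);     % normalized LMS update
 end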