Fourier series - by Ray Betz

*[[Signals and systems|Signals and Systems]]
==Fourier Series==
If
# <math> x(t) = x(t + T)</math>
# [[Dirichlet Conditions]] are satisfied
then we can write
<center>
<math> x(t) = \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}</math>
</center>
The above equation is called the complex Fourier series. Given <math>x(t)</math>, we may determine <math> \alpha_n </math> by taking the [[inner product]] of the basis function <math>e^ \frac {j 2 \pi n t}{T}</math> with <math>x(t)</math> over the interval of one period, <math> T </math>.
<math> <e^ \frac {j 2 \pi n t}{T}|x(t)> = <e^ \frac {j 2 \pi n t}{T}|\sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}> </math>
<math>= \int_{-\frac{T}{2}}^\frac{T}{2} x(t)e^ \frac {-j 2 \pi n t}{T} dt </math>
<math>= \int_{-\frac{T}{2}}^\frac{T}{2} \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}e^ \frac {-j 2 \pi n t}{T} dt </math>

If <math>k = n</math>, the integrand is 1, so

<math> \int_{-\frac{T}{2}}^\frac{T}{2} e^ \frac {j 2 \pi (k-n) t}{T} dt = T </math>

If <math>k \ne n</math>, the exponential is integrated over a whole number of periods, so
<math> \int_{-\frac{T}{2}}^\frac{T}{2} e^ \frac {j 2 \pi (k-n) t}{T} dt = 0 </math>


We can simplify the above two conclusions into one equation using the Kronecker [[delta function]], <math>\delta_{k,n}</math>, which is 1 when <math>k = n</math> and 0 otherwise:


<math> \sum_{k=-\infty}^\infty \alpha_k \int_{-\frac{T}{2}}^\frac{T}{2} e^ \frac {j 2 \pi (k-n) t}{T} dt = \sum_{k=-\infty}^\infty T \delta_{k,n} \alpha_k = T \alpha_n </math>

So, we conclude

<math> \alpha_n = \frac{1}{T} \int_{-\frac{T}{2}}^\frac{T}{2} x(t)e^ \frac {-j 2 \pi n t}{T} dt </math>
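
As a quick sanity check of this result, here is a small numerical sketch (my own addition, not part of the original notes). The example signal <math>x(t) = 1 + \cos(\frac{2 \pi t}{T})</math> is chosen arbitrarily; its coefficients should come out as <math>\alpha_0 = 1</math>, <math>\alpha_{\pm 1} = \frac{1}{2}</math>, and zero otherwise.

<pre>
import numpy as np

# Numerical check of the coefficient formula
#   alpha_n = (1/T) * integral_{-T/2}^{T/2} x(t) exp(-j 2 pi n t / T) dt
# for an example signal chosen here: x(t) = 1 + cos(2 pi t / T).
# Its coefficients should come out as alpha_0 = 1, alpha_{+1} = alpha_{-1} = 1/2,
# and (approximately) zero for every other n.

T = 2.0
dt = T / 10000.0
t = np.arange(-T/2, T/2, dt)          # one period, endpoint excluded
x = 1 + np.cos(2*np.pi*t/T)

for n in range(-3, 4):
    alpha_n = np.sum(x * np.exp(-1j*2*np.pi*n*t/T)) * dt / T
    print(n, np.round(alpha_n, 4))
</pre>
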
==Linear Systems==

Let us say we have a linear time invariant system, where <math>x(t)</math> is the input and <math>y(t)</math> is the output. What outputs do we get as we put different inputs into this system?

[[Image:Linear System.JPG]]

If we put in an impulse, <math>\delta(t)</math>, then we get out the impulse response, <math>h(t)</math>. What would happen if we put a time delayed impulse signal, <math>\delta(t-u)</math>, into the system? The output response would be a time delayed <math>h(t)</math>, or <math>h(t-u)</math>, because the system is time invariant. So, no matter when we put in our signal, the response would come out the same (just time delayed).

What if we now multiplied our impulse by a coefficient? Since our system is linear, the proportionality property applies. If we put <math> x(u)\delta(t-u)</math> into our system then we should get out <math>x(u)h(t-u)</math>.


By the superposition property (because we have a linear system) we may put into the system the integral of <math> x(u)\delta(t-u)</math> with respect to <math>u</math>, which is just <math>x(t)</math>, and we would get out <math> \int_{-\infty}^\infty x(u)h(t-u) du</math>. What would we get if we put <math> e^{j 2 \pi f t} </math> into our system? We can find out by plugging <math> e^{j 2 \pi f u} </math> in for <math> x(u) </math> in the integral that we just found the output for above. If we do a change of variables (<math> v = t-u </math>, and <math> dv = -du </math>) we get <math> \int_{-\infty}^\infty x(u)h(t-u) du = \int_{-\infty}^\infty e^{j 2 \pi f u} h(t-u) du = -\int_{\infty}^{-\infty} e^{j 2 \pi f (t-v)} h(v) dv = e^{j 2 \pi f t} \int_{-\infty}^\infty h(v)e^{-j 2 \pi f v} dv</math>. By pulling <math> e^{j 2 \pi f t} </math> out of the integral and calling the remaining integral <math> H(f) </math> we get <math> e^{j 2 \pi f t} H(f)</math>.
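
Here is a small numerical sketch of this eigenfunction property (my own illustration; the decaying-exponential <math>h(t)</math> and the test frequency are arbitrary choices): convolving <math>e^{j 2 \pi f t}</math> with <math>h(t)</math> returns the same exponential scaled by the complex constant <math>H(f)</math>.

<pre>
import numpy as np

# Eigenfunction check: for an LTI system with impulse response h(t), the input
# exp(j 2 pi f t) should come out as H(f) * exp(j 2 pi f t), where
# H(f) = integral h(v) exp(-j 2 pi f v) dv.
# The h(t) below (a one-sided decaying exponential) is an arbitrary example.

dt = 1e-3
v = np.arange(0, 10, dt)              # support of h (it has decayed to ~0 by v = 10)
h = np.exp(-2*v)                      # example impulse response
f = 1.5                               # test frequency (arbitrary)

H_f = np.sum(h * np.exp(-1j*2*np.pi*f*v)) * dt      # H(f) from its defining integral

t0 = 3.0                                            # look at the output at one instant
x = lambda tau: np.exp(1j*2*np.pi*f*tau)
y_t0 = np.sum(h * x(t0 - v)) * dt                   # y(t0) = integral h(v) x(t0 - v) dv

print("y(t0)        =", np.round(y_t0, 6))
print("H(f) * x(t0) =", np.round(H_f * x(t0), 6))   # the two lines should match
</pre>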




{| border="1"
! INPUT !! OUTPUT !! REASON
|-
|<math> \delta (t) </math>
|<math> h(t) </math>
|Given
|-
|<math> \delta (t-u) </math>
|<math> h(t-u) </math>
|Time Invariant
|-
|<math> x(u)\delta (t-u) </math>
|<math> x(u)h(t-u) </math>
|Proportionality
|-
|<math> \int_{-\infty}^\infty x(u)\delta (t-u) du = x(t) </math>
|<math> \int_{-\infty}^\infty x(u)h(t-u) du </math>
|Superposition
|-
|<math> e^{j 2 \pi f t} </math>
|<math> e^{j 2 \pi f t} H(f)</math>
|Superposition (from above)
|}

==Fourier Series (indepth)==

Let us take a closer look at the coefficients <math> \alpha_n </math>. Separating the series into its <math>k<0</math>, <math>k=0</math>, and <math>k>0</math> terms, and assuming <math>x(t)</math> is real so that <math> \alpha_{-n} = \alpha_n^* </math>, the negative and positive terms combine and we may write:


<math> x(t) = \alpha_0 +\sum_{n=1}^\infty 2Re(\alpha_n e^ \frac {j 2 \pi n t}{T}) </math>

In terms of cosines, <math> x(t) = \alpha_0 +\sum_{n=1}^\infty 2 |\alpha_n| \cos(\frac{2 \pi n t}{T} + \theta_n) </math>, where <math> \theta_n </math> is the phase angle of <math> \alpha_n </math>.


==Fourier Transform==


Fourier transforms emerge because we want to be able to make Fourier expansions of non-periodic functions. We can accomplish this by letting the period <math>T</math> of <math>x(t)</math> go to infinity.


Remember that:
<math>x(t)=x(t+T)= \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T} = \sum_{k=-\infty}^\infty \left( \frac{1}{T} \int_{-\frac{T}{2}}^\frac{T}{2} x(u)e^ \frac {-j 2 \pi k u }{T} du \right) e^ \frac {j 2 \pi k t}{T} </math>


As <math>T \to \infty</math>, substitute <math>f</math> for <math>\frac{k}{T}</math>, <math>df</math> for <math>\frac{1}{T}</math>, and an integral over <math>f</math> for the summation.


So,
<math> \lim_{T \to \infty}x(t)= \int_{-\infty}^\infty (\int_{-\infty}^\infty x(u) e^{-j 2 \pi f u} du) e^{j 2 \pi f t} df</math>


From the above limit we define <math> x(t)</math> and <math> X(f) </math>.

<math> x(t) = \mathcal{F}^{-1}[X(f)] = \int_{-\infty}^\infty X(f) e^ {j 2 \pi f t} df</math>
<math> X(f) = \mathcal{F}[x(t)] = \int_{-\infty}^\infty x(t) e^ {-j 2 \pi f t} dt</math>
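
As an illustration (my own sketch, not from the original notes), the forward transform above can be approximated numerically with a Riemann sum. A rectangular pulse is used because its transform, a sinc function, is known in closed form for comparison.

<pre>
import numpy as np

# Approximate X(f) = integral x(t) exp(-j 2 pi f t) dt with a Riemann sum for a
# rectangular pulse x(t) = 1 for |t| < tau/2, 0 otherwise. The known analytic
# answer, X(f) = tau * sin(pi f tau) / (pi f tau), is used as the reference.
# (numpy's sinc(x) is sin(pi x)/(pi x).)

tau = 1.0
dt = 1e-4
t = np.arange(-5, 5, dt)
x = np.where(np.abs(t) < tau/2, 1.0, 0.0)

for f in [0.0, 0.5, 1.3, 2.7]:
    X_num = np.sum(x * np.exp(-1j*2*np.pi*f*t)) * dt
    X_ana = tau * np.sinc(f * tau)
    print(f, np.round(X_num, 4), np.round(X_ana, 4))
</pre>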


By using the above transforms we can now change a function from the frequency domain to the time domain or vice versa. We are not limited to just one domain but can use both of them.


We can take the derivative of <math> x(t) </math> and then put it in terms of the inverse Fourier transform.


<math> \frac{dx}{dt} = \int_{-\infty}^\infty j 2 \pi f X(f) e^ {j 2 \pi f t} df = \mathcal{F}^{-1}[j 2 \pi f X(f)] </math>
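
A quick numerical check of this derivative property (again an illustrative sketch of my own; the Gaussian <math>x(t) = e^{-\pi t^2}</math> is chosen because its transform <math>X(f) = e^{-\pi f^2}</math> is known):

<pre>
import numpy as np

# Check F[dx/dt] = j 2 pi f X(f) for x(t) = exp(-pi t^2), whose transform
# X(f) = exp(-pi f^2) is known in closed form.

dt = 1e-3
t = np.arange(-10, 10, dt)
x = np.exp(-np.pi * t**2)
dxdt = np.gradient(x, dt)                               # numerical derivative of x(t)

for f in [0.3, 0.8, 1.5]:
    lhs = np.sum(dxdt * np.exp(-1j*2*np.pi*f*t)) * dt   # F[dx/dt], Riemann sum
    rhs = 1j*2*np.pi*f * np.exp(-np.pi*f**2)            # j 2 pi f X(f)
    print(f, np.round(lhs, 5), np.round(rhs, 5))
</pre>
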
What happens if we just shift the time of <math> x(t) </math>?

<math> \mathcal{F}[x(t-t_0)] = \int_{-\infty}^\infty x(t-t_0) e^{-j 2 \pi f t} dt = e^{-j 2 \pi f t_0} X(f)</math>

In the same way, if we shift the frequency we get:

<math> \mathcal{F}[x(t) e^{j 2 \pi f_0 t}] = X(f-f_0)</math>

What would be the Fourier transform of <math> x(t) \cos(2 \pi f_0 t) </math>? Writing the cosine as <math> \frac{1}{2}(e^{j 2 \pi f_0 t} + e^{-j 2 \pi f_0 t}) </math>, we get

<math> \mathcal{F}[x(t) \cos(2 \pi f_0 t)] = \int_{-\infty}^\infty x(t) \cos(2 \pi f_0 t) e^{-j 2 \pi f t} dt </math>
<math> = \frac{1}{2} \int_{-\infty}^\infty x(t) e^{-j 2 \pi (f-f_0) t} dt + \frac{1}{2} \int_{-\infty}^\infty x(t) e^{-j 2 \pi (f+f_0) t} dt = \frac{1}{2} X(f-f_0) + \frac{1}{2} X(f+f_0)</math>


What would happen if we scaled time by a constant <math>a</math> in <math> x(t) </math>? We will substitute <math> u=at </math> and <math> du = a \, dt </math>. If <math> a \ne 0 </math>:


<math> \mathcal{F} [x(a t)] = \int_{-\infty}^\infty x(at) e^{-j 2 \pi f t} dt = \int_{-\infty}^\infty x(u) e^\frac{-j 2 \pi f u}{a} \frac{du}{|a|} = \frac{1}{|a|} X(\frac{f}{a})</math>

OK, let's take the Fourier transform of the Fourier series.

<math> \mathcal{F} [\sum_{n=-\infty}^{\infty} \alpha_n e^\frac{j 2 \pi n t}{T}] = \int_{-\infty}^\infty \sum_{n=-\infty}^{\infty} \alpha_n e^\frac{j 2 \pi n t}{T} e^{-j 2 \pi f t} dt = \sum_{n=-\infty}^{\infty} \alpha_n \int_{-\infty}^\infty e^{-j 2 \pi (f-\frac{n}{T}) t} dt = \sum_{n=-\infty}^{\infty} \alpha_n\delta(f-\frac{n}{T}) </math>

Remember: <math> \delta (f) = \int_{-\infty}^\infty e^{-j 2 \pi f t} dt </math>


==CD Player==


Below is a diagram of how the information on a CD is read and processed. As you can see, the information on the CD is processed by the D/A converter, sent through a low pass filter, and then to the speaker. If you were recording sound, the sound would be captured by a microphone and then sent through a low pass filter. The reason you want a low-pass filter is to keep high frequencies (that you don't intend to record) from being recorded. If a high frequency were recorded at, say, 30 kHz and the maximum frequency you intended to record was 20 kHz, then when you played back the recording you would hear a tone at 10 kHz. From the filter the signal goes on to the A/D converter and then it is ready to be put on the CD. Recording signals (as just described) is essentially the reverse of the operation pictured below.
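
The aliasing claim above is easy to check numerically. The sketch below (my own, assuming a 40 kHz sampling rate, i.e. twice the intended 20 kHz maximum) shows that the samples of a 30 kHz cosine are identical to the samples of a 10 kHz cosine.

<pre>
import numpy as np

# Sampling a 30 kHz cosine at 40 kHz gives exactly the same samples as a
# 10 kHz cosine: the 30 kHz tone aliases down to 40 kHz - 30 kHz = 10 kHz.

fs = 40_000.0                  # sampling rate (Hz)
n = np.arange(32)              # a few sample indices
t = n / fs

tone_30k = np.cos(2*np.pi*30_000*t)
tone_10k = np.cos(2*np.pi*10_000*t)

print(np.allclose(tone_30k, tone_10k))   # True: the two sets of samples are identical
</pre>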


[[Image:CDsystem.jpg]]
'''In Time Domain:'''


Let's start with a signal <math> h(t) </math>, as shown in the picture below. In this signal there is an infinite amount of information. Obviously, we can't hold it all in a computer, but we could take samples every <math> T </math> seconds. Let's do that by multiplying <math> h(t) </math> by <math> \sum_{n=-\infty}^\infty \delta (t-nT) </math>. Since the area of each delta function is one, we get a series of delta functions that record the value of <math> h(t) </math> at intervals of <math> T </math>. This gives us a result that looks like: <math> h(t)\sum_{n=-\infty}^\infty \delta (t-nT) = \sum_{n=-\infty}^\infty h(nT) \delta (t-nT)</math>


'''In Frequency Domain:'''

In the frequency domain we start with <math> H(f) </math>, the transform of <math> h(t) </math>. Since we multiplied in the time domain, we must convolve in the frequency domain: we need to convolve <math> H(f) </math> with <math> \mathcal{F}[\sum_{n=-\infty}^\infty \delta (t-nT)] </math>.

'''Aside:''' The impulse train is periodic with period <math> T </math>, so it can be written as a Fourier series, <math> \sum_{n=-\infty}^\infty \delta (t-nT) = \sum_{m=-\infty}^\infty \alpha_m e^ \frac {j 2 \pi m t}{T} </math>, with coefficients <math> \alpha_m </math>.
Now we can solve for <math> \alpha_m </math>.


<math> \alpha_m = \frac {1}{T} \int_{\frac{-T}{2}}^{\frac{T}{2}} \sum_{n=-\infty}^\infty \delta (t-nT) e^\frac {-j 2 \pi m t}{T} dt = \frac {1}{T} \int_{\frac{-T}{2}}^{\frac{T}{2}} \delta (t) e^\frac {-j 2 \pi m t}{T} dt = \frac {1}{T} </math>


Since the only delta function within the integration limits is the delta function at <math> t=0 </math>, we can take out the summation and just leave one delta function. Then, evaluating the integral at <math> t=0 </math> we get <math> \frac{1}{T} </math>.


<math> \sum_{n=-\infty}^\infty \delta (t-nT) = \sum_{k=-\infty}^\infty \frac {1}{T} e^ \frac {j 2 \pi k t}{T} </math>

<math> \mathcal{F} [\sum_{n=-\infty}^\infty \delta (t-nT)] = \mathcal{F} [\sum_{k=-\infty}^\infty \frac {1}{T} e^ \frac {j 2 \pi k t}{T}] = \sum_{k=-\infty}^\infty \frac {1}{T} \int_{-\infty}^\infty e^ \frac {j 2 \pi k t}{T} e^ {-j 2 \pi f t} dt= \frac {1}{T} \sum_{k=-\infty}^\infty \int_{-\infty}^\infty e^ {-j 2 \pi (f-\frac{k}{T}) t} dt = \frac {1}{T} \sum_{k=-\infty}^\infty \delta (f-\frac{k}{T})</math>

Now we are ready to take the convolution.

<math> H(f)* \frac {1}{T} \sum_{n=-\infty}^\infty \delta (f-\frac{n}{T}) = \frac{1}{T} \sum_{n=-\infty}^\infty H(f-\frac{n}{T})</math>
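
This replicated-spectrum result can be checked numerically. The sketch below (my own; the Gaussian <math>h(t) = e^{-\pi t^2}</math> is chosen because <math>H(f) = e^{-\pi f^2}</math> is known in closed form) compares <math>\sum_n h(nT) e^{-j 2 \pi f n T}</math>, the transform of the sampled signal, with <math>\frac{1}{T}\sum_n H(f - \frac{n}{T})</math>.

<pre>
import numpy as np

# Check that the spectrum of the sampled signal is a sum of shifted copies:
#   sum_n h(nT) exp(-j 2 pi f n T)  =  (1/T) sum_k H(f - k/T)
# using h(t) = exp(-pi t^2), whose transform H(f) = exp(-pi f^2) is known.

T = 0.5                                   # sample spacing
n = np.arange(-60, 61)                    # enough samples for h to die out
k = np.arange(-20, 21)                    # enough spectral copies

h = lambda t: np.exp(-np.pi * t**2)
H = lambda f: np.exp(-np.pi * f**2)

for f in [0.0, 0.4, 1.1]:
    lhs = np.sum(h(n*T) * np.exp(-1j*2*np.pi*f*n*T))   # h is even, so this is real
    rhs = np.sum(H(f - k/T)) / T
    print(f, np.round(lhs.real, 6), np.round(rhs, 6))
</pre>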


[[Image:barnsasample.jpg|Picture uploaded by Sam Barnes]]

'''Time Domain'''

In order to output as sound any of the signals that we have, we must run them through a D/A converter. This is like convolving the sampled signal below with the rectangular pulse <math> p(t) = U(t+\frac{T}{2})- U(t-\frac{T}{2}) </math> (the difference of two unit steps).

This gives us <math> \sum_{n=-\infty}^\infty h(nT)\,p(t-nT)</math>. This is what the signal looks like as it is output through the D/A converter.

'''Frequency Domain'''

To find out what we would multiply by in the frequency domain we just take the Fourier transform of <math> p(t) </math> and we get <math>P(f) = T\,\frac{\sin (\pi f T)}{\pi f T} </math>.

Multiplying the replicated spectrum by <math> P(f) </math> gives <math> \frac {1}{T} \sum_{n=-\infty}^\infty H(f-\frac{n}{T})P(f) \approx H(f) </math>, which is hopefully close to the spectrum of the signal we started with.
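
To see how strong this effect is, here is a small sketch of my own evaluating <math>P(f)</math>: it is close to <math>T</math> at low frequencies but droops noticeably as <math>f</math> approaches <math>\frac{1}{2T}</math>, which is part of why the compensation and filtering discussed below are needed. The 44.1 kHz sample rate is just an example choice.

<pre>
import numpy as np

# Zero-order-hold frequency response P(f) = T * sin(pi f T) / (pi f T),
# shown here normalised by T over the band 0 .. 1/(2T). The 44.1 kHz rate is
# just an example (CD-like); any T shows the same droop toward 1/(2T).

T = 1.0 / 44_100.0
f = np.linspace(0, 1/(2*T), 6)

P_over_T = np.sinc(f * T)                # np.sinc(x) = sin(pi x)/(pi x)
for fi, Pi in zip(f, P_over_T):
    print(f"f = {fi/1000:7.2f} kHz   P(f)/T = {Pi:.3f}")
</pre>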


[[Image:barnsaDA.jpg|Picture uploaded by Sam Barnes]]

For 2 times oversampling:

In time, convolve <math> \sum_{n=-\infty}^\infty x(nT)\delta(t-nT)</math> with <math> \sum_{m=-M}^M h(\frac{mT}{2}) \delta (t-\frac{mT}{2})</math>. This provides interpolated points and makes our output sound better because it looks closer to the original wave.

In frequency, multiply <math> \frac {1}{T} \sum_{n=-\infty}^\infty X(f- \frac{n}{T} ) </math> by <math> \sum_{m=-M}^M h(\frac{mT}{2}) e ^{\frac{-j2 \pi m f T}{2}} </math>. The <math>X(f)</math> that you get is great because there is little distortion near the original frequency plot. This means that you can use a cheaper low-pass filter than you would otherwise have been able to.

==Nyquist Frequency==

If you are sampling at a frequency of 40 kHz, then the highest frequency that you can reproduce is 20 kHz. The Nyquist frequency, 20 kHz in this case, is the highest frequency that can be reproduced for a given sampling rate: half the sampling rate.

==FIR Filters==

A finite impulse response filter (FIR filter) is a digital filter that is applied to data before sending it out through a D/A converter. This type of filter allows the signal to be compensated before it is distorted, so that it will look as it was originally recorded. Using an FIR filter also allows us to put a cheap low-pass filter after the D/A converter, because the signal has already been compensated and so it doesn't take an expensive low-pass filter, as it would without the FIR filter.

The coefficients that are sent out to the D/A converter are:

<math>
h_m = T \int_{-\frac{1}{2T}}^{\frac{1}{2T}} H(f)e^{j2 \pi m f T}\,df
</math>

where <math> H(f)=\sum_{m=-M}^{M}h(mT)e^{-j 2 \pi f m T} </math>

Example: Design a FIR low-pass filter to pass between <math> -\frac{1}{4T} < f < \frac{1}{4T} </math> and reject the rest.

Our desired response is <math> \hat{H}(f) = 1 </math> if <math> |f| \le \frac{1}{4T} </math>, and <math> \hat{H}(f) = 0 </math> otherwise.

So, <math> h(mT) = T \int_{} . . . </math>
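
Working the example out numerically (my own sketch; the closed form below is my own evaluation of the truncated integral above, not the original solution):

<pre>
import numpy as np

# Working out the example above (my own calculation, not the original solution):
# with the desired response equal to 1 for |f| <= 1/(4T) and 0 elsewhere,
#   h(mT) = T * integral_{-1/(4T)}^{1/(4T)} exp(j 2 pi f m T) df
#         = sin(pi m / 2) / (pi m)  =  0.5 * sinc(m/2).
# The code computes these taps and checks that the resulting H(f) roughly follows
# the desired pass/stop behavior (with ripple, since the filter is truncated).

M = 10
m = np.arange(-M, M + 1)
h = 0.5 * np.sinc(m / 2)                 # FIR coefficients h(mT)

fT = np.linspace(-0.5, 0.5, 11)          # frequencies expressed as f*T
H = np.array([np.sum(h * np.exp(-1j*2*np.pi*ft*m)) for ft in fT])

for ft, Hmag in zip(fT, np.abs(H)):
    band = "pass" if abs(ft) < 0.25 else "stop/edge"
    print(f"f*T = {ft:+.2f}   |H(f)| = {Hmag:.3f}   ({band})")
</pre>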

Note: From the Circular Convolution we get: <math> y(n) = \sum_{m=0}^{N-1}h(m)x(n-m)</math>

==Discrete Fourier Transforms (DFTs)==

The DFT allows us to take a finite block of samples from a signal that is not necessarily periodic in time and compute a Fourier-series-like representation of it. The DFT and the inverse DFT (IDFT) are listed below.

'''DFT'''

<math> X(m) = \sum_{n=0}^{N-1} x(n) e^{\frac{-j 2 \pi m n}{N}}</math>

'''IDFT'''

<math> x(k) = \frac{1}{N}\sum_{n=0}^{N-1} X(n) e^{\frac{j 2 \pi k n}{N}}</math>

With the DFT of a real signal, all the negative frequency components are just the complex conjugates of the positive frequency components.

One problem with the DFT is that if the sampled segment does not begin and end at the same value, then we get what is called leakage. The DFT effectively repeats the recorded section of the signal over and over, so a mismatch between the end and the beginning shows up as a jump, and that jump spreads energy across frequencies (leakage). It is this periodic behavior of the DFT that allows us to represent a discrete signal that is not periodic. The DFT and IDFT are periodic with period N. This can be easily proved by simplifying <math> x(n+N) </math>.
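
Here is a short sketch (my own) that implements the two formulas above directly and checks both the round trip and, for a real input, the conjugate symmetry mentioned above.

<pre>
import numpy as np

# Direct implementation of the DFT and IDFT formulas above.
def dft(x):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-1j*2*np.pi*m*n/N)) for m in range(N)])

def idft(X):
    N = len(X)
    n = np.arange(N)
    return np.array([np.sum(X * np.exp(1j*2*np.pi*k*n/N)) for k in range(N)]) / N

N = 16
n = np.arange(N)
x = np.cos(2*np.pi*3*n/N) + 0.5*np.random.randn(N)      # a real test signal

X = dft(x)
print(np.allclose(idft(X), x))                  # True: the IDFT undoes the DFT
print(np.allclose(X[1:], np.conj(X[-1:0:-1])))  # True: X(N-m) = X(m)* for real x
</pre>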

==Adaptive FIR Filters==

[[Image:Adaptive.JPG]]

It should be noted that in the above diagram, <math> e(n)=y(n)-r(n) = [\sum_{k=0}^{N-1} h_n(k) x(n-k)] - r(n) </math>. The goal of an adaptive FIR filter is to drive the error, e(n), to zero. If we picture, for a two coefficient filter, a contour plot of <math> e^2(n) </math> over the two coefficients, then we want to travel in the direction of the negative gradient to minimize the error. Let <math> \mu </math> be the step size. So...
<math> \triangle h_n(m) = - \frac{\partial (e^2(n))}{\partial h_n(m)} \mu = - \mu 2 e(n)\frac{\partial (e(n))}{\partial h_n(m)} = - 2 \mu e(n) x(n-m) </math>

What would <math> h_{n+1}(m) </math> look like?

<math> h_{n+1}(m)= h_n(m) + \triangle h_n(m) = h_n(m) - 2 \mu (y(n)-r(n)) x(n-m) = h_n(m) - 2 \mu ([\sum_{k=0}^{N-1} h_n(k) x(n-k)] - r(n)) x(n-m) </math>

How might one find an unknown transfer function? Let's use the example of the tuner upper. The idea here is that we want to remove a sine wave from the signal and leave the original signal (voice) in place.
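
Below is a small LMS sketch of my own (not from the notes) that answers the simpler "identify an unknown FIR transfer function" version of this question with exactly the update rule derived above; the unknown coefficients are made up for the demonstration.

<pre>
import numpy as np

# LMS sketch: drive e(n) = y(n) - r(n) toward zero, where r(n) is the output of
# an "unknown" FIR system and y(n) is the adaptive filter output. The update is
# h_{n+1}(m) = h_n(m) - 2*mu*e(n)*x(n-m), exactly the rule derived above.
# The unknown coefficients are made up for the demonstration.

rng = np.random.default_rng(0)
unknown = np.array([0.5, -0.3, 0.2, 0.1])     # hypothetical unknown system
N = len(unknown)
h = np.zeros(N)                               # adaptive filter coefficients
mu = 0.01                                     # step size

x = rng.standard_normal(20_000)               # input signal
for n in range(N, len(x)):
    x_win = x[n-N+1:n+1][::-1]                # x(n), x(n-1), ..., x(n-N+1)
    r = np.dot(unknown, x_win)                # unknown system output r(n)
    y = np.dot(h, x_win)                      # adaptive filter output y(n)
    e = y - r                                 # e(n) = y(n) - r(n)
    h = h - 2*mu*e*x_win                      # LMS coefficient update

print("unknown :", unknown)
print("adapted :", np.round(h, 3))            # should closely match the unknown
</pre>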

[[Image:AdaptiveFilter.JPG]]
