# Fourier series - by Ray Betz

## Fourier Series

If

1. ${\displaystyle x(t)=x(t+T)}$

then we can write

${\displaystyle {\mathbf {x}}(t)=\sum _{k=-\infty }^{\infty }\alpha _{k}e^{\frac {j2\pi kt}{T}}}$

The above equation is called the complex Fourier series. Given ${\displaystyle x(t)}$, we may determine ${\displaystyle \alpha _{k}}$ by taking the inner product of a basis function with ${\displaystyle x(t)}$. Let us take the inner product of ${\displaystyle e^{\frac {j2\pi nt}{T}}}$ with ${\displaystyle x(t)}$ over the interval of one period, ${\displaystyle T}$:

${\displaystyle \langle e^{\frac {j2\pi nt}{T}},x(t)\rangle =\int _{-{\frac {T}{2}}}^{\frac {T}{2}}x(t)e^{\frac {-j2\pi nt}{T}}dt=\int _{-{\frac {T}{2}}}^{\frac {T}{2}}\sum _{k=-\infty }^{\infty }\alpha _{k}e^{\frac {j2\pi kt}{T}}e^{\frac {-j2\pi nt}{T}}dt=\sum _{k=-\infty }^{\infty }\alpha _{k}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}e^{\frac {j2\pi (k-n)t}{T}}dt}$

If ${\displaystyle k=n}$ then,

${\displaystyle \int _{-{\frac {T}{2}}}^{\frac {T}{2}}e^{\frac {j2\pi (k-n)t}{T}}dt=\int _{-{\frac {T}{2}}}^{\frac {T}{2}}1dt=T}$

If ${\displaystyle k\neq n}$ then,

${\displaystyle \int _{-{\frac {T}{2}}}^{\frac {T}{2}}e^{\frac {j2\pi (k-n)t}{T}}dt=0}$

We can simplify the above two conclusions into one equation using the Kronecker delta ${\displaystyle \delta _{k,n}}$, which equals 1 when ${\displaystyle k=n}$ and 0 otherwise.

${\displaystyle \sum _{k=-\infty }^{\infty }\alpha _{k}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}e^{\frac {j2\pi (k-n)t}{T}}dt=\sum _{k=-\infty }^{\infty }T\delta _{k,n}\alpha _{k}=T\alpha _{n}}$

So, we conclude ${\displaystyle \alpha _{n}={\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}x(t)e^{\frac {-j2\pi nt}{T}}dt}$
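The coefficient formula can be checked numerically. Below is a sketch in Python with NumPy (the test signal, period, and sample count are arbitrary choices for illustration), using a square wave whose exact coefficients are ${\displaystyle -2j/(\pi n)}$ for odd ${\displaystyle n}$ and 0 for even ${\displaystyle n}$:

```python
import numpy as np

T = 2.0                               # period (arbitrary choice)
t = np.linspace(-T/2, T/2, 20001)
dt = t[1] - t[0]
x = np.sign(np.sin(2*np.pi*t/T))      # square wave with period T

def alpha(n):
    # alpha_n = (1/T) * integral over one period of x(t) e^{-j 2 pi n t / T} dt
    return np.sum(x * np.exp(-1j*2*np.pi*n*t/T)) * dt / T

print(alpha(1))   # close to -2j/pi
print(alpha(2))   # close to 0
```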

## Orthogonal Functions

The functions ${\displaystyle y_{n}(t)}$ and ${\displaystyle y_{m}(t)}$ (for ${\displaystyle n\neq m}$) are orthogonal on ${\displaystyle (a,b)}$ if and only if ${\displaystyle \langle y_{n},y_{m}\rangle =\int _{a}^{b}y_{n}^{*}(t)y_{m}(t)dt=0}$.

The set of functions is orthonormal if and only if ${\displaystyle \langle y_{n},y_{m}\rangle =\int _{a}^{b}y_{n}^{*}(t)y_{m}(t)dt=\delta _{m,n}}$.

## Linear Systems

Let us say we have a linear time-invariant system, where ${\displaystyle x(t)}$ is the input and ${\displaystyle y(t)}$ is the output. What outputs do we get as we put different inputs into this system?

If we put in an impulse, ${\displaystyle \delta (t)}$, then we get out the impulse response ${\displaystyle h(t)}$. What would happen if we put a time-delayed impulse, ${\displaystyle \delta (t-u)}$, into the system? The output would be a time-delayed ${\displaystyle h(t)}$, or ${\displaystyle h(t-u)}$, because the system is time invariant. So, no matter when we put in our signal, the response comes out the same (just time delayed).

What if we now multiplied our impulse by a coefficient? Since our system is linear, the proportionality property applies. If we put ${\displaystyle x(u)\delta (t-u)}$ into our system then we should get out ${\displaystyle x(u)h(t-u)}$.

By the superposition property (because we have a linear system) we may put into the system the integral of ${\displaystyle x(u)\delta (t-u)}$ with respect to ${\displaystyle u}$, and we would get out ${\displaystyle \int _{-\infty }^{\infty }x(u)h(t-u)du}$. What would we get if we put ${\displaystyle e^{j2\pi ft}}$ into our system? We can find out by plugging ${\displaystyle e^{j2\pi fu}}$ in for ${\displaystyle x(u)}$ in the integral above. If we do a change of variables (${\displaystyle v=t-u}$, and ${\displaystyle dv=-du}$) we get ${\displaystyle \int _{-\infty }^{\infty }x(u)h(t-u)du=\int _{-\infty }^{\infty }e^{j2\pi fu}h(t-u)du=-\int _{\infty }^{-\infty }e^{j2\pi f(t-v)}h(v)dv=e^{j2\pi ft}\int _{-\infty }^{\infty }h(v)e^{-j2\pi fv}dv}$. By pulling ${\displaystyle e^{j2\pi ft}}$ out of the integral and calling the remaining integral ${\displaystyle H_{f}}$, we get ${\displaystyle e^{j2\pi ft}H_{f}}$.

| INPUT | OUTPUT | REASON |
|---|---|---|
| ${\displaystyle \delta (t)}$ | ${\displaystyle h(t)}$ | Given |
| ${\displaystyle \delta (t-u)}$ | ${\displaystyle h(t-u)}$ | Time invariance |
| ${\displaystyle x(u)\delta (t-u)}$ | ${\displaystyle x(u)h(t-u)}$ | Proportionality |
| ${\displaystyle \int _{-\infty }^{\infty }x(u)\delta (t-u)du}$ | ${\displaystyle \int _{-\infty }^{\infty }x(u)h(t-u)du}$ | Superposition |
| ${\displaystyle e^{j2\pi ft}}$ | ${\displaystyle e^{j2\pi ft}\int _{-\infty }^{\infty }h(v)e^{-j2\pi fv}dv=e^{j2\pi ft}H_{f}}$ | Change of variables (from above) |
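The last row of the table (complex exponentials are eigenfunctions of an LTI system) can be checked numerically for a discrete-time system. A sketch in Python with NumPy, where the filter taps are made-up values:

```python
import numpy as np

# A short FIR impulse response (arbitrary values for this sketch)
h = np.array([0.5, 0.3, 0.2])
f = 0.1                              # frequency in cycles/sample
n = np.arange(100)
x = np.exp(1j*2*np.pi*f*n)           # complex exponential input

y = np.convolve(x, h)[:len(n)]       # system output (same length as input)

# Frequency response H_f = sum_m h[m] e^{-j 2 pi f m}
H = np.sum(h * np.exp(-1j*2*np.pi*f*np.arange(len(h))))

# Away from the start-up transient, y[n] = H_f * x[n]
print(np.allclose(y[10:], H*x[10:]))   # True
```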

## Fourier Series (in depth)

I would like to take a closer look at ${\displaystyle \alpha _{k}}$ in the Fourier Series. Hopefully this will provide a better understanding of ${\displaystyle \alpha _{k}}$.

We will separate ${\displaystyle x(t)}$ into three parts: where ${\displaystyle k}$ is negative, zero, and positive. ${\displaystyle {\mathbf {x}}(t)=\sum _{k=-\infty }^{\infty }\alpha _{k}e^{\frac {j2\pi kt}{T}}=\sum _{k=-\infty }^{-1}\alpha _{k}e^{\frac {j2\pi kt}{T}}+\alpha _{0}+\sum _{k=1}^{\infty }\alpha _{k}e^{\frac {j2\pi kt}{T}}}$

Now, by substituting ${\displaystyle n=-k}$ into the summation where ${\displaystyle k}$ is negative and substituting ${\displaystyle n=k}$ into the summation where ${\displaystyle k}$ is positive we get: ${\displaystyle \sum _{n=1}^{\infty }\alpha _{-n}e^{\frac {-j2\pi nt}{T}}+\alpha _{0}+\sum _{n=1}^{\infty }\alpha _{n}e^{\frac {j2\pi nt}{T}}}$

Recall that ${\displaystyle \alpha _{n}={\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}x(t)e^{\frac {-j2\pi nt}{T}}dt}$

If ${\displaystyle x(t)}$ is real, then ${\displaystyle \alpha _{n}^{*}=\alpha _{-n}}$ (conjugating the integral above flips the sign of the exponent, which is the same as replacing ${\displaystyle n}$ with ${\displaystyle -n}$). Let us assume that ${\displaystyle x(t)}$ is real.

${\displaystyle x(t)=\alpha _{0}+\sum _{n=1}^{\infty }(\alpha _{n}e^{\frac {j2\pi nt}{T}}+\alpha _{n}^{*}e^{\frac {-j2\pi nt}{T}})}$

Recall that ${\displaystyle y+y^{*}=2\operatorname {Re} (y)}$.

So, we may write:

${\displaystyle x(t)=\alpha _{0}+\sum _{n=1}^{\infty }2Re(\alpha _{n}e^{\frac {j2\pi nt}{T}})}$

In terms of cosine: ${\displaystyle x(t)=\alpha _{0}+\sum _{n=1}^{\infty }2|\alpha _{n}|\cos({\frac {2\pi nt}{T}}+\theta _{n})}$ where ${\displaystyle \theta _{n}}$ is the phase angle of ${\displaystyle \alpha _{n}}$.

## Fourier Transform

Fourier transforms emerge because we want to be able to make Fourier expansions of non-periodic functions. We can accomplish this by taking the limit of ${\displaystyle x(t)}$ as the period ${\displaystyle T\to \infty }$.

Remember that: ${\displaystyle x(t)=x(t+T)=\sum _{k=-\infty }^{\infty }\alpha _{k}e^{\frac {j2\pi kt}{T}}=\sum _{k=-\infty }^{\infty }\left({\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}x(u)e^{\frac {-j2\pi ku}{T}}du\right)e^{\frac {j2\pi kt}{T}}}$

As ${\displaystyle T\to \infty }$, substitute ${\displaystyle f}$ for ${\displaystyle k/T}$, ${\displaystyle df}$ for ${\displaystyle 1/T}$, and an integral for the summation.

So, ${\displaystyle \lim _{T\to \infty }x(t)=\int _{-\infty }^{\infty }(\int _{-\infty }^{\infty }x(u)e^{-j2\pi fu}du)e^{j2\pi ft}df}$

From the above limit we define ${\displaystyle x(t)}$ and ${\displaystyle X(f)}$.

${\displaystyle x(t)={\mathcal {F}}^{-1}[X(f)]=\int _{-\infty }^{\infty }X(f)e^{j2\pi ft}df}$

${\displaystyle X(f)={\mathcal {F}}[x(t)]=\int _{-\infty }^{\infty }x(t)e^{-j2\pi ft}dt}$

By using the above transforms we can now change a function from the time domain to the frequency domain or vice versa. We are not limited to just one domain but can use both of them.
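The transform pair above can be approximated numerically by a Riemann sum. A sketch in Python with NumPy, using a Gaussian as the test signal because, with this convention, ${\displaystyle {\mathcal {F}}[e^{-\pi t^{2}}]=e^{-\pi f^{2}}}$ (the grid limits and spacing are arbitrary choices):

```python
import numpy as np

t = np.linspace(-10, 10, 40001)
dt = t[1] - t[0]
x = np.exp(-np.pi * t**2)             # Gaussian; its transform is e^{-pi f^2}

def X(f):
    # X(f) = integral of x(t) e^{-j 2 pi f t} dt, approximated on the grid
    return np.sum(x * np.exp(-1j*2*np.pi*f*t)) * dt

print(abs(X(0.5) - np.exp(-np.pi*0.25)))   # very small
```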

We can take the derivative of ${\displaystyle x(t)}$ and then put it in terms of the inverse Fourier transform.

${\displaystyle {\frac {dx}{dt}}=\int _{-\infty }^{\infty }j2\pi fX(f)e^{j2\pi ft}df={\mathcal {F}}^{-1}[j2\pi fX(f)]}$

What happens if we just shift the time of ${\displaystyle x(t)}$?

${\displaystyle x(t-t_{0})=\int _{-\infty }^{\infty }X(f)e^{j2\pi f(t-t_{0})}df=\int _{-\infty }^{\infty }e^{-j2\pi ft_{0}}X(f)e^{j2\pi ft}df={\mathcal {F}}^{-1}[e^{-j2\pi ft_{0}}X(f)]}$

In the same way, if we shift the frequency we get:

${\displaystyle X(f-f_{0})=\int _{-\infty }^{\infty }x(t)e^{-j2\pi (f-f_{0})t}dt=\int _{-\infty }^{\infty }e^{j2\pi f_{0}t}x(t)e^{-j2\pi ft}dt={\mathcal {F}}[e^{j2\pi f_{0}t}x(t)]}$

What would be the Fourier transform of ${\displaystyle \cos(2\pi f_{0}t)x(t)}$?

${\displaystyle {\mathcal {F}}[\cos(2\pi f_{0}t)x(t)]=\int _{-\infty }^{\infty }x(t)\cos(2\pi f_{0}t)e^{-j2\pi ft}dt=\int _{-\infty }^{\infty }{\frac {e^{j2\pi f_{0}t}+e^{-j2\pi f_{0}t}}{2}}x(t)e^{-j2\pi ft}dt}$

${\displaystyle ={\frac {1}{2}}\int _{-\infty }^{\infty }x(t)e^{-j2\pi (f-f_{0})t}dt+{\frac {1}{2}}\int _{-\infty }^{\infty }x(t)e^{-j2\pi (f+f_{0})t}dt={\frac {1}{2}}X(f-f_{0})+{\frac {1}{2}}X(f+f_{0})}$
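This modulation property can be verified with the same kind of numerical transform. A sketch in Python with NumPy (the Gaussian test signal and the particular frequencies are arbitrary choices):

```python
import numpy as np

t = np.linspace(-10, 10, 40001)
dt = t[1] - t[0]
x = np.exp(-np.pi * t**2)             # a convenient test signal
f0 = 2.0                              # modulation frequency (arbitrary)

def ft(sig, f):
    # numerical Fourier transform evaluated at a single frequency f
    return np.sum(sig * np.exp(-1j*2*np.pi*f*t)) * dt

f = 1.5
lhs = ft(np.cos(2*np.pi*f0*t) * x, f)
rhs = 0.5*ft(x, f - f0) + 0.5*ft(x, f + f0)
print(abs(lhs - rhs))                 # agrees to machine precision
```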

What would happen if we scaled time by a constant in ${\displaystyle x(t)}$? We substitute ${\displaystyle u=at}$ and ${\displaystyle du=a\,dt}$; the absolute value appears because the limits of integration flip when ${\displaystyle a<0}$. If ${\displaystyle a\neq 0}$:

${\displaystyle {\mathcal {F}}[x(at)]=\int _{-\infty }^{\infty }x(at)e^{-j2\pi ft}dt=\int _{-\infty }^{\infty }x(u)e^{\frac {-j2\pi fu}{a}}{\frac {du}{|a|}}={\frac {1}{|a|}}X({\frac {f}{a}})}$

Okay, let's take the Fourier transform of the Fourier series.

${\displaystyle {\mathcal {F}}[\sum _{n=-\infty }^{\infty }\alpha _{n}e^{\frac {j2\pi nt}{T}}]=\int _{-\infty }^{\infty }\sum _{n=-\infty }^{\infty }\alpha _{n}e^{\frac {j2\pi nt}{T}}e^{-j2\pi ft}dt=\sum _{n=-\infty }^{\infty }\alpha _{n}\int _{-\infty }^{\infty }e^{-j2\pi (f-{\frac {n}{T}})t}dt=\sum _{n=-\infty }^{\infty }\alpha _{n}\delta (f-{\frac {n}{T}})}$

Remember: ${\displaystyle \delta (f)=\int _{-\infty }^{\infty }e^{-j2\pi ft}dt}$

## CD Player

Below is a diagram of how the information on a CD is read and processed. As you can see, the information on the CD is processed by the D/A converter, then sent through a low-pass filter, and then to the speaker. If you were recording sound, the sound would be captured by a microphone and then sent through a low-pass filter. The reason you want a low-pass filter is to keep high frequencies (that you don't intend to record) from being recorded. If a tone was recorded at, say, 30 kHz when the maximum frequency you intended to record was 20 kHz, then on playback you would hear a tone at 10 kHz. From the filter the signal goes on to the A/D converter, and then it is ready to be put on the CD. Recording signals (as just described) is essentially the reverse of the playback operation pictured below.
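The aliasing described here (a 30 kHz tone sampled at 40 kHz playing back as 10 kHz) can be seen directly in the samples. A sketch in Python with NumPy:

```python
import numpy as np

fs = 40_000.0                         # sampling rate: 40 kHz
n = np.arange(64)
t = n / fs

tone_30k = np.cos(2*np.pi*30_000*t)   # 30 kHz: above the 20 kHz Nyquist frequency
tone_10k = np.cos(2*np.pi*10_000*t)   # 10 kHz: its alias

print(np.allclose(tone_30k, tone_10k))  # True -- the sample sequences are identical
```

Since the two sample sequences are identical, no processing after the sampler can tell the tones apart, which is why the low-pass filter must come before the A/D converter.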

In Time Domain:

Let's start with a signal ${\displaystyle h(t)}$, as shown in the below picture. In this signal there is an infinite amount of information. Obviously we can't hold it all in a computer, but we can take samples every ${\displaystyle T}$ seconds. Let's do that by multiplying ${\displaystyle h(t)}$ by ${\displaystyle \sum _{n=-\infty }^{\infty }\delta (t-nT)}$. Since each delta function has unit area, we get a train of delta functions weighted by the values of ${\displaystyle h(t)}$ at intervals of ${\displaystyle T}$. This gives us a result that looks like: ${\displaystyle h(t)\sum _{n=-\infty }^{\infty }\delta (t-nT)=\sum _{n=-\infty }^{\infty }h(t)\delta (t-nT)}$

In Frequency Domain:

In the frequency domain we start with ${\displaystyle H(f)}$. Now we are in frequency, so we must convolve instead of multiply like we did in the time domain. We would have to convolve ${\displaystyle H(f)}$ with ${\displaystyle {\mathcal {F}}[\sum _{n=-\infty }^{\infty }\delta (t-nT)]}$.

Aside: ${\displaystyle {\mathcal {F}}[\sum _{n=-\infty }^{\infty }\delta (t-nT)]=\int _{-\infty }^{\infty }\sum _{n=-\infty }^{\infty }\delta (t-nT)e^{-j2\pi ft}dt=\sum _{n=-\infty }^{\infty }\int _{-\infty }^{\infty }\delta (t-nT)e^{-j2\pi ft}dt=\sum _{n=-\infty }^{\infty }e^{-j2\pi fnT}}$

This result looks like it could be a Fourier series. We would like to get our result in terms of delta functions. As shown below, the periodic train of delta functions can be represented as a Fourier series with coefficients ${\displaystyle \alpha _{m}}$.

${\displaystyle \sum _{n=-\infty }^{\infty }\delta (t-nT)=\sum _{m=-\infty }^{\infty }\alpha _{m}e^{\frac {j2\pi mt}{T}}}$

Now we can solve for ${\displaystyle \alpha _{m}}$.

${\displaystyle \alpha _{m}={\frac {1}{T}}\int _{\frac {-T}{2}}^{\frac {T}{2}}\sum _{n=-\infty }^{\infty }\delta (t-nT)e^{\frac {-j2\pi mt}{T}}dt={\frac {1}{T}}\int _{\frac {-T}{2}}^{\frac {T}{2}}\delta (t)e^{\frac {-j2\pi mt}{T}}dt={\frac {1}{T}}}$

Since the only delta function within the integration limits is the one at ${\displaystyle t=0}$, we can drop the summation and keep just that delta function. Then, evaluating the exponential at ${\displaystyle t=0}$, the integral gives ${\displaystyle {\frac {1}{T}}}$.

${\displaystyle \sum _{n=-\infty }^{\infty }\delta (t-nT)=\sum _{n=-\infty }^{\infty }{\frac {1}{T}}e^{\frac {j2\pi nt}{T}}}$

${\displaystyle {\mathcal {F}}[\sum _{n=-\infty }^{\infty }\delta (t-nT)]={\mathcal {F}}[\sum _{n=-\infty }^{\infty }{\frac {1}{T}}e^{\frac {j2\pi nt}{T}}]={\frac {1}{T}}\sum _{n=-\infty }^{\infty }\int _{-\infty }^{\infty }e^{\frac {j2\pi nt}{T}}e^{-j2\pi ft}dt={\frac {1}{T}}\sum _{n=-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-j2\pi (f-{\frac {n}{T}})t}dt={\frac {1}{T}}\sum _{n=-\infty }^{\infty }\delta (f-{\frac {n}{T}})}$

Now we are ready to take the convolution.

${\displaystyle H(f)*{\frac {1}{T}}\sum _{n=-\infty }^{\infty }\delta (f-{\frac {n}{T}})={\frac {1}{T}}\sum _{n=-\infty }^{\infty }H(f-{\frac {n}{T}})}$
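The conclusion that sampling makes the spectrum periodic with period ${\displaystyle 1/T}$ can be checked numerically by evaluating the spectrum of a sampled signal directly. A sketch in Python with NumPy (the Gaussian signal, sampling interval, and truncation to finitely many samples are all assumptions for illustration):

```python
import numpy as np

T = 0.25                               # sampling interval (arbitrary)
n = np.arange(-200, 201)
h_nT = np.exp(-np.pi * (n*T)**2)       # samples h(nT) of a Gaussian

def H_sampled(f):
    # Spectrum of the sampled signal: sum_n h(nT) e^{-j 2 pi f n T}
    return np.sum(h_nT * np.exp(-1j*2*np.pi*f*n*T))

# The spectrum of the sampled signal repeats with period 1/T:
print(abs(H_sampled(0.3) - H_sampled(0.3 + 1/T)))   # essentially 0
```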

Time Domain

In order to output any of the signals that we have as sound, we must run them through a D/A converter. This is like convolving the sampled signal with a rectangular pulse ${\displaystyle p(t)=U(t+{\frac {T}{2}})-U(t-{\frac {T}{2}})}$, where ${\displaystyle U(t)}$ is the unit step.

This gives us ${\displaystyle \sum _{n=-\infty }^{\infty }h(nT)p(t-nT)}$. This is what the signal looks like as it is output through the D/A converter.

Frequency Domain

To find out what we would multiply by in the frequency domain, we take the Fourier transform of ${\displaystyle p(t)}$ and get ${\displaystyle P(f)=T{\frac {\sin(\pi fT)}{\pi fT}}}$.

Multiplying gives ${\displaystyle {\frac {1}{T}}\sum _{n=-\infty }^{\infty }H(f-{\frac {n}{T}})P(f)\approx H(f)}$. This is hopefully close to the spectrum of the signal we started with; the copies of ${\displaystyle H(f)}$ away from ${\displaystyle f=0}$ are attenuated but not removed entirely, which is why a low-pass filter follows the D/A converter.

For 2 times oversampling:

In time, convolve ${\displaystyle \sum _{n=-\infty }^{\infty }x(nT)\delta (t-nT)}$ with ${\displaystyle \sum _{m=-M}^{M}h({\frac {mT}{2}})\delta (t-{\frac {mT}{2}})}$. This provides points that are interpolated and makes our output sound better because it looks closer to the original wave.

In frequency, multiply ${\displaystyle {\frac {1}{T}}\sum _{n=-\infty }^{\infty }X(f-{\frac {n}{T}})}$ by ${\displaystyle \sum _{m=-M}^{M}h({\frac {mT}{2}})e^{-j2\pi fm{\frac {T}{2}}}}$. The ${\displaystyle X(f)}$ that you get is great because there is little distortion near the original frequency plot. This means that you can use a cheaper low-pass filter than you would otherwise have been able to.

## Nyquist Frequency

If you are sampling at a frequency of 40 kHz, then the highest frequency that you can reproduce is 20 kHz. The Nyquist frequency, here 20 kHz, is the highest frequency that can be reproduced for a given sampling rate: half the sampling frequency.

## FIR Filters

A finite impulse response (FIR) filter is a digital filter that is applied to data before sending it out a D/A converter. This type of filter allows the signal to be compensated before it is distorted, so that it comes out looking as it was originally recorded. Using an FIR filter also allows us to put a cheap low-pass filter after the D/A converter, because the signal has been pre-compensated and no longer requires an expensive low-pass filter.

The coefficients that are sent out to the D/A converter are:

${\displaystyle h_{m}=T\int _{-{\frac {1}{2T}}}^{\frac {1}{2T}}H(f)e^{j2\pi mfT}\,df}$

where ${\displaystyle H(f)=\sum _{m=-M}^{M}h(mT)e^{-j2\pi fmT}}$

Example: Design an FIR low-pass filter to pass frequencies between ${\displaystyle -{\frac {1}{4T}}}$ and ${\displaystyle {\frac {1}{4T}}}$ and reject the rest.

Our desired response is ${\displaystyle {\hat {H}}(f)=1}$ if ${\displaystyle |f|\leq {\frac {1}{4T}}}$, and ${\displaystyle {\hat {H}}(f)=0}$ otherwise.

So, ${\displaystyle h(mT)=T\int _{-{\frac {1}{4T}}}^{\frac {1}{4T}}e^{j2\pi mfT}\,df={\frac {\sin({\frac {\pi m}{2}})}{\pi m}}}$, with ${\displaystyle h(0)={\frac {1}{2}}}$.
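Evaluating the design integral for this desired response gives coefficients of the form ${\displaystyle \sin(\pi m/2)/(\pi m)}$ (a standard ideal-low-pass result, with the ${\displaystyle m=0}$ value taken as the limit ${\displaystyle 1/2}$). A sketch in Python with NumPy, truncating to ${\displaystyle 2M+1}$ taps (the choice ${\displaystyle M=50}$ is arbitrary) and checking the frequency response:

```python
import numpy as np

M = 50
m = np.arange(-M, M+1)
# Ideal low-pass (cutoff 1/(4T)) coefficients: sin(pi m / 2)/(pi m), with h(0) = 1/2
denom = np.where(m == 0, 1, m)                     # avoid division by zero at m = 0
h = np.where(m == 0, 0.5, np.sin(np.pi*m/2) / (np.pi*denom))

def H(fT):
    # Frequency response at normalized frequency f*T
    return np.sum(h * np.exp(-1j*2*np.pi*fT*m))

print(abs(H(0.05)))   # close to 1 (passband)
print(abs(H(0.40)))   # close to 0 (stopband)
```

Because the ideal response is truncated, ripple (the Gibbs phenomenon) appears near the cutoff at ${\displaystyle fT=0.25}$; increasing ${\displaystyle M}$ narrows but does not remove it.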

Note: from circular convolution we get ${\displaystyle y(n)=\sum _{m=0}^{N-1}h(m)x(n-m)}$, where the index ${\displaystyle n-m}$ is taken modulo ${\displaystyle N}$.
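A direct implementation of circular convolution, sketched in Python with NumPy (the two short test vectors are made-up values); it also checks the equivalent frequency-domain route, since circular convolution in time is multiplication of DFTs:

```python
import numpy as np

def circular_convolution(h, x):
    # y(n) = sum_{m=0}^{N-1} h(m) x((n - m) mod N)
    N = len(x)
    return np.array([sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)])

h = np.array([1.0, 2.0, 0.0, 0.0])
x = np.array([1.0, 0.0, 0.0, 3.0])
y = circular_convolution(h, x)
print(y)                               # [7. 2. 0. 3.]

# Same result via DFTs: multiply in frequency, then invert
print(np.allclose(y, np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))))  # True
```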

## Discrete Fourier Transforms (DFTs)

The DFT allows us to take a sample of some signal that is not periodic in time and take the Fourier series of it. The DFT and the inverse DFT are listed below.

DFT

${\displaystyle X(m)=\sum _{n=0}^{N-1}x(n)e^{\frac {-j2\pi mn}{N}}}$

IDFT

${\displaystyle x(n)={\frac {1}{N}}\sum _{m=0}^{N-1}X(m)e^{\frac {j2\pi mn}{N}}}$

With the DFT of a real signal, the negative-frequency components are just the complex conjugates of the positive-frequency components.

One problem with the DFT is that if the sample taken does not begin and end at the same value, then we get what is called leakage. Because the DFT repeats the recorded section of signal over and over, a mismatch between the end of the sample and its beginning creates a jump in the periodic extension, and that jump spreads energy across frequencies (leakage). It is this periodic nature of the DFT that allows us to reproduce a discrete signal that is not periodic. The DFT and IDFT are periodic with period N. This can easily be proved by simplifying ${\displaystyle x(n+N)}$.
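A direct (unoptimized) implementation of the DFT/IDFT pair, sketched in Python with NumPy, verifying both the round trip and the conjugate symmetry of a real signal's spectrum:

```python
import numpy as np

def dft(x):
    # X(m) = sum_{n=0}^{N-1} x(n) e^{-j 2 pi m n / N}
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j*np.pi*m*n/N)) for m in range(N)])

def idft(X):
    # x(n) = (1/N) sum_{m=0}^{N-1} X(m) e^{+j 2 pi m n / N}
    N = len(X)
    m = np.arange(N)
    return np.array([np.sum(X * np.exp(2j*np.pi*m*n/N)) for n in range(N)]) / N

x = np.random.default_rng(0).standard_normal(8)    # a real-valued test signal
X = dft(x)
print(np.allclose(idft(X), x))                     # True: the round trip recovers x
print(np.allclose(X[1:][::-1], np.conj(X[1:])))    # True: X(N-m) = X(m)*
```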

It should be noted that in the above diagram, ${\displaystyle e(n)=y(n)-r(n)=[\sum _{k=0}^{N-1}h_{n}(k)x(n-k)]-r(n)}$. The goal of an adaptive FIR filter is to drive the error, ${\displaystyle e(n)}$, to zero. If we consider a two-coefficient filter and a contour plot of ${\displaystyle e^{2}(n)}$, then we want to travel in the direction of the negative gradient to minimize the error. Let us say that ${\displaystyle \mu }$ is the step size. So: ${\displaystyle \Delta h_{n}(m)=-\mu {\frac {\partial (e^{2}(n))}{\partial h_{n}(m)}}=-2\mu e(n){\frac {\partial (e(n))}{\partial h_{n}(m)}}=-2\mu e(n)x(n-m)}$
What would ${\displaystyle h_{n+1}(m)}$ look like?
${\displaystyle h_{n+1}(m)=h_{n}(m)+\Delta h_{n}(m)=h_{n}(m)-2\mu (y(n)-r(n))x(n-m)=h_{n}(m)-2\mu ([\sum _{k=0}^{N-1}h_{n}(k)x(n-k)]-r(n))x(n-m)}$
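The coefficient update above is the LMS algorithm, and it can be sketched in a few lines. In Python with NumPy, using a made-up "unknown" 3-tap system to generate the desired response ${\displaystyle r(n)}$ (the step size, signal length, and tap values are all arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([0.6, -0.3, 0.1])    # made-up "unknown" system producing r(n)
N = len(h_true)
mu = 0.01                              # step size

h = np.zeros(N)                        # adaptive coefficients h_n(m), start at zero
x = rng.standard_normal(5000)          # input signal

for n in range(N, len(x)):
    window = x[n - np.arange(N)]       # x(n), x(n-1), ..., x(n-N+1)
    r = h_true @ window                # desired response r(n)
    e = h @ window - r                 # error e(n) = y(n) - r(n)
    h = h - 2*mu*e*window              # h_{n+1}(m) = h_n(m) - 2 mu e(n) x(n-m)

print(np.round(h, 3))                  # converges to h_true
```

With no measurement noise, driving ${\displaystyle e(n)}$ to zero recovers the unknown coefficients exactly; with noise, ${\displaystyle h}$ would hover near them with a spread set by ${\displaystyle \mu }$.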