<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://fweb.wallawalla.edu/class-wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Andrew</id>
	<title>Class Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://fweb.wallawalla.edu/class-wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Andrew"/>
	<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php/Special:Contributions/Andrew"/>
	<updated>2026-04-05T19:46:56Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=3833</id>
		<title>Dirichlet Conditions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=3833"/>
		<updated>2006-10-04T06:01:10Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Condition 3. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dirichlet Conditions==&lt;br /&gt;
&lt;br /&gt;
===Condition 1.===&lt;br /&gt;
&lt;br /&gt;
Over any period &amp;lt;math&amp;gt; [t, t + T] &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; must have the property: &amp;lt;math&amp;gt; \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt; \infty &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; is absolutely integrable. The result of this property is that each of the Fourier coefficients &amp;lt;math&amp;gt; c_n &amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
&lt;br /&gt;
===Condition 2.===&lt;br /&gt;
&lt;br /&gt;
Over any period of the signal, there must be only a finite number of minima and maxima. In other words, functions like &amp;lt;math&amp;gt; \sin \left ( \frac{1}{t} \right ) &amp;lt;/math&amp;gt; are excluded. Functions satisfying this condition are said to be of bounded variation.&lt;br /&gt;
&lt;br /&gt;
===Condition 3.===&lt;br /&gt;
&lt;br /&gt;
Over any period, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; can have only a finite number of discontinuities.&lt;br /&gt;
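&lt;br /&gt;
As a quick numerical sketch (an illustrative assumption, not from the referenced text), we can check that Condition 1 bounds the Fourier coefficients, since &amp;lt;math&amp;gt; \vert c_n \vert \le \frac{1}{T} \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch (assumed example): for a square wave of period T,&lt;br /&gt;
# every Fourier coefficient magnitude stays below (1/T) * integral of |f|.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
T, N = 2.0, 4000&lt;br /&gt;
t = np.linspace(0, T, N, endpoint=False)&lt;br /&gt;
dt = T / N&lt;br /&gt;
f = np.where(t &amp;lt; T / 2, 1.0, -1.0)      # square wave over one period&lt;br /&gt;
&lt;br /&gt;
bound = np.sum(np.abs(f)) * dt / T        # (1/T) * integral of |f(t)| dt&lt;br /&gt;
for n in range(1, 6):&lt;br /&gt;
    c_n = np.sum(f * np.exp(-2j * np.pi * n * t / T)) * dt / T&lt;br /&gt;
    print(n, abs(c_n), bound)             # each |c_n| is finite and below the bound&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;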
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt; Information referenced from &amp;lt;u&amp;gt;Linear Circuit Analysis, &amp;lt;math&amp;gt; 2^{nd} &amp;lt;/math&amp;gt; Edition&amp;lt;/u&amp;gt; by DeCarlo &amp;amp; Lin&amp;lt;/small&amp;gt;&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2498</id>
		<title>Dirichlet Conditions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2498"/>
		<updated>2006-10-04T06:00:20Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Condition 3. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dirichlet Conditions==&lt;br /&gt;
&lt;br /&gt;
===Condition 1.===&lt;br /&gt;
&lt;br /&gt;
Over any period &amp;lt;math&amp;gt; [t, t + T] &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; must have the property: &amp;lt;math&amp;gt; \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt; \infty &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; is absolutely integrable. The result of this property is that each of the Fourier coefficients &amp;lt;math&amp;gt; c_n &amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
&lt;br /&gt;
===Condition 2.===&lt;br /&gt;
&lt;br /&gt;
Over any period of the signal, there must be only a finite number of minima and maxima. In other words, functions like &amp;lt;math&amp;gt; \sin \left ( \frac{1}{t} \right ) &amp;lt;/math&amp;gt; are excluded. Functions satisfying this condition are said to be of bounded variation.&lt;br /&gt;
&lt;br /&gt;
===Condition 3.===&lt;br /&gt;
&lt;br /&gt;
Over any period, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; can have only a finite number of discontinuities.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt; Information referenced from &amp;lt;u&amp;gt;Linear Circuit Analysis, &amp;lt;math&amp;gt; 2^{nd} &amp;lt;/math&amp;gt; Edition&amp;lt;/u&amp;gt; by DeCarlo &amp;amp; Lin&amp;lt;/small&amp;gt;&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2497</id>
		<title>Dirichlet Conditions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2497"/>
		<updated>2006-10-04T05:55:08Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Condition 1. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dirichlet Conditions==&lt;br /&gt;
&lt;br /&gt;
===Condition 1.===&lt;br /&gt;
&lt;br /&gt;
Over any period &amp;lt;math&amp;gt; [t, t + T] &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; must have the property: &amp;lt;math&amp;gt; \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt; \infty &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; is absolutely integrable. The result of this property is that each of the Fourier coefficients &amp;lt;math&amp;gt; c_n &amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
&lt;br /&gt;
===Condition 2.===&lt;br /&gt;
&lt;br /&gt;
Over any period of the signal, there must be only a finite number of minima and maxima. In other words, functions like &amp;lt;math&amp;gt; \sin \left ( \frac{1}{t} \right ) &amp;lt;/math&amp;gt; are excluded. Functions satisfying this condition are said to be of bounded variation.&lt;br /&gt;
&lt;br /&gt;
===Condition 3.===&lt;br /&gt;
&lt;br /&gt;
Over any period, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; can have only a finite number of discontinuities.&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2495</id>
		<title>Dirichlet Conditions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2495"/>
		<updated>2006-10-04T05:53:22Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Condition 3. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dirichlet Conditions==&lt;br /&gt;
&lt;br /&gt;
===Condition 1.===&lt;br /&gt;
&lt;br /&gt;
Over any period &amp;lt;math&amp;gt; [t, t + T] &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; must have the property: &amp;lt;math&amp;gt; \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt; \infty &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; is absolutely integrable. The result of this property is that each of the Fourier coefficients &amp;lt;math&amp;gt; c_n &amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
&lt;br /&gt;
===Condition 2.===&lt;br /&gt;
&lt;br /&gt;
Over any period of the signal, there must be only a finite number of minima and maxima. In other words, functions like &amp;lt;math&amp;gt; \sin \left ( \frac{1}{t} \right ) &amp;lt;/math&amp;gt; are excluded. Functions satisfying this condition are said to be of bounded variation.&lt;br /&gt;
&lt;br /&gt;
===Condition 3.===&lt;br /&gt;
&lt;br /&gt;
Over any period, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; can have only a finite number of discontinuities.&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2494</id>
		<title>Dirichlet Conditions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2494"/>
		<updated>2006-10-04T05:53:12Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Condition 3. */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dirichlet Conditions==&lt;br /&gt;
&lt;br /&gt;
===Condition 1.===&lt;br /&gt;
&lt;br /&gt;
Over any period &amp;lt;math&amp;gt; [t, t + T] &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; must have the property: &amp;lt;math&amp;gt; \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt; \infty &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; is absolutely integrable. The result of this property is that each of the Fourier coefficients &amp;lt;math&amp;gt; c_n &amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
&lt;br /&gt;
===Condition 2.===&lt;br /&gt;
&lt;br /&gt;
Over any period of the signal, there must be only a finite number of minima and maxima. In other words, functions like &amp;lt;math&amp;gt; \sin \left ( \frac{1}{t} \right ) &amp;lt;/math&amp;gt; are excluded. Functions satisfying this condition are said to be of bounded variation.&lt;br /&gt;
&lt;br /&gt;
===Condition 3.===&lt;br /&gt;
&lt;br /&gt;
Over any period, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; can have only a finite number of discontinuities.&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2493</id>
		<title>Dirichlet Conditions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Dirichlet_Conditions&amp;diff=2493"/>
		<updated>2006-10-04T05:52:18Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dirichlet Conditions==&lt;br /&gt;
&lt;br /&gt;
===Condition 1.===&lt;br /&gt;
&lt;br /&gt;
Over any period &amp;lt;math&amp;gt; [t, t + T] &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; must have the property: &amp;lt;math&amp;gt; \int_t^{t+T} \vert f(t)\vert \, dt &amp;lt; \infty &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In other words, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; is absolutely integrable. The result of this property is that each of the Fourier coefficients &amp;lt;math&amp;gt; c_n &amp;lt;/math&amp;gt; is finite.&lt;br /&gt;
&lt;br /&gt;
===Condition 2.===&lt;br /&gt;
&lt;br /&gt;
Over any period of the signal, there must be only a finite number of minima and maxima. In other words, functions like &amp;lt;math&amp;gt; \sin \left ( \frac{1}{t} \right ) &amp;lt;/math&amp;gt; are excluded. Functions satisfying this condition are said to be of bounded variation.&lt;br /&gt;
&lt;br /&gt;
===Condition 3.===&lt;br /&gt;
&lt;br /&gt;
Over any period, &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; can have only a finite number of discontinuities.&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Fourier_series_-_by_Ray_Betz&amp;diff=2520</id>
		<title>Fourier series - by Ray Betz</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Fourier_series_-_by_Ray_Betz&amp;diff=2520"/>
		<updated>2006-10-04T05:30:41Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Fourier Series */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fourier Series==&lt;br /&gt;
If &lt;br /&gt;
# &amp;lt;math&amp;gt; x(t) = x(t + T)&amp;lt;/math&amp;gt;&lt;br /&gt;
*[[Dirichlet Conditions]] are satisfied&lt;br /&gt;
then we can write&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt; x(t) = \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The above equation is called the complex Fourier series. Given &amp;lt;math&amp;gt;x(t)&amp;lt;/math&amp;gt;, we may determine &amp;lt;math&amp;gt; \alpha_k &amp;lt;/math&amp;gt; by taking the [[inner product]] of a basis function with &amp;lt;math&amp;gt;x(t)&amp;lt;/math&amp;gt;.&lt;br /&gt;
Pick the basis function &amp;lt;math&amp;gt;e^ \frac {j 2 \pi n t}{T}&amp;lt;/math&amp;gt;, and take its inner product with &amp;lt;math&amp;gt;x(t)&amp;lt;/math&amp;gt; over the interval of one period, &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;math&amp;gt; &amp;lt;e^ \frac {j 2 \pi n t}{T}|x(t)&amp;gt; = &amp;lt;e^ \frac {j 2 \pi n t}{T}|\sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}&amp;gt; &amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;= \int_{-\frac{T}{2}}^\frac{T}{2} x(t)e^ \frac {-j 2 \pi n t}{T} dt &amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;= \int_{-\frac{T}{2}}^\frac{T}{2} \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}e^ \frac {-j 2 \pi n t}{T} dt &amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;= \sum_{k=-\infty}^\infty \alpha_k \int_{-\frac{T}{2}}^\frac{T}{2}  e^ \frac {j 2 \pi (k-n) t}{T} dt &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;k=n&amp;lt;/math&amp;gt; then,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \int_{-\frac{T}{2}}^\frac{T}{2}  e^ \frac {j 2 \pi (k-n) t}{T} dt = \int_{-\frac{T}{2}}^\frac{T}{2}  1 dt = T&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;k \ne n &amp;lt;/math&amp;gt; then,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \int_{-\frac{T}{2}}^\frac{T}{2}  e^ \frac {j 2 \pi (k-n) t}{T} dt = 0 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can simplify the above two conclusions into one equation using the Kronecker [[delta function]] &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt; below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{k=-\infty}^\infty \alpha_k \int_{-\frac{T}{2}}^\frac{T}{2}  e^ \frac {j 2 \pi (k-n) t}{T} dt = \sum_{k=-\infty}^\infty T \delta_{k,n} \alpha_k = T \alpha_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, we conclude&lt;br /&gt;
&amp;lt;math&amp;gt;\alpha_n = \frac{1}{T}\int_{-\frac{T}{2}}^\frac{T}{2} x(t) e^ \frac {-j 2 \pi n t}{T} dt &amp;lt;/math&amp;gt;&lt;br /&gt;
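&lt;br /&gt;
As a quick numerical sanity check (an assumed example, not part of the original derivation), we can build a signal from known coefficients and recover them with the formula above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: recover alpha_n from x(t) by numerical integration.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
T, N = 1.0, 4096&lt;br /&gt;
t = np.linspace(-T / 2, T / 2, N, endpoint=False)&lt;br /&gt;
dt = T / N&lt;br /&gt;
true_alpha = {-1: 0.5 - 0.25j, 0: 1.0, 2: 0.3j}   # assumed coefficients&lt;br /&gt;
x = sum(a * np.exp(2j * np.pi * k * t / T) for k, a in true_alpha.items())&lt;br /&gt;
&lt;br /&gt;
for n in (-1, 0, 2):&lt;br /&gt;
    alpha_n = np.sum(x * np.exp(-2j * np.pi * n * t / T)) * dt / T&lt;br /&gt;
    print(n, np.round(alpha_n, 6))   # matches the assumed coefficients&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;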
&lt;br /&gt;
==Orthogonal Functions==&lt;br /&gt;
&lt;br /&gt;
The functions &amp;lt;math&amp;gt; y_n(t) &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; y_m(t) &amp;lt;/math&amp;gt; are orthogonal on &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; if and only if &amp;lt;math&amp;gt; &amp;lt;y_n(t)|y_m(t)&amp;gt; = \int_{a}^{b} y_n^*(t)y_m(t) dt = 0   &amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A set of functions is orthonormal if and only if &amp;lt;math&amp;gt; &amp;lt;y_n(t)|y_m(t)&amp;gt; = \int_{a}^{b} y_n^*(t)y_m(t) dt = \delta_{m,n}  &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Linear Systems==&lt;br /&gt;
&lt;br /&gt;
Let us say we have a linear time invariant system, where &amp;lt;math&amp;gt; x(t) &amp;lt;/math&amp;gt; is the input and &amp;lt;math&amp;gt; y(t) &amp;lt;/math&amp;gt; is the output.  What outputs do we get as we put different inputs into this system?  &lt;br /&gt;
[[Image:Linear_System.JPG]]&lt;br /&gt;
&lt;br /&gt;
If we put in an impulse, &amp;lt;math&amp;gt; \delta(t)&amp;lt;/math&amp;gt;, then we get out the impulse response, &amp;lt;math&amp;gt;h(t)&amp;lt;/math&amp;gt;. What would happen if we put a time delayed impulse signal, &amp;lt;math&amp;gt; \delta(t-u)&amp;lt;/math&amp;gt;, into the system?  The output response would be a time delayed &amp;lt;math&amp;gt;h(t)&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;h(t-u)&amp;lt;/math&amp;gt;, because the system is time invariant. So, no matter when we put in our signal, the response would come out the same (just time delayed).  &lt;br /&gt;
&lt;br /&gt;
What if we now multiplied our impulse by a coefficient?  Since our system is linear, the proportionality property applies.  If we put &amp;lt;math&amp;gt; x(u)\delta(t-u)&amp;lt;/math&amp;gt; into our system then we should get out &amp;lt;math&amp;gt;x(u)h(t-u)&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
By the superposition property (because we have a linear system) we may put into the system the integral of &amp;lt;math&amp;gt; x(u)\delta(t-u)&amp;lt;/math&amp;gt; with respect to u and we would get out &amp;lt;math&amp;gt; \int_{-\infty}^\infty x(u)h(t-u) du&amp;lt;/math&amp;gt;.  What would we get if we put &amp;lt;math&amp;gt; e^{j 2 \pi f t} &amp;lt;/math&amp;gt; into our system?  We could find out by plugging &amp;lt;math&amp;gt; e^{j 2 \pi f u} &amp;lt;/math&amp;gt; in for &amp;lt;math&amp;gt; x(u) &amp;lt;/math&amp;gt; in the integral that we just found the output for above.  If we do a change of variables (&amp;lt;math&amp;gt; v = t-u &amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; dv = -du &amp;lt;/math&amp;gt;) we get &amp;lt;math&amp;gt; \int_{-\infty}^\infty x(u)h(t-u) du = \int_{-\infty}^\infty e^{j 2 \pi f u} h(t-u) du = -\int_{\infty}^{-\infty} e^{j 2 \pi f (t-v)} h(v) dv = e^{j 2 \pi f t} \int_{-\infty}^\infty h(v)e^{-j 2 \pi f v} dv&amp;lt;/math&amp;gt;. By pulling &amp;lt;math&amp;gt; e^{j 2 \pi f t} &amp;lt;/math&amp;gt; out of the integral and calling the remaining integral &amp;lt;math&amp;gt; H_f &amp;lt;/math&amp;gt; we get &amp;lt;math&amp;gt; e^{j 2 \pi f t} H_f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width:600px; height:100px&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
| &#039;&#039;&#039;INPUT&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;OUTPUT&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;REASON&#039;&#039;&#039;&lt;br /&gt;
|-  &lt;br /&gt;
| &amp;lt;math&amp;gt; \delta(t)&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;h(t)&amp;lt;/math&amp;gt; &lt;br /&gt;
| Given&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt; \delta(t-u)&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;h(t-u)&amp;lt;/math&amp;gt; &lt;br /&gt;
| Time Invariant&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;math&amp;gt; x(u)\delta(t-u)&amp;lt;/math&amp;gt;&lt;br /&gt;
| &amp;lt;math&amp;gt;x(u)h(t-u)&amp;lt;/math&amp;gt; &lt;br /&gt;
| Proportionality&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; \int_{-\infty}^\infty x(u)\delta(t-u) du&amp;lt;/math&amp;gt;&lt;br /&gt;
|&amp;lt;math&amp;gt; \int_{-\infty}^\infty x(u)h(t-u) du&amp;lt;/math&amp;gt;&lt;br /&gt;
|Superposition&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; \int_{-\infty}^\infty e^{j 2 \pi f u} \delta(t-u) du&amp;lt;/math&amp;gt;&lt;br /&gt;
|&amp;lt;math&amp;gt; e^{j 2 \pi f t} \int_{-\infty}^\infty e^{-j 2 \pi f v} h(v) dv&amp;lt;/math&amp;gt;&lt;br /&gt;
|Superposition&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt; e^{j 2 \pi f t} &amp;lt;/math&amp;gt;&lt;br /&gt;
|&amp;lt;math&amp;gt; e^{j 2 \pi f t} H_f&amp;lt;/math&amp;gt;&lt;br /&gt;
|Superposition (from above)&lt;br /&gt;
|}&lt;br /&gt;
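&lt;br /&gt;
The last row is easy to verify numerically.  Here is a small sketch (the impulse response and frequency are assumptions for illustration): a discrete LTI convolution scales a complex exponential input by a constant &amp;lt;math&amp;gt; H_f &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: complex exponentials are eigenfunctions of LTI systems.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
h = np.array([0.25, 0.5, 0.25])            # assumed impulse response&lt;br /&gt;
f = 0.1                                    # frequency in cycles per sample&lt;br /&gt;
n = np.arange(64)&lt;br /&gt;
x = np.exp(2j * np.pi * f * n)             # complex exponential input&lt;br /&gt;
y = np.convolve(x, h)[: n.size]            # LTI output&lt;br /&gt;
H_f = np.sum(h * np.exp(-2j * np.pi * f * np.arange(h.size)))&lt;br /&gt;
print(np.allclose(y[8:], H_f * x[8:]))     # True away from the start-up edge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;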
&lt;br /&gt;
==Fourier Series (in depth)==&lt;br /&gt;
&lt;br /&gt;
I would like to take a closer look at &amp;lt;math&amp;gt; \alpha_k &amp;lt;/math&amp;gt; in the Fourier Series.  Hopefully this will provide a better understanding of &amp;lt;math&amp;gt; \alpha_k &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We will separate x(t) into three parts: the terms where &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt; is negative, zero, and positive.  &lt;br /&gt;
&amp;lt;math&amp;gt; \bold x(t) = \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T} = \sum_{k=-\infty}^{-1} \alpha_k e^ \frac {j 2 \pi k t}{T} + \alpha_0 + \sum_{k=1}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, by substituting &amp;lt;math&amp;gt; n = -k &amp;lt;/math&amp;gt; into the summation where &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt; is negative and substituting &amp;lt;math&amp;gt; n = k &amp;lt;/math&amp;gt; into the summation where &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt; is positive we get:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{n=1}^{\infty} \alpha_{-n} e^ \frac {-j 2 \pi n t}{T} + \alpha_0 + \sum_{n=1}^\infty \alpha_n e^ \frac {j 2 \pi n t}{T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Recall that &amp;lt;math&amp;gt;\alpha_n = \frac{1}{T}\int_{-\frac{T}{2}}^\frac{T}{2} x(t) e^ \frac {-j 2 \pi n t}{T} dt &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt; x(t) &amp;lt;/math&amp;gt; is real, then &amp;lt;math&amp;gt; \alpha_n^* = \alpha_{-n} &amp;lt;/math&amp;gt;. Let us assume that &amp;lt;math&amp;gt; x(t) &amp;lt;/math&amp;gt; is real.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; x(t) = \alpha_0 +\sum_{n=1}^\infty (\alpha_n e^ \frac {j 2 \pi n t}{T} + \alpha_n^* e^ \frac {-j 2 \pi n t}{T}) &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Recall that &amp;lt;math&amp;gt; y + y^* = 2Re(y) &amp;lt;/math&amp;gt; [[Here is further clarification on this property]]&lt;br /&gt;
&lt;br /&gt;
So, we may write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; x(t) = \alpha_0 +\sum_{n=1}^\infty 2Re(\alpha_n e^ \frac {j 2 \pi n t}{T}) &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In terms of cosine, &amp;lt;math&amp;gt; x(t) = \alpha_0 +\sum_{n=1}^\infty 2 |\alpha_n| \cos(\frac{2 \pi n t}{T} + \omega_n) &amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt; \omega_n &amp;lt;/math&amp;gt; is the phase angle of &amp;lt;math&amp;gt; \alpha_n &amp;lt;/math&amp;gt;.&lt;br /&gt;
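&lt;br /&gt;
As a small check of this cosine form (an assumed example), build a real signal from a few coefficients and compare the two expressions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: the cosine form equals the 2 Re(...) form for real x(t).&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
T = 1.0&lt;br /&gt;
t = np.linspace(0, T, 1000, endpoint=False)&lt;br /&gt;
alpha = {0: 0.5, 1: 0.3 - 0.4j, 3: 0.1j}   # assumed coefficients, nonnegative n&lt;br /&gt;
x = alpha[0] + sum(2 * (a * np.exp(2j * np.pi * n * t / T)).real&lt;br /&gt;
                   for n, a in alpha.items() if n)&lt;br /&gt;
x_cos = alpha[0] + sum(2 * abs(a) * np.cos(2 * np.pi * n * t / T + np.angle(a))&lt;br /&gt;
                       for n, a in alpha.items() if n)&lt;br /&gt;
print(np.allclose(x, x_cos))               # True&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;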
&lt;br /&gt;
==Fourier Transform==&lt;br /&gt;
&lt;br /&gt;
Fourier transforms emerge because we want to be able to make Fourier expansions of non-periodic functions.  We can accomplish this by taking the limit of &amp;lt;math&amp;gt;x(t)&amp;lt;/math&amp;gt; as the period &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; goes to infinity.&lt;br /&gt;
&lt;br /&gt;
Remember that:&lt;br /&gt;
&amp;lt;math&amp;gt;x(t)=x(t+T)= \sum_{k=-\infty}^\infty \alpha_k e^ \frac {j 2 \pi k t}{T} = \sum_{k=-\infty}^\infty 1/T \int_{-\frac{T}{2}}^\frac{T}{2} x(u)e^ \frac {-j 2 \pi k u }{T} du e^ \frac {j 2 \pi k t}{T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the expression for x(t), let&#039;s substitute f for k/T, df for 1/T, and an integral for the summation.  &lt;br /&gt;
&lt;br /&gt;
So, &lt;br /&gt;
&amp;lt;math&amp;gt; \lim_{T \to \infty}x(t)= \int_{-\infty}^\infty (\int_{-\infty}^\infty  x(u) e^{-j 2 \pi f u} du) e^{j 2 \pi f t} df&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the above limit we define &amp;lt;math&amp;gt; x(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; X(f) &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; x(t) = \mathcal{F}^{-1}[X(f)] = \int_{-\infty}^\infty  X(f) e^ {j 2 \pi f t} df&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; X(f) = \mathcal{F}[x(t)] = \int_{-\infty}^\infty  x(t) e^ {-j 2 \pi f t} dt&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By using the above transforms we can now change a function from the frequency domain to the time domain or vice versa.  We are not limited to just one domain but can use both of them.  &lt;br /&gt;
&lt;br /&gt;
We can take the derivative of &amp;lt;math&amp;gt; x(t) &amp;lt;/math&amp;gt; and then put it in terms of the inverse Fourier transform.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \frac{dx}{dt} = \int_{-\infty}^\infty  j 2 \pi f X(f) e^ {j 2 \pi f t} df = \mathcal{F}^{-1}[j 2 \pi f X(f)]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens if we just shift the time of &amp;lt;math&amp;gt; x(t) &amp;lt;/math&amp;gt;?  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; x(t-t_0) = \int_{-\infty}^\infty X(f) e^{j 2 \pi f(t-t_0)} df = \int_{-\infty}^\infty e^{-j 2 \pi f t_0} X(f) e^{j 2 \pi f t} df = \mathcal{F}^{-1}[e^{-j 2 \pi f t_0} X(f)] &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the same way, if we shift the frequency we get:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; X(f-f_0) = \int_{-\infty}^\infty x(t) e^{-j 2 \pi (f-f_0)t} dt = \int_{-\infty}^\infty e^{j 2 \pi f_0 t} x(t) e^{-j 2 \pi f t} dt = \mathcal{F} [e^{j 2 \pi f_0 t} x(t)] &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What would be the Fourier transform of &amp;lt;math&amp;gt; \cos(2 \pi f_0 t) x(t) &amp;lt;/math&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \mathcal{F} [\cos(2 \pi f_0 t) x(t)] = \int_{-\infty}^\infty x(t) \cos(2 \pi f_0 t) e^{-j 2 \pi f t} dt = \int_{-\infty}^\infty \frac{e^{j 2 \pi f_0 t} + e^{-j 2 \pi f_0 t}}{2} x(t) e^{-j 2 \pi f t} dt  &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; = \frac{1}{2} \int_{-\infty}^\infty x(t) e^{-j 2 \pi (f-f_0) t} dt + \frac{1}{2} \int_{-\infty}^\infty x(t) e^{-j 2 \pi (f+f_0) t} dt  = \frac{1}{2} X(f-f_0) +  \frac{1}{2} X(f+f_0)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What would happen if we multiplied our time (time scaling) by a constant in &amp;lt;math&amp;gt; x(t) &amp;lt;/math&amp;gt;? We will substitute &amp;lt;math&amp;gt; u=at &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; du = adt &amp;lt;/math&amp;gt;.  If &amp;lt;math&amp;gt; a \ne 0 &amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \mathcal{F} [x(a t)] = \int_{-\infty}^\infty x(at) e^{-j 2 \pi f t} dt = \int_{-\infty}^\infty x(u) e^\frac{-j 2 \pi f u}{a} \frac{du}{|a|} = \frac{1}{|a|} X(\frac{f}{a})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s take the Fourier transform of the Fourier series.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \mathcal{F} [\sum_{n=-\infty}^{\infty} \alpha_n e^\frac{j 2 \pi n t}{T}] = \int_{-\infty}^\infty \sum_{n=-\infty}^{\infty} \alpha_n e^\frac{j 2 \pi n t}{T}  e^{-j 2 \pi f t} dt = \sum_{n=-\infty}^{\infty} \alpha_n \int_{-\infty}^\infty e^{-j 2 \pi (f-\frac{n}{T}) t} dt = \sum_{n=-\infty}^{\infty} \alpha_n\delta(f-\frac{n}{T}) &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Remember: &amp;lt;math&amp;gt; \delta (f) = \int_{-\infty}^\infty e^{-j 2 \pi f t} dt &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==CD Player==&lt;br /&gt;
&lt;br /&gt;
Below is a diagram of how the information on a CD is read and processed.  As you can see, the information on the CD is processed by the D/A converter, sent through a low-pass filter, and then sent to the speaker.  If you were recording sound, the sound would be captured by a microphone and then sent through a low-pass filter.  The reason you want a low-pass filter is to keep high frequencies (that you don&#039;t intend to record) from being recorded.  If a high frequency was recorded at, say, 30 kHz and the maximum frequency you intended to record was 20 kHz, then when you played back the recording you would hear a tone at 10 kHz.  From the filter the signal goes on to the A/D converter and then it is ready to be put on the CD.  Recording signals (as just described) is essentially the reverse of the operation pictured below.&lt;br /&gt;
&lt;br /&gt;
[[Image:CDsystem.jpg]]&lt;br /&gt;
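&lt;br /&gt;
The aliasing example above is easy to reproduce numerically (an assumed illustration, not from the original page):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: a 30 kHz tone sampled at 40 kHz aliases to 10 kHz.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
fs, f_tone = 40e3, 30e3&lt;br /&gt;
n = np.arange(400)&lt;br /&gt;
samples = np.cos(2 * np.pi * f_tone * n / fs)        # sampled 30 kHz cosine&lt;br /&gt;
alias = np.cos(2 * np.pi * (fs - f_tone) * n / fs)   # a 10 kHz cosine&lt;br /&gt;
print(np.allclose(samples, alias))   # True: the sample sets are identical&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;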
&lt;br /&gt;
&#039;&#039;&#039;In Time Domain:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s start with a signal &amp;lt;math&amp;gt; h(t) &amp;lt;/math&amp;gt;, as shown in the picture below. In this signal there is an infinite amount of information.  Obviously, we can&#039;t hold it all in a computer, but we could take samples every &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt; seconds.  Let&#039;s do that by multiplying &amp;lt;math&amp;gt; h(t) &amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt; \sum_{n=-\infty}^\infty  \delta (t-nT) &amp;lt;/math&amp;gt;. Since the magnitude of our delta function is one, we get a series of delta functions that record the value of &amp;lt;math&amp;gt; h(t) &amp;lt;/math&amp;gt; at intervals of &amp;lt;math&amp;gt; T &amp;lt;/math&amp;gt;. This gives us a result that looks like: &amp;lt;math&amp;gt; h(t)\sum_{n=-\infty}^\infty  \delta (t-nT) = \sum_{n=-\infty}^\infty h(t) \delta (t-nT)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;In Frequency Domain:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the frequency domain we start with &amp;lt;math&amp;gt; H(f) &amp;lt;/math&amp;gt;.  Now we are in frequency, so we must convolve instead of multiply like we did in the time domain.  We would have to convolve &amp;lt;math&amp;gt; H(f) &amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; \mathcal{F}[ \sum_{n=-\infty}^\infty  \delta (t-nT) ]&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Aside:&amp;lt;math&amp;gt; \mathcal{F}[ \sum_{n=-\infty}^\infty  \delta (t-nT) ] = \int_{-\infty}^\infty \sum_{n=-\infty}^\infty \delta (t-nT) e^{-j 2 \pi f t} dt = \sum_{n=-\infty}^\infty \int_{-\infty}^\infty \delta (t-nT) e^{-j 2 \pi f t} dt = \sum_{n=-\infty}^\infty e^{-j 2 \pi f n T}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This result looks like it could be a Fourier series. We would like to get our result in terms of delta functions.  As shown below, the periodic delta functions can be represented as a Fourier series with coefficients &amp;lt;math&amp;gt; \alpha_m &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{n=-\infty}^\infty  \delta (t-nT) = \sum_{m=-\infty}^\infty \alpha_m e^ \frac {j 2 \pi m t}{T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can solve for &amp;lt;math&amp;gt; \alpha_m &amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \alpha_m =  \frac {1}{T} \int_{\frac{-T}{2}}^{\frac{T}{2}}  \sum_{n=-\infty}^\infty   \delta (t-nT)  e^\frac {-j 2 \pi m t}{T} dt =  \frac {1}{T} \int_{\frac{-T}{2}}^{\frac{T}{2}} \delta (t) e^\frac {-j 2 \pi m t}{T} dt =  \frac {1}{T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since the only delta function within the integration limits is the delta function at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt;, we can take out the summation and just leave one delta function.  Then, evaluating the integral at &amp;lt;math&amp;gt; t=0 &amp;lt;/math&amp;gt; we get &amp;lt;math&amp;gt; \frac{1}{T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \sum_{n=-\infty}^\infty  \delta (t-nT) = \sum_{n=-\infty}^\infty \frac {1}{T} e^ \frac {j 2 \pi n t}{T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt; \mathcal{F} [\sum_{n=-\infty}^\infty  \delta (t-nT)] = \mathcal{F} [\sum_{n=-\infty}^\infty \frac {1}{T} e^ \frac {j 2 \pi n t}{T}] = \sum_{n=-\infty}^\infty \frac {1}{T} \int_{-\infty}^\infty e^ \frac {j 2 \pi n t}{T} e^ {-j 2 \pi f t} dt = \frac {1}{T} \sum_{n=-\infty}^\infty \int_{-\infty}^\infty  e^ {-j 2 \pi (f-\frac{n}{T}) t} dt = \frac {1}{T} \sum_{n=-\infty}^\infty \delta (f-\frac{n}{T})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we are ready to take the convolution. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; H(f)* \frac {1}{T} \sum_{n=-\infty}^\infty \delta (f-\frac{n}{T}) = \frac{1}{T} \sum_{n=-\infty}^\infty H(f-\frac{n}{T})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:barnsasample.jpg|Picture uploaded by Sam Barnes]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Time Domain&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to output as sound any of the signals that we have, we must run them through a D/A converter.  This is like convolving the sampled signal below with a rectangular pulse &amp;lt;math&amp;gt; p(t) = U(t+\frac{T}{2})- U(t-\frac{T}{2}) &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt; U(t) &amp;lt;/math&amp;gt; is the unit step.&lt;br /&gt;
&lt;br /&gt;
This gives us &amp;lt;math&amp;gt; \sum_{n=-\infty}^\infty x(nT)p(t-nT)&amp;lt;/math&amp;gt;.  This is what the signal looks like as it is output through the D/A converter.&lt;br /&gt;
  &lt;br /&gt;
&#039;&#039;&#039;Frequency Domain&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To find out what we would multiply by in the frequency domain, we just take the Fourier transform of &amp;lt;math&amp;gt; p(t) &amp;lt;/math&amp;gt; and we get &amp;lt;math&amp;gt;P(f) =  T \frac{\sin (\pi f T)}{\pi f T} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Multiplying by &amp;lt;math&amp;gt; P(f) &amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt; \frac {1}{T} \sum_{n=-\infty}^\infty X(f-\frac{n}{T})P(f) \approx X(f) &amp;lt;/math&amp;gt;.  This is hopefully close to what we started with for a signal.     &lt;br /&gt;
&lt;br /&gt;
[[Image:barnsaDA.jpg|Picture uploaded by Sam Barnes]]&lt;br /&gt;
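&lt;br /&gt;
A small sketch of this hold effect (an assumed illustration): the sinc-shaped &amp;lt;math&amp;gt; P(f) &amp;lt;/math&amp;gt; passes the baseband copy at nearly full amplitude but attenuates the spectral images near multiples of &amp;lt;math&amp;gt; \frac{1}{T} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: normalized hold response P(f)/T = sin(pi f T)/(pi f T).&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
T = 1.0 / 44.1e3                 # assumed CD-like sample period&lt;br /&gt;
def P_norm(f):&lt;br /&gt;
    return np.sinc(f * T)        # numpy sinc(x) = sin(pi x)/(pi x)&lt;br /&gt;
&lt;br /&gt;
print(P_norm(1e3))               # baseband tone: close to 1&lt;br /&gt;
print(P_norm(20e3))              # edge of the audio band: noticeably drooped&lt;br /&gt;
print(P_norm(1 / T - 1e3))       # first image of the 1 kHz tone: strongly attenuated&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;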
&lt;br /&gt;
For 2 times oversampling:&lt;br /&gt;
&lt;br /&gt;
In time, convolve: &amp;lt;math&amp;gt; \sum_{n=-\infty}^\infty x(nT)\delta(t-nT)&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt; \sum_{m=-M}^M h(m \frac{T}{2}) \delta (t-\frac{mT}{2})&amp;lt;/math&amp;gt;.  This provides points that are interpolated and makes our output sound better because it looks closer to the original wave.  &lt;br /&gt;
&lt;br /&gt;
In frequency, multiply: &amp;lt;math&amp;gt; \frac {1}{T} \sum_{n=-\infty}^\infty X(f- \frac{n}{T} ) &amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt; \sum_{m=-M}^M h(\frac{mT}{2}) e ^\frac{-j2 \pi m f}{\frac{2}{T}} &amp;lt;/math&amp;gt;.  The X(f) that you get is great because there is little distortion near the original frequency band.  This means that you can use a cheaper low-pass filter than you would otherwise have been able to.&lt;br /&gt;
&lt;br /&gt;
==Nyquist Frequency==&lt;br /&gt;
&lt;br /&gt;
If you are sampling at a frequency of 40 kHz, then the highest frequency that you can reproduce is 20 kHz. This highest reproducible frequency for a given sampling rate, here 20 kHz, is called the Nyquist frequency.&lt;br /&gt;
&lt;br /&gt;
==FIR Filters==&lt;br /&gt;
&lt;br /&gt;
A finite impulse response filter (FIR filter) is a digital filter that is applied to data before sending it out a D/A converter.  This type of filter allows for compensation of the signal before it is distorted, so that it will come out looking as it was originally recorded.  Using an FIR filter also allows us to put a cheap low-pass filter after the D/A converter, because the signal has already been compensated; without the FIR filter an expensive low-pass filter would be needed.&lt;br /&gt;
&lt;br /&gt;
The coefficients that are sent out to the D/A converter are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
h_m = T \int_{T} H(f)e^{j2 \pi m f T}\,df&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; H(f)=\sum_{m=-M}^{M}h(mT)e^{-j 2 \pi f m T} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example: Design an FIR low-pass filter to pass the band &amp;lt;math&amp;gt; -\frac{1}{4T} &amp;lt; f &amp;lt; \frac{1}{4T} &amp;lt;/math&amp;gt; and reject the rest.  &lt;br /&gt;
&lt;br /&gt;
Our desired response is: &amp;lt;math&amp;gt; \hat{H}(f) = 1 &amp;lt;/math&amp;gt; if |f| is less than or equal to &amp;lt;math&amp;gt; \frac{1}{4T} &amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; \hat{H}(f) = 0 &amp;lt;/math&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
So, &amp;lt;math&amp;gt; h(mT) = T \int_{-\frac{1}{4T}}^{\frac{1}{4T}} e^{j 2 \pi m f T} df = \frac{\sin \left( \frac{\pi m}{2} \right)}{\pi m} &amp;lt;/math&amp;gt;&lt;br /&gt;
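&lt;br /&gt;
A short sketch of this design (an assumed example; the truncation length M is arbitrary) checks the truncated coefficients against the target response:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: truncated ideal low-pass FIR taps h(mT) = sin(pi m/2)/(pi m).&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
M = 25&lt;br /&gt;
m = np.arange(-M, M + 1)&lt;br /&gt;
h = 0.5 * np.sinc(m / 2)            # equals sin(pi m/2)/(pi m), with h(0) = 1/2&lt;br /&gt;
&lt;br /&gt;
fT = np.linspace(-0.5, 0.5, 1001)   # frequency as a fraction of 1/T&lt;br /&gt;
H = np.array([np.sum(h * np.exp(-2j * np.pi * f * m)) for f in fT])&lt;br /&gt;
print(np.abs(H[np.abs(fT) &amp;lt; 0.20]).round(2))   # near 1 (passband)&lt;br /&gt;
print(np.abs(H[np.abs(fT) &amp;gt; 0.30]).round(2))   # near 0 (stopband)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;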
&lt;br /&gt;
Note: From the Circular Convolution we get: &amp;lt;math&amp;gt; y(n) = \sum_{m=0}^{N-1}h(m)x(n-m)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Discrete Fourier Transforms (DFTs)==&lt;br /&gt;
&lt;br /&gt;
The DFT allows us to take a sample of some signal that is not periodic with time and take the Fourier series of it. The DFT and the inverse DFT are listed below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFT&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; X(m) = \sum_{n=0}^{N-1} x(n) e^{\frac{-j 2 \pi m n}{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IDFT&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; x(n) = \frac{1}{N}\sum_{m=0}^{N-1} X(m) e^{\frac{j 2 \pi m n}{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the DFT of a real signal, all the negative frequency components are just the complex conjugates of the positive frequency components.  &lt;br /&gt;
&lt;br /&gt;
One problem with the DFT is that if the sample taken does not begin and end at zero (or at the same value), then we get what is called leakage.  Because the DFT is discrete, if the end of the sample is not at the same place it began, then it will make a jump back to the point where it began (leakage).  This is because the DFT repeats the recorded section of signal over and over.  It is this periodic manner of the DFT that allows us to reproduce a discrete signal that is not periodic.  The DFT and IDFT are periodic with period N.  This can be easily proved by simplifying &amp;lt;math&amp;gt; x(n+N) &amp;lt;/math&amp;gt;.&lt;br /&gt;
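&lt;br /&gt;
A small sketch (an assumed example) of the DFT formula above, checked against numpy, together with the leakage effect when the sampled block is not periodic:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: direct DFT vs. numpy FFT, plus a leakage example.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
N = 64&lt;br /&gt;
n = np.arange(N)&lt;br /&gt;
x = np.cos(2 * np.pi * 8 * n / N)       # exactly 8 cycles in the block&lt;br /&gt;
X = np.array([np.sum(x * np.exp(-2j * np.pi * m * n / N)) for m in range(N)])&lt;br /&gt;
print(np.allclose(X, np.fft.fft(x)))    # True&lt;br /&gt;
&lt;br /&gt;
x_leaky = np.cos(2 * np.pi * 8.5 * n / N)   # 8.5 cycles: ends where it did not begin&lt;br /&gt;
print(np.count_nonzero(np.abs(np.fft.fft(x)) &amp;gt; 1))         # 2 clean bins&lt;br /&gt;
print(np.count_nonzero(np.abs(np.fft.fft(x_leaky)) &amp;gt; 1))   # energy smeared over many bins&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;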
&lt;br /&gt;
==Adaptive FIR Filters==&lt;br /&gt;
&lt;br /&gt;
[[Image:Adaptive.JPG]]&lt;br /&gt;
&lt;br /&gt;
It should be noted that in the above diagram, &amp;lt;math&amp;gt; e(n)=y(n)-r(n) = [\sum_{k=0}^{N-1} h_n(k) x(n-k)] - r(n) &amp;lt;/math&amp;gt;.  The goal of an adaptive FIR filter is to drive the error, e(n), to zero.  If we consider that this is a two-coefficient filter and we have a contour plot of &amp;lt;math&amp;gt; e^2(n) &amp;lt;/math&amp;gt;, then we want to travel in the direction of the negative gradient to minimize the error.  Let us say that &amp;lt;math&amp;gt; \mu &amp;lt;/math&amp;gt; is the step size.  So...&lt;br /&gt;
&amp;lt;math&amp;gt;  \triangle h_n(m) = - \frac{\partial (e^2(n))}{\partial h_n(m)} \mu = - \mu 2 e(n)\frac{\partial (e(n))}{\partial h_n(m)} = - 2 \mu e(n) x(n-m) &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What would &amp;lt;math&amp;gt; h_{n+1}(m) &amp;lt;/math&amp;gt; look like? &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; h_{n+1}(m)= h_n(m) + \triangle h_n(m) = h_n(m) - 2 \mu (y(n)-r(n)) x(n-m) = h_n(m) - 2 \mu ([\sum_{k=0}^{N-1} h_n(k) x(n-k)] - r(n)) x(n-m)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
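&lt;br /&gt;
Here is a compact sketch of that update rule (the unknown system, input, and step size are assumptions for illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: LMS adaptive FIR identifying an unknown 3-tap system.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
h_true = np.array([0.4, -0.2, 0.1])    # assumed unknown system&lt;br /&gt;
x = rng.standard_normal(5000)          # input signal&lt;br /&gt;
r = np.convolve(x, h_true)[: x.size]   # desired (reference) output&lt;br /&gt;
&lt;br /&gt;
h = np.zeros(3)                        # adaptive filter taps&lt;br /&gt;
mu = 0.01                              # step size&lt;br /&gt;
for nn in range(3, x.size):&lt;br /&gt;
    x_win = x[nn : nn - 3 : -1]        # x(n), x(n-1), x(n-2)&lt;br /&gt;
    e = np.dot(h, x_win) - r[nn]       # e(n) = y(n) - r(n)&lt;br /&gt;
    h = h - 2 * mu * e * x_win         # the update rule derived above&lt;br /&gt;
print(np.round(h, 3))                  # converges toward h_true&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;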
&lt;br /&gt;
How might one find an unknown transfer function?  Let&#039;s use the example of the tuner upper.  The idea here is that we want to remove a sine wave from the signal and leave the original signal (voice) in place.  &lt;br /&gt;
&lt;br /&gt;
[[Image:AdaptiveFilter.JPG]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Fourier_transform&amp;diff=2517</id>
		<title>Fourier transform</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Fourier_transform&amp;diff=2517"/>
		<updated>2006-10-04T05:21:11Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Some Useful Fourier Transform Pairs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==From the Fourier Transform to the Inverse Fourier Transform==&lt;br /&gt;
A useful identity to begin with:&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
X(f)=\int_{-\infty}^{\infty} x(t) e^{-j2\pi ft}\, dt&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Suppose that we have some function, say &amp;lt;math&amp;gt; \beta (t) &amp;lt;/math&amp;gt;, that is nonperiodic and finite in duration.&amp;lt;br&amp;gt;&lt;br /&gt;
This means that &amp;lt;math&amp;gt; \beta(t)=0 &amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt; T_\alpha &amp;lt; \left | t \right | &amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Now let&#039;s make a periodic function&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\gamma(t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
by repeating&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\beta(t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
with a fundamental period&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	T_\zeta&lt;br /&gt;
&amp;lt;/math&amp;gt;.&lt;br /&gt;
Note that &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\lim_{T_\zeta \to \infty}\gamma(t)=\beta(t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The Fourier Series representation of &amp;lt;math&amp;gt; \gamma(t) &amp;lt;/math&amp;gt; is&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\gamma(t)=\sum_{k=-\infty}^\infty \alpha_k e^{j2\pi fkt}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
&amp;lt;math&amp;gt; &lt;br /&gt;
	f={1\over T_\zeta}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;and&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k={1\over T_\zeta}\int_{-{T_\zeta\over 2}}^{{T_\zeta\over 2}} \gamma(t) e^{-j2\pi fkt}\,dt&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt; \alpha_k &amp;lt;/math&amp;gt; can now be rewritten as&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k={1\over T_\zeta}\int_{-\infty}^{\infty} \beta(t) e^{-j2\pi fkt}\,dt&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;From our initial identity then, we can write &amp;lt;math&amp;gt; \alpha_k &amp;lt;/math&amp;gt; as&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\alpha_k={1\over T_\zeta}\Beta(kf)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\gamma(t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
becomes&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\gamma(t)=\sum_{k=-\infty}^\infty {1\over T_\zeta}\Beta(kf) e^{j2\pi fkt}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Now remember that&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\beta(t)=\lim_{T_\zeta \to \infty}\gamma(t)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
{1\over {T_\zeta}} = f.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Which means that&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	\beta(t)=\lim_{f \to 0}\gamma(t)=\lim_{f \to 0}\sum_{k=-\infty}^\infty f \Beta(kf) e^{j2\pi fkt}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Which is just to say that&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\beta(t)=\int_{-\infty}^\infty \Beta(f) e^{j2\pi ft}\,df&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
So we have that&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{F}[\beta(t)]=\Beta(f)=\int_{-\infty}^{\infty} \beta(t) e^{-j2\pi ft}\, dt&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Further&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{F}^{-1}[\Beta(f)]=\beta(t)=\int_{-\infty}^\infty \Beta(f) e^{j2\pi ft}\,df&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
==Some Useful Fourier Transform Pairs==&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{F}[\alpha(ct)]=\frac{1}{\left| c \right|}\Alpha \left( \frac{f}{c} \right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathcal{F}[c_1\alpha(t)+c_2\beta(t)]&amp;lt;/math&amp;gt;&lt;br /&gt;
|&amp;lt;math&amp;gt;=\int_{-\infty}^{\infty} (c_1\alpha(t)+c_2\beta(t)) e^{-j2\pi ft}\, dt&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&amp;lt;math&amp;gt;=\int_{-\infty}^{\infty}c_1\alpha(t)e^{-j2\pi ft}\, dt+\int_{-\infty}^{\infty}c_2\beta(t)e^{-j2\pi ft}\, dt&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&amp;lt;math&amp;gt;=c_1\int_{-\infty}^{\infty}\alpha(t)e^{-j2\pi ft}\, dt+c_2\int_{-\infty}^{\infty}\beta(t)e^{-j2\pi ft}\, dt=c_1\Alpha(f)+c_2\Beta(f)&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{F}[\alpha(t-\gamma)]=e^{-j2\pi f\gamma}\Alpha(f)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{F}[\alpha(t)*\beta(t)]=\Alpha(f)\Beta(f)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\mathcal{F}[\alpha(t)\beta(t)]=\Alpha(f)*\Beta(f)&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
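These pairs are easy to sanity-check numerically.  Here is a sketch of the convolution pair (the test signals, grid, and FFT-based check are assumptions for illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: the transform of a convolution is the product of transforms.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
N, dt = 512, 0.02&lt;br /&gt;
t = np.arange(N) * dt&lt;br /&gt;
a = np.exp(-t) * (t &amp;lt; 3)             # assumed test signals&lt;br /&gt;
b = np.exp(-2 * t) * (t &amp;lt; 3)&lt;br /&gt;
&lt;br /&gt;
conv = np.convolve(a, b) * dt          # linear convolution on the grid&lt;br /&gt;
lhs = np.fft.fft(conv)&lt;br /&gt;
rhs = np.fft.fft(a, 2 * N - 1) * np.fft.fft(b, 2 * N - 1) * dt&lt;br /&gt;
print(np.allclose(lhs, rhs))           # True&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;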
&lt;br /&gt;
==A Second Approach to Fourier Transforms==&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2500</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2500"/>
		<updated>2006-10-04T04:48:42Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Other resources on orthogonality */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine a different viewpoint for functions than the one traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity - &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt; in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that vectors consist of an ordered set of numbers.  Often the numbers are Real numbers, but we shall allow them to be Complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, inner products (sometimes called dot products) give the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are useful to find components of vectors.  We commonly use a dot as the symbol for inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When the &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that it is both orthogonal and that the basis vectors are normalized to unity (or have length one).  Orthonormal vector systems are very popular.  In fact they are the most common vector systems you will find.  The reason they are so handy is each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k u_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple.  You need to project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner product of the vector and the appropriate basis function.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n linear coupled equations in n unknowns to solve.&lt;br /&gt;
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the number of the direction, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt; in a similar way to the way we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.  &lt;br /&gt;
&lt;br /&gt;
====Can we write functions in an analogous way compared to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function, f(t) defined for a t element of the reals?  Well maybe....  If the sum over the dummy index k becomes an integral over the dummy variable, x, and the unit vectors &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function].  The result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem, and that is we don&#039;t want to confuse the functional inner product with a simple multiply, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; &amp;lt;u|v&amp;gt; = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x).  That is in case u(x) is a complex valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
====Orthogonality for functions====&lt;br /&gt;
Two functions, &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt; are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt;  if and only if &lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of t can be weighted appropriately in analogy to the way the &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; weight was used when the vector basis set was orthogonal, but not orthonormal (that is, different basis vectors had different numerical lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
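&lt;br /&gt;
A quick numerical illustration (an assumed example): harmonics of a complex exponential are orthogonal over one period under this inner product.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch: the inner product of exp(j 2 pi n t / T) and&lt;br /&gt;
# exp(j 2 pi m t / T) over one period vanishes for n != m.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
T, N = 1.0, 2000&lt;br /&gt;
t = np.linspace(0, T, N, endpoint=False)&lt;br /&gt;
dt = T / N&lt;br /&gt;
&lt;br /&gt;
def inner(n, m):&lt;br /&gt;
    u = np.exp(2j * np.pi * n * t / T)&lt;br /&gt;
    v = np.exp(2j * np.pi * m * t / T)&lt;br /&gt;
    return np.sum(np.conj(u) * v) * dt   # integral of u*(x) v(x) dx&lt;br /&gt;
&lt;br /&gt;
print(abs(inner(2, 5)))   # ~0: orthogonal&lt;br /&gt;
print(abs(inner(3, 3)))   # T = 1: not zero when n == m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;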
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Orthogonal Wikipedia Orthogonality]&lt;br /&gt;
&lt;br /&gt;
Principle author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2489</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2489"/>
		<updated>2006-10-04T04:42:46Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Other resources on orthogonality */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine a different viewpoint for functions than the one traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity - &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt; in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that vectors consist of an ordered set of numbers.  Often the numbers are Real numbers, but we shall allow them to be Complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, inner products (sometimes called dot products) give the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are useful to find components of vectors.  We commonly use a dot as the symbol for inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When the &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that the basis vectors are both mutually orthogonal and normalized to unity (that is, of length one).  Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find.  The reason they are so handy is that each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k u_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
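These component and magnitude formulas are easy to check numerically; here is a small sketch (illustrative only, assuming Python with numpy) using the standard orthonormal basis:&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 v = np.array([1.0, 4.0, 3.0])&lt;br /&gt;
 basis = np.eye(3)  # rows are the orthonormal basis vectors a_1, a_2, a_3&lt;br /&gt;
 &lt;br /&gt;
 # the inner product with a_n picks out the component v_n&lt;br /&gt;
 components = np.array([np.dot(v, a) for a in basis])&lt;br /&gt;
 print(components)  # [1. 4. 3.]&lt;br /&gt;
 &lt;br /&gt;
 # the inner product of v with itself is the squared magnitude&lt;br /&gt;
 print(np.dot(v, v), np.linalg.norm(v)**2)  # 26.0 26.0&lt;br /&gt;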
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple.  You project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner products of the vector with the appropriate basis vectors.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve.&lt;br /&gt;
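Here is a small sketch of that projection recipe (illustrative only, assuming Python with numpy; the new basis is a rotation of the standard one, so it is orthonormal):&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 v = np.array([1.0, 4.0, 3.0])&lt;br /&gt;
 &lt;br /&gt;
 theta = 0.3  # rotate the x-y plane by an arbitrary angle&lt;br /&gt;
 B = np.array([[np.cos(theta), np.sin(theta), 0.0],&lt;br /&gt;
               [-np.sin(theta), np.cos(theta), 0.0],&lt;br /&gt;
               [0.0, 0.0, 1.0]])  # rows are the new orthonormal basis vectors&lt;br /&gt;
 &lt;br /&gt;
 # new components are inner products with the new basis vectors&lt;br /&gt;
 v_new = B @ v&lt;br /&gt;
 print(v_new)&lt;br /&gt;
 &lt;br /&gt;
 # the vector itself is unchanged: rebuild it from the new components&lt;br /&gt;
 print(v_new @ B)  # [1. 4. 3.]&lt;br /&gt;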
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the direction number, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt;, much as we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.&lt;br /&gt;
&lt;br /&gt;
====Can we write functions in an analogous way compared to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function f(t), defined for real t?  Well, maybe....  Suppose the sum over the dummy index k becomes an integral over the dummy variable x, and the unit vectors &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function].  The result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
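One way to see this sifting property numerically is to replace &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt; with a very narrow normalized pulse (a sketch assuming Python with numpy; the Gaussian below only approximates the delta function):&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 def delta_approx(x, eps=1e-3):&lt;br /&gt;
     # normalized Gaussian; it tends to the Dirac delta as eps shrinks to 0&lt;br /&gt;
     return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))&lt;br /&gt;
 &lt;br /&gt;
 f = lambda x: np.sin(x) + 0.5 * x&lt;br /&gt;
 t = 1.2&lt;br /&gt;
 x = np.linspace(t - 0.1, t + 0.1, 200001)  # the pulse is negligible elsewhere&lt;br /&gt;
 &lt;br /&gt;
 print(np.trapz(f(x) * delta_approx(x - t), x), f(t))  # nearly equal&lt;br /&gt;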
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem: we don&#039;t want to confuse the functional inner product with a simple multiplication, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \langle u | v \rangle = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x); it is there in case u(x) is a complex-valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
====Orthogonality for functions====&lt;br /&gt;
Two functions, &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt;, are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt; if and only if&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of the independent variable can be weighted appropriately, in analogy to the way the &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; weights were used when the vector basis set was orthogonal but not orthonormal (that is, different basis vectors had different numerical lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted, we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
[http://en.wikipedia.org/wiki/Orthogonal Wikipedia Orthogonality]&lt;br /&gt;
&lt;br /&gt;
Principal author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2483</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2483"/>
		<updated>2006-10-04T04:38:39Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Other resources on orthogonality */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine a different viewpoint on functions from the one traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity, &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt;, in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that a vector consists of an ordered set of numbers.  Often the numbers are real numbers, but we shall allow them to be complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, inner products (sometimes called dot products) give the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are useful to find components of vectors.  We commonly use a dot as the symbol for inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When the &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that the basis vectors are both mutually orthogonal and normalized to unity (that is, of length one).  Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find.  The reason they are so handy is that each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k u_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple.  You project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner products of the vector with the appropriate basis vectors.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve.&lt;br /&gt;
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the direction number, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt;, much as we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.&lt;br /&gt;
&lt;br /&gt;
====Can we write functions in an analogous way compared to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function f(t), defined for real t?  Well, maybe....  Suppose the sum over the dummy index k becomes an integral over the dummy variable x, and the unit vectors &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function].  The result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem: we don&#039;t want to confuse the functional inner product with a simple multiplication, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \langle u | v \rangle = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x); it is there in case u(x) is a complex-valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
====Orthogonality for functions====&lt;br /&gt;
Two functions, &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt;, are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt; if and only if&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of the independent variable can be weighted appropriately, in analogy to the way the &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; weights were used when the vector basis set was orthogonal but not orthonormal (that is, different basis vectors had different numerical lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted, we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Orthogonal Wikipedia Orthogonality]&lt;br /&gt;
&lt;br /&gt;
Principal author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2482</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2482"/>
		<updated>2006-10-04T04:38:23Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Other resources on orthogonality */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine a different viewpoint on functions from the one traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity, &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt;, in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that a vector consists of an ordered set of numbers.  Often the numbers are real numbers, but we shall allow them to be complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, inner products (sometimes called dot products) give the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are useful to find components of vectors.  We commonly use a dot as the symbol for inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When the &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that the basis vectors are both mutually orthogonal and normalized to unity (that is, of length one).  Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find.  The reason they are so handy is that each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k u_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple.  You project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner products of the vector with the appropriate basis vectors.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve.&lt;br /&gt;
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the direction number, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt;, much as we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.&lt;br /&gt;
&lt;br /&gt;
====Can we write functions in an analogous way compared to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function f(t), defined for real t?  Well, maybe....  Suppose the sum over the dummy index k becomes an integral over the dummy variable x, and the unit vectors &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function].  The result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem: we don&#039;t want to confuse the functional inner product with a simple multiplication, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \langle u | v \rangle = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x); it is there in case u(x) is a complex-valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
====Orthogonality for functions====&lt;br /&gt;
Two functions, &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt;, are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt; if and only if&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of the independent variable can be weighted appropriately, in analogy to the way the &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; weights were used when the vector basis set was orthogonal but not orthonormal (that is, different basis vectors had different numerical lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted, we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
[http://en.wikipedia.org/wiki/Orthogonal Wikipedia Orthogonality]&lt;br /&gt;
&lt;br /&gt;
Principal author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2481</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2481"/>
		<updated>2006-10-04T04:33:23Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Can we write functions in an analogous way compared to the way we write vectors? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine a different viewpoint on functions from the one traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity, &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt;, in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that a vector consists of an ordered set of numbers.  Often the numbers are real numbers, but we shall allow them to be complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, inner products (sometimes called dot products) give the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are useful to find components of vectors.  We commonly use a dot as the symbol for inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When the &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that the basis vectors are both mutually orthogonal and normalized to unity (that is, of length one).  Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find.  The reason they are so handy is that each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k u_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple.  You project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner products of the vector with the appropriate basis vectors.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve.&lt;br /&gt;
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the direction number, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt;, much as we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.&lt;br /&gt;
&lt;br /&gt;
====Can we write functions in an analogous way compared to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function f(t), defined for real t?  Well, maybe....  Suppose the sum over the dummy index k becomes an integral over the dummy variable x, and the unit vectors &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function].  The result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem: we don&#039;t want to confuse the functional inner product with a simple multiplication, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \langle u | v \rangle = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x); it is there in case u(x) is a complex-valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
====Orthogonality for functions====&lt;br /&gt;
Two functions, &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt;, are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt; if and only if&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of the independent variable can be weighted appropriately, in analogy to the way the &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; weights were used when the vector basis set was orthogonal but not orthonormal (that is, different basis vectors had different numerical lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted, we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
&lt;br /&gt;
Principal author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2479</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2479"/>
		<updated>2006-10-04T04:32:52Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Can we write functions in an analogous way compared to the way we write vectors? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine a different viewpoint on functions from the one traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity, &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt;, in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that a vector consists of an ordered set of numbers.  Often the numbers are real numbers, but we shall allow them to be complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, inner products (sometimes called dot products) give the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are useful to find components of vectors.  We commonly use a dot as the symbol for inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt; and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When the &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that the basis vectors are both mutually orthogonal and normalized to unity (that is, of length one).  Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find.  The reason they are so handy is that each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k u_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
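&lt;br /&gt;
Both results are easy to verify numerically (a sketch with made-up vectors):&lt;br /&gt;
&lt;pre&gt;
import numpy as np

u = np.array([2.0, -1.0, 5.0])
v = np.array([1.0, 4.0, 3.0])

# The inner product is the sum of products of like components.
print(np.dot(u, v), np.sum(u * v))          # the same number twice

# The inner product of v with itself is its squared magnitude.
print(np.dot(v, v), np.linalg.norm(v)**2)   # the same number twice
&lt;/pre&gt;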
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple: you project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner products of the vector with the appropriate new basis vectors.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve, as in the sketch below.&lt;br /&gt;
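&lt;br /&gt;
Here is a sketch of both cases (the rotated basis below is an assumption chosen just for illustration):&lt;br /&gt;
&lt;pre&gt;
import numpy as np

v = np.array([1.0, 4.0, 3.0])

# A new orthonormal basis: the standard basis rotated 45 degrees about z.
# The rows of B are the new basis vectors.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
B = np.array([[ c,  s, 0.0],
              [-s,  c, 0.0],
              [0.0, 0.0, 1.0]])

# Orthonormal case: the new components are just projections (inner products).
v_new = B @ v

# General case: v equals the sum of v_new[k] times the k-th new basis vector,
# which is the linear system B.T @ v_new = v; solve it directly.
v_new_general = np.linalg.solve(B.T, v)

print(np.allclose(v_new, v_new_general))  # True
&lt;/pre&gt;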
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the direction number, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector, and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt;, much as we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.  &lt;br /&gt;
&lt;br /&gt;
====Can we write functions in a way analogous to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function f(t), defined for t an element of the reals?  Well, maybe: if the sum over the dummy index k becomes an integral over the dummy variable x, and the unit vectors &amp;lt;math&amp;gt; \hat \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function], the result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
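&lt;br /&gt;
We can test the sifting property numerically by substituting a narrow normalized Gaussian for the Dirac delta (a sketch; the Gaussian width and the test function are assumptions for illustration):&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def delta_approx(x, t, eps=1e-3):
    # A narrow normalized Gaussian that approximates delta(x - t).
    return np.exp(-((x - t) ** 2) / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 200001)
f = np.cos(x)   # an arbitrary test function f(x)
t = 1.0

# Integrating f(x) * delta(x - t) over x should sift out the value f(t).
print(np.trapz(f * delta_approx(x, t), x), np.cos(t))  # nearly equal
&lt;/pre&gt;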
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem: we don&#039;t want to confuse the functional inner product with a simple multiplication, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \langle u | v \rangle = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x).  It is there in case u(x) is a complex-valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
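&lt;br /&gt;
Numerically, this inner product is just a quadrature of the product u*(x) v(x) (a sketch over an assumed finite interval, with made-up functions):&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def inner(u, v, x):
    # Bra-ket style inner product: integrate conj(u(x)) * v(x) over the grid x.
    return np.trapz(np.conj(u) * v, x)

x = np.linspace(0.0, 1.0, 10001)
print(inner(x, x**2, x))                       # integral of x^3, about 0.25
print(inner(np.exp(1j*x), np.exp(1j*x), x))    # about 1.0, since |e^{jx}| = 1
&lt;/pre&gt;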
====Orthogonality for functions====&lt;br /&gt;
Two functions &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt; are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt; if and only if &lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of t can be weighted appropriately, in analogy to the way the weight &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; was used when the vector basis set was orthogonal but not orthonormal (that is, different basis vectors had different lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
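&lt;br /&gt;
For instance, sin and cos are orthogonal on a full period under unit weighting (a quick numerical check, a sketch):&lt;br /&gt;
&lt;pre&gt;
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 100001)

# sin and cos are orthogonal on (0, 2*pi) with w(t) = 1.
print(np.trapz(np.sin(x) * np.cos(x), x))   # approximately 0

# sin is not orthogonal to itself: this integral is pi, not 0.
print(np.trapz(np.sin(x) * np.sin(x), x))   # approximately pi
&lt;/pre&gt;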
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
&lt;br /&gt;
Principal author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2478</id>
		<title>Orthogonal functions</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Orthogonal_functions&amp;diff=2478"/>
		<updated>2006-10-04T04:32:15Z</updated>

		<summary type="html">&lt;p&gt;Andrew: /* Can we write functions in an analogous way to the way we write vectors? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
In this article we will examine another viewpoint for functions than that traditionally taken.  Normally we think of a function &amp;lt;math&amp;gt; f(t) &amp;lt;/math&amp;gt; as a complicated entity - &amp;lt;math&amp;gt; f() &amp;lt;/math&amp;gt; in a simple environment (one dimension, or along the t axis).  Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite dimensional space).&lt;br /&gt;
&lt;br /&gt;
==Vectors==&lt;br /&gt;
Recall that vectors consist of an ordered set of numbers.  Often the numbers are Real numbers, but we shall allow them to be Complex for our purposes.  The numbers represent the amount of the vector in the direction denoted by the position of the number in the list.  Each position in the list is associated with a direction.  For example, the vector&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; means that the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction).  We say the component of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; in the second direction is 4.  This is often written as &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.&lt;br /&gt;
====Vector notation====&lt;br /&gt;
We don&#039;t have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead.  The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better.  For example we could say &amp;lt;math&amp;gt; v_2 = 4 &amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt; v_y = 4 &amp;lt;/math&amp;gt;.  Instead of writing &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt; we can write &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\hat \bold a_k &amp;lt;/math&amp;gt; denotes a basis vector in the kth direction, &amp;lt;math&amp;gt;v_1 = 1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt; v_2 = 4, &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_3 = 3&amp;lt;/math&amp;gt;.  The idea of basis vectors was implicit in the notation &amp;lt;math&amp;gt; \vec \bold v = &amp;lt;1, 4, 3&amp;gt; &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Inner products for vectors===&lt;br /&gt;
When vectors are real, the inner product (sometimes called the dot product) gives the component of one vector in another vector&#039;s direction, scaled by the magnitude (length) of the second vector.  Inner products are therefore useful for finding the components of vectors.  We commonly use a dot as the symbol for the inner product.  For example, the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt; is written:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Orthogonality for vectors====&lt;br /&gt;
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal).  With this arrangement the basis vectors have no components in each other&#039;s directions, which means that &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec \bold a_k \bullet \vec \bold a_n = w_k \delta_{k,n} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; w_k &amp;lt;/math&amp;gt; is the square of the length of &amp;lt;math&amp;gt; \vec \bold a_k &amp;lt;/math&amp;gt;, and the symbol &amp;lt;math&amp;gt; \delta_{k,n} &amp;lt;/math&amp;gt;, known as the [http://en.wikipedia.org/wiki/Kronecker_delta Kronecker delta], is one when k = n and zero otherwise.&lt;br /&gt;
=====Normalization=====&lt;br /&gt;
When &amp;lt;math&amp;gt; w_k = 1&amp;lt;/math&amp;gt; we have an orthonormal basis set, meaning that the set is orthogonal and the basis vectors are normalized to unity (have length one).  Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find.  The reason they are so handy is that each direction is uncoupled from the others.&lt;br /&gt;
&lt;br /&gt;
For example, to find &amp;lt;math&amp;gt; v_n &amp;lt;/math&amp;gt;, we take the inner product of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with a unit vector in the nth direction, &amp;lt;math&amp;gt; \vec \bold a_n &amp;lt;/math&amp;gt;.  We write this operation like this:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \vec \bold a_n = \sum_{k=1}^3 v_k \delta_{k,n} =  v_n &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose we have two vectors from an orthonormal system, &amp;lt;math&amp;gt; \vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt;.  Taking the inner product of these vectors, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 u_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components.  Also note that if we take the inner product of &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; with itself, we get&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \vec \bold v \bullet \vec \bold v = \sum_{k=1}^3 v_k \vec \bold a_k \bullet \sum_{m=1}^3 v_m \vec \bold a_m  = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \vec \bold a_k \bullet  \vec \bold a_m = \sum_{k=1}^3 v_k \sum_{m=1}^3  v_m \delta_{k,m} = \sum_{k=1}^3 v_k^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is the magnitude of the vector &amp;lt;math&amp;gt; \vec \bold v &amp;lt;/math&amp;gt; squared (&amp;lt;math&amp;gt; | \vec \bold v |^2 &amp;lt;/math&amp;gt;) from the Pythagorean Theorem.&lt;br /&gt;
&lt;br /&gt;
====Changing vector basis sets====&lt;br /&gt;
Sometimes in our studies we find it useful to change basis sets.  For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. &lt;br /&gt;
=====So, how do I change the basis set?=====&lt;br /&gt;
If the new basis set is orthonormal, it is really pretty simple: you project the vector you want changed onto each of the new basis vectors.  This means that the new components are just the inner products of the vector with the appropriate new basis vectors.  If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve.&lt;br /&gt;
&lt;br /&gt;
===More vector questions===&lt;br /&gt;
&lt;br /&gt;
[[Complex vector inner products|What if the vectors have complex components?]]&lt;br /&gt;
&lt;br /&gt;
[[Vector weighting functions|What if not all components of the vectors have the same units?]]&lt;br /&gt;
&lt;br /&gt;
[[Multiple dimensional vectors|What if there are more than three dimensions?]]&lt;br /&gt;
&lt;br /&gt;
==Functions and vectors, an analogy==&lt;br /&gt;
We may think of the direction number, &amp;lt;math&amp;gt; k &amp;lt;/math&amp;gt;, as the independent variable of a vector, and the component in that direction, &amp;lt;math&amp;gt; v_k &amp;lt;/math&amp;gt;, as the dependent variable of the vector &amp;lt;math&amp;gt; \vec \bold  v &amp;lt;/math&amp;gt;, much as we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f.  Probably the biggest difference here is that t often takes on real values from &amp;lt;math&amp;gt; - \infty &amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt; \infty &amp;lt;/math&amp;gt;, while &amp;lt;math&amp;gt; k \in \{1, 2, 3\} &amp;lt;/math&amp;gt;.  Using this analogy, we may think of a function as a vector having an uncountably infinite number of dimensions.  &lt;br /&gt;
&lt;br /&gt;
====Can we write functions in a way analogous to the way we write vectors?====&lt;br /&gt;
&lt;br /&gt;
Remember we wrote &amp;lt;math&amp;gt; \vec \bold v = \sum_{k=1}^3 v_k \hat \bold a_k &amp;lt;/math&amp;gt;.  Can we write something similar for a function f(t), defined for t an element of the reals?  Well, maybe: if the sum over the dummy index k becomes an integral over the dummy variable x, and the unit vectors &amp;lt;math&amp;gt; \hat \bold a_k &amp;lt;/math&amp;gt; are replaced with something like &amp;lt;math&amp;gt; \delta(x-t) &amp;lt;/math&amp;gt;, the [http://en.wikipedia.org/wiki/Delta_function Dirac delta function], the result would look something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; f(t) = \int_{- \infty}^\infty f(x) \delta (x-t) dx &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This works!  The Dirac delta functions, playing the role of the basis vectors, are called basis functions.  The function f(x) plays the role of the vector coefficients &amp;lt;math&amp;gt;v_k&amp;lt;/math&amp;gt;.  This gives us another way to think of the function f().&lt;br /&gt;
&lt;br /&gt;
===Inner products for functions===&lt;br /&gt;
[[Orthogonal functions#Inner products for vectors|Above]] we found that a vector inner product between &amp;lt;math&amp;gt;\vec \bold u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec \bold v &amp;lt;/math&amp;gt; could be written as &amp;lt;math&amp;gt; \vec \bold u \bullet \vec \bold v = \sum_{k=1}^3 u_k v_k &amp;lt;/math&amp;gt;.  If we follow our above analogy, we should be able to replace the sum over k with an integral over x.  There is one little notational problem: we don&#039;t want to confuse the functional inner product with a simple multiplication, so we need some new notation to denote this new inner product.  In [http://en.wikipedia.org/wiki/Quantum_mechanics quantum mechanics], physicists use the [http://en.wikipedia.org/wiki/Bra-ket_notation bra-ket] notation.  Let&#039;s borrow that.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \langle u | v \rangle = \int_{-\infty}^\infty u^*(x) v(x) dx &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note the complex conjugate on the function u(x).  It is there in case u(x) is a complex-valued function.  For the analogous case with vectors see [[Complex vector inner products]].&lt;br /&gt;
====Orthogonality for functions====&lt;br /&gt;
Two functions &amp;lt;math&amp;gt;u(t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v(t)&amp;lt;/math&amp;gt; are said to be orthogonal on the interval &amp;lt;math&amp;gt; (a,b) &amp;lt;/math&amp;gt; with respect to the weighting function &amp;lt;math&amp;gt; w(t) &amp;lt;/math&amp;gt; if and only if &lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b w(x) u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
The weighting function is often unity, but it is included so that different values of t can be weighted appropriately, in analogy to the way the weight &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; was used when the vector basis set was orthogonal but not orthonormal (that is, different basis vectors had different lengths), as we discussed [[Vector weighting functions|here]].  Unless otherwise noted we will use &amp;lt;math&amp;gt; w(t) = 1 &amp;lt;/math&amp;gt;, so that the defining relation for orthogonality of functions &amp;lt;math&amp;gt; u &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v &amp;lt;/math&amp;gt; becomes&lt;br /&gt;
&amp;lt;math&amp;gt;\int_a^b  u^*(x) v(x) dx = 0 &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
====Changing basis sets with functions====&lt;br /&gt;
====Examples====&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Reconstructing bandlimited signals from sample points]]&lt;br /&gt;
&lt;br /&gt;
==Other resources on orthogonality==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Inner_product Wikipedia inner product]&lt;br /&gt;
&lt;br /&gt;
Principal author of this page:  [[User:Frohro|Rob Frohne]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2501</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2501"/>
		<updated>2006-10-04T04:05:58Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
&lt;br /&gt;
===HW #1===&lt;br /&gt;
&lt;br /&gt;
9/21/06-10/02/06&lt;br /&gt;
&lt;br /&gt;
Follow Instructions Given in Handout&lt;br /&gt;
&lt;br /&gt;
===HW #2===&lt;br /&gt;
&lt;br /&gt;
9/29/06&lt;br /&gt;
&lt;br /&gt;
Look at the Wiki &amp;amp; add your personal page. Add one thing to improve the Wiki.&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2467</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2467"/>
		<updated>2006-10-02T08:14:23Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
&lt;br /&gt;
HW #1&lt;br /&gt;
&lt;br /&gt;
9/21/06-10/02/06&lt;br /&gt;
&lt;br /&gt;
Follow Instructions Given in Handout&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2466</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2466"/>
		<updated>2006-10-02T08:11:41Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
&lt;br /&gt;
HW #1&lt;br /&gt;
&lt;br /&gt;
Follow Instructions Given in Handout&lt;br /&gt;
&lt;br /&gt;
Assigned:&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2465</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2465"/>
		<updated>2006-10-02T08:11:01Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
&lt;br /&gt;
Below each assignment is the date that it was assigned and due date.&lt;br /&gt;
&lt;br /&gt;
HW #1&lt;br /&gt;
&lt;br /&gt;
Follow Instructions Given in Handout&lt;br /&gt;
&lt;br /&gt;
- 9/26/05&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2464</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2464"/>
		<updated>2006-10-02T08:10:51Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
&lt;br /&gt;
Below each assignment is the date that it was assigned and Due Date.&lt;br /&gt;
&lt;br /&gt;
HW #1&lt;br /&gt;
&lt;br /&gt;
Follow Instructions Given in Handout&lt;br /&gt;
&lt;br /&gt;
- 9/26/05&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2463</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2463"/>
		<updated>2006-10-02T08:10:43Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
Below each assignment is the date that it was assigned and Due Date.&lt;br /&gt;
&lt;br /&gt;
HW #1&lt;br /&gt;
&lt;br /&gt;
Follow Instructions Given in Handout&lt;br /&gt;
&lt;br /&gt;
- 9/26/05&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2462</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2462"/>
		<updated>2006-10-02T08:09:22Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2006 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. &lt;br /&gt;
Below each assignment is the date that it was assigned.&lt;br /&gt;
&lt;br /&gt;
HW #1&lt;br /&gt;
&lt;br /&gt;
Look at the Wiki &amp;amp; add your personal page. Spend two hours.&lt;br /&gt;
&lt;br /&gt;
- 9/26/05&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2461</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2461"/>
		<updated>2006-10-02T08:08:45Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2005 Homework Assignments==&lt;br /&gt;
&lt;br /&gt;
Assignments for this quarter will be listed here so that there is an easy place to look up the assignments. Below each assignment is the date that it was assigned.&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2460</id>
		<title>2006-2007 Assignments</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=2006-2007_Assignments&amp;diff=2460"/>
		<updated>2006-10-02T08:08:05Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Fall 2005 Homework Assignments==&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Signals_and_Systems&amp;diff=2472</id>
		<title>Signals and Systems</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Signals_and_Systems&amp;diff=2472"/>
		<updated>2006-10-02T08:07:15Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.wwc.edu/~frohro/ClassNotes/engr455index.htm Class notes for Signals &amp;amp; Systems]&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
*[[Orthogonal functions]]&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Fourier transform]]&lt;br /&gt;
*[[Sampling]]&lt;br /&gt;
*[[Discrete Fourier transform]]&lt;br /&gt;
*[[Fourier series - by Ray Betz|Signals and Systems - by Ray Betz]]&lt;br /&gt;
*[[FIR Filter Example]]&lt;br /&gt;
*[[2005-2006 Assignments]]&lt;br /&gt;
*[[2006-2007 Assignments]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I couldn&#039;t figure out how to get to other users&#039; pages easily, so I decided to start posting them here; please add yours:&lt;br /&gt;
&lt;br /&gt;
[[User:Frohro|Rob Frohne]]&lt;br /&gt;
&lt;br /&gt;
==2004-2005 contributors==&lt;br /&gt;
&lt;br /&gt;
[[User:Barnsa|Sam Barnes]]&lt;br /&gt;
&lt;br /&gt;
[[User:Santsh|Shawn Santana]]&lt;br /&gt;
&lt;br /&gt;
[[User:Goeari|Aric Goe]]&lt;br /&gt;
&lt;br /&gt;
[[User:Caswto|Todd Caswell]]&lt;br /&gt;
&lt;br /&gt;
[[User:Andeda|David Anderson]]&lt;br /&gt;
&lt;br /&gt;
[[User:Guenan|Anthony Guenterberg]]&lt;br /&gt;
&lt;br /&gt;
==2005-2006 contributors==&lt;br /&gt;
&lt;br /&gt;
[[User:GabrielaV|Gabriela Valdivia]]&lt;br /&gt;
&lt;br /&gt;
[[User:SDiver|Raymond Betz]]&lt;br /&gt;
&lt;br /&gt;
[[User:chrijen|Jenni Christensen]]&lt;br /&gt;
&lt;br /&gt;
[[User:wonoje|Jeffrey Wonoprabowo]]&lt;br /&gt;
&lt;br /&gt;
[[User:wilspa|Paul Wilson]]&lt;br /&gt;
&lt;br /&gt;
==2006-2007 contributors==&lt;br /&gt;
&lt;br /&gt;
[[User:Smitry|Ryan J Smith]]&lt;br /&gt;
&lt;br /&gt;
[[User:Nathan|Nathan Ferch]]&lt;br /&gt;
&lt;br /&gt;
[[User:Andrew|Andrew Lopez]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=Signals_and_Systems&amp;diff=2459</id>
		<title>Signals and Systems</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=Signals_and_Systems&amp;diff=2459"/>
		<updated>2006-10-02T07:59:23Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.wwc.edu/~frohro/ClassNotes/engr455index.htm Class notes for Signals &amp;amp; Systems]&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
*[[Orthogonal functions]]&lt;br /&gt;
*[[Fourier series]]&lt;br /&gt;
*[[Fourier transform]]&lt;br /&gt;
*[[Sampling]]&lt;br /&gt;
*[[Discrete Fourier transform]]&lt;br /&gt;
*[[Fourier series - by Ray Betz|Signals and Systems - by Ray Betz]]&lt;br /&gt;
*[[FIR Filter Example]]&lt;br /&gt;
*[[2005-2006 Assignments]]&lt;br /&gt;
&lt;br /&gt;
I couldn&#039;t figure out how to get to other users&#039; pages easily, so I decided to start posting them here; please add yours:&lt;br /&gt;
&lt;br /&gt;
[[User:Frohro|Rob Frohne]]&lt;br /&gt;
&lt;br /&gt;
==2004-2005 contributors==&lt;br /&gt;
&lt;br /&gt;
[[User:Barnsa|Sam Barnes]]&lt;br /&gt;
&lt;br /&gt;
[[User:Santsh|Shawn Santana]]&lt;br /&gt;
&lt;br /&gt;
[[User:Goeari|Aric Goe]]&lt;br /&gt;
&lt;br /&gt;
[[User:Caswto|Todd Caswell]]&lt;br /&gt;
&lt;br /&gt;
[[User:Andeda|David Anderson]]&lt;br /&gt;
&lt;br /&gt;
[[User:Guenan|Anthony Guenterberg]]&lt;br /&gt;
&lt;br /&gt;
==2005-2006 contributors==&lt;br /&gt;
&lt;br /&gt;
[[User:GabrielaV|Gabriela Valdivia]]&lt;br /&gt;
&lt;br /&gt;
[[User:SDiver|Raymond Betz]]&lt;br /&gt;
&lt;br /&gt;
[[User:chrijen|Jenni Christensen]]&lt;br /&gt;
&lt;br /&gt;
[[User:wonoje|Jeffrey Wonoprabowo]]&lt;br /&gt;
&lt;br /&gt;
[[User:wilspa|Paul Wilson]]&lt;br /&gt;
&lt;br /&gt;
==2006-2007 contributors==&lt;br /&gt;
&lt;br /&gt;
[[User:Smitry|Ryan J Smith]]&lt;br /&gt;
&lt;br /&gt;
[[User:Nathan|Nathan Ferch]]&lt;br /&gt;
&lt;br /&gt;
[[User:Andrew|Andrew Lopez]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=4087</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=4087"/>
		<updated>2006-10-02T07:57:54Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==The Comatose Stare==&lt;br /&gt;
[[Image:ALO.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Andrew - Origin: Greek, French | Meaning: Manly, Valiant, Courageous&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For more information about me click here --- &amp;gt; [http://www.mask.wwc.edu/profile/show/826 Mask]&lt;br /&gt;
&lt;br /&gt;
Add Me to your [http://www.myspace.com/andrewlop Myspace]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2457</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2457"/>
		<updated>2006-10-02T07:56:27Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Andrew Lopez==&lt;br /&gt;
[[Image:ALO.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Andrew - Origin: Greek, French | Meaning: Manly, Valiant, Courageous&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For more information about me click here --- &amp;gt; [http://www.mask.wwc.edu/profile/show/826 Mask]&lt;br /&gt;
&lt;br /&gt;
Add Me to your [http://www.myspace.com/andrewlop Myspace]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2456</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2456"/>
		<updated>2006-10-02T07:56:04Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Andrew Lopez==&lt;br /&gt;
[[Image:ALO.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Andrew - Origin: Greek French | Meaning: Manly, Valiant, Courageous&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For more information about me click here --- &amp;gt; [http://www.mask.wwc.edu/profile/show/826 Mask]&lt;br /&gt;
&lt;br /&gt;
Add Me to your [http://www.myspace.com/andrewlop Myspace]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2455</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2455"/>
		<updated>2006-10-02T07:52:19Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Andrew Lopez==&lt;br /&gt;
[[Image:ALO.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Andrew - Origin: Greek French | Meaning: Manly, Valiant, Courageous&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For more information about me click here --- &amp;gt; [http://www.mask.wwc.edu/profile/show/826 Mask]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=File:ALO.jpg&amp;diff=4090</id>
		<title>File:ALO.jpg</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=File:ALO.jpg&amp;diff=4090"/>
		<updated>2006-10-02T07:50:36Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2454</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2454"/>
		<updated>2006-10-02T01:49:27Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://fweb.wwc.edu/mediawiki/images/d/d8/DSCF0001.JPG Andrew]] - &#039;&#039;&#039;Origin: Greek French | Meaning: Manly, Valiant, Courageous&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For more information about me click here --- &amp;gt; [http://www.mask.wwc.edu/profile/show/826 Mask]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2444</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2444"/>
		<updated>2006-10-01T23:59:27Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://fweb.wwc.edu/mediawiki/images/d/d8/DSCF0001.JPG Andrew]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2443</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2443"/>
		<updated>2006-10-01T23:56:07Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2442</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2442"/>
		<updated>2006-10-01T23:55:19Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://fweb.wwc.edu/mediawiki/images/d/d8/DSCF0001.JPG Andrew Lopez]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2441</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2441"/>
		<updated>2006-10-01T23:55:03Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Andrew Lopez http://fweb.wwc.edu/mediawiki/images/d/d8/DSCF0001.JPG]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2440</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2440"/>
		<updated>2006-10-01T23:54:43Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://fweb.wwc.edu/mediawiki/images/d/d8/DSCF0001.JPG]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2439</id>
		<title>User:Andrew</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=User:Andrew&amp;diff=2439"/>
		<updated>2006-10-01T23:54:19Z</updated>

		<summary type="html">&lt;p&gt;Andrew: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:http://fweb.wwc.edu/mediawiki/images/d/d8/DSCF0001.JPG]]&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
	<entry>
		<id>https://fweb.wallawalla.edu/class-wiki/index.php?title=File:DSCF0001.JPG&amp;diff=4086</id>
		<title>File:DSCF0001.JPG</title>
		<link rel="alternate" type="text/html" href="https://fweb.wallawalla.edu/class-wiki/index.php?title=File:DSCF0001.JPG&amp;diff=4086"/>
		<updated>2006-10-01T23:51:22Z</updated>

		<summary type="html">&lt;p&gt;Andrew: Andrew Lopez &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Andrew Lopez&lt;/div&gt;</summary>
		<author><name>Andrew</name></author>
	</entry>
</feed>