Orthogonal functions
Introduction
In this article we will examine another viewpoint for functions than the one traditionally taken. Normally we think of a function, f(t), as a complicated entity, f(), in a simple environment (one dimension, or along the t axis). Now we want to think of a function as a vector or point (a simple thing) in a very complicated environment (possibly an infinite-dimensional space).
Vectors
Recall that vectors consist of an ordered set of numbers. Often the numbers are real numbers, but for our purposes we shall allow them to be complex numbers. The numbers represent the amount of the vector in the direction denoted by the position of the number in the list. Each position in the list is associated with a direction. For example, the vector v = (1, 4, 3) means that the vector is one unit in the first direction (often the x direction), four units in the second direction (often the y direction), and three units in the last direction (often the z direction). We say the component of v in the second direction is 4. This is often written as v_y = 4.
Notation
We don't have to use x, y, and z as the direction names; we can use numbers, like 1, 2, and 3 instead. The advantage of this is that it leads to more compact notation, and extends to more than three dimensions much better. For example, we could say v_2 = 4 instead of v_y = 4. Instead of writing v = (1, 4, 3) we can write v = sum_k v_k a_k, where a_k denotes a basis vector in the kth direction and v_k is the component of v in that direction. The idea of basis vectors was implicit in the ordered-list notation v = (1, 4, 3).
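The sum v = sum_k v_k a_k can be sketched directly in code. This is a minimal pure-Python illustration, assuming the standard 3-D Cartesian basis and the example components from the text:

```python
# Rebuilding a vector from its components and basis vectors,
# following v = sum_k v_k * a_k. The numbers are the
# illustrative 3-D example from the text.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # a_1, a_2, a_3
components = [1, 4, 3]                     # v_1, v_2, v_3

v = tuple(sum(c * a[i] for c, a in zip(components, basis))
          for i in range(3))
print(v)  # (1, 4, 3)
```

Summing the scaled basis vectors recovers exactly the ordered list of components we started with, which is the point of the notation.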
Changing Basis Sets
Sometimes in our studies we find it useful to change basis sets. For example, when solving a physics problem with cylindrical symmetry, it is often easier to use cylindrical coordinates, and the basis vectors that go with that system, rather than the more usual Cartesian coordinates and basis vectors. Let us remind ourselves how we do the transformation from one coordinate system to another.
Inner Products
When vectors are real, inner products, sometimes called dot products, give the component of one vector in another vector's direction, scaled by the magnitude (length) of the second vector. Inner products are useful for finding components of vectors. We commonly use a dot as the symbol for the inner product. For example, the inner product of v and w is written v . w.
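For real vectors with components in a common basis, the inner product can be sketched as a sum of products of like components. A minimal pure-Python version, with an arbitrary example pair of vectors:

```python
def inner(v, w):
    """Inner product (dot product) of two real vectors."""
    return sum(a * b for a, b in zip(v, w))

print(inner((1, 4, 3), (2, 0, 1)))  # 1*2 + 4*0 + 3*1 = 5
```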
Orthogonality for Vectors
It is quite handy to pick the directions used so that they are perpendicular (or orthogonal). With this arrangement the basis vectors have no components in each other's directions, which means that

a_k . a_n = c_k d_kn,

where c_k is the square of the length of a_k and the symbol d_kn (the Kronecker delta) is one when k = n and zero otherwise.
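This orthogonality relation is easy to check numerically. A small sketch using the standard Cartesian basis, where each basis vector has length one (so c_k = 1):

```python
def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

# The standard Cartesian basis: each vector has length one, and
# each is perpendicular to the others, so the inner product of
# the kth and nth basis vectors is 1 when k == n and 0 otherwise.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for k, ak in enumerate(basis):
    for n, an in enumerate(basis):
        assert inner(ak, an) == (1 if k == n else 0)
print("orthonormality verified")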
Normalization
When the c_k = 1, we have an orthonormal basis set, meaning that it is both orthogonal and that the basis vectors are normalized to unity (that is, have length one). Orthonormal vector systems are very popular; in fact, they are the most common vector systems you will find. The reason they are so handy is that each direction is uncoupled from the others.
For example, to find v_n, we take the inner product of the vector with a unit vector in the nth direction, a_n. We write this operation like this:

v_n = v . a_n
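Extracting a component this way can be sketched in a few lines, again assuming the standard Cartesian basis and the example vector from earlier:

```python
def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

v = (1, 4, 3)
a2 = (0, 1, 0)       # unit vector in the second direction
print(inner(v, a2))  # 4, the component of v in the second direction
```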
Suppose we have two vectors from an orthonormal system, v and w. Taking the inner product of these vectors, we get

v . w = sum_k v_k w_k
This shows that when we have an orthonormal vector space, inner products boil down to summing the products of like components. Also note that if we take the inner product of v with itself, we get

v . v = sum_k v_k^2 = |v|^2,

which is the magnitude of the vector squared, from the Pythagorean Theorem.
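The self-inner-product as a squared magnitude can be checked directly. A short sketch using the same example vector:

```python
import math

def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

v = (1, 4, 3)
mag_squared = inner(v, v)           # 1**2 + 4**2 + 3**2 = 26
magnitude = math.sqrt(mag_squared)  # the length |v|
print(mag_squared, magnitude)
```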
So, how do I change the basis set?
If the new basis set is orthonormal, it is really pretty simple. You need to project the vector you want changed onto each of the new basis vectors. This means that the new components are just the inner products of the vector with the appropriate new basis vectors. If the new basis set is not orthonormal, and if there are n dimensions in each basis set, you will have n coupled linear equations in n unknowns to solve.
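The orthonormal case can be sketched with a concrete change of basis. Here the new basis is a hypothetical illustrative choice, not from the text: the usual x-y axes of the plane rotated by 45 degrees (still orthonormal), so each new component is just a projection:

```python
import math

def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

# A hypothetical new orthonormal basis for the plane:
# the x-y axes rotated by 45 degrees.
s = 1 / math.sqrt(2)
b1, b2 = (s, s), (-s, s)

v = (1, 4)
# Each new component is the inner product of v with a new basis vector:
v_new = (inner(v, b1), inner(v, b2))
print(v_new)
```

No system of equations is needed because the rotated basis is orthonormal; each projection is computed independently of the others.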
More Vector Questions
What if the vectors have complex components?
What if not all components of the vectors have the same units?
Functions and Vectors, an Analogy
Independent and Dependent Variables
We may think of the number of the direction, k, as the independent variable of a vector, and the component in that direction, v_k, as the dependent variable of the vector, in a similar way to the way we think of t as the independent variable of a function f(), where f(t) is the dependent variable of f. Probably the biggest difference here is that t often takes on a continuum of real values, from minus infinity to infinity, while k takes on a discrete set of integer values.
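The analogy can be made concrete by sampling a function at discrete points, which turns it into an ordinary finite-dimensional vector. A small sketch (sin and the sample spacing are arbitrary illustrative choices, not from the text):

```python
import math

# A function sampled at discrete points behaves like a vector:
# the sample index k plays the role of the direction number, and
# the sample value f[k] plays the role of the component v_k.
ts = [k * 0.1 for k in range(10)]  # discrete stand-ins for t
f = [math.sin(t) for t in ts]      # the "components" of f

print(len(f), f[0])  # 10 components; the component in direction 0 is 0.0
```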
Changing Basis Sets with Functions
Inner Products
(Put more here.)
Orthogonality for Functions
(Put more here.)