As you recall, the ith coefficient in the series is multiplied by the ith "basis" function, and the results are added together in an infinite sum. If the sum converges at x, then t(a), as a function, is defined at x. I put the word basis in quotes because a basis, strictly speaking, spans only finite linear combinations. In this case the "basis" functions combine in an infinite series.
Assume the range of t is a vector space that has a dot product. Furthermore, assume the basis functions employed by t are orthogonal in the image space. In other words, whenever i ≠ j, the ith function (which is multiplied by ai in the series) and the jth function (which is multiplied by aj in the series) have a dot product of 0; that is, they are orthogonal.
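To make this concrete, here is a quick numerical check using the sine waves sin(i×π×x), a standard orthogonal family on [0,1]. The grid size and the trapezoid-rule dot product below are just one convenient way to approximate the integral.

```python
import numpy as np

# Dot product of two functions: the integral of their product over [0, 1],
# approximated here by the trapezoid rule on a fine grid.
x = np.linspace(0.0, 1.0, 10001)
f3 = np.sin(3 * np.pi * x)
f5 = np.sin(5 * np.pi * x)
print(np.trapz(f3 * f5, x))  # ~0: distinct sine waves are orthogonal
print(np.trapz(f3 * f3, x))  # ~0.5: a function dotted with itself
```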
The implementation of t() is pretty straightforward. Multiply the basis functions by their coefficients, and add up the results. In the real world, the terms of the series go to zero pretty quickly, so a computer can approximate t(a) without too much trouble. But what about the inverse operator u()? Given a function g, how can we recover the series?
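Here is a minimal sketch of t(), assuming the caller hands in the coefficients and a function that evaluates the ith basis function; the names t, coeffs, and basis are illustrative, not anything fixed above.

```python
import math

def t(coeffs, basis, x):
    """Evaluate the truncated series: the sum of coeffs[i] * basis(i, x)."""
    return sum(a * basis(i, x) for i, a in enumerate(coeffs))

# Example: with the basis f_i(x) = x**i / i!, the coefficient sequence
# (1, 1, 1, ...) is the Taylor series of e**x, and 20 terms suffice.
basis = lambda i, x: x**i / math.factorial(i)
print(t([1.0] * 20, basis, 1.0))  # ~2.718281828..., i.e. e
```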
Let g be the sum of ai×fi, as i runs from 0 to infinity. Here ai is the coefficient from the series, yet to be determined, and fi is the ith basis function. Take the dot product of g and fi, and assume the series can be integrated term by term. (This is the case when convergence is uniform.) The integral of fj×fi is zero whenever j ≠ i, because distinct basis functions are orthogonal. We are left with ai times the integral of fi×fi. Divide through by that integral, and we have the following formula for the ith coefficient in the series, and a practical way to implement u().
ai = (∫ g×fi) / (∫ fi×fi)
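A sketch of u() under these assumptions, replacing the exact integrals with trapezoid-rule sums. The names u, basis, and samples are mine, and the caller must supply a basis that really is orthogonal on the chosen interval.

```python
import numpy as np

def u(g, basis, n, a=-1.0, b=1.0, samples=10001):
    """Recover the first n coefficients of g's series via
    ai = (integral of g*fi) / (integral of fi*fi) on [a, b]."""
    x = np.linspace(a, b, samples)
    gx = g(x)
    coeffs = []
    for i in range(n):
        fi = basis(i, x)
        num = np.trapz(gx * fi, x)  # ∫ g×fi
        den = np.trapz(fi * fi, x)  # ∫ fi×fi
        coeffs.append(num / den)
    return coeffs

# Example: the sine waves sin((i+1)*pi*x) are orthogonal on [0, 1], so
# g(x) = 2*sin(pi*x) + 5*sin(3*pi*x) should come back as [2, 0, 5, 0].
sine = lambda i, x: np.sin((i + 1) * np.pi * x)
g = lambda x: 2 * np.sin(np.pi * x) + 5 * np.sin(3 * np.pi * x)
print(np.round(u(g, sine, 4, a=0.0, b=1.0), 6))
```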
If the functions are complex, we need to conjugate the second factor fi in each of the two integrands. This comes from the definition of the dot product in a complex space: g.fi is the integral of g times the conjugate of fi.
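For instance, the complex exponentials e^(2πinx) are orthogonal on [0,1] under this dot product. The particular g below is a made-up test case, but it shows the conjugation at work.

```python
import numpy as np

# g is built from two complex exponentials; the coefficient of
# f2(x) = exp(2j*pi*x) should come out to 3.
x = np.linspace(0.0, 1.0, 10001)
g = 3 * np.exp(2j * np.pi * x) + np.exp(4j * np.pi * x)
f2 = np.exp(2j * np.pi * x)
num = np.trapz(g * np.conj(f2), x)        # ∫ g × conj(f2)
den = np.trapz(f2 * np.conj(f2), x).real  # ∫ f2 × conj(f2), a real number
print(num / den)  # ~(3+0j)
```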
Let g be a function that arises from a scientific experiment. It exists as a graph, or as points in the xy plane, and you would like to find an accurate polynomial approximation. You might be able to estimate the first derivative (slope) at 0, and maybe even the second derivative (curvature), but the higher derivatives cannot be inferred. And the slightest error in a derivative estimate is magnified as you move away from the origin, since the nth coefficient is multiplied by x^n. Clearly another method is called for.
The Legendre polynomial ln(x) is an nth degree polynomial that is part of an orthogonal basis. There is one Legendre polynomial for each degree n. Note that the Legendre polynomials can be combined to produce a Taylor series, and conversely, a Taylor series implies a unique Legendre series. Either set is a basis for the analytic functions on the interval [-1,1]; however, the Legendre polynomials form an orthogonal basis. A computer can derive the Legendre series, or at least a finite Legendre approximation (the first n coefficients), without too much trouble. If you stop at degree n, the resulting polynomial approximation is as accurate, in the least squares sense, as an nth degree polynomial can be.
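As one way to do this (a sketch, not the only route), numpy's Legendre module will fit such a series directly. The noisy samples below are a stand-in for experimental data, and legfit's least-squares answer matches the projection formula above when the samples cover [-1,1] densely.

```python
import numpy as np
from numpy.polynomial import legendre

# Stand-in for experimental data: noisy samples of an unknown g on [-1, 1].
x = np.linspace(-1.0, 1.0, 400)
y = np.exp(x) + np.random.normal(scale=0.01, size=x.size)

# The degree-5 Legendre series fit: the best degree-5 polynomial
# approximation to the data in the least squares sense.
coeffs = legendre.legfit(x, y, deg=5)
approx = legendre.legval(x, coeffs)
print("max deviation from e^x:", np.max(np.abs(approx - np.exp(x))))
```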
I've already described the Legendre polynomials in detail in another section.