Polynomial and Functional Vector Spaces


Definition


Functional Vector Space

Let $V$ be a vector space over a field $F$ and let $A$ be any set. For functions $f : A \rightarrow V$ and $g : A \rightarrow V$, and for any $x \in A$, $c \in F$, we define
           1)  $(f + g)(x) = f(x) + g(x)$
           2)  $(cf)(x) = cf(x)$

 
Polynomial Vector Space

Let $P(\mathbb{R})$ be the set of all polynomials with coefficients from the field $\mathbb{R}$, and let
$P_1(x), P_2(x) \in P(\mathbb{R})$. Then $P(\mathbb{R})$ is a vector space, since
            1)  $0 \in P(\mathbb{R})$
            2)  $P_1 + P_2 \in P(\mathbb{R})$
            3)  $c\cdot P_1 \in P(\mathbb{R})$, $\forall c \in \mathbb{R}$

 


Motivation

One must have spent a considerable amount of time understanding what exactly a vector space is before reaching this point. We are also aware that a vector space can be much more than just a Euclidean space: it is any setting in which we can perform the linear algebraic operations of addition and scalar multiplication. Suppose we take any two shapes and combine them; then the combined area is the sum of their areas, and the same goes for scalar multiplication. So we can see those shapes as vectors, since they follow the vector space axioms. Many things other than the typically considered arrows and arrays of numbers can be modeled as vectors, often without our being aware of it.
        We are very familiar with the method of 'integration' and its properties. But what do we integrate, and how does it work? We integrate 'functions', and when we do, we compute the area under the curve of that function between certain bounds. So here we will use an analogy similar to adding the areas of shapes and see how integrable functions can also form a vector space.


Bird's Eye View

Consider the set of functions $F = \{ f(x) |  \int_{a}^{b} f(x) dx \in \mathbb{R}\}$.
We already know the properties of integrable functions, which are as follows.

If $f, g \in F$  then,

         1)  $\int_{a}^{b} ( f(x) + g(x) ) dx$ = $\int_{a}^{b} f(x) dx$ + $\int_{a}^{b} g(x) dx$

         2)  $\int_{a}^{b} c f(x) dx = c \int_{a}^{b} f(x) dx$

         3)  $\int_{a}^{b} (-f(x)) dx$ = - $\int_{a}^{b}  f(x) dx$

         4)  $\int_{a}^{a} f(x)  dx$ = 0

There are other properties as well, which are left for the reader to list; observe that $F$ satisfies all the axioms of a vector space, since these properties are consequences of the corresponding properties of the real numbers. Hence, integrable functions can be accepted as vectors, and the set $F$ as a vector space.
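As an informal numerical illustration (not a proof), here is a small Python sketch that checks properties 1) and 2) above for two sample integrable functions on $[0, 1]$. The helper name integrate, the midpoint rule, and the particular choices of $f$, $g$, $a$, $b$, $c$ are all our own, chosen only for demonstration.

    import math

    def integrate(f, a, b, n=100_000):
        """Approximate the integral of f over [a, b] with the midpoint rule."""
        h = (b - a) / n
        return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

    f = math.exp          # f(x) = e^x
    g = math.cos          # g(x) = cos x
    a, b, c = 0.0, 1.0, 3.0

    # Property 1: the integral of (f + g) equals the sum of the integrals.
    lhs = integrate(lambda x: f(x) + g(x), a, b)
    rhs = integrate(f, a, b) + integrate(g, a, b)
    print(abs(lhs - rhs) < 1e-9)        # True (up to floating-point error)

    # Property 2: the integral of (c*f) equals c times the integral of f.
    lhs = integrate(lambda x: c * f(x), a, b)
    rhs = c * integrate(f, a, b)
    print(abs(lhs - rhs) < 1e-9)        # True (up to floating-point error)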


Figure 1: Integrable functions form a vector space.

Context of the Definition


A function is a relation between two sets that assigns to every element of the first set exactly one element of the second. It behaves like a machine that takes an input from one set and gives an output in the other set. If the outputs of such functions lie in a vector space, then the set of all these functions is called a functional vector space. Functions are just another kind of vector, having all the vector-ish properties.
Let $V$ be a vector space over a field $F$. If we define two functions $f : A \rightarrow V$ and $g : A \rightarrow V$, then for any $x \in A$ and $c \in F$,
              1)  $(f + g)(x) = f(x) + g(x)$
              2)  $(cf)(x) = cf(x)$

Note:  $(f + g)(x)$ and $(cf)(x)$ are the notations for the new functions obtained after adding and scaling the functions, respectively.
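A minimal Python sketch of these pointwise definitions, assuming nothing beyond the standard library (the helper names add and scale are ours): adding or scaling functions yields a new function.

    import math

    def add(f, g):
        """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
        return lambda x: f(x) + g(x)

    def scale(c, f):
        """Pointwise scalar multiple: (c f)(x) = c * f(x)."""
        return lambda x: c * f(x)

    h = add(math.exp, math.cos)     # h(x) = e^x + cos x
    k = scale(2.0, math.sin)        # k(x) = 2 sin x

    print(h(0.0))            # 2.0, since e^0 + cos 0 = 1 + 1
    print(k(math.pi / 2))    # 2.0, since 2 * sin(pi/2) = 2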


Figure 2: Addition of two functions

The above illustration works in the same manner as adding two vectors coordinate by coordinate in Euclidean space. Just as with those vectors, we can add two functions, or multiply a function by a scalar, to obtain a new function.


Figure 3: Scalar multiplication of a function

Examples

1)  If $V$ is the set of all functions $f: \mathbb{R} \rightarrow \mathbb{R}$, then it forms a vector space over the field $\mathbb{R}$.
Consider $f_1, f_2 \in V$ where $f_1(x) = e^x$ and $f_2(x) = \cos x$ for all $x \in \mathbb{R}$.
We know, $(f_1 + f_2)(x) = f_1(x) + f_2(x)$
$\implies (f_1 + f_2)(x) = e^x + \cos x \in V$
and $(kf_1)(x) = kf_1(x) = ke^x \in V$, where $k \in \mathbb{R}$.
Therefore, $V$ is closed under vector addition and scalar multiplication.
As a consequence of the closure of $V$ under scalar multiplication, taking $k = 0$ gives
$0\cdot f_1 = 0 \in V$.
Hence, $V$ is a vector space over $\mathbb{R}$.


2)  Let $V$ be the set $V = \{f | f(x) = a \cos x + b \sin x$ for all $x \in \mathbb{R}$, where $a, b \in \mathbb{R}\}$.
We'll show that this set of functions $V$ forms a functional vector space.
Suppose $f, g \in V$ where $f(x) = a \cos x + b \sin x$ and $g(x) = c \cos x + d \sin x$
for some $a, b, c, d \in \mathbb{R}$.
We have, $( f + g )(x) = f(x) + g(x)$
$\implies ( f + g )(x) = a \cos x + b \sin x + c \cos x + d \sin x$
$\implies ( f + g )(x) = (a + c) \cos x + (b + d) \sin x \in V$
and, for a scalar $k \in \mathbb{R}$, $(k f)(x) = kf(x)$
$\implies (k f)(x) = k (a \cos x + b \sin x) = (ka) \cos x + (kb) \sin x \in V$
Also, the zero function $0 \cdot \cos x + 0 \cdot \sin x \in V$.
Hence, $V$ forms a functional vector space.
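A hedged Python sketch of this example (the representation and helper names are our own): each function $a \cos x + b \sin x$ is stored as its coefficient pair $(a, b)$, and addition and scalar multiplication act on the pairs exactly as they do on vectors in $\mathbb{R}^2$.

    import math

    def evaluate(p, x):
        """Evaluate the function a*cos(x) + b*sin(x) stored as the pair p = (a, b)."""
        a, b = p
        return a * math.cos(x) + b * math.sin(x)

    def add(p, q):
        """(f + g) corresponds to adding the coefficient pairs."""
        return (p[0] + q[0], p[1] + q[1])

    def scale(k, p):
        """(k f) corresponds to scaling the coefficient pair."""
        return (k * p[0], k * p[1])

    f = (1.0, 2.0)        # f(x) =   cos x + 2 sin x
    g = (3.0, -1.0)       # g(x) = 3 cos x -   sin x
    h = add(f, g)         # h(x) = 4 cos x +   sin x

    x = 0.7
    # The pointwise sum of f and g agrees with the coefficient-pair sum h.
    print(abs((evaluate(f, x) + evaluate(g, x)) - evaluate(h, x)) < 1e-12)   # True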

 

A set of 'Polynomial Functions' is a Vector Space too!

We already know what functions are: we get another function when we add them, multiply them by scalars, take their derivatives, integrate them, plot their graphs, and so on, so nothing is new here. What is new is to check how they behave like vectors.

A polynomial is a kind of function whose variables have only non-negative integer exponents. We will briefly discuss here how a polynomial function behaves like a vector and how the set of all polynomial functions forms a vector space. It is also clear that polynomials satisfy the distributive, associative, and commutative properties.

Consider the set $P(\mathbb{R})$ of polynomial functions with coefficients from the field $\mathbb{R}$.
Let there be two elements of $P(\mathbb{R})$, say $P_1 = a_0 + a_1x + a_2x^2 + a_3x^3 + \ldots + a_mx^m$ and
$P_2 = b_0 + b_1x + b_2x^2 + \ldots + b_nx^n$, of degree $m$ and $n$ respectively, where $a_0, a_1, \ldots, a_m, b_0, b_1, \ldots, b_n \in \mathbb{R}$ and $a_m \neq 0$, $b_n \neq 0$.
Now there are certain things to be kept in mind:

  • These polynomials are said to be equal $\iff$ $m = n$ and $a_i = b_i$ for all $i$.
  • We define the degree of the zero polynomial ($0 + 0x + 0x^2 + 0x^3 + \ldots + 0x^n$) to be $-\infty$, and a nonzero constant polynomial $a_0$ to have degree 0.

We've already seen that the zero polynomial exists in the set. Addition and scalar multiplication are defined "component-wise" in $P(\mathbb{R})$.
More precisely, if $P_1$ and $P_2$ are two arbitrary polynomials of degree $m$ and $n$ respectively, with $m > n$, then

The addition of the two polynomials is $P_1 + P_2 = (a_0 + a_1x + a_2x^2 + a_3x^3 + \ldots + a_mx^m) + (b_0 + b_1x + b_2x^2 + \ldots + b_nx^n)$
$\implies P_1 + P_2 = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2 + \ldots + (a_n + b_n)x^n + a_{n+1}x^{n+1} + \ldots + a_mx^m$,
which again lies in $P(\mathbb{R})$, so $P(\mathbb{R})$ is closed under addition.
   
Let $c \in \mathbb{R}$.
$c P_1 = c (a_0 + a_1x + a_2x^2 + a_3x^3 + \ldots + a_mx^m)$
$\implies cP_1 = ca_0 + ca_1x + ca_2x^2 + ca_3x^3 + \ldots + ca_mx^m \in P(\mathbb{R})$
So $P(\mathbb{R})$ is also closed under scalar multiplication.

So, now we can conclude that the set of Polynomial functions is a vector space.
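Here is an informal Python sketch of this component-wise arithmetic (the helper names poly_add and poly_scale are ours, and coefficients are stored lowest degree first, so [a0, a1, a2] means $a_0 + a_1x + a_2x^2$):

    from itertools import zip_longest

    def poly_add(p, q):
        """Add two polynomials given as coefficient lists, padding the shorter one with zeros."""
        return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

    def poly_scale(c, p):
        """Multiply every coefficient by the scalar c."""
        return [c * a for a in p]

    p1 = [1, 0, 2, 5]     # 1 + 2x^2 + 5x^3   (degree m = 3)
    p2 = [4, -3]          # 4 - 3x            (degree n = 1)

    print(poly_add(p1, p2))     # [5, -3, 2, 5]  ->  5 - 3x + 2x^2 + 5x^3
    print(poly_scale(3, p1))    # [3, 0, 6, 15]  ->  3 + 6x^2 + 15x^3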

 

Finite and Infinite Dimensional Vector Spaces

The vector space $V$ over a field $F$ is said to be a Finite-Dimensional Vector Space if it is spanned by a finite set of vectors $\{ v_1, v_2, . . . . . . , v_k \}$. If $V$ cannot be spanned by a finite set of vectors, then $V$ is said to be an Infinite-Dimensional Vector Space.


A Euclidean space $\mathbb{R}^n$ is a finite-dimensional vector space. The standard basis of $\mathbb{R}^n$ is $\{ e_i \}_{i=1}^{n}$, where $e_i$ is the vector with 1 in the $i$th position and 0 elsewhere. A natural generalization of the finite-dimensional Euclidean space $\mathbb{R}^n$ is the sequence space $\mathbb{R}^{\infty}$, which is infinite-dimensional: no finite set of vectors in $\mathbb{R}^{\infty}$ spans $\mathbb{R}^{\infty}$.
Similarly, the vector space $P(\mathbb{R})$ of all polynomial functions with real coefficients is not spanned by any finite set of vectors, and is therefore also an infinite-dimensional vector space.
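To see why no finite set can span $P(\mathbb{R})$, here is a short sketch of the standard argument. Suppose $P(\mathbb{R})$ were spanned by a finite set $\{q_1, q_2, \ldots, q_k\}$, and let $d$ be the largest degree among $q_1, \ldots, q_k$. Every linear combination $c_1 q_1 + c_2 q_2 + \ldots + c_k q_k$ has degree at most $d$, so the polynomial $x^{d+1}$ belongs to $P(\mathbb{R})$ but not to the span. Hence no finite set spans $P(\mathbb{R})$, and it is infinite-dimensional.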

 


Applications

Function spaces have played an important role in the applied sciences as well as in mathematics itself. They were used in the development of the modern analysis of partial differential operators, distribution theory, numerical analysis, integral equations, approximation theory, and so on.
Ordinary calculus involves taking limits in finite-dimensional vector spaces ($\mathbb{R}$ or $\mathbb{R}^n$), but solving the problems listed above requires calculus in function spaces, which are infinite-dimensional.


History

The evolution of functional analysis, with its broad range of applications, was one of the major accomplishments of 20th-century mathematics. Many books and some very important articles have been dedicated to research on its origin and evolution. One part of functional analysis is the study of "function spaces": topological spaces whose points are functions.
The notion of a function space lay dormant in the 19th century. It took half a century to arrive at the concept of function spaces, which would not have been possible without set theory and the concepts of general topology. The growth of function spaces owes much to the theories of differential and integral equations and to the development of "modern" algebra. The idea of a function was earlier taken for granted by most mathematicians, but it then began to unfold slowly. The work of Leonhard Euler (1707-1783) was based on real special functions as they appeared in many applications such as geometry, astronomy, and probability. Euler visualized arbitrary functions as being given by their graphs, but he did not set out systematically to develop this theory. The same idea was later substantiated remarkably by Joseph Fourier (1768-1830), who introduced the orthogonality of the trigonometric system of functions, leading to the trigonometric series now named Fourier series.
After this, Gustav Lejeune Dirichlet, who knew Fourier, gave the first proof of convergence of Fourier series (under suitable conditions) in 1829. Dirichlet also gave the modern definition of an arbitrary function. It still took long to consolidate all these theories, and further conclusions were drawn by other mathematicians. The notion of a function was still developing, and that of space was still at an initial stage. The general idea of space was suggested by mechanics: dynamical systems are those whose configurations depend on arbitrarily many coordinates. Such spaces were studied by mathematicians in the 19th century, and in 1844 Arthur Cayley gave a theory of the analytical geometry of $n$ dimensions; in the same year Hermann Grassmann published his work on $n$-dimensional vector spaces. Later, Bernhard Riemann, an initiator of topology, observed that the theory of continuous quantities can disregard metric properties. He broadened the conceptual nature of space in geometry and formulated, in a nutshell, the notion of infinite-dimensional "function spaces".


Pause and Ponder

1)  Let $V$ be a vector space, let $W$ be a subspace of $V$, and let $A$ be a set.
     Consider functions $f: A \rightarrow V$ and $g: A \rightarrow W$. Is the space of all such functions $g$ a subspace of the space of all such functions $f$?

2)  Will the set of all differentiable real functions form a vector space?




Further Reading

[1]  https://dmpeli.math.mcmaster.ca/TeachProjects/Math1B03/Slides/lesson33.pdf

[2]  https://math.stackexchange.com/questions/902133/intuitive-and-convincing-argument-that-functions-are-vectors

[3]  http://www.math.lsa.umich.edu/~kesmith/infinite.pdf



Contributor:
Mentor & Editor:
Verified by:
Approved On:

The following notes and their corresponding animations were created by the above-mentioned contributor and are freely available under the CC (BY-SA) licence. The source code for the said animations is available on GitHub and is licensed under the MIT licence.




The work on this website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA).