We know Taylor's theorem:
\[ f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (x-a)^k \qquad (1)\]
Taking \(x = a + h\), we get
\[ f(a+h) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} h^k \qquad (2)\]
\[ f(a+h) = \left( \sum_{k=0}^{\infty} \frac{h^k}{k!} \frac{d^k}{da^k} \right) f(a) \qquad (3)\]
\[ \sum_{k=0}^{\infty} \frac{h^k}{k!} \frac{d^k}{da^k} = e^{h \frac{d}{da}} \qquad (4)\]
\[ f(a+h) = e^{h \frac{d}{da}} f(a) \qquad (5)\]
This is what I just learned. Well, does exponentiating a differential operator make any sense? We always take the derivative of some function (the function acts as an operand). What is an operator without an operand? Separating the differential operator in (3) from its function seems unreasonable.
What do you think about (3), (4), and (5)? I think it's garbage.
Raising a value to the power of some 'operator' seems like nonsense. Please explain this to me. How do I justify that what I have been taught is correct and not meaningless?
And also tell me about the so-called shift operator: it can be obtained by removing \(f(a)\) from expression (5). What does it do?
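For what it's worth, the formula does check out mechanically on simple examples (my own check, not from the lesson): for \(f(a) = a^2\) the series terminates after three terms,
\[ e^{h \frac{d}{da}} a^2 = a^2 + h(2a) + \frac{h^2}{2!}(2) = a^2 + 2ah + h^2 = (a+h)^2, \]
which is exactly \(f(a+h)\). My problem is with the meaning, not the arithmetic.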
Comments
This is a very short and incomplete answer to your question. Think of the exponential of an operator as another operator.

Before addressing your specific question: in general, you can make sense of functions of an operator. Remember that an operator is defined by the way it acts on the underlying object. To see how to make sense of functions of operators, let us consider something which we know well and use frequently: matrix algebra.

Remember that a matrix \(A\) is a linear operator and is defined by the way it acts on a vector \(x\). Have you ever wondered what \(A^2\) means, i.e. what is the square of a linear operator? In general, have you ever wondered what the product of two matrices/linear operators means, and why we multiply matrices in the weird way we do? Matrix multiplication was initially defined by Cayley in 1858, in order to reflect the effect of composition of linear transformations (see paragraph 3 at this link): if we have the matrix/operator \(A\) that performs the linear transformation \(x \to Ax\) and the matrix/operator \(B\) that performs the linear transformation \(x \to Bx\), then \(BA\) denotes the operator that performs the linear transformation \(x \to B(Ax)\).

In the above lines, we tried to make sense of the multiplication of two linear operators/matrices. You can use this idea inductively to define what \(A^n\) and \(A^n x\) mean for \(n \in \mathbb{Z}^+\). Once you have this, you can make sense of matrix exponentiation, i.e. \(e^A\). It is the unique operator that acts on any vector \(x\) and outputs
\[ x + \sum_{n=1}^{\infty} \frac{A^n x}{n!}. \]
(Remember that for what we have written on the right side to make sense, you need to prove that this converges for all \(x\).)

To cut the long story short, \(e^{d/dx}\) is an operator that acts on a function \(f\) as follows:
\[ \left(e^{d/dx}\right) f = f + \sum_{n=1}^{\infty} \frac{1}{n!} \left(\frac{d}{dx}\right)^n f = f + \sum_{n=1}^{\infty} \frac{1}{n!} \frac{d^n f(x)}{dx^n}. \]
You can also make sense of the operator \(e^{!}\) as follows:
\[ e^{!}(m) = m + \sum_{n=1}^{\infty} \frac{1}{n!} (!)^n m, \quad \text{where } (!)^n m = \underbrace{\bigl(((m!)!)!\cdots\bigr)!}_{n \text{ factorials}}. \]
However, you will find that the series on the right converges/makes sense only for \(m = 0, 1\) and \(2\).
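To see both exponentials in action, here is a minimal numerical sketch (my own illustration, not part of the explanation above; it assumes numpy and scipy, and the truncation depths are arbitrary). It checks that the truncated series for \(e^A\) agrees with scipy.linalg.expm, and that \(e^{d/dx}\) applied to \(\sin\) reproduces \(\sin(x+1)\), i.e. that it shifts the argument by 1:

```python
# Numerical sketch: truncated exponential series for an operator.
import math
import numpy as np
from scipy.linalg import expm

# 1) e^A as the truncated series I + A + A^2/2! + ...; it should
#    agree with scipy's built-in matrix exponential.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
series = np.eye(2)
term = np.eye(2)
for n in range(1, 25):
    term = term @ A / n           # term is now A^n / n!
    series += term
print(np.allclose(series, expm(A)))    # True

# 2) (e^{d/dx} f)(x) = sum_n f^(n)(x) / n!, which should equal
#    f(x + 1). For f = sin, the derivatives cycle with period 4.
x = 0.3
derivs = [np.sin, np.cos, lambda t: -np.sin(t), lambda t: -np.cos(t)]
shifted = sum(derivs[n % 4](x) / math.factorial(n) for n in range(30))
print(np.isclose(shifted, np.sin(x + 1.0)))  # True
```

Both checks print True, which is exactly the "shift operator" behaviour asked about: exponentiating \(d/dx\) produces the operator that translates a function's argument.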
Good to see you here, Marvis! Great explanation as always (assuming familiarity with matrices).