A Problem of Coefficient Determination in Parabolic Equations Solved as Moment Problem

The problem is to find $a(t)$ and $w(x,t)$ such that $w_t = a(t)(w_x)_x + r(x,t)$ under the initial condition $w(x,0) = \phi(x)$ and the boundary conditions $w(0,t) = 0$, $w_x(0,t) = w_x(1,t) + \alpha w(1,t)$ on the region $D = \{(x,t) : 0 < x < 1,\ t > 0\}$. In addition it must be fulfilled that $\int_0^1 w(x,t)\,dx = E(t)$, where $\phi(x)$, $r(x,t)$ and $E(t)$ are known functions and $\alpha$ is an arbitrary nonzero real number. The objective is to solve the problem as an application of the inverse moment problem. An approximate solution is found, together with bounds for the error of the estimated solution, using moment-problem techniques. In addition, the method is illustrated with several examples.


Introduction
We want to find $a(t)$ and $w(x,t)$ such that
$$w_t = a(t)\,(w_x)_x + r(x,t)$$
under the initial condition $w(x,0) = \phi(x)$ (1) and the boundary conditions $w(0,t) = 0$, $w_x(0,t) = w_x(1,t) + \alpha w(1,t)$ (2) on the region $D = \{(x,t) : 0 < x < 1,\ t > 0\}$. In addition it must be fulfilled that $\int_0^1 w(x,t)\,dx = E(t)$ (3), where $\phi(x)$, $r(x,t)$ and $E(t)$ are known functions and $\alpha$ is an arbitrary nonzero real number.
We also assume that the underlying space is $L^2(D)$. This problem is studied in [1]. Citing the abstract of this work: "this paper investigates the inverse problem of simultaneously determining the time-dependent thermal diffusivity and the temperature distribution in a parabolic equation in the case of nonlocal boundary conditions containing a real parameter and integral overdetermination conditions, and under some consistency conditions on the input data the existence, uniqueness and continuously dependence upon the data of the classical solution are shown by using the generalized Fourier method".
In general, the methods applied to solve this kind of problem are varied. Other works that solve the parabolic equation under different conditions are [2,3,4].
There is a great variety of inverse problems in which a parabolic equation must be solved and, additionally, an unknown parameter must be determined [5,6,7], to name a few examples.
The objective of this work is to show that we can solve the problem using the techniques of inverse moments problem.
We focus the study on the numerical approximation.
First, an exact expression for $a(t)w(1,t)$ is deduced. Then, writing $w^*(x,t) = a(t)w(x,t)$, in a first step an integral equation is solved numerically, where $\psi_1(m)$ is written in terms of known expressions and $G(x,t)$ is the function to be determined. In a second step a further integral equation is solved, in which $w^*(x,t)$ is the unknown function and $\psi_2(m,n)$ is an expression in terms of $G(x,t)$, with the kernel $K(m,n,x,t)$ known. Both integral equations are solved numerically by applying moment-problem techniques.
Then we find an approximation for $w(x,t)$, written $w_{Ap}(x,t)$, using the solution found in the second step and condition (3). Finally, we find an approximation for $a(t)$ using $a(t)w(1,t)$ and $w_{Ap}(x,t)$.

Inverse Generalized Moment Problem
The $d$-dimensional generalized moment problem [8,9,10] and [11,12] can be posed as follows: find a function $f$ on a domain $\Omega \subset \mathbb{R}^d$ satisfying the sequence of equations
$$\int_\Omega f(x)\,g_i(x)\,dx = \mu_i, \quad i \in \mathbb{N} \qquad (4)$$
where $(g_i)$ is a given sequence of linearly independent functions in $L^2(\Omega)$ and the sequence of real numbers $\{\mu_i\}_{i \in \mathbb{N}}$ are the known data.
The Hausdorff moment problem is a classic example: find a function $f(x)$ on $(a,b)$ such that
$$\int_a^b f(x)\,x^i\,dx = \mu_i, \quad i \in \mathbb{N}.$$
In this case $g_i(x) = x^i$, $i \in \mathbb{N}$. If the interval of integration is $(0,\infty)$ we have the Stieltjes moment problem; if it is $(-\infty,\infty)$, the Hamburger moment problem.
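As a concrete instance of the data of a Hausdorff problem, the moments of a known function can be computed both in closed form and by quadrature. The sketch below is our own illustration, not from the paper: for $f(x) = e^x$ on $(0,1)$, integration by parts gives the recursion $\mu_0 = e - 1$, $\mu_i = e - i\,\mu_{i-1}$.

```python
import math

# Illustrative sketch (our own, not from the paper): Hausdorff moments
# mu_i = ∫_0^1 f(x) x^i dx for f(x) = e^x, i.e. g_i(x) = x^i on (0, 1).
# Integration by parts gives mu_i = e - i * mu_{i-1}, with mu_0 = e - 1.

def moments_exact(n):
    mu = [math.e - 1.0]
    for i in range(1, n):
        mu.append(math.e - i * mu[-1])
    return mu

def moments_quadrature(n, steps=2000):
    # midpoint rule as an independent numerical check
    h = 1.0 / steps
    xs = [(k + 0.5) * h for k in range(steps)]
    return [sum(math.exp(x) * x**i for x in xs) * h for i in range(n)]

exact = moments_exact(4)
quad = moments_quadrature(4)
print([round(m, 4) for m in exact])  # [1.7183, 1.0, 0.7183, 0.5634]
```

With the $\mu_i$ in hand, the inverse problem asks for $f$ given only these numbers and the functions $g_i$.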
It can be proved [12] that a necessary and sufficient condition for the existence of a solution of (4) is that
$$\sum_{i} \Bigl| \sum_{j=1}^{i} C_{ij}\,\mu_j \Bigr|^2 < \infty$$
where the $C_{ij}$ are given by (11) and (12).
Moment problems are usually ill-posed in the sense that there may be no solution and, if one exists, it need not depend continuously on the given data. There are various methods for constructing regularized solutions, that is, approximate solutions that are stable with respect to the given data. One of them is the method of truncated expansion, which consists in approximating (4) by finite moment problems and taking as an approximate solution of $f(x)$ the function $p_n(x) = \sum_{i=0}^{n} \lambda_i \phi_i(x)$ (5). The $\phi_i(x)$ result from orthonormalizing $g_1, g_2, \ldots, g_n$, and the $\lambda_i$ are coefficients depending on the $\mu_i$. Solved in the subspace $\langle g_1, g_2, \ldots, g_n \rangle$ generated by $g_1, g_2, \ldots, g_n$, (5) is stable. For the case where the data $\mu = (\mu_1, \mu_2, \ldots, \mu_n)$ are inexact, convergence theorems and error estimates for the regularized solutions apply.
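The truncated-expansion method can be sketched for the Hausdorff case $g_i(x) = x^i$ on $(0,1)$. This is our own minimal illustration (the function names are ours): Gram–Schmidt on the monomials yields a triangular coefficient matrix, and each $\lambda_i = \sum_j C_{ij}\mu_j$ is computed directly from the moments.

```python
import math

# Minimal truncated-expansion sketch for the Hausdorff problem on (0, 1).
# Polynomials are stored as monomial coefficient lists of fixed length n.

def inner(p, q):
    # <p, q> = ∫_0^1 p(x) q(x) dx, using ∫_0^1 x^(m+k) dx = 1/(m+k+1)
    return sum(a * b / (m + k + 1) for m, a in enumerate(p) for k, b in enumerate(q))

def orthonormal_basis(n):
    # Gram-Schmidt on g_i(x) = x^i, i = 0..n-1 (shifted Legendre, up to sign)
    basis = []
    for i in range(n):
        v = [0.0] * n
        v[i] = 1.0
        for phi in basis:
            c = inner(v, phi)
            v = [a - c * b for a, b in zip(v, phi)]
        nrm = math.sqrt(inner(v, v))
        basis.append([a / nrm for a in v])
    return basis

def reconstruct(moments, basis):
    # p_n = sum_i lambda_i phi_i with lambda_i = <f, phi_i> = sum_j C_ij mu_j
    n = len(basis)
    p = [0.0] * n
    for phi in basis:
        lam = sum(c * mu for c, mu in zip(phi, moments))
        p = [a + lam * b for a, b in zip(p, phi)]
    return p  # monomial coefficients of the regularized solution

# f(x) = x^2 has moments mu_i = ∫_0^1 x^(i+2) dx = 1/(i+3); recovery is exact
n = 4
mu = [1.0 / (i + 3) for i in range(n)]
p = reconstruct(mu, orthonormal_basis(n))
err = max(abs(c - t) for c, t in zip(p, [0.0, 0.0, 1.0, 0.0]))
print(err < 1e-8)  # True
```

In this finite-dimensional setting the problem is stable; the ill-posedness reappears as $n$ grows and the Gram matrix of the monomials becomes badly conditioned.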

Resolution of the Parabolic Partial Differential Equation
We consider the equation $w_t = a(t)(w_x)_x + r(x,t)$. If we integrate with respect to $x$ between 0 and 1 we obtain an exact expression for $a(t)w(1,t)$. On the other hand, we consider an auxiliary vector field and let $u(i,z,x,t)$ be the auxiliary function with $\nabla u = (u_x, u_t)$. From (7) and (8) it can be proven, after several calculations, that (9) can be written as the integral equation (10) for $G(x,t)$. We solve this integral equation numerically and obtain an approximate solution for $G(x,t)$: we can apply the truncated expansion method detailed in [11], and generalized in [12,13], to find an approximation $p_{1n}(x,t)$ for $G(x,t)$ for the corresponding finite problem with $i = 0, 1, \ldots, n$, where $n$ is the number of moments $\mu_i$. We consider the basis $\phi_i(x,t)$, $i = 0, 1, 2, \ldots$, obtained by applying the Gram–Schmidt orthonormalization process to $H_i(x,t)$, $i = 0, 1, 2, \ldots, n$, and adding to the resulting set the functions necessary to reach an orthonormal basis.
We approximate the solution $G(x,t)$ following [12,13], where the coefficients $C_{ij}$ verify a triangular recurrence and the diagonal terms are given explicitly. The proof of the following theorem is in [14,15]. In [15] the proof is done for $b_2$ finite; if $b_2 = \infty$, Laguerre polynomials are taken instead of Legendre polynomials. In [16] the proof is done for the one-dimensional case.
Theorem. Let $\{\mu_i\}_{i=0}^{n}$ be a set of real numbers and suppose that $f$ satisfies the corresponding finite moment problem, where $C$ is the triangular matrix with elements $C_{ij}$ ($1 \le j \le i \le n$). Then the error bound given in [14,15] must be fulfilled. If we apply the truncated expansion method to solve equation (10), we obtain an approximation $p_{1n}(x,t)$ for $G(x,t) = -x\,w^*_x(x,t) - w^*_t(x,t)$.
Then we have a first-order partial differential equation of the form
$$A_1(x,t)\,w^*_x(x,t) + A_2(x,t)\,w^*_t(x,t) = G(x,t)$$
where $A_1(x,t) = -x$ and $A_2(x,t) = -1$. It is solved as in [15]; that is, we can prove that solving this equation is equivalent to solving an integral equation. Again we consider the basis $\phi_{iz}(x,t)$, $i = 0, 1, 2, \ldots$; $z = i+1, \ldots$, obtained by applying the Gram–Schmidt orthonormalization process to $u(i,z,x,t)(z-i) = K_{iz}(x,t)$, $i = 0, 1, 2, \ldots$; $z = i+1, \ldots$, and then the above equation can be transformed into a generalized moment problem. Applying again the techniques of the generalized moment problem to the corresponding finite problem, we find an approximate solution $p_{2n}(x,t)$ for $w^*(x,t)$.
To find a numerical approximation for $w(x,t)$ we use condition (3):
$$w_{Ap}(x,t) = \frac{p_{2n}(x,t)\,E(t)}{\int_0^1 p_{2n}(x,t)\,dx} \qquad (13)$$
and a numerical approximation for $a(t)$ will be
$$a_{Ap}(t) = \frac{a(t)\,w(1,t)}{w_{Ap}(1,t)} \qquad (14)$$
We can measure the accuracy of the approximation (13) using the previous theorem, where $\mu_i$ would be the $i$-th generalized moment of $w_{Ap}(x,t)$; that is, we consider the moments of $w(x,t)$ as measured with error. An analogous argument is used to measure the accuracy of the approximation $a_{Ap}(t)$.
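The recovery step can be checked on synthetic data. Since $w^*(x,t) = a(t)w(x,t)$ and $\int_0^1 w\,dx = E(t)$ together imply $\int_0^1 w^*\,dx = a(t)E(t)$, an approximation of $w^*$ yields approximations of both $a$ and $w$. The sketch below uses our own test functions ($a(t) = 1+t$, $w = x e^{-t}$), not the paper's examples:

```python
import math

# Hedged sketch (our own synthetic data): recovering a(t) and w(x,t) from
# w*(x,t) = a(t) w(x,t) via the overdetermination condition ∫_0^1 w dx = E(t),
# using ∫_0^1 w* dx = a(t) E(t).

def integral_x(f, t, n=1000):
    # midpoint rule in x on (0, 1)
    h = 1.0 / n
    return sum(f((k + 0.5) * h, t) for k in range(n)) * h

w_star = lambda x, t: (1 + t) * x * math.exp(-t)  # a(t) = 1 + t, w = x e^{-t}
E      = lambda t: 0.5 * math.exp(-t)             # ∫_0^1 x e^{-t} dx

def a_ap(t):
    # ∫_0^1 w* dx = a(t) E(t)  =>  a(t) = (∫_0^1 w* dx) / E(t)
    return integral_x(w_star, t) / E(t)

def w_ap(x, t):
    return w_star(x, t) / a_ap(t)

print(round(a_ap(2.0), 6))       # ≈ 3.0
print(round(w_ap(0.5, 0.0), 6))  # ≈ 0.5
```

In practice the denominators here are exactly where the discontinuities discussed in the text can arise when they vanish for some $t$.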
In other words, the Gram–Schmidt orthonormalization process is applied to $\{e^{-t}, x e^{-2t}, x^2 e^{-3t}, \ldots, x^{n-1} e^{-nt}\}$, taking as measure $e^{-t}\,dt\,dx$ on $D$. By applying the truncated expansion method we obtain $p^*_{1n}(x,t)$ such that $e^{t} p^*_{1n}(x,t) = p_{1n}(x,t)$. Analogously, to obtain $p_{2n}(x,t)$ we consider the basis $\phi_{iz}(x,t)$, $i = 0, 1, 2, \ldots, n_1$; $z = i+1, \ldots, n_2$, obtained by applying the Gram–Schmidt orthonormalization process to the functions $K_{iz}(x,t)$, taking as measure $e^{-2t}\,dt\,dx$ on $D$. By applying the truncated expansion method we obtain $p^*_{2n}(x,t)$ such that $e^{2t} p^*_{2n}(x,t) = p_{2n}(x,t)$. To apply the method it must hold that $w(1,0) \neq 0$. It may happen that (13) or (14) have discontinuities because the denominator vanishes for certain values of $t$; in this case we can vary the number of moments taken so that the denominator has no real roots.
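The weighted orthonormalization described above can be sketched numerically. For $h_i(x,t) = x^i e^{-(i+1)t}$ and measure $e^{-t}\,dt\,dx$ on $D$, the inner products have the closed form $\langle h_m, h_k\rangle = 1/((m+k+1)(m+k+3))$, so Gram–Schmidt reduces to linear algebra on this Gram matrix (a sketch under our own naming, not the paper's code):

```python
import math

# Weighted Gram-Schmidt sketch for the basis {x^i e^{-(i+1)t}} on
# D = (0,1) x (0,inf) with measure e^{-t} dt dx. For h_i = x^i e^{-(i+1)t}:
#   <h_m, h_k> = ∫_0^1 x^(m+k) dx * ∫_0^inf e^{-(m+k+3)t} dt
#              = 1 / ((m+k+1)(m+k+3))

def gram(n):
    return [[1.0 / ((m + k + 1) * (m + k + 3)) for k in range(n)] for m in range(n)]

def gs_coeffs(n):
    # Row C[i] expresses phi_i = sum_j C[i][j] h_j, orthonormal in the
    # weighted inner product encoded by the Gram matrix G
    G = gram(n)
    C = []
    for i in range(n):
        v = [0.0] * n
        v[i] = 1.0
        for c_prev in C:
            proj = sum(v[a] * c_prev[b] * G[a][b] for a in range(n) for b in range(n))
            v = [va - proj * cb for va, cb in zip(v, c_prev)]
        nrm = math.sqrt(sum(v[a] * v[b] * G[a][b] for a in range(n) for b in range(n)))
        C.append([va / nrm for va in v])
    return C

C = gram_dim = gs_coeffs(3)
G = gram(3)
dot = lambda u, v: sum(u[a] * v[b] * G[a][b] for a in range(3) for b in range(3))
print(round(dot(C[0], C[0]), 6), round(abs(dot(C[0], C[1])), 6))  # 1.0 0.0
```

The same reduction applies to the second basis with measure $e^{-2t}\,dt\,dx$; only the Gram matrix changes.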

Example 1
We consider the equation. The following conditions are met:

Example 2
We consider the equation and conditions. The following conditions are met, and the error measure satisfies $\iint_D \cdots \,dt\,dx = 0.0195688$. In Fig. 3 and Fig. 4 the exact solution and the approximate solution are compared.

Example 3
We consider the equation. The following conditions are met:

Conclusion
We considered the problem of finding $a(t)$ and $w(x,t)$ such that $w_t = a(t)(w_x)_x + r(x,t)$ under the initial condition $w(x,0) = \phi(x)$ and the boundary conditions $w(0,t) = 0$ and $w_x(0,t) = w_x(1,t) + \alpha w(1,t)$ on the region $D = \{(x,t) : 0 < x < 1,\ t > 0\}$. In addition it must be fulfilled that $\int_0^1 w(x,t)\,dx = E(t)$, where $\phi(x)$, $r(x,t)$ and $E(t)$ are known functions and $\alpha$ is an arbitrary nonzero real number. We also assume that the underlying space is $L^2(D)$. First, an exact expression for $a(t)w(1,t)$ was deduced. Then, writing $w^*(x,t) = a(t)w(x,t)$, in a first step an integral equation was solved numerically for $G(x,t)$, and in a second step a further integral equation was solved in which $w^*(x,t)$ is the unknown function and $\psi_2(m,n)$ is an expression in terms of $G(x,t)$, with the kernel $K(m,n,x,t)$ known.
Both integral equations were solved numerically by applying moment-problem techniques.
Then we found an approximation for $w(x,t)$, written $w_{Ap}(x,t)$, using the solution found in the second step and the condition $\int_0^1 w(x,t)\,dx = E(t)$. Finally, we found an approximation for $a(t)$ using $a(t)w(1,t)$ and $w_{Ap}(x,t)$.