Several years ago, Barry Simon investigated a new approach to inverse spectral theory for the half-line Schrödinger operator.
A new A-function introduced in that work is related to the Weyl-Titchmarsh function by the following relation:
where for all a.
There, the key discovery is that A satisfies the following integro-differential equation:
Given the fact that q can be determined directly from A (at least in an appropriate sense), and that A can be calculated from the data (essentially through an inverse Laplace transform) by solving an equation which does not involve q, the inverse problem of determining q from m becomes the problem of solving the integro-differential Equation (2). Properties of (2) have been discussed in earlier work. To construct numerical solvers for this integro-differential equation, one needs to study sets of exact analytic solutions.
In this paper, we study a larger class of analytic solutions of (2), which are of the form
This ansatz is motivated by an explicit example in the literature, where A is calculated for Bargmann potentials using inverse scattering theory (which is valid only under restrictive assumptions). Our aim is to determine the behavior of such solutions everywhere, and to do so using only (2).
Substituting (4) into (2), we find that these functions satisfy the nonlinear equations:
We then give a method for solving (5) explicitly in Section 3. The idea is to introduce new variables, namely the symmetric functions of the original unknowns. Via this “change of variables”, (5) yields a new nonlinear system:
This nonlinear system turns out to be solvable. In a 2004 J. Math. Phys. paper, Calogero proved that a certain family of n-body problems is solvable, and his model includes system (6); the method we use in this paper, however, is different from his approach. Our method also reveals an insightful connection to scattering problems. In Section 3 we first find n constants of motion for the system (6), which allow us to reduce it to a first order nonlinear system. Explicitly, we will prove
Theorem 1. (i) Suppose that, for every x in an open interval I, the functions are solutions of the second order nonlinear system (6). Then, on I, they solve the first order system
for . Here are constants and .
(ii) Conversely, if they are solutions of (7) with and for , then (6) holds.
The latter is then solved by finding a nonlinear analogue of the method of integrating factors (Theorem 13).
We note that the original unknowns are the zeros of polynomials whose coefficients are the new variables. Calogero pointed out that some nonlinear systems can be linearized, and are thus integrable, via a nonlinear mapping between the coefficients of a polynomial and its zeros. The novelty in this paper is that the nonlinear mapping relates the system (5) to a solvable yet still nonlinear system. Interestingly, a system similar to (6) arises if one seeks potentials for which the large-frequency WKB series is finite and yields solutions of the corresponding Schrödinger equations (with no error).
Section 4 shows how we obtain analytic examples of (2) by following this systematic procedure.
2. The g Equation
As described in the introduction, we relate a large class of exact solutions of the A-Equation (2) to a second order nonlinear system (5).
Without loss of generality, we assume for all . The following proposition then follows by direct calculation.
Proposition 2. If is of the form (4), and satisfies (2), then , and satisfy (5). Conversely, if satisfy (5), then the function solves (2).
Our goal is to solve (5) explicitly. To begin with, we need some notation. Let denote the l-th symmetric function on , :
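The defining display does not survive above; for definiteness, we recall the standard definition of the l-th elementary symmetric function, which we take to be the one intended:
\[
e_{l}(y_{1},\dots,y_{n}) \;=\; \sum_{1\le i_{1}<i_{2}<\cdots<i_{l}\le n} y_{i_{1}}y_{i_{2}}\cdots y_{i_{l}},
\qquad l=1,\dots,n,
\]
so that, for example, e_1 = y_1 + ... + y_n and e_n = y_1 y_2 ... y_n. The symmetric functions on a subset of the variables are defined in the same way, with the sum restricted to indices from that subset.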
Lemma 3. If are distinct, and are the symmetric functions on , , , then the following matrix is invertible:
Proof. Suppose the matrix is not invertible; then there exists a non-zero vector , such that
Evaluate the above at :
We assumed are distinct, so for all . This contradicts our assumption, and proves the given matrix must be invertible.
3. A Transformed System and Explicit Solutions
3.1. Non-Linear Integrable Equation
To solve Equation (5) explicitly, we construct a nonlinear mapping from to new dependent variables c. We take to be the j-th symmetric function of ,
For convenience, we define for or .
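To fix ideas about this change of variables and its inversion in Proposition 5 below, we record the standard correspondence between a set of zeros and their elementary symmetric functions; we write the new variables as the elementary symmetric functions here, while the sign convention in the paper's display (13) may differ:
\[
\prod_{j=1}^{n}\bigl(z-y_{j}(x)\bigr) \;=\; \sum_{l=0}^{n}(-1)^{l}\,c_{l}(x)\,z^{\,n-l},
\qquad c_{0}\equiv 1,
\]
so the original variables are recovered from the new ones as the n roots of this polynomial, which is precisely the inversion used in Proposition 5.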
Proposition 4. If satisfy Equation (5), then as defined by (13), satisfy the system:
Proof. It follows directly by calculation that, for every ,
And for , we have
Conversely, we have
Proposition 5. If satisfy the system (14), and are the distinct roots of the polynomial with coefficients , then satisfy the system (5).
Proof. As in the previous calculations, for every , we have
By assumption, . The proof of Lemma 3 then shows that the matrix ( ), where
is invertible. Thus
and satisfy (5).
3.2. Second Order Nonlinear System to First Order Nonlinear System
We now identify n constants of motion for the system (14); these will allow us to reduce the second order system to a first order system.
Proposition 6. The nonlinear system (14) has the following constants of motion:
for all the . Here , when or .
Proof. Since , , we can write
Multiplying the first of these equations by and the second equation by , and subtracting, we have
Using this equation we obtain (compare with (22))
It follows by induction that
This identity, together with (22) shows that
We can also write (19) as
for . Here are constants and .
Theorem 1 states the equivalence of the first order system (23) and the second order system (14).
Proof of Theorem 1.
(i) This follows directly from Proposition 6.
(ii) Let . Differentiating (23), for , we obtain
If we write the above equation in matrix form, we have
The coefficient matrix of Equation (24) above is a Sylvester resultant matrix. A well-known theorem from linear algebra expresses the determinant of a Sylvester resultant matrix as the resultant of the two polynomials,
The coefficient matrix of (24) is nonsingular if and only if and are coprime for . Let be the polynomial
We observe that , and since is the j-th symmetric function of , we have
If and are not coprime, they have a common root , such that . Let , be the two distinct square roots of . Substitute and in (26); this yields and respectively. Thus there exist , such that , .
Since are assumed distinct, we obtain , which contradicts the fact that .
Therefore and must be coprime for , and (24) has the unique trivial solution:
Thus solve the second order system (14).
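As a concrete illustration of the determinant-resultant relation used in the argument above (the example is ours, chosen for simplicity, and is not taken from the paper): for the coprime polynomials p(z) = z^2 - 1 and q(z) = z^2 - 4, the Sylvester resultant matrix and its determinant are
\[
S(p,q)=\begin{pmatrix}
1 & 0 & -1 & 0\\
0 & 1 & 0 & -1\\
1 & 0 & -4 & 0\\
0 & 1 & 0 & -4
\end{pmatrix},
\qquad
\det S(p,q)=\operatorname{Res}(p,q)=q(1)\,q(-1)=9\neq 0,
\]
so the matrix is nonsingular; if instead p and q shared a root, the resultant, and hence the determinant, would vanish and a nontrivial kernel would appear.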
3.3. Method of Integrating Factor
We have reduced the second order nonlinear system (14) to the first order nonlinear system (23). To solve the latter system explicitly, we begin by writing it in matrix form. Let
We will assume that n is even from now on; when n is odd, we obtain similar results. Equation (23) can then be written as a matrix equation.
We will show that the nonlinear system (28) can be solved explicitly.
The nonlinear system (28) can be written as
Our goal is to find an integrating factor M such that after multiplication on the left by M, (30) takes the form
Thus we would like to find an matrix M and an matrix N such that
This leads to
is the first column vector of and is the last column vector of .
For any matrix which satisfies (37),
We must have , , for .
Let , then these conditions show that N must be of the form
and moreover , for .
To find M, we now rewrite (34) and (36) in matrix form,
Thus each of the n rows of M solves an overdetermined linear system, consisting of equations and unknowns.
Studying the structure of the matrix , we notice the following algebraic identity,
As an immediate corollary, we have the following
Lemma 8. Given a nontrivial solution of (28), the overdetermined system (39) is solvable only if N satisfies . Moreover, we also have .
For each overdetermined system
where is the i-th column vector of , Lemma 7 and Lemma 8 show that the rank of the augmented matrix is less than . The overdetermined system has at most linearly independent equations.
Let denote the sub-matrix of obtained by deleting the first row, the n-th row, the (n + 1)-th row, first column and last column of . We observe that is also a Sylvester resultant matrix. The determinant of a Sylvester resultant matrix is the resultant of the two polynomials
Two cases need to be considered here. (1) and are coprime: then is nonsingular, the augmented matrix of (40) has the same rank as the corresponding coefficient matrix, and thus (40) is solvable. (2) and are not coprime: we then use the following result of Laidacker: if is the greatest common divisor of the two polynomials , , then the rank of the Sylvester resultant matrix is . Thus the rank of the augmented matrix must also be , if (40) is to be solvable.
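Since the display stating the rank does not survive above, we recall Laidacker's result in its standard form: for polynomials of degrees m and n whose greatest common divisor has degree d, the Sylvester resultant matrix has rank m + n - d. As a small worked instance of our own (not from the paper), take p(z) = (z-1)(z-2) and q(z) = (z-1)(z-3), both of degree 2, with greatest common divisor z - 1 of degree 1, so the formula predicts rank 2 + 2 - 1 = 3; and indeed
\[
S(p,q)=\begin{pmatrix}
1 & -3 & 2 & 0\\
0 & 1 & -3 & 2\\
1 & -4 & 3 & 0\\
0 & 1 & -4 & 3
\end{pmatrix},
\qquad
\det S(p,q)=0, \qquad \operatorname{rank} S(p,q)=3 .
\]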
We obtain an algebraic fact about the Sylvester matrix.
Lemma 9. Suppose that , and that , and do not vanish simultaneously. Then the following algebraic system is always solvable.
Proof. Let be the polynomial with coefficients consisting of the i-th row of and let be the polynomial with coefficients consisting of the i-th row of in echelon form. It should be noted that is a linear combination of the polynomials of . The algebraic system is solvable if and only if each zero row of in echelon form corresponds to a zero row of the augmented matrix in echelon form.
Suppose the i-th row of in echelon form is zero, that is From the structure of ,
Let be the greatest common divisor of the two polynomials and , let , , where . The above shows that there exists a polynomial such that
We need to prove
Using the above identities, the left side of the equation can be written as
The left side of this equation vanishes, since the condition in the hypothesis can be represented as .
Inspired by the fact that and by Lemma 9, we prove the existence of N by constructing such that . To be more specific, let
where are arbitrary, distinct, non-zero constants.
Lemma 8 also shows that (39) is solvable only if satisfies .
To calculate , we first introduce some new notation. For each , we define as follows,
Then, we have
only if for , i.e., .
The following proposition proves that N satisfies both and .
Proposition 10. If (28) is satisfied and are defined as above, then
Proof. Straightforward calculation.
Recall that our objective is to construct an integrating factor to reduce (23) to an algebraic system. As described above, let
We assume in this transformation.
Lemma 11. if is odd and if is even.
N can be rewritten as
With as given above, we can define a matrix ,
The following proposition shows that is a solution of (39), i.e., M is an integrating factor of (30).
Proposition 12. Given and fj by (46), and letting ( ) be the distinct roots of , M solves (34), (35), and (36).
Proof. Straightforward calculation shows
thus . A similar calculation proves . At the same time, we have
Theorem 13. If , then . Conversely, if and , then ; moreover, ck satisfy the linear system,
The system (28) is integrable and equivalent to this linear system.
Remark 14. can be solved directly from Equation (48). The algebraic system (51) will then lead to the solution .
Proof. Multiplying Equation (30) by , and using Proposition 12, yields
. Further, multiplying both sides by , we have
Thus . The fact that
forces the constant to be zero.
Conversely if , since is a matrix and are assumed distinct, , so the dimension of the kernel is 1. The solution which satisfies and is unique. Since the first entry of is , it follows that .
This proves that (28) is integrable and provides a procedure to obtain explicit solutions from the linear system (51).
4. Exact Analytic Examples
In this section, we illustrate the procedure of Sections 2 and 3 with a simple exact analytic example.
We explicitly discuss the case where in (4). Then . We construct the nonlinear mapping from to ,
Then satisfy (14). To solve for c, we reduce the second order system (14) to the first order system (30):
Given , where , we construct M of the form (50)
where are solutions of (48):
We can write
After multiplying (54) by M on the left, the first order system is solved explicitly. Indeed, satisfy the linear system (51). Let
Then (51) takes the form
To invert the mapping (53) we find , as the roots of the equation
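The display of that equation does not survive above; assuming the convention that the two new variables are the first and second elementary symmetric functions of y1, y2 (the signs in (53) may differ), the quadratic and its roots would read
\[
z^{2}-c_{1}(x)\,z+c_{2}(x)=0,
\qquad
y_{1,2}(x)=\frac{c_{1}(x)\pm\sqrt{c_{1}(x)^{2}-4\,c_{2}(x)}}{2},
\]
which remain distinct as long as c_1^2 - 4 c_2 does not vanish.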
This gives the following exact solutions of the A-equation:
Theorem 15. For any distinct non-zero complex , and , there exists a solution of the A-equation with the form
, where ,
Proof. This theorem is a direct corollary of the results in Section 3.
Remark 16. This theorem does not cover all solutions of the form . Consider the following example from the literature,
Working through the procedure in Section 3, we get
Here the values of the constants are:
so that and are not distinct.
This leads to . Proposition 10 holds, but the nonlinear transformation (46) is not defined.
A large class of exact solutions to the A-equation was found in this work. Techniques used in our approach include a nonlinear transformation between the coefficients of a polynomial and its zeros, constants of motion, and an interesting integrating factor method. The nonlinear system studied here is of interest not only for its connection to inverse problems: it represents a larger category of integrable systems than the C-integrable systems and is worth further investigation.
Special thanks go to the reviewers for their valuable suggestions.
 Gesztesy, F. and Simon, B. (2000) A New Approach to Inverse Spectral Theory. II. General Real Potentials and the Connection to the Spectral Measure. Annals of Mathematics, 152, 593-643.
 Zhang, Y. (2006) Solvability of a Class of Integro-Differential Equations and Connections to One Dimensional Inverse Problems. Journal of Mathematical Analysis and Applications, 321, 286-298.
 Varley, E. and Seymour, B.R. (1988) A Method of Obtaining Exact Solutions to Partial Differential Equations with Variable Coefficients. Studies in Applied Mathematics, 78, 183-205.
 Varley, E. and Seymour, B.R. (1998) A Simple Derivation of the N-Soliton Solutions to the Korteweg-de Vries Equation. SIAM Journal on Applied Mathematics, 58, 904-911.