AN ALGORITHM TO PROCESS SIGNALS CONVEYING STATISTICAL INFORMATION
L. Rebollo Neira, A. Plastino

A method for processing signals containing information about the state distribution of a physical system is presented. The concomitant algorithm is specifically devised to suitably adapt linear restrictions so as to take into account the presence of noise due to experimental errors.


Introduction
We call Statistical Signals those which convey information about systems consisting of subsystems of known properties whose relative proportions we want to find. We shall adopt a vectorial representation, denoting a signal $f$ as a vector $|f\rangle$ and a measurement as a mapping that assigns to it a real number.
For the sake of definiteness we assume that the system $S$ we are interested in consists of a number $M$ of subsystems $S_n$. Our purpose is that of finding out the relative populations of $S$, assuming that the one corresponding to $S_n$ is $C_n \ge 0$ (unknown). We take the view [1] that in order to study $S$ one interacts with it by means of an input signal $|I\rangle$, the interaction between the signal $|I\rangle$ and $S$ resulting in a response signal $|f\rangle$. The corresponding process is represented according to

$$W|I\rangle = |f\rangle, \qquad (1)$$

where the linear operator $W$ portrays the effect that the system produces upon the input signal and can be decomposed in the following fashion

$$W = \sum_{n=1}^{M} C_n\, W_n, \qquad (2)$$

where $W_n|I\rangle = |n\rangle$. We work under the hypothesis that we know the response $|n\rangle$ evoked by $S_n$ and that this set of vectors gives rise to a linear space $U_M$ of dimension $M$. From (1) and (2) it is clear that the response $|f\rangle$ is contained within $U_M$ and carries information concerning the numbers $C_n$ we are trying to find out. In order to accomplish such a goal one needs to perform observations upon $|f\rangle$. The corresponding measurement procedure provides numbers $\{f_1, \dots, f_N\}$ out of $|f\rangle$, which can be regarded as the numerical representation of the signal.
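As an illustration of the linear superposition expressed by (1) and (2), the following sketch builds a sampled response signal as a weighted mixture of known subsystem responses. The tanh response shapes, the measurement grid, and the mixture weights are all made-up assumptions for illustration, not taken from the text:

```python
import numpy as np

M, N = 3, 40                          # number of subsystems, measurement points
x = np.linspace(0.1, 4.0, N)          # measurement parameter grid (assumed)

# Known subsystem responses <x_i|n>; the tanh shapes are purely illustrative.
responses = np.stack([np.tanh((n + 1) * x) for n in range(M)])

C_true = np.array([0.5, 0.3, 0.2])    # hypothetical relative populations C_n
f = C_true @ responses                # response samples: f_i = sum_n C_n <x_i|n>
```

The measured signal lives in the span of the known responses, which is what makes recovering the $C_n$ a linear problem in the first place.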

Treatment of a Numerical Representation
Let us suppose that the numerical representation of $|f\rangle$ is obtained in such a way that measurements are performed as a function of a parameter $x$ which adopts the values $x_i$, $(i = 1, \dots, N)$. If the measurements are performed independently, we can regard the $x_i$ as defining an (orthogonal) set of vectors $|x_i\rangle$ that span an $N$-dimensional linear space $E$. We associate this space with the measurement instrument [2].
The expressions $\langle x_i|f\rangle$ in a general case represent bilinear forms [2] and they are supposed to be given by experimental observations, so what we really have are numbers $f_i^\varepsilon$ affected by uncertainties $\Delta f_i$. Thus, instead of

$$f_i = \langle x_i|f\rangle = \sum_{n=1}^{M} C_n \langle x_i|n\rangle, \qquad (3)$$

we have, for the representation of $|f\rangle$,

$$f_i^\varepsilon = f_i + \Delta f_i, \quad i = 1, \dots, N. \qquad (4)$$

The problem we face is that of building up a vector

$$|f^*\rangle = \sum_{n=1}^{M} C_n^*\, |n\rangle \qquad (5)$$

out of the $\{f_i^\varepsilon,\ i = 1, \dots, N\}$-set, such that the $C_n^*$ constitute a good approximation to the "true" $C_n$. For this purpose we construct the representatives in $E$ of $|n\rangle$ and $|f^*\rangle$,

$$|n\rangle_p = \sum_{i=1}^{N} \langle x_i|n\rangle\, |x_i\rangle, \qquad |f^*\rangle_p = \sum_{n=1}^{M} C_n^*\, |n\rangle_p. \qquad (6)$$

The nearest vector $|f^*\rangle_p$ to $|f^\varepsilon\rangle_p$ that can be built is the one that fulfills the least-distance equations [2]. These equations can be written in the form

$$\sum_{k=1}^{M} a_{n,k}\, C_k^* = F_n, \quad n = 1, \dots, M, \qquad (7)$$

where the $a_{n,k} = \sum_{i=1}^{N} \langle x_i|n\rangle \langle x_i|k\rangle$ are constructed out of the projections of the vectors $|n\rangle$ in $E$, while the $F_n = \sum_{i=1}^{N} \langle x_i|n\rangle\, f_i^\varepsilon$ contain the experimental data. Of course, as the $f_i^\varepsilon$ are affected by the experimental uncertainties $\Delta f_i$, so will the $F_n$ be subjected to corresponding uncertainties $\Delta F_n$. Furthermore, the set of conditions (7) does not restrict the $C_n^*$ to the domain of the non-negative real numbers, so we will adopt an algorithm to obtain a non-negative set of $C_n$ that fulfills the set of equations (7), within the margin allowed by the uncertainties $\Delta F_n$.
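A numerical sketch of this construction may be useful. The response shapes and the 3% noise level are illustrative assumptions; the point it makes is that the unconstrained least-distance solution of (7) is under no obligation to keep the coefficients non-negative:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 40
x = np.linspace(0.1, 4.0, N)                    # measurement grid (assumed)
R = np.stack([np.tanh((n + 1) * x) for n in range(M)])  # projections <x_i|n>

C_true = np.array([0.5, 0.3, 0.2])              # illustrative "true" proportions
f_eps = C_true @ R + 0.03 * rng.standard_normal(N)      # noisy data f_i^eps

# Least-distance (normal) equations (7):  a @ C* = F
a = R @ R.T                  # a_{n,k} = sum_i <x_i|n><x_i|k>
F = R @ f_eps                # F_n    = sum_i <x_i|n> f_i^eps
C_star = np.linalg.solve(a, F)   # unconstrained solution: nothing here
                                 # prevents negative C_n* under noise
```

With nearly collinear responses and noisy data, `C_star` can easily acquire negative components, which motivates the constrained algorithm of the next section.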

A Maximum Entropy Algorithm
We start by writing the equations (7) in the form obtained through the substitution

$$C_n^* = \lambda\, p_n, \qquad (8)$$

where $\lambda \ge 0$ is a constant such that $\sum_{n=1}^{M} p_n = 1$, namely

$$\sum_{n=1}^{M} a_{k,n}\, p_n = \frac{F_k}{\lambda}, \quad k = 1, \dots, M. \qquad (9)$$

We can now think of the weights $p_n$ as defining a probability space over a discrete set of $M$ events whose informational content is given by

$$H = -\sum_{n=1}^{M} p_n \ln p_n. \qquad (10)$$

We regard each $F_k$ in (9) as proportional to the mean value of a random variable that adopts the values $a_{k,n}$ $(n = 1, \dots, M)$, with a probability distribution given by the $\{p_n\}$-set. As $\lambda$ is an unknown constant, we employ one of the equations, say the $l$-th one, to determine it, and are now in a position to solve the set of equations in an iterative fashion. We start our iterative process by employing the Maximum Entropy Principle [3] in each step, in order to construct an "optimal conjecture" that improves upon the results obtained in the previous step.
The zeroth-order approximation (first step) is devised by requiring that the zeroth-order weights $p_n^{(0)}$ maximize $H$. This entails $p_n^{(0)} = 1/M$, so that we predict a zeroth-order value $F_k^{(0)}$ for the $F_k$. The quality of our conjecture can be measured by defining the "predictive error" $\epsilon_k$ as

$$\epsilon_k = \left| F_k^{(0)} - F_k \right|, \quad k = 1, \dots, M. \qquad (11)$$
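The zeroth-order step can be sketched as follows, under the assumption that the unknown constant is fixed through one (arbitrarily chosen) equation; the small matrix and data vector are made-up numbers for illustration:

```python
import numpy as np

# Made-up 3x3 system for illustration (the real a_{k,n}, F_k are data-dependent).
a = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])
F = np.array([1.2, 1.5, 1.1])
M = len(F)

p0 = np.full(M, 1.0 / M)       # maximum-entropy starting point: uniform weights
l = 1                          # equation chosen to fix the constant (any will do)
lam = F[l] / (a[l] @ p0)       # the constant, determined from the l-th equation
F_pred = lam * (a @ p0)        # zeroth-order predictions of the F_k
eps = np.abs(F_pred - F)       # predictive errors, cf. eq. (11)
```

By construction the error of the $l$-th equation vanishes; the largest remaining error singles out the first constraint to be enforced.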
In order to construct our first-order approximation we select, among the $\epsilon_k$, the largest one, let us call it $\epsilon_{l_1}$. We shall then obtain the first-order weights $p_n^{(1)}$ by requiring that they maximize $H$ with a constraint that ensures that the $l_1$-th equation in (9) be fulfilled. We now evaluate the $F_k^{(1)}$ and the concomitant (new) set of $\epsilon_k$. After selecting the largest one, $\epsilon_{l_2}$ say, we obtain the $p_n^{(2)}$ by maximizing $H$ with the constraint that both the equations (9) for $k = l_1$ and $k = l_2$ be fulfilled, and so on. The $j$-th order approximation is given by

$$p_n^{(j)} = \frac{1}{Z^{(j)}} \exp\!\left( -\sum_{i=1}^{j} \lambda_i\, a_{l_i,n} \right), \qquad Z^{(j)} = \sum_{n=1}^{M} \exp\!\left( -\sum_{i=1}^{j} \lambda_i\, a_{l_i,n} \right), \qquad (12)$$

where the $j$ Lagrange multipliers $\lambda_i$ are obtained by solving the $j$ equations

$$\sum_{n=1}^{M} a_{l_i,n}\, p_n^{(j)} = \frac{F_{l_i}}{\lambda}, \quad i = 1, \dots, j. \qquad (13)$$

The iterative process is to be stopped when

$$\epsilon_k \le \Delta F_k, \quad k = 1, \dots, M. \qquad (14)$$

Let us assume that the "convergence" (14) is attained at the $L$-th iteration. With this solution we can evaluate the numerical values

$$f_i^* = \lambda \sum_{n=1}^{M} p_n^{(L)} \langle x_i|n\rangle, \quad i = 1, \dots, N. \qquad (15)$$

If these conjectures are such that

$$|f_i^* - f_i^\varepsilon| > \Delta f_i \quad \text{for some } i, \qquad (16)$$

the number of iterations can be augmented until the direction of the inequality is reversed. However, there is no guarantee that this type of convergence will always be achieved. Moreover, in realistic cases where we can only guess some estimates of the errors, requiring that the direction of the inequality be reversed for all $i$ becomes too stringent a requirement. Although in the application we discuss this type of convergence can be achieved, we wish to keep the discussion open, so as to suitably adapt the stopping point to the errors concomitant to any given particular model.
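A runnable sketch of the whole iterative scheme is given below. Everything beyond the text's prescription is an implementation choice and should be read as an assumption: the unknown constant is eliminated through the $l$-th equation (so each active constraint $k$ becomes $b_k \cdot p = 0$ with $b_k = a_k - (F_k/F_l)\, a_l$), the Lagrange multipliers are found by a plain Newton iteration, and a fixed numerical tolerance stands in for the error-based stopping criterion of the text:

```python
import numpy as np

def maxent_weights(a, F, l=0, tol=1e-6):
    """Iterative MaxEnt solution of  lam * sum_n a[k,n] p_n = F[k],
    with p_n >= 0 and sum_n p_n = 1.

    Constraints enter one at a time, largest predictive error first.
    The constant lam is eliminated via the l-th equation, so each active
    constraint k reads  b_k . p = 0  with  b_k = a[k] - (F[k]/F[l]) a[l].
    The Newton inner solve is an implementation choice, not from the text.
    """
    M = a.shape[1]
    b = a - np.outer(F / F[l], a[l])        # constraint rows with lam eliminated
    active, mu = [], np.zeros(0)

    def weights(mu):
        z = mu @ b[active] if active else np.zeros(M)
        w = np.exp(z - z.max())             # MaxEnt exponential form, stabilised
        return w / w.sum()

    for _ in range(a.shape[0]):
        p = weights(mu)
        lam = F[l] / (a[l] @ p)             # constant from the l-th equation
        eps = np.abs(lam * (a @ p) - F)     # predictive errors
        eps[active] = 0.0                   # already-enforced constraints
        k = int(np.argmax(eps))
        if eps[k] <= tol:                   # stand-in for the stopping rule
            break
        active.append(k)                    # add the worst-predicted equation
        mu = np.append(mu, 0.0)
        for _ in range(200):                # Newton: drive b_k . p -> 0
            p = weights(mu)
            B = b[active]
            g = B @ p
            J = (B * p) @ B.T - np.outer(g, g)   # Jacobian of g(mu)
            mu = mu - np.linalg.solve(J, g)
            if np.max(np.abs(B @ weights(mu))) < 1e-12:
                break
    p = weights(mu)
    return (F[l] / (a[l] @ p)) * p          # the non-negative C_n*

# Noiseless synthetic mixture: the recovered C should match C_true.
x = np.linspace(0.1, 4.0, 40)
R = np.stack([np.tanh(x), np.exp(-x), x / (1 + x)])   # illustrative responses
C_true = np.array([0.5, 0.3, 0.2])
a = R @ R.T
F = a @ C_true
C = maxent_weights(a, F)
```

With noisy data `tol` should be tied to the uncertainties $\Delta F_n$, exactly as the text prescribes; the weights returned are non-negative by construction, since they come from an exponential form.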

Numerical Test
Consider that we have a mixture of $M = 11$ different rare-earth elements which satisfies a simple paramagnetic model [4], their respective proportions in the mixture being denoted by $p_n$. For any given $n$ we list the corresponding quantum numbers $S_n$, $L_n$ and $J_n$ in Table I and set $|n\rangle \equiv |S_n L_n J_n\rangle$. We take a series of $N = 40$ values of the magnetic field at the temperature $T$, which generates the parameters $x_i = H_i/T$, $(i = 1, \dots, 40)$. The projection of the vector $|n\rangle$ for a given value $x_i$ is given by the magnetization of the ion $n$ in Table I,

$$\langle x_i|n\rangle = g_n\, \mu_B\, J_n\, B_n(x_i), \qquad (17)$$

where $\mu_B$ is the Bohr magneton, $g_n$ is the spectroscopic (Landé) factor for the ion $n$, and $B_n(x_i)$ the appropriate Brillouin function [4],

$$B_n(x_i) = \frac{2J_n+1}{2J_n}\, \coth\!\left( \frac{2J_n+1}{2J_n}\, y_{n,i} \right) - \frac{1}{2J_n}\, \coth\!\left( \frac{y_{n,i}}{2J_n} \right), \qquad y_{n,i} = \frac{g_n\, \mu_B\, J_n\, x_i}{k_B}. \qquad (18)$$
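The Brillouin magnetization entering the numerical test can be coded directly from the standard expressions; for illustration, units are chosen so that $\mu_B/k_B = 1$, which is an assumption of this sketch rather than anything fixed by the text:

```python
import numpy as np

def brillouin(J, y):
    """Standard Brillouin function
    B_J(y) = ((2J+1)/(2J)) coth(((2J+1)/(2J)) y) - (1/(2J)) coth(y/(2J))."""
    y = np.asarray(y, dtype=float)
    c1 = (2 * J + 1) / (2 * J)
    c2 = 1 / (2 * J)
    return c1 / np.tanh(c1 * y) - c2 / np.tanh(c2 * y)

def magnetization(g, J, x, mu_B_over_kB=1.0):
    """Per-ion magnetization  g J mu_B B_J(y)  with  y = g J mu_B x / k_B
    and x = H/T, in units where mu_B = 1."""
    y = g * J * mu_B_over_kB * np.asarray(x, dtype=float)
    return g * J * brillouin(J, y)
```

In the saturation limit $B_J \to 1$ the magnetization tends to $g J \mu_B$, while for small fields it reduces to the linear Curie-law behaviour $B_J(y) \approx \frac{J+1}{3J}\, y$, which is a convenient sanity check on the implementation.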

Fig. 1. Magnetization vs. external applied field at the temperature T. The error bars are the input data of the numerical test and allow for a 3% distortion. The continuous curve represents both the predictions obtained with the present approach and with the least-squares approximation. Curve a) corresponds to system S and curve b) to system S'.