Linear Independence of Time–Frequency Translates in L p Spaces

We study the Heil–Ramanathan–Topiwala conjecture in L p spaces by reformulating it as a fixed point problem. This reformulation shows that a function with linearly dependent time–frequency translates has a very rigid structure, which is encoded in a family of linear operators. This is used to give an elementary proof that if f ∈ L p (R), p ∈ [1, 2], and Λ ⊆ R × R is contained in a lattice, then the set of time–frequency translates ( f_(a,b) )_{(a,b)∈Λ} is linearly independent. Our proof also works for the case 2 < p < ∞ if Λ is contained in a lattice of the form αZ × βZ.


Notations
Given f ∈ S(R), f̂ denotes its Fourier transform, normalized as f̂(ξ) = ∫_R f(x) e^{−2πixξ} dx, and by f̌ we denote the inverse Fourier transform of f. The same notation is used for the extension of the Fourier transform to the L p (R) spaces, 1 ≤ p ≤ 2.
If (a, b) ∈ R², then f_(a,b) denotes the time–frequency translate of f given by f_(a,b)(x) = e^{2πibx} f(x − a).
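As a numerical aside (ours, not part of the paper), the translate f_(a,b) is easy to realize on a sampled grid, and the unimodularity of the modulation is visible in that |f_(a,b)| coincides with the plain time shift:

```python
import numpy as np

# Numerical sketch (our own) of the time-frequency translate
# f_(a,b)(x) = exp(2*pi*i*b*x) * f(x - a) on a sampled grid.
x = np.linspace(-10, 10, 2001)
f = np.exp(-np.pi * x**2)          # Gaussian window

def tf_translate(f_vals, grid, a, b):
    # translate in time by linear interpolation, then modulate
    shifted = np.interp(grid - a, grid, f_vals, left=0.0, right=0.0)
    return np.exp(2j * np.pi * b * grid) * shifted

g = tf_translate(f, x, 1.0, 3.0)
# the modulation is unimodular, so |f_(a,b)| equals the time shift alone
plain_shift = np.interp(x - 1.0, x, f, left=0.0, right=0.0)
max_dev = float(np.max(np.abs(np.abs(g) - plain_shift)))
```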
If Λ = {(a_k, b_k)}_{k=1}^N is a finite family of points of the plane, the set { f_(a_k, b_k) }_{k=1}^N is abbreviated as S(f, Λ).
Given an interval I ⊆ R, the space L p (I) will be naturally identified with the subspace of L p (R) consisting of those elements f such that f χ_{I^c} = 0, where, for a measurable set E, χ_E denotes the characteristic function of E. On the other hand, C 0 (R) is the space of continuous functions vanishing at infinity.
Finally, given x ∈ R, ⌊x⌋ will denote the integer part of x, and {x} = x − ⌊x⌋ the fractional part of x.

Statement of the Problem and Main Results
By the standard windowed Fourier transform theory, for arbitrary non-zero f ∈ L 2 (R), the f_(a,b) are a sort of basic building atoms in the following sense: for any h ∈ L 2 (R) one has (in an appropriate sense) that

h = (1/‖f‖₂²) ∫_R ∫_R ⟨h, f_(a,b)⟩ f_(a,b) da db,

where ⟨·, ·⟩ denotes the inner product in L 2 (R). In this context, the following conjecture, raised by Heil, Ramanathan and Topiwala in [10], is completely natural:

Conjecture 2.1 If f ∈ L p (R), 1 ≤ p < +∞, is nonzero and Λ := {(a_k, b_k)}_{k=1}^N is any set of finitely many distinct points in R², then S(f, Λ) is a linearly independent set of functions; that is, if

Σ_k c_k e^{2πixb_k} f(x − a_k) = 0,   (1)

and the constants c_k are not all zero, then f = 0.
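As a purely numerical illustration of the conjecture in a toy case (ours, with an arbitrarily chosen Gaussian window and points), linear independence of a few time–frequency translates can be checked through the smallest singular value of their Gram matrix:

```python
import numpy as np

# Purely numerical sanity check (ours, not part of the paper): a few
# time-frequency translates of the Gaussian are linearly independent,
# as seen from the smallest singular value of their Gram matrix.
x = np.linspace(-15, 15, 6001)
dx = x[1] - x[0]

points = [(0.0, 0.0), (1.0, 0.5), (-0.7, 1.3), (0.3, -2.0)]   # (a_k, b_k)
translates = [np.exp(2j * np.pi * b * x) * np.exp(-np.pi * (x - a)**2)
              for a, b in points]

# Gram matrix G[j,k] = <f_j, f_k> approximated by a Riemann sum
G = np.array([[np.vdot(u, v) * dx for v in translates] for u in translates])
smallest_sv = float(np.linalg.svd(G, compute_uv=False)[-1])
```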
A function f with linearly dependent time–frequency translates has a very rigid structure. In Sect. 3 we will show that this rigidity can be encoded in a family of linear operators that allow one to recover the function from its values on a compact set. This new way to approach the problem allows us to provide a simple proof of the following result, which, as far as we know, is new for p ≠ 2:

Theorem 2.2 Given a finite set of points Λ belonging to a lattice, the family S(f, Λ) is linearly independent for any non-zero f ∈ L p (R), 1 ≤ p ≤ 2.
Using symplectic transformations (see Sect. 3.1 for more details), we can restrict our attention to sets of time–frequency translates of the form Λ = {(αk, n_k)}_{k=1}^N, where n_j ∈ Z_+. The price to pay for this reduction is to consider functions in any L q (R), 1 ≤ q < ∞, or in C 0 (R). So, Theorem 2.2 is a consequence of the following result:

Theorem 2.3 Given a set of points Λ = {(α, n_1), . . . , (αN, n_N)} for some α ∈ (0, +∞), the family S(f, Λ) is linearly independent for any non-zero f belonging to C 0 (R) or to some L q (R), 1 ≤ q < ∞.

We provide the proof of this theorem in Sect. 3.3. Notice that each translation parameter has a unique integer frequency associated to it. As we have already mentioned, we reduce the problem to this special situation by using symplectic transformations. However, a careful reading of the proof in Sect. 3.3 shows that the same arguments can be used to prove the case where each translation carries more than one modulation. Moreover, the following more general result can be proved mutatis mutandis.

Theorem 2.4 Let α > 0 and let h_1, . . . , h_N : R → C ∪ {∞} be 1-periodic functions that are finite and different from zero almost everywhere. Then, the equation

f(x) = Σ_{k=1}^N h_k(x) f(x − αk)

only admits the trivial solution in C 0 (R) and in any L q (R) with 1 ≤ q < ∞.
For the sake of simplicity, in Sect. 3.3 we only prove Theorem 2.3, and we leave to the reader the minor changes needed to adapt that proof to the general version stated in Theorem 2.4. The proof in both cases is close to the original idea of [10], referred to as the conjugate trick by Demeter (see also [5,6,13]).

Previous Related Results
In case all modulation parameters b_k are zero, Eq. (1) is a convolution equation (convolving with a finite sum of delta masses) and the result is essentially trivial. Applying the Fourier transform we obtain f̂(ξ) Σ_k c_k e^{2πiξa_k} = 0, implying that f̂ is supported in a discrete set. In case 1 ≤ p ≤ 2, f̂ is a function in L p′ (R), p′ being the conjugate exponent, and hence f̂, and so f, is zero almost everywhere. If p > 2 an extra regularization argument is needed (see [7]). If the dimension is greater than one, then Rosenblatt found a function which simultaneously belongs to C 0 (R^n) and to all the L p (R^n) spaces for p ≥ 2n/(n − 1), and which has linearly dependent translates (see [14]).
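The key point of this argument, that the exponential sum multiplying f̂ vanishes only on a thin set, can be illustrated numerically (our own sketch with arbitrary data):

```python
import numpy as np

# In the pure-translation case the Fourier transform turns Eq. (1) into
# f_hat(xi) * m(xi) = 0 with m(xi) = sum_k c_k exp(2*pi*i*xi*a_k).
# Numerically, a generic such exponential sum has no zeros on a fine grid,
# illustrating that its zero set is thin (this is our own sketch).
rng = np.random.default_rng(0)
a = np.array([0.0, 1.0, np.sqrt(2), 2.5])                    # distinct translations
c = rng.standard_normal(4) + 1j * rng.standard_normal(4)

xi = np.linspace(-50, 50, 100000)
m = sum(ck * np.exp(2j * np.pi * xi * ak) for ck, ak in zip(c, a))
num_tiny = int(np.count_nonzero(np.abs(m) < 1e-9))
```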
We deal only with the case n = 1, where a positive answer is known in the following cases:

(1) If f has the form f = q(x) e^{−x²}, where q is a nonzero polynomial (see [10] and [9]).
(3) If the points of Λ are collinear (see [7,9]).
(4) If the points of Λ are collinear, except for one exceptional point (see [9,10]).
(5) If p = 2 and Λ is contained in a lattice. This result was originally proved by Linnell using an argument involving operator algebras (see [12] and also [11]). Although Linnell's proof is also valid in higher dimensions, it cannot be extended to other L p (R) spaces. Later on, Bownik and Speegle in [1] obtained an important simplification of Linnell's result. It is interesting that their argument is specific to dimension one. As in the case of Linnell's approach, their approach cannot be extended to other L p (R) spaces.
The following stability results are also known for any p and any dimension (see [10]):

(1) If the independence conclusion holds for a particular f and a particular choice of points {(a_k, b_k)}_{k=1}^N, then there exists an ε > 0 such that it also holds for any g satisfying ‖g − f‖_p < ε, using the same set of points.
(2) If the independence conclusion holds for a particular f and a particular choice of points {(a_k, b_k)}_{k=1}^N, then there exists an ε > 0 such that it also holds for that f and any set of N points in R² within ε of the original ones.

A Reformulation
In this section we will explain a reformulation of the problem. This reformulation requires some elementary facts about symplectic transforms already pointed out in [10].

Invariance by Symplectic Transformations
Let S(f, Λ) be a family of time–frequency translates. Note that, up to unimodular constants, shifting every point of Λ by a fixed vector amounts to replacing f by a time–frequency translate of f, and such a replacement does not affect linear dependence. Therefore, the points of Λ can be shifted vertically or horizontally at the price of replacing f by a convenient time–frequency translate of f.
The same idea works for other transformations of the set Λ. These transformations correspond to the so-called symplectic group Sp(d). In our case, d = 1, this group coincides with the special linear group SL(2, R) (see [3], [4] or [8] for more details). In particular, any lattice in R² has the form αGZ² for some G ∈ Sp(1) and α > 0. This will allow us to reduce Theorem 2.2 to Theorem 2.3.
The symplectic group is generated by three different kinds of matrices: for r > 0,

A_r = ( 1/r 0 ; 0 r ),  B_r = ( 1 0 ; r 1 ),  J = ( 0 1 ; −1 0 )

(rows separated by semicolons). As in the case of the horizontal and vertical translations mentioned above, the linear dependence of the family S(f, Λ) is equivalent to the linear dependence of the transformed system. Note that, as before, if we modify the set Λ we have to modify the function too. In the first case, f is replaced by the dilation D_r f(x) = f(rx). In the second case it is replaced by C_r f(x) = e^{πirx²} f(x). Finally, in the third case, f is replaced by the Fourier transform f̂ of f. The equivalence between the linear dependence of these systems is not difficult to check. Indeed, applying the dilation operator D_r one checks that S(f, Λ) is linearly dependent if and only if S(D_r f, A_r(Λ)) is; a direct computation with the chirp C_r shows that S(f, Λ) is linearly dependent if and only if S(C_r f, B_r(Λ)) is linearly dependent. Finally, f̂ satisfies a similar equation as f, where the modulation parameters and the translation parameters interchange their roles, and the constants change only their argument. In consequence, S(f, Λ) is linearly dependent if and only if S(f̂, J(Λ)) is linearly dependent. It is very important to note that this last transformation is available only if f̂ is again a function either in L q (R) for some q < ∞ or in C 0 (R). This holds only if 1 ≤ p ≤ 2. For this reason, from now on we will assume that p belongs to this range.

Now, let us show how the symplectic transforms can be used to simplify the problem. Let g ∈ L p (R) for some p ∈ [1, 2], and let Λ = {(α_k, β_k)}_{k=0}^N be a finite set of points in the plane such that

Σ_{j=0}^N γ_j g_(α_j, β_j) = 0,

where the scalars γ_j satisfy γ_0 γ_N ≠ 0. Using the aforementioned symplectic transforms, the set of points Λ can be changed into a set of points Λ′ = {(a_k, b_k)}_{k=0}^N such that 0 = a_0 < · · · < a_N and b_0 = 0.
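The shear case of this correspondence can be verified pointwise. The convention B_r(a, b) = (a, b + ra) and the unimodular constant e^{πira²} in the sketch below are our explicit reading of the computation:

```python
import numpy as np

# Check of the shear case (conventions are ours: B_r(a, b) = (a, b + r*a)):
# with C_r f(x) = exp(pi*i*r*x^2) f(x) and f_(a,b)(x) = exp(2*pi*i*b*x) f(x - a),
# one has (C_r f)_(a, b + r*a) = exp(pi*i*r*a^2) * C_r( f_(a,b) ).
x = np.linspace(-5, 5, 1001)
f = lambda t: np.exp(-np.pi * t**2)                    # Gaussian window
a, b, r = 0.8, 1.5, 2.0

C_r = lambda g: np.exp(1j * np.pi * r * x**2) * g      # chirp multiplication
tf = lambda a_, b_: np.exp(2j * np.pi * b_ * x) * f(x - a_)   # f_(a,b) on the grid

lhs = np.exp(2j * np.pi * (b + r * a) * x) * np.exp(1j * np.pi * r * (x - a)**2) * f(x - a)
rhs = np.exp(1j * np.pi * r * a**2) * C_r(tf(a, b))
max_err = float(np.max(np.abs(lhs - rhs)))
```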
Indeed, suppose that the original points do not satisfy these conditions. Note that we can always assume that α_0 = β_0 = 0 and that the rest of the α_k and β_k are non-negative; otherwise, replacing g by a convenient time–frequency shift of it, we place the problem in this situation. If the new points still do not satisfy the above-mentioned conditions, we proceed in two steps. Firstly, we use the transformation B_r, for r > 0 large enough, in order to get a new set of time–frequency shifts whose projection onto the frequency component is injective. Secondly, we use the transform J to interchange the roles of translations and modulations. The new set of points satisfies the aforementioned properties.
For this new set of points there exists a function f such that

Σ_{k=0}^N c_k e^{2πib_k x} f(x − a_k) = 0,

where c_0 c_N ≠ 0. Moreover, f = 0 if and only if g = 0, where g is the original function in L p (R). The function f belongs to L p (R) provided we did not use J to transform Λ into Λ′. Indeed, if we use J for the set of points, then we need to apply the Fourier transform to the function, and the Fourier transform maps L p (R) into L p′ (R) and L 1 (R) into C 0 (R). So, in that case the function f will belong either to L p′ (R) or to C 0 (R).

The Reformulation
As we have mentioned in Sect. 3.1, Conjecture 2.1 for p ∈ [1, 2] has a positive answer if we prove that, given a finite set of points in the plane Λ = {(a_k, b_k)}_{k=0}^N such that 0 = a_0 < · · · < a_N and b_0 = 0, the equation

Σ_{k=0}^N c_k e^{2πib_k x} f(x − a_k) = 0   (3)

only admits the trivial solution in C 0 (R) and in L q (R) (1 ≤ q < ∞), provided the coefficients are not all equal to zero. Without loss of generality we can assume that c_0 = −1. Hence, f satisfies the following identity:

f(x) = Σ_{k=1}^N c_k e^{2πib_k x} f(x − a_k).   (4)

Equation (4) has a dual version, obtained by the change of variable u = x − a_N and some simple algebraic manipulations:

f(x) = Σ_{k=1}^N ĉ_k e^{2πib̂_k x} f(x + â_k).   (5)

Hence, a measurable function f on the real line satisfies (4) if and only if it satisfies (5). The coefficients in (5) are related to the coefficients in (4) in the following way:

â_k = a_N − a_{N−k},  b̂_k = b_{N−k} − b_N,  ĉ_k = −(c_{N−k}/c_N) e^{2πib̂_k a_N},   (6)

where a_0 = b_0 = 0 and c_0 = −1. Note that 0 < â_1 < · · · < â_N = a_N. Note also that the solutions of (4) and (5) are determined by their values on any interval of length a_N. Proving the conjecture amounts to proving that no nontrivial solution of these equations belongs to L q (R) or to C 0 (R). Equations (4) and (5) motivate the introduction of the following operators.
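The duality between (4) and (5) can be sanity-checked numerically. In the sketch below, the explicit relations â_k = a_N − a_{N−k}, b̂_k = b_{N−k} − b_N, ĉ_k = −(c_{N−k}/c_N) e^{2πib̂_k a_N} are our reading of (6): we prescribe arbitrary values of f at the relevant nodes, impose (4) at one point, and verify (5) at the shifted point.

```python
import numpy as np

# Sanity check (our reconstruction of the duality (4) <-> (5)):
# a_hat_k = a_N - a_{N-k}, b_hat_k = b_{N-k} - b_N,
# c_hat_k = -(c_{N-k}/c_N) * exp(2j*pi*b_hat_k*a_N), with a_0 = b_0 = 0, c_0 = -1.
rng = np.random.default_rng(1)
N = 4
a = np.concatenate(([0.0], np.sort(rng.uniform(0.5, 5.0, N))))   # 0 = a_0 < ... < a_N
b = np.concatenate(([0.0], rng.uniform(-2, 2, N)))               # b_0 = 0
c = np.concatenate(([-1.0 + 0j], rng.standard_normal(N) + 1j * rng.standard_normal(N)))

u = 0.37
# prescribe f at u + a_N - a_k for k = 1..N; (4) at x = u + a_N then fixes f(u + a_N)
v = np.empty(N + 1, dtype=complex)          # v[k] = f(u + a_N - a_k)
v[1:] = rng.standard_normal(N) + 1j * rng.standard_normal(N)
xN = u + a[N]
v[0] = sum(c[k] * np.exp(2j * np.pi * b[k] * xN) * v[k] for k in range(1, N + 1))

# dual coefficients and equation (5) evaluated at u; note f(u + a_hat_k) = v[N-k]
b_hat = np.array([b[N - k] - b[N] for k in range(N + 1)])
c_hat = np.array([-(c[N - k] / c[N]) * np.exp(2j * np.pi * b_hat[k] * a[N])
                  for k in range(N + 1)])
rhs = sum(c_hat[k] * np.exp(2j * np.pi * b_hat[k] * u) * v[N - k] for k in range(1, N + 1))
err = abs(v[N] - rhs)                        # v[N] = f(u)
```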

Definition 3.2
For any x ∈ R and 0 < ε ≤ min{a_1, â_1}, we define the operator R_ε(x) that, by means of (4), maps the restriction of f to the interval [x, x + a_N) to the restriction of f to [x + ε, x + ε + a_N); its inverse is given analogously by (5). For ε = min{a_1, â_1} and n ∈ Z, let f_n denote the restriction of f to the interval [nε, a_N + nε), and set R_n = R_ε(nε) and L_n = R_n^{−1}. Then, f satisfies (4) if and only if R_n(f_n) = f_{n+1} for every n or, iterating,

f_{n+m} = R_n R_{n+1} · · · R_{n+m−1} (f_n),  m ≥ 1.

The operator products here, as well as in the rest of this note, should be understood as an ordered product from the left to the right. Now note that ‖R_n‖, ‖L_n‖ ≤ M for some constant M, whence

‖f_{k+n}‖_p ≤ M^{|n|} ‖f_k‖_p,  k, n ∈ Z.
With this reformulation, one can recover previously known results regarding sufficient conditions on the decay of f at infinity (see [2]). Suppose that f ∈ L p (R) satisfies (4) and that ‖f_n‖_p decays fast enough as n → +∞ or as n → −∞ (see [2] for the precise conditions). Then f = 0. Indeed, assume that f satisfies the second condition, which with the above notation can be written as lim inf_{n→∞} M^n ‖f_{−n}‖_p = 0. Since ‖f_k‖_p ≤ M^{k+n} ‖f_{−n}‖_p for every k and every n ≥ −k, letting n → ∞ along a suitable subsequence gives f_k = 0 for every k, and hence f = 0 whenever (4) holds. The general case requires other techniques and a slightly faster decay (see [2] for more details).

Proof of Theorem 2.3
Now we will use the aforementioned reformulation to prove Theorem 2.3. Suppose that there exists a function f, in C 0 (R) or in some L q (R) with q ∈ [1, ∞), which satisfies

f(x) = Σ_{k=1}^N c_k e^{2πin_k x} f(x − αk).   (11)

Then it also satisfies the symmetric formula

f(x) = Σ_{k=1}^N ĉ_k e^{2πin̂_k x} f(x + αk),   (12)

where the coefficients ĉ_k and n̂_k are computed using (6). Note that, in particular, for every k, n̂_k = n_{N−k} − n_N ∈ Z.
Motivated by these two formulas, for each x ∈ R we define the linear operator M(x) : C^N → C^N whose matrix in the canonical basis is

M(x) = [ β_1(x) β_2(x) · · · β_{N−1}(x) β_N(x)
         1      0      · · ·    0        0
         0      1      · · ·    0        0
         ⋮                      ⋮        ⋮
         0      0      · · ·    1        0 ],

where β_k(x) = c_k e^{2πin_k x}. Since the n_k are integers, M(x) is 1-periodic. If for every x ∈ R we define

F(x) = ( f(x), f(x − α), . . . , f(x − (N − 1)α) ),   (13)

then Eq. (11) reads F(x) = M(x) F(x − α). The matrix of the inverse operator M(x)^{−1}, mapping F(x) to F(x − α), has an analogous structure, with last row given by the coefficients of the dual equation (12). By induction, for any k > 0 we get that

F(x + kα) = M(x + kα) · · · M(x + α) F(x),  F(x − kα) = M(x − (k − 1)α)^{−1} · · · M(x)^{−1} F(x).   (14)

If f ∈ L q (R), then for any integer m

Σ_{k∈Z} ∫_0^{mα} ‖F(x + kα)‖^q dx = m Σ_{j=0}^{N−1} ‖f(· − jα)‖_q^q = mN ‖f‖_q^q.

Choosing m = ⌊α^{−1}⌋ + 1, the minimal number of intervals of size α needed to cover the interval [0, 1], we get

Σ_{k∈Z} ∫_0^1 ‖F(x + kα)‖^q dx ≤ mN ‖f‖_q^q < ∞,

and we obtain that lim_{k→±∞} ‖F(x + kα)‖ = 0 for almost every x. In particular,

lim_{k→+∞} ‖M(x + kα) · · · M(x + α) F(x)‖ = lim_{k→+∞} ‖M(x − (k − 1)α)^{−1} · · · M(x)^{−1} F(x)‖ = 0   (16)

for almost every x. On the other hand, if f ∈ C 0 (R), then (16) also holds by (14). Actually, in this case it holds not only almost everywhere, but for every x ∈ R. From now on, the strategy is different depending on whether α is rational or not.
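A numerical sketch of this reformulation (the companion-matrix layout and all names below are our own reading of the text): we generate a solution of (11) along a lattice x_0 + αZ and check the relation F(x) = M(x) F(x − α) together with the 1-periodicity of M.

```python
import numpy as np

# Sketch of the companion-matrix reformulation (layout of M(x) is our reading):
# beta_k(x) = c_k exp(2*pi*i*n_k*x), F(x) = (f(x), f(x-alpha), ..., f(x-(N-1)alpha)),
# and Eq. (11) reads F(x) = M(x) F(x - alpha).
rng = np.random.default_rng(2)
N, alpha = 3, np.sqrt(2) - 1
c = 0.2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
n = np.array([2, -1, 3])                       # integer frequencies => M is 1-periodic

def M(x):
    beta = c * np.exp(2j * np.pi * n * x)      # first row: beta_1, ..., beta_N
    return np.vstack([beta.reshape(1, N), np.eye(N, dtype=complex)[:-1]])

# build a solution of f(x) = sum_k c_k exp(2*pi*i*n_k*x) f(x - alpha*k)
# along the lattice x0 + alpha*Z, starting from arbitrary seed values
x0 = 0.123
vals = {m: rng.standard_normal() + 1j * rng.standard_normal() for m in range(-N + 1, 1)}
for m in range(1, 13):
    x = x0 + m * alpha
    vals[m] = sum(c[k - 1] * np.exp(2j * np.pi * n[k - 1] * x) * vals[m - k]
                  for k in range(1, N + 1))

F = lambda m: np.array([vals[m - j] for j in range(N)])
x = x0 + 10 * alpha
err = float(np.max(np.abs(F(10) - M(x) @ F(9))))        # F(x) = M(x) F(x - alpha)
period_err = float(np.max(np.abs(M(0.4) - M(1.4))))     # 1-periodicity of M
```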

The Rational Case
Assume that α = a/b ∈ Q, with a and b coprime positive integers. Since M is 1-periodic and bα = a ∈ Z, the sequence of matrices M(x + kα) is b-periodic in k; hence the products appearing in (16) are, up to a bounded number of factors, powers of the single invertible matrix T(x) = M(x + bα) · · · M(x + α). In conclusion, if f ∈ L q (R) satisfies (11) and F is defined by (13), then for almost every x

lim_{n→∞} ‖T(x)^n F(x)‖ = lim_{n→∞} ‖T(x)^{−n} F(x)‖ = 0.

On the other hand, if f ∈ C 0 (R), then the same holds for every x as a consequence of (14). The following simple lemma shows that this is possible only if F(x) = 0, which proves Theorem 2.3 in the case α ∈ Q.

Lemma 3.3 Let T be an invertible operator on a finite-dimensional normed space, and let v be a vector such that

lim_{n→±∞} ‖T^n v‖ = 0.   (19)

Then v = 0.
Proof Suppose that (19) holds for some non-zero vector v ∈ C^N. Then it also holds for every element of the subspace S generated by {T^n v : n ∈ Z}. Since this subspace is T-invariant, T induces an invertible operator on S, which we denote by T_S. Note that

lim_{n→±∞} ‖T_S^n w‖ = 0 for every w ∈ S.   (20)

Let λ be an eigenvalue of T_S. Note that λ ≠ 0 because T_S is invertible. Now, take a unit vector v_λ ∈ S such that T_S v_λ = λ v_λ. Then ‖T_S^n v_λ‖ = |λ|^n for every n ∈ Z, which clearly does not satisfy (20): if |λ| < 1 the norms blow up as n → −∞, and if |λ| ≥ 1 they do not tend to zero as n → +∞.
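The eigenvalue argument in the proof can be illustrated numerically (our own toy example): along an eigenvector, the forward and backward norms multiply to a constant, so they cannot both tend to zero.

```python
import numpy as np

# Illustration of Lemma 3.3 along an eigenvector: ||T^n v|| = |lambda|^n ||v||,
# so ||T^n v|| * ||T^-n v|| stays constant and the two norms cannot both tend to 0.
lam = 0.5 + 0.3j                                   # |lam| < 1
T = np.diag([lam, 2.0, 1.0 + 1.0j])
Tinv = np.linalg.inv(T)
v = np.array([1.0, 0.0, 0.0], dtype=complex)       # unit eigenvector for lam

fwd = [float(np.linalg.norm(np.linalg.matrix_power(T, k) @ v)) for k in range(1, 15)]
bwd = [float(np.linalg.norm(np.linalg.matrix_power(Tinv, k) @ v)) for k in range(1, 15)]
products = [p * q for p, q in zip(fwd, bwd)]       # = ||v||^2 = 1 for every k
```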
The Irrational Case

Now, we assume that α ∉ Q. There are two key facts in the argument for the irrational case. Firstly, the matrix-valued function M is 1-periodic, i.e., M(x + 1) = M(x). Secondly, for almost every x ∈ R, the matrix M(x) is invertible. The periodicity of M reduces the problem to the torus, which we will identify with the interval [0, 1). Consider the map τ_α : [0, 1) → [0, 1) defined by τ_α(x) = {x + α}, where {·} denotes the fractional part function. This map is ergodic. The main idea of the proof is that in (16) the products inside the norms are, in some sense, inverses of one another, so that they cannot be small simultaneously.
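The ergodicity of τ_α can be illustrated by the equidistribution of its orbits (our own sketch; the choice α = √2 − 1 is arbitrary):

```python
import numpy as np

# The map tau_alpha(x) = {x + alpha} with irrational alpha is ergodic; its
# orbits equidistribute, so visit frequencies converge to the measure of the set.
alpha = np.sqrt(2) - 1                                      # irrational rotation number
n_steps = 200000
orbit = (0.123 + alpha * np.arange(1, n_steps + 1)) % 1.0   # tau_alpha iterated
freq = float(np.mean(orbit < 0.5))                          # visit frequency of [0, 1/2)
```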
For every x ∈ [0, 1), we define the following subspace of C^N:

L_x = { v ∈ C^N : lim_{k→+∞} ‖M(x + kα) · · · M(x + α) v‖ = 0 }.

Using the fact that τ_α is ergodic and that the operators M(x) are invertible, we get the following result: the function x ↦ dim L_x is measurable and invariant under τ_α, so, by ergodicity, among the sets Ω_d = {x ∈ [0, 1) : dim L_x = d} there exists exactly one d such that |Ω_d| = 1. Our assumption on the existence of a non-zero f satisfying (11) implies that (16) holds (at least) almost everywhere. So, for almost every x the dimension of L_x is positive, and hence d ≥ 1. From now on, Ω denotes the set Ω_d of full measure.
Before going on, we will introduce some notation. For each x ∈ Ω, we will identify L_x with C^d, and S^d will denote the unit sphere of C^d with respect to the p-norm, i.e., S^d = {v ∈ C^d : ‖v‖_p = 1}.
We denote by S the (measurable) vector bundle Ω × C^d, and by S_1 = Ω × S^d. On S_1 we will consider the probability measure given by the product of the Lebesgue measure on Ω and the rotation-invariant measure on S^d.
Consider a set Ω_0 as in the previous lemma, so that, in particular, |Ω_0| ≥ 3/4. Then there exists n ≥ 1 for which the corresponding products in (16) are uniformly small on Ω_0; fix this n ≥ 1. Since τ_α is measure preserving, |τ_α^n(Ω_0)| = |Ω_0| ≥ 3/4. Therefore

|Ω_0 ∩ τ_α^n(Ω_0)| ≥ |Ω_0| + |τ_α^n(Ω_0)| − 1 ≥ 1/2 > 0.

So, there exists x ∈ Ω_0 such that τ_α^n(x) also belongs to Ω_0. In consequence, for every v, w ∈ S the forward and backward products of length n at x are simultaneously small, which is impossible since these products are inverses of one another. This contradiction completes the proof of Theorem 2.3.
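The measure-theoretic step above rests on the elementary bound |A ∩ B| ≥ |A| + |B| − 1 for subsets of [0, 1); a discretized illustration (ours, with an arbitrary rotation):

```python
import numpy as np

# Overlap bound used in the final step: subsets of [0, 1) with |A|, |B| >= 3/4
# satisfy |A ∩ B| >= |A| + |B| - 1 >= 1/2 > 0, so they must share points.
grid = np.linspace(0.0, 1.0, 100000, endpoint=False)
A = grid < 0.75                                    # measure 3/4
B = ((grid + 0.4) % 1.0) < 0.75                    # "rotated" copy, measure 3/4
overlap = float(np.mean(A & B))                    # discretized measure of A ∩ B
```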