On a generalized entropic uncertainty relation in the case of the qubit

We revisit generalized entropic formulations of the uncertainty principle for an arbitrary pair of quantum observables in two-dimensional Hilbert space. The Rényi entropy is used as the uncertainty measure associated with the probability distributions corresponding to the outcomes of the observables. We derive a general expression for the tight lower bound of the sum of Rényi entropies for any couple of (positive) entropic indices (α, β). Thus, we overcome the Hölder conjugacy constraint imposed on the entropic indices by the Riesz–Thorin theorem. In addition, we present an analytical expression for the tight bound inside the square [0, 1/2] × [0, 1/2] in the α–β plane, and a semi-analytical expression on the line β = α. It is seen that previous results are included as particular cases. Moreover, we present an analytical but suboptimal bound for any couple of indices. In all cases, we provide the minimizing states.


Introduction
The uncertainty principle (UP) is a fundamental concept of physics that states the impossibility of predicting, simultaneously and with absolute certainty, the outcomes of measurements for a pair of noncommuting quantum observables. In its primary quantitative formulation, the principle is expressed by the existence of a nontrivial lower bound for the product of the variances of the operators [1,2,3]. However, such formulations are not always adequate, for various reasons. For example, there exist variables with infinite variance [4], so that the second-order moment is not always convenient for describing the dispersion of a random variable. Moreover, in the case of discrete-spectrum observables, there is no universal nontrivial lower bound, and thus Heisenberg-like inequalities do not quantify the UP [5,6,7]. In order to overcome the potential inadequacy of the variance-based expression of the UP, many formulations based on other measures of dispersion have been proposed, for instance measures issued from information theory [8,9,10]. The pioneering works of Hirschman [11], Bialynicki-Birula and Mycielski [12], and Maassen and Uffink [13] have given rise to many versions based on generalized information entropies (or entropic moments) [14,15,16,17,18,19,20,21,22,23], on Fisher information [24,25,26], or on moments of various orders [27]. Recently, generalized versions of entropic and support inequalities have been proposed in the context of variables described by frames instead of bases [28]. In this paper, we focus on the Rényi-entropy formulation of the UP in the case of discrete-spectrum operators. Specifically, we search for (tight) lower bounds for the sum of Rényi entropies associated with the outcomes of a pair of observables. In the majority of previous related studies, the entropic indices corresponding to the two observables are considered to be conjugated in the sense of Hölder, since the proofs make use of the Riesz–Thorin or Hausdorff–Young theorems.
Extensions to nonconjugated indices exist, based on the decreasing property of the Rényi entropy versus its index, leading then to suboptimal bounds [20,28]. These bounds have been refined in the case of 2-level systems (or qubits) when the entropic indices coincide and take the value 1/2 [29] or 2 [30,31]. Here we extend these results beyond the scope of the Riesz–Thorin theorem, allowing for arbitrary couples of indices. We provide a semi-analytical treatment of the problem and find significant, nontrivial inequalities expressing the UP for qubits. Moreover, we supply the minimizing states for the uncertainty relations established. The paper is organized as follows. In Sec. 2, we begin with basic definitions and notation, and summarize known results concerning generalized entropic uncertainty relations for N-level systems. In Sec. 3 we state the problem for qubits and present our major results. A discussion is provided in Sec. 4. The proofs of our results are given in the appendices.
2 Statement of the problem: notation and previous results

We consider pairs of quantum observables, say A and B, with discrete spectra on an N-dimensional Hilbert space H. Pure states |Ψ⟩ ∈ H can be expanded onto the corresponding orthonormal eigenbases {|a_k⟩}_{k=1}^N and {|b_l⟩}_{l=1}^N. In order to fix the notation, we write
$$|\Psi\rangle = \sum_{k=1}^N \psi_k |a_k\rangle = \sum_{l=1}^N \widetilde{\psi}_l |b_l\rangle$$
where the ψ_k and ψ̃_l are complex coefficients, which we arrange in column vectors ψ = [ψ_1 ⋯ ψ_N]^t and ψ̃ = [ψ̃_1 ⋯ ψ̃_N]^t. From the orthonormality of the bases, one has
$$\widetilde{\psi} = T\,\psi, \qquad T_{lk} = \langle b_l | a_k \rangle, \qquad (1)$$
T being an N × N unitary matrix. Vectors ψ and ψ̃ are such that ‖ψ‖₂² = Σ_k |ψ_k|² = 1 and similarly ‖ψ̃‖₂² = 1. The vector with components |ψ_k|² = |⟨a_k|Ψ⟩|² is interpreted as a probability vector, where |ψ_k|² represents the probability of measuring the eigenvalue a_k as outcome for observable A when the quantum system is in the state |Ψ⟩ [resp. |ψ̃_l|² = |⟨b_l|Ψ⟩|² for the eigenvalue b_l of observable B].

We are interested in uncertainty relations concerning simultaneous observations of two magnitudes, particularly their statement through the use of information-theoretic quantities [10]. The measure of ignorance, or lack of information, that we employ is the Rényi entropy of a probability set p = {p_k : p_k ≥ 0, Σ_{k=1}^N p_k = 1}, defined as [9]
$$H_\lambda(p) = \frac{1}{1-\lambda}\,\log \sum_{k=1}^N p_k^{\lambda} \qquad (2)$$
where λ ≥ 0 (λ ≠ 1) is the entropic index and log stands for the natural logarithm. The limiting case λ → 1 is well defined and gives the Shannon entropy H_1(p) ≡ H(p) = −Σ_k p_k log p_k. The index λ plays the role of a magnifying glass, in the following sense: when λ < 1, the contributions of the different terms in the sum (2) become more uniform with respect to the case λ = 1; conversely, when λ > 1, the leading probabilities of the distribution are stressed in the summation. Indeed, in the extreme case λ = 0, H_0 is simply a measure of the cardinality of the support of the probability set, regardless of the values of the (nonzero) probabilities; this measure is closely linked to the L_0 norm which is relevant in signal processing [28,32,33].
At the opposite extreme, H_∞ = −log(max_k p_k) takes into account only the maximum probability of the set, and is known as the min-entropy due to the nonincreasing property of H_λ versus λ for a given probability distribution. Another relevant property is that the Rényi entropy H_λ is concave for λ ∈ [0, 1], or even for λ ∈ [0, λ*(N)], where the upper limit depends on the cardinality of the probability set [e.g. λ*(2) = 2] [34, p. 57]. Rényi entropies appear naturally in several contexts, such as signal processing (Chernoff bound, Panter–Dite formula) [10, 35, 36, 37, and refs. therein], multifractal analysis [38,39,40], or quantum physics (collision entropy, purity, informational energy, Gini–Simpson index, index of coincidence, repeat rate) [23, 40, 41, 42, 43, and refs. therein]; see also [22] for a recent survey.
One can easily verify that Rényi entropies are positive and that in the N-states case they are upper-bounded by log N: 0 ≤ H_λ(p) ≤ log N. The lower bound is achieved when the probability distribution is a Kronecker delta, p_k = δ_{k,i} for some i, and the upper bound corresponds to the uniform distribution, p_k = 1/N.
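These properties are easy to check numerically. The following sketch is illustrative only (the function name is ours, not the paper's); it evaluates H_λ with the natural logarithm and verifies the extremal bounds as well as the nonincreasing behaviour in λ:

```python
import numpy as np

def renyi_entropy(p, lam):
    """Renyi entropy H_lam(p) with natural logarithm; lam = 1 is the Shannon limit."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention: zero probabilities do not contribute
    if np.isclose(lam, 1.0):
        return float(-np.sum(p * np.log(p)))        # Shannon entropy
    return float(np.log(np.sum(p ** lam)) / (1.0 - lam))

# bounds 0 <= H_lam(p) <= log N on an N-point distribution
N = 4
delta = [1.0, 0.0, 0.0, 0.0]           # Kronecker-delta distribution
uniform = [1.0 / N] * N
print(renyi_entropy(delta, 0.7))       # 0.0
print(renyi_entropy(uniform, 0.7))     # log 4 ~ 1.3863

# H_lam is nonincreasing in lam; for large lam it approaches the
# min-entropy H_inf = -log(max_k p_k)
p = [0.7, 0.2, 0.1]
hs = [renyi_entropy(p, lam) for lam in (0.5, 1.0, 2.0, 50.0)]
print(all(h1 >= h2 for h1, h2 in zip(hs, hs[1:])))   # True
```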
In this contribution we will consider the Rényi entropies of the probability sets |ψ|² ≡ {|ψ_k|²} and |ψ̃|² ≡ {|ψ̃_l|²}, associated with the measurement of observables A and B respectively. Our goal is to find uncertainty relations of the type
$$H_\alpha(|\psi|^2) + H_\beta(|\widetilde{\psi}|^2) \;\ge\; \mathcal{B}_{\alpha,\beta;N} \qquad (3)$$
for any couple of (positive) entropic indices (α, β), where the bound B_{α,β;N} should be nontrivial, i.e. nonzero, and universal in the sense of being independent of the state |Ψ⟩ of the quantum system. By definition, the tightest bound is obtained by minimization of the left-hand side over the states:
$$\mathcal{B}_{\alpha,\beta;N} = \min_{|\Psi\rangle} \left[ H_\alpha(|\psi|^2) + H_\beta(|\widetilde{\psi}|^2) \right]. \qquad (4)$$
It comes out that the tight bound B_{α,β;N} depends on the transformation matrix T only through the so-called overlap (or coherence) between the eigenbases, given by
$$c = \max_{l,k} |T_{lk}| = \max_{l,k} |\langle b_l | a_k \rangle|.$$
From the unitarity of the matrix T, the overlap is in the range c ∈ [1/√N, 1]. The case c = 1/√N corresponds to observables A and B being complementary, meaning that maximum certainty in the measurement of one of them implies maximum ignorance about the other, while c = 1 corresponds to a pair of commuting observables. The problem has been addressed in various contexts, and in some cases numerical and/or analytical bounds have been found. Most results correspond to conjugated indices (in the sense of Hölder, i.e. 1/(2α) + 1/(2β) = 1), as they are based on the Riesz–Thorin theorem [44]; however, there exist few results for nonconjugated indices. We summarize results available in the literature: • For β ≤ α/(2α−1), N-level systems, and c = 1/√N (complementary observables): B_{α,β;N} = log N is the tight bound [18,20].
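As an illustration of the overlap (names and the sampling scheme below are ours, not the paper's), c can be computed directly from the transformation matrix: the Hadamard matrix gives the complementary value c = 1/√2, the identity gives c = 1 (commuting observables), and any 2×2 unitary falls in between:

```python
import numpy as np

def overlap(T):
    """Overlap c = max_{l,k} |T_lk| between eigenbases related by the unitary T."""
    T = np.asarray(T, dtype=complex)
    n = T.shape[0]
    assert np.allclose(T.conj().T @ T, np.eye(n))   # unitarity check
    return float(np.max(np.abs(T)))

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: complementary observables
print(overlap(H2))                              # 1/sqrt(2) ~ 0.7071
print(overlap(np.eye(2)))                       # 1.0: commuting observables

# for random 2x2 unitaries (QR of a complex Gaussian matrix), c stays in [1/sqrt(2), 1]
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, _ = np.linalg.qr(A)
    assert 1 / np.sqrt(2) - 1e-12 <= overlap(Q) <= 1 + 1e-12
print("all sampled overlaps lie in [1/sqrt(2), 1]")
```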

General Rényi entropic uncertainty relations for qubits
In this contribution we deal with the problem of generalizing the last three developments summarized in the preceding section, i.e. for qubits (N = 2) and any overlap c, to the case of arbitrary Rényi-entropy indices (α, β) to measure uncertainty. We seek the minimum of the entropies' sum in this general situation and study, as well, the states that saturate the bound. Our main results are given by the following propositions:

Proposition 1. Let us consider a pair of quantum observables A and B acting on a two-dimensional Hilbert space, and the corresponding eigenbases {|a_1⟩, |a_2⟩} and {|b_1⟩, |b_2⟩}. Consider a quantum system in the qubit pure state |Ψ⟩ described by the projections ψ = [ψ_1 ψ_2]^t or ψ̃ = [ψ̃_1 ψ̃_2]^t = Tψ on those bases respectively, where T_{lk} = ⟨b_l|a_k⟩ for k, l = 1, 2. Then, for any couple of Rényi entropic indices (α, β) ∈ R²₊, the following uncertainty relation holds:
$$H_\alpha(|\psi|^2) + H_\beta(|\widetilde{\psi}|^2) \;\ge\; \mathcal{B}_{\alpha,\beta;2}(c), \qquad (5)$$
where the tight lower bound for the sum of Rényi entropies is obtained as
$$\mathcal{B}_{\alpha,\beta;2}(c) = \min_{\theta\in[0,\gamma]} \Big[ H_\alpha\big(\{\cos^2\theta,\,\sin^2\theta\}\big) + H_\beta\big(\{\cos^2(\gamma-\theta),\,\sin^2(\gamma-\theta)\}\big) \Big], \qquad \gamma = \arccos c. \qquad (6)$$
Furthermore, for any pair of two-dimensional observables we advance the minimizing solution.
Proposition 2. Under the conditions of Proposition 1, let us parameterize the matrix T in the form [49,50]
$$T = \Phi(u)\, V(\gamma_T)\, \Phi(v), \qquad V(\gamma_T) = \begin{bmatrix} \cos\gamma_T & \sin\gamma_T \\ -\sin\gamma_T & \cos\gamma_T \end{bmatrix}, \qquad \Phi(x) = \exp\!\big(\imath\,\mathrm{diag}(x)\big),$$
in terms of γ_T ∈ [0, π/2] and the 2D phase vectors u, v. Denote by {θ_opt^{(i)}}_{i∈I} the set of arguments that minimize the expression in Eq. (6), where I lists all the different possible solutions. Then the bound is achieved (up to a global phase factor) for the qubits whose projections onto the A-eigenbasis are
$$\psi_{\mathrm{opt}}^{(i,\varphi,n)} = e^{\imath\varphi}\, \Phi(-v)\, s\!\left( \mathrm{sign}\!\left(\tfrac{\pi}{4} - \gamma_T\right) \theta_{\mathrm{opt}}^{(i)} + n\,\tfrac{\pi}{2} \right), \qquad \varphi \in [0, 2\pi), \quad n = 0, 1, \qquad (9)$$
with s(θ) = [cos θ  sin θ]^t.

We now concentrate on discussing some consequences of our approach, and postpone the proofs of the propositions to the appendices. To start with, we make a connection with the so-called Landau–Pollak uncertainty inequality [51]. Although our proofs do not rely on this uncertainty relation, we can link both results a posteriori when the inequalities are saturated. For that purpose, let us introduce the probability vectors P_A and P_B respectively issued from the optimal states ψ_opt^{(i,φ,n)} and ψ̃_opt^{(i,φ,n)}. A rapid inspection of the different cases for γ_T and n allows us to obtain
$$\arccos\sqrt{\max_{k=1,2} P_{A,k}} \;+\; \arccos\sqrt{\max_{l=1,2} P_{B,l}} \;=\; \arccos c,$$
where arccos c = γ = min(γ_T, π/2 − γ_T) = π/4 − |π/4 − γ_T| ∈ [0, π/4]. This corresponds precisely to the equality case in the Landau–Pollak relation. This relation is explicitly used by Maassen and Uffink [13], and by de Vicente and Sanchez-Ruiz [46], to obtain their respective inequalities. We mention also that in the general case of arbitrary Rényi-entropy indices, the bound B_{α,β;2} of Eq. (6) has to be sought numerically. However, for indices in some regions of the α–β plane, we are able to obtain analytical or semi-analytical results. These are presented in the following corollaries.
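Since Eq. (6) reduces the tight bound to a one-dimensional minimization, it can be evaluated numerically by a simple grid search over θ ∈ [0, γ]. The following sketch is illustrative only (function names are ours, and a crude grid stands in for a proper minimizer); it recovers the bound log 2 for complementary observables:

```python
import numpy as np

def renyi_entropy(p, lam):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(lam, 1.0):
        return float(-np.sum(p * np.log(p)))        # Shannon limit
    return float(np.log(np.sum(p ** lam)) / (1.0 - lam))

def entropy_sum(theta, gamma, alpha, beta):
    # reduced form of H_alpha(|psi|^2) + H_beta(|psi~|^2) after the symmetry
    # reductions of Appendix A: both distributions depend on a single angle
    pA = [np.cos(theta) ** 2, np.sin(theta) ** 2]
    pB = [np.cos(gamma - theta) ** 2, np.sin(gamma - theta) ** 2]
    return renyi_entropy(pA, alpha) + renyi_entropy(pB, beta)

def tight_bound(c, alpha, beta, num=4001):
    """Numerical evaluation of B_{alpha,beta;2}(c) by a grid search on [0, gamma]."""
    gamma = np.arccos(c)
    return min(entropy_sum(t, gamma, alpha, beta)
               for t in np.linspace(0.0, gamma, num))

# complementary observables (c = 1/sqrt(2)): the tight bound log 2 is recovered
print(tight_bound(2 ** -0.5, 0.5, 0.5))   # ~ 0.6931
# alpha = beta = 2, c = 1/sqrt(2): minimum at theta = gamma/2, value 2*log(4/3)
print(tight_bound(2 ** -0.5, 2.0, 2.0))   # ~ 0.5754
```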
Corollary 1. In the context of Propositions 1 and 2, if the entropic indices lie in the square (α, β) ∈ [0, 1/2]², there exists an analytical expression for the bound:
$$\mathcal{B}_{\alpha,\beta;2}(c) = H_{\mu}\big(\{c^2, 1-c^2\}\big) = \frac{1}{1-\mu}\,\log\!\left( c^{2\mu} + (1-c^2)^{\mu} \right), \qquad \mu = \max(\alpha, \beta).$$
Moreover, the wave-vectors that saturate the inequality correspond to θ_opt = γ when α > β, to θ_opt = 0 when α < β, and to both angles when α = β.

First, one can observe a transition in terms of the entropic indices at α = β, since only in this situation do both angles lead to wave-functions that saturate the inequality. We notice that Corollary 1 includes some of the situations discussed at the end of Sec. 2 as particular cases. On the one hand, when c is fixed to 1/√2, the optimal bound log 2 of Refs. [18,20] is recovered, and if α = β this bound coincides with that given in Ref. [31]. On the other hand, when c is unrestricted and α = β = 1/2, one recovers the bound obtained in Ref. [29]. We stress that we have proven these results analytically, extending their scope to any c and to any couple (α, β) in the square [0, 1/2]².
On the line β = α, we obtain a semi-analytical result as follows:

Corollary 2. In the context of Propositions 1 and 2, if the entropic indices are equal (β = α), the bound can be expressed as
$$\mathcal{B}_{\alpha,\alpha;2}(c) = \begin{cases} H_\alpha\big(\{c^2, 1-c^2\}\big) & \text{for } \alpha \le \alpha_\star(c), \\[4pt] \displaystyle\min_{\theta\in[0,\gamma/2]} \frac{\log D_\alpha(\theta) + \log D_\alpha(\gamma-\theta)}{1-\alpha} & \text{for } \alpha_\star(c) < \alpha < \alpha^\dagger(c), \\[4pt] 2\, H_\alpha\big(\{\cos^2(\gamma/2), \sin^2(\gamma/2)\}\big) & \text{for } \alpha \ge \alpha^\dagger(c), \end{cases} \qquad (13)$$
where D_α(θ) = cos^{2α}θ + sin^{2α}θ, γ = arccos c, and where the indices α_⋆(c) and α†(c) are given in Fig. 1 and Table 1; for c = 1/√2 the first expression equals log 2. Moreover, the bound is achieved for θ_opt ∈ {0, γ} in the first regime, for the (unique, numerical) solution θ_opt of the minimization in the second regime, and for θ_opt = γ/2 in the last regime (where the two solutions θ_opt and γ − θ_opt reduce to only one).
From this corollary one can observe the following facts:
• When c = 1/√2, one has α_⋆ = α†. Thus, the second expression in (13) reduces to the first one, leading to a transition in the value of the bound at α = α†. This can also be seen from the minimizers, since the optimal values are θ_opt = 0, θ_opt = γ/2, or both, depending on whether α is smaller than, larger than, or equal to α†. These observations are in concordance with the results in [31]. Besides, the value α† at the transition had already been observed implicitly in [6], as the index for which the second derivative with respect to θ of [log D_α(θ) + log D_α(γ−θ)]/(1−α) vanishes.
• When 1/√2 < c < 1, α_⋆ decreases from α† to 0.5, and the first and last expressions in (13) tend to 0 as c → 1 (see Fig. 2(a)). Moreover, in the intermediate regime, the optimal angle θ_opt increases continuously from 0 to γ/2 (see Fig. 2(b)). In this context, there is no transition in the value of the bound. Moreover, α_⋆ is not given in general as the index for which the second derivative of [log D_α(θ) + log D_α(γ−θ)]/(1−α) at θ = γ/2 vanishes: the reasoning given in [6] does not hold in this case.
We can observe that some situations discussed at the end of Sec. 2 are included in this corollary as particular cases. On the one hand, for α → 1, the de Vicente–Sanchez-Ruiz bound [46] is recovered. Therefore, it is optimal for qubit systems, although it was calculated treating ψ and ψ̃ separately, without taking into account the relation between them except through the Landau–Pollak inequality. On the other hand, for α = 2, one recovers the tight bound obtained by Bosyk et al. [47]. Therefore, we extend previous results along the whole line β = α, giving a semi-analytical expression for the bound. Finally, using the fact that H_λ decreases with λ [10,13,44], we obtain:

Corollary 3. In the context of Proposition 1, for any couple (α, β) ∈ R²₊, the entropies sum is lower bounded by
$$H_\alpha(|\psi|^2) + H_\beta(|\widetilde{\psi}|^2) \;\ge\; \mathcal{B}_{\lambda,\lambda;2}(c), \qquad \lambda = \max(\alpha, \beta), \qquad (14)$$
where B_{λ,λ;2}(c) is given in Eq. (13).
This bound is clearly suboptimal, as can be seen for example in the case (α, β) ∈ [0, 1/2]².
In Fig. 3, we represent schematically the scope of Proposition 1 and of Corollaries 2 and 3, by the shaded region, the solid line, and the dotted lines in the α–β plane, respectively. Note that, starting from α → ∞ and appealing to the decreasing property of the entropy versus the index, we recover the relation
$$H_\alpha(|\psi|^2) + H_\beta(|\widetilde{\psi}|^2) \;\ge\; -2\,\log\frac{1+c}{2}, \qquad (15)$$
which is precisely the bound obtained by Deutsch [14] in the context of Shannon entropies, or the one given by Maassen and Uffink for any couple of indices (before being refined in the same article) [13].
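The hierarchy between the tight bound, the Corollary 3 relaxation, and the bound (15) can be checked numerically. This sketch is illustrative (names are ours, and a grid search stands in for an exact minimization); it verifies that neither suboptimal bound ever exceeds the tight one:

```python
import numpy as np

def renyi_entropy(p, lam):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(lam, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** lam)) / (1.0 - lam))

def tight_bound(c, alpha, beta, num=4001):
    """Grid-search evaluation of the tight bound B_{alpha,beta;2}(c) of Eq. (6)."""
    gamma = np.arccos(c)
    def f(t):
        pA = [np.cos(t) ** 2, np.sin(t) ** 2]
        pB = [np.cos(gamma - t) ** 2, np.sin(gamma - t) ** 2]
        return renyi_entropy(pA, alpha) + renyi_entropy(pB, beta)
    return min(f(t) for t in np.linspace(0.0, gamma, num))

def deutsch_bound(c):
    """State-independent bound -2 log((1+c)/2), valid for every couple (alpha, beta)."""
    return -2.0 * np.log((1.0 + c) / 2.0)

for c in (2 ** -0.5, 0.8, 0.95):
    for a, b in ((0.5, 0.5), (1.0, 1.0), (2.0, 3.0)):
        lam = max(a, b)
        # Corollary 3: replacing both indices by max(alpha, beta) only lowers the bound
        assert tight_bound(c, a, b) >= tight_bound(c, lam, lam) - 1e-9
        # Eq. (15): the Deutsch/Maassen-Uffink value never exceeds the tight bound
        assert tight_bound(c, a, b) >= deutsch_bound(c) - 1e-9
print("suboptimal bounds never exceed the tight one")
```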

Discussion
For pure states of qubit systems, we obtain the most general entropic formulation of the uncertainty principle in terms of the sum of Rényi entropies associated with any given pair of quantum observables, namely an inequality of the form H_α(|ψ|²) + H_β(|ψ̃|²) ≥ B_{α,β;2}(c), where c is the overlap of the transformation between the eigenspaces of the observables. Our derivation focuses on obtaining the minimum of the entropies sum, and we do not use the Riesz–Thorin theorem, in contrast to many results in the literature. In this way, we avoid the Hölder conjugacy constraint on the indices α and β, and our bound is tight and valid for any couple of indices. Indeed, the bound obtained is universal in the sense that it does not depend on the state of the quantum system. This is the main result of the paper, given in Propositions 1 and 2. Unfortunately, we do not always obtain an analytical expression for the bound. However, in some domains of the plane (α, β), the bound does take an analytical or semi-analytical expression. Indeed, in Corollary 1 we present an analytical expression for the tight bound in the square (α, β) ∈ [0, 1/2]², whereas in Corollary 2 we show a semi-analytical expression for the tight bound on the line β = α. Accordingly, we recover many bounds derived in the literature for particular points of the plane (α, β). Moreover, using the nonincreasing property of the Rényi entropy versus the entropic index, an analytical bound is obtained, this last one being suboptimal (Corollary 3). The same property also allows one to recover the suboptimal bound primarily derived by Maassen and Uffink. For mixed states, it is easy to extend the validity of Proposition 1 and Corollaries 1 and 2 within the domain (α, β) ∈ [0, 2]², using the concavity property of the Rényi entropy.
Indeed, for N-level systems, if one has a universal relation H_α(|ψ|²) + H_β(|ψ̃|²) ≥ B_{α,β;N} satisfied for any pure state, with (α, β) ∈ [0, λ*(N)]², then for a mixed state ρ = Σ_m μ_m |Ψ^(m)⟩⟨Ψ^(m)| the concavity of the Rényi entropies in this index range gives
$$H_\alpha\Big(\sum_m \mu_m |\psi^{(m)}|^2\Big) + H_\beta\Big(\sum_m \mu_m |\widetilde{\psi}^{(m)}|^2\Big) \;\ge\; \sum_m \mu_m \Big[ H_\alpha\big(|\psi^{(m)}|^2\big) + H_\beta\big(|\widetilde{\psi}^{(m)}|^2\big) \Big] \;\ge\; \mathcal{B}_{\alpha,\beta;N}.$$
In other words, any uncertainty formulation valid for pure states in the domain (α, β) ∈ [0, λ*(N)]² remains valid for mixed states. It remains to be studied how to overcome this constraint on the domain of entropic indices imposed by the concavity property, as well as the generalization of these results to N-level systems.
Acknowledgments: SZ is grateful to the Région Rhône-Alpes (France) for the grant that enabled this work. GMB and MP acknowledge financial support from CONICET and ANPCyT (Argentina).

A Proof of Proposition 1 and Corollaries 1 and 2

A.1 Simplification of the problem
Since ‖ψ‖₂ = 1, such a vector can be written under the form
$$\psi = \Phi(\phi)\, s, \qquad s \in S^{(1)},$$
where S^{(1)} is the unit sphere of R² (i.e. the unit circle) and where the matrix Φ is diagonal and writes Φ(φ) = exp(ı diag(φ)). We parameterize the unitary matrix T as the product of three unitary matrices (see [50, Eqs. (1)–(19)] or [49, Th. 1]),
$$T = \Phi(u)\, V(\gamma_T)\, \Phi(v), \qquad V(\gamma_T) = \begin{bmatrix} \cos\gamma_T & \sin\gamma_T \\ -\sin\gamma_T & \cos\gamma_T \end{bmatrix}, \qquad \gamma_T \in \left[0, \tfrac{\pi}{2}\right], \qquad (18)$$
(the other possible angles can be taken into account by playing with the phases u and v). Then, from the relation (1) between ψ and ψ̃ and from the form (18) of T, one obtains
$$\widetilde{\psi} = \Phi(u)\, V(\gamma_T)\, \Phi(v)\, \Phi(\phi)\, s.$$
Note first that the overlap does not depend on the phases, namely c = max_{k,l} |T_{lk}| = max_{k,l} |V_{lk}(γ_T)| = max(cos γ_T, sin γ_T). The goal is then to solve the minimization problem
$$\mathcal{B}_{\alpha,\beta;2} = \min_{\phi,\, s} \left[ H_\alpha(|\psi|^2) + H_\beta(|\widetilde{\psi}|^2) \right]. \qquad (21)$$
The problem simplifies due to numerous invariances and symmetries.
• Invariance under a phase shift applied to the wavevector (multiplication by a matrix Φ): the probability set |ψ̃|² is insensitive to the left factor Φ(u), while the phases v of the right factor can be absorbed into φ by the change φ → φ − v. At this step, one can notice that the bound depends only on γ_T.
• Additional invariance under a permutation of the components: the entropy does not depend on the order of the components; thus, playing with the phases, one sees that the bound obtained for γ_T coincides with that obtained for π/2 − γ_T, and the minimization problem (21) reduces a step more, to
$$\mathcal{B}_{\alpha,\beta;2}(c) = \min_{s \in S^{(1)}} \left[ H_\alpha\big(|s|^2\big) + H_\beta\big(|V(\gamma)\, s|^2\big) \right], \qquad \gamma = \min\!\left(\gamma_T, \tfrac{\pi}{2} - \gamma_T\right) \in \left[0, \tfrac{\pi}{4}\right]. \qquad (24)$$
This proves that the bound B_{α,β;2} = B_{α,β;2}(c) only depends on the overlap c = cos γ = max(cos γ_T, sin γ_T).
• Symmetries and periodicities on s: note that s ∈ S^{(1)} can be written in terms of an angle as s(θ) = [cos θ  sin θ]^t.
– π-periodicity: clearly, s(θ + π) = −s(θ) leaves both probability sets unchanged, so that one can restrict the search to θ ∈ (−π/2, π/2].
– π/2-symmetry: playing with the permutations and phases, it can be shown that s(θ + π/2) = J s(−θ), where J = [[0, 1], [1, 0]], so that the entropies sum at θ + π/2 coincides with that at −θ, allowing one to restrict the interval a little bit more, to θ ∈ [−π/4, π/4].

A.3.2 Minimization over the angle θ
Before specializing the problem to different parts of the plane (α, β), one can simplify one step more the interval in θ where the minimum of the entropies sum has to be sought. From the preceding section, minimization (24) reduces to
$$\mathcal{B}_{\alpha,\beta;2}(c) = \min_{\theta\in[0,\pi/4]} \left[ \frac{\log D_\alpha(\theta)}{1-\alpha} + \frac{\log D_\beta(\gamma-\theta)}{1-\beta} \right], \qquad D_\lambda(\theta) = \cos^{2\lambda}\theta + \sin^{2\lambda}\theta.$$
Deriving the functional to minimize gives
$$\frac{d}{d\theta}\left[ \frac{\log D_\alpha(\theta)}{1-\alpha} + \frac{\log D_\beta(\gamma-\theta)}{1-\beta} \right] = \frac{D'_\alpha(\theta)}{(1-\alpha)\, D_\alpha(\theta)} - \frac{D'_\beta(\gamma-\theta)}{(1-\beta)\, D_\beta(\gamma-\theta)}, \qquad (36)$$
where the derivative in θ of the functions D_λ writes
$$D'_\lambda(\theta) = \lambda\, \sin(2\theta) \left( \sin^{2(\lambda-1)}\theta - \cos^{2(\lambda-1)}\theta \right). \qquad (37)$$
Since θ ∈ [0, π/4], one has both sin(2θ) ≥ 0 and sin²θ ≤ cos²θ, and thus the first fraction of the right-hand side (rhs) of (36) is positive [for λ < 1 both the difference in (37) and 1 − λ are positive, while for λ > 1 both change sign]. Moreover, θ − γ ∈ [−π/4, π/4], and thus by the same reasoning the second fraction of the rhs of (36) has the same signum as sin(2(γ − θ)). Thus, for θ ∈ (γ, π/4] the entropies sum is increasing. Necessarily, the minimum of the entropies sum is attained for 0 ≤ θ ≤ γ, reducing the interval where θ has to be sought, i.e.
$$\mathcal{B}_{\alpha,\beta;2}(c) = \min_{\theta\in[0,\gamma]} \left[ \frac{\log D_\alpha(\theta)}{1-\alpha} + \frac{\log D_\beta(\gamma-\theta)}{1-\beta} \right]. \qquad (38)$$
At this step, the minimum of (38) can in general only be sought numerically, leading to Proposition 1. One can go a step further in special cases, as we will see now.
For λ ≤ 1/2 (hence also λ < 1), we can immediately conclude that the second derivative in θ of the entropies sum is strictly negative, so that the function
$$\theta \;\mapsto\; \frac{\log D_\alpha(\theta)}{1-\alpha} + \frac{\log D_\beta(\gamma-\theta)}{1-\beta}$$
is strictly concave, with D_λ(θ) = cos^{2λ}θ + sin^{2λ}θ as above. The minimum of a concave function over an interval is attained at one of its endpoints, θ = 0 or θ = γ, where the entropies sum takes the values H_β({c², 1−c²}) and H_α({c², 1−c²}) respectively. Since the Rényi entropy H_λ is a decreasing function versus λ [10,13,44], the smaller of these two values is H_{max(α,β)}({c², 1−c²}); together with the expression of D_λ and c = cos γ, one obtains Corollary 1.
In order to simplify the notation, let us denote the function to minimize as
$$F_\alpha(\theta) = \frac{\log D_\alpha(\theta) + \log D_\alpha(\gamma-\theta)}{1-\alpha},$$
with D_α(θ) = cos^{2α}θ + sin^{2α}θ, so that the problem is to minimize F_α over θ ∈ [0, γ]. One can go a step further observing the trivial symmetry F_α(θ) = F_α(γ − θ), so that the problem reduces to
$$\mathcal{B}_{\alpha,\alpha;2}(c) = \min_{\theta\in[0,\gamma/2]} F_\alpha(\theta).$$
Since the case (α, β) ∈ [0, 1/2]² is already treated, one concentrates here on the context α = β > 1/2. Recall also that we exclude here the case α = 1: it will be recovered by taking the limit α → 1. Deriving F_α versus θ leads to
$$F'_\alpha(\theta) = \frac{1}{1-\alpha}\left[ \frac{D'_\alpha(\theta)}{D_\alpha(\theta)} - \frac{D'_\alpha(\gamma-\theta)}{D_\alpha(\gamma-\theta)} \right],$$
where D'_α was already explicited in Eq. (37). F'_α clearly vanishes when θ = γ/2, which gives a possible solution. The question now is to determine whether this extremum is a global minimum or not.
• For α ≥ α†(c), the only minimum is given by θ = γ/2, leading to the bound
$$\mathcal{B}_{\alpha,\alpha;2}(c) = 2\, H_\alpha\big(\{\cos^2(\gamma/2), \sin^2(\gamma/2)\}\big) = \frac{2 \log D_\alpha(\gamma/2)}{1-\alpha}.$$

B Proof of Proposition 2

Let us first recall that the entropies sum is insensitive to the multiplication of the wavevector by a scalar e^{ıφ}, and thus from a minimizer ψ_0 we will obtain families of minimizers of the form {e^{ıφ} ψ_0}_{φ∈[0, 2π)}. Recall that any unitary matrix T can be parameterized under the form T = Φ(u) V(γ_T) Φ(v) with
$$V(\gamma_T) = \begin{bmatrix} \cos\gamma_T & \sin\gamma_T \\ -\sin\gamma_T & \cos\gamma_T \end{bmatrix}, \qquad \Phi(x) = \exp\!\big(\imath\,\mathrm{diag}(x)\big), \qquad \gamma_T \in \left[0, \tfrac{\pi}{2}\right].$$
• Case γ_T = γ ∈ [0, π/4]. We have that Tψ = Φ(u)V(γ)Φ(v)ψ, so that an ensemble of minimizers takes the form e^{ıφ} Φ(−v) s(θ_opt^{(i)}).
– Symmetry θ → θ + π/2: playing with the permutations and phases, one checks that e^{ıφ} Φ(−v) s(θ_opt^{(i)} + π/2) is also a family of minimizers.
– Symmetry θ → θ + π: clearly, one obtains the same families (starting respectively from the θ_opt^{(i)} and from the θ_opt^{(i)} + π/2).
In conclusion, for γ_T = γ the minimizers take the form
$$\psi_{\mathrm{opt}}^{(i,\varphi,n)} = e^{\imath\varphi}\, \Phi(-v)\, s\!\left( \theta_{\mathrm{opt}}^{(i)} + n\,\tfrac{\pi}{2} \right), \qquad n = 0, 1.$$
• Case γ_T = π/2 − γ ∈ (π/4, π/2]. An analogous reasoning leads to the same form with −θ_opt^{(i)} in place of θ_opt^{(i)}. One can unify both cases by noting that the signum before the angle θ_opt^{(i)} is nothing more than sign(π/4 − γ_T), leading to the expression given in Proposition 2.
B.2 A step more in the case α = β.
We have seen numerically the existence of a unique optimal angle θ_opt ∈ [0, γ/2] (with γ = arccos c) leading to the minimal bound of the entropies sum. Moreover, we have seen that the entropies sum is invariant under the transformation θ → γ − θ, so that γ − θ_opt is optimal as well. This leads to the possible angles represented in Fig. 6(b), respectively for γ_T ∈ [0, π/4] (circles) and γ_T ∈ [π/4, π/2] (crosses). In conclusion, the optimal angles are θ_opt and γ − θ_opt, together with their images under the symmetries above, leading to the minimizers given in Corollary 2.
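The numerical facts used in this appendix (the symmetry F_α(θ) = F_α(γ − θ) and the transition of the minimizer from the endpoint θ = 0 to the midpoint θ = γ/2 as α grows) can be reproduced with a short sketch. The values of α below are illustrative and the function name is ours:

```python
import numpy as np

def F(theta, gamma, alpha):
    """Entropies sum for equal indices: [log D_a(theta) + log D_a(gamma - theta)]/(1 - a)."""
    D = lambda t: np.cos(t) ** (2 * alpha) + np.sin(t) ** (2 * alpha)
    return (np.log(D(theta)) + np.log(D(gamma - theta))) / (1.0 - alpha)

gamma = np.pi / 4                      # c = 1/sqrt(2): complementary observables
ts = np.linspace(0.0, gamma, 2001)

# symmetry F(theta) = F(gamma - theta): the search reduces to [0, gamma/2]
assert np.allclose([F(t, gamma, 1.7) for t in ts],
                   [F(gamma - t, gamma, 1.7) for t in ts])

# transition of the minimizer: endpoint theta = 0 for small alpha,
# the stationary point theta = gamma/2 for large alpha
for alpha, expected in ((0.6, 0.0), (2.0, gamma / 2)):
    vals = [F(t, gamma, alpha) for t in ts]
    t_opt = ts[int(np.argmin(vals))]
    assert abs(t_opt - expected) < 1e-2
print("minimizer switches from 0 to gamma/2 as alpha grows")
```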