Comment on "Improved bounds on entropic uncertainty relations"

We provide an analytical proof of the entropic uncertainty relations presented by de Vicente and Sánchez-Ruiz in [Phys. Rev. A 77, 042110 (2008)], and we also show that the replacement of Eq. (27) by Eq. (29) in that reference introduces solutions that do not take fully into account the constraints of the problem, which in turn leads to some mistakes in their treatment.

Consider two nondegenerate observables A and B with discrete spectra and complete orthonormal sets of eigenvectors {|a_i⟩}_{i=1}^N and {|b_j⟩}_{j=1}^N, respectively. Denote by p_i(A) = |⟨a_i|Ψ⟩|² the probabilities for the outcomes of observable A (and analogously for B) when the system is in the (pure) quantum state |Ψ⟩. Let c = max_{i,j} |⟨a_i|b_j⟩| ∈ [1/√N; 1] be the so-called overlap between the observables. Maassen and Uffink (MU) [1] proved the nontrivial universal lower bound for the entropy sum

H(A) + H(B) ≥ −2 ln c ≡ B_MU, (1)

where H = −Σ_i p_i ln p_i denotes the Shannon entropy. This entropic uncertainty relation (EUR) had also been proved by Bialynicki-Birula and Mycielski [2] in the special case of conjugate observables, namely when ⟨a_i|Ψ⟩ and ⟨b_i|Ψ⟩ are linked by a Fourier transform; then c = 1/√N and the bound is sharp, in the sense that there exists a state |Ψ⟩ for which the inequality is saturated, with H(A) + H(B) = ln N.
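For the conjugate case, the MU bound and its saturation can be illustrated numerically with the discrete Fourier basis. This is a quick sketch, not part of the Comment; the use of numpy and the dimension N = 8 are our own choices:

```python
import numpy as np

N = 8
A = np.eye(N)                                              # eigenbasis of A (computational basis)
k = np.arange(N)
B = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)   # eigenbasis of B (discrete Fourier basis)

c = np.max(np.abs(A.conj().T @ B))                         # overlap: c = 1/sqrt(N) here

def shannon(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

# MU entropic uncertainty relation (1): H(A) + H(B) >= -2 ln c = ln N
H_sum = shannon(np.abs(A.conj().T @ psi) ** 2) + shannon(np.abs(B.conj().T @ psi) ** 2)
assert H_sum >= -2 * np.log(c) - 1e-9

# Saturation by an eigenstate of A: H(A) = 0 and H(B) = ln N
e0 = np.zeros(N); e0[0] = 1.0
H_sat = shannon(np.abs(A.conj().T @ e0) ** 2) + shannon(np.abs(B.conj().T @ e0) ** 2)
assert abs(H_sat - np.log(N)) < 1e-9
```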
In Ref. [3], de Vicente and Sánchez-Ruiz present an improvement of the MU-EUR (1), showing numerically that H(A) + H(B) ≥ B_VS, where the bound B_VS is expressed [Eqs. (2)-(4) of the present Comment] in terms of the functions F and H_1, with P_A ≡ cos²α, P_B ≡ cos²(θ − α), c ≡ cos θ, and α a (numerical) solution of Eq. (6) with α ≠ θ/2 and α ≠ θ/2 + π/4, in order to specify P_A ≠ P_B. The approximate value of c* is determined numerically in [3]. We show that the replacement of Eq. (27) of Ref. [3] by Eq. (6), via the change of variables (5), introduces solutions that do not take fully into account the constraints of the problem. This could potentially lead to erroneous conclusions. In the sequel we provide an analytical proof of these results, discussing in detail all possible cases. At the end of the Comment we give some concluding remarks.
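The change of variables (5) builds the saturated Landau-Pollak inequality directly into the parametrization: with P_A = cos²α and P_B = cos²(θ − α), one has arccos √P_A + arccos √P_B = θ = arccos c for every α ∈ [0, θ]. A short numerical check (our own sketch; the test overlap c = 0.4 is arbitrary):

```python
import math

c = 0.4                      # arbitrary test overlap
theta = math.acos(c)
for k in range(1, 10):
    alpha = theta * k / 10
    PA, PB = math.cos(alpha) ** 2, math.cos(theta - alpha) ** 2
    # the LPI is saturated identically in alpha
    assert abs(math.acos(math.sqrt(PA)) + math.acos(math.sqrt(PB)) - theta) < 1e-9
```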
The mechanism proposed in Ref. [3] to improve the bound (1) introduces the Landau-Pollak inequality (LPI)

arccos √P_A + arccos √P_B ≥ arccos c,

where P_I = max_i p_i(I) for I = A, B, in two steps. First, for I = A and B, minimize the Shannon entropy H(I) subject to a fixed maximum probability P_I, which leads to the minimal entropies H_min(P_I); second, search for the infimum of M(P_A, P_B) ≡ H_min(P_A) + H_min(P_B) over the possible P_A and P_B, subject to the LPI. According to Ref. [3], the normalized probability distribution that minimizes H(I) subject to fixed P_I is (P_I, …, P_I [M_I times], 1 − M_I P_I, 0, …, 0), where M_I is a positive integer such that 1/(M_I + 1) ≤ P_I ≤ 1/M_I. Then one has H_min(P_I) = −M_I P_I ln P_I − (1 − M_I P_I) ln(1 − M_I P_I) for I = A and B, and the minimization of M is restricted by the inequality constraints (7) and (9). We present rigorous solutions to the problem, for the different cases.
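As a sanity check on the first step, a short numerical sketch (our own, not from Ref. [3]): H_min(P), computed from the distribution (P, …, P, 1 − MP, 0, …, 0) with M = ⌊1/P⌋, should lower-bound the entropy of any distribution whose maximum probability equals P.

```python
import math
import random

def h_min(P):
    """Minimal Shannon entropy over distributions with maximum probability P,
    attained by (P, ..., P [M times], 1 - M*P, 0, ..., 0) with M = floor(1/P)."""
    M = math.floor(1 / P + 1e-12)
    r = 1 - M * P
    return -M * P * math.log(P) - (r * math.log(r) if r > 1e-12 else 0.0)

random.seed(1)
P = 0.4                                   # a fixed maximum probability (test value)
for _ in range(200):
    # random distribution over 6 outcomes whose largest probability is P
    rest = [random.random() for _ in range(5)]
    s = sum(rest)
    rest = [x * (1 - P) / s for x in rest]
    if max(rest) > P:                     # discard samples where P is not the maximum
        continue
    dist = [P] + rest
    H = -sum(q * math.log(q) for q in dist if q > 1e-12)
    assert H >= h_min(P) - 1e-9           # H_min is indeed a lower bound
```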
Case P_A ≠ 1/M_A and P_B ≠ 1/M_B: The minimization is solved by introducing a Lagrangian with Lagrange parameters µ_A, µ_B, ν_A, ν_B and λ. Differentiating with respect to P_A and P_B and using the Karush-Kuhn-Tucker necessary conditions for a minimum, one obtains the stationarity conditions (11), the complementary-slackness conditions (12), and

λ (arccos c − arccos √P_A − arccos √P_B) = 0, (13)

with µ_I, ν_I, λ ≥ 0, for I = A and B.
Since the inequalities (9) are strict, from (12) one has µ_I = ν_I = 0. It is proved in Ref. [3] that λ ≠ 0 (otherwise P_I = 1/(M_I + 1)); therefore, the LPI becomes an equality. Taking the cosine of this equality and using the constraints (9), one has

√(P_A P_B) − √((1 − P_A)(1 − P_B)) = c. (14)

As M_I is a positive integer and c > 0, at least one M_I must be unity; this leads to the restrictions (15) and (16). One can assume M_A = 1 and M_B = M ≥ 1. Extracting λ from Eqs. (11) for I = A and B, one gets Eq. (17). These equations have several solutions. For example, a solution for M = 1 is given by P_A = P_B = (1+c)/2, making the function F given in Eq. (3) a possible candidate for a lower bound of the entropy sum. For P_A ≠ P_B, Eqs. (14)-(17) do not have analytic solutions. At this stage, the authors of Ref. [3] perform the change of variables (5) and solve numerically Eq. (6), instead of Eq. (17) with M = 1, proposing the function H_1 given in Eq. (4) as a possible minimum in a range 0 < c ≤ c*. This is the critical point that motivates this Comment.
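That P_A = P_B = (1+c)/2 saturates the LPI for M = 1 is easy to verify numerically; the cosine form of the saturated LPI used below is our reading of Eq. (14):

```python
import math

for c in [0.1, 0.3, 0.5, 0.7, 0.9]:
    P = (1 + c) / 2                       # candidate symmetric solution for M = 1
    # cosine form of the saturated LPI: sqrt(P_A P_B) - sqrt((1-P_A)(1-P_B)) = c
    assert abs(math.sqrt(P * P) - math.sqrt((1 - P) ** 2) - c) < 1e-12
    # equivalently: arccos sqrt(P_A) + arccos sqrt(P_B) = arccos c
    assert abs(2 * math.acos(math.sqrt(P)) - math.acos(c)) < 1e-9
```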
We present below a detailed analysis that exhibits the following facts, depending on the value of the overlap:

1. In the range 0 < c < 1/√2:

• For M = 1, Eq. (17) (together with (9) and (14)) has only the trivial solution P_A = P_B, leading to F; the value of H_1 reported in Ref. [3] corresponds to P_A and P_B outside the allowed interval [see Eqs. (9) and (14)]. This seems to open the way to improve the MU bound in this range.
• However, the extremum attained at P_A = P_B = (1+c)/2 happens to be a maximum of M. If this value lies below the MU bound, so does the minimum; but, for the range c ∈ (c†; 1/√2) with c† ≈ 0.61, the maximum is higher than −2 ln c.
• In fact P_A (resp. P_B) "lives" within a given interval, and the minimum of M is attained at the end points of that interval. Moreover, this minimum is less than −2 ln c, which analytically and rigorously proves the result of Ref. [3].

2. In the range 1/√2 ≤ c ≤ c*: the extremum at (1+c)/2 still corresponds to a maximum of M(P_A, P_B). However, Eq. (17) (with M = 1) admits two symmetrical solutions yielding the same minimum H_1. We prove analytically that the extremizing values of P_A and P_B satisfy the constraints (9) and (14); however, the value H_1(c) can be evaluated only numerically. The result given in [3] is then confirmed in this range. In passing, we prove that c* is a solution of the transcendental equation

c ln[(1+c)/(1−c)] = 2, (18)

whence c* ≈ 0.834, as found in Ref. [3].
3. In the range c* < c ≤ 1: only the solution (1+c)/2 remains, as observed in Ref. [3]; moreover, it corresponds there to a minimum. We justify this analytically, confirming the result of Ref. [3].

The proofs are as follows. First, we rewrite Eq. (14) as √(P_A P_B) = c + √((1 − P_A)(1 − P_B)). As both sides are positive, they can be squared without further ado, leading to a quadratic equation in √(1 − P_B) whose only allowed solution is

√(1 − P_B) = √((1 − c²) P_A) − c √(1 − P_A). (19)

Solving for P_B for given c gives

P_B = 1 − [√((1 − c²) P_A) − c √(1 − P_A)]². (20)

We realize that P_A, apart from lying between 1/2 and 1 (from (9), since M_A = 1), is constrained to be larger than c² (from the positivity of (19)). Furthermore, the bounds (9) for I = B applied to (19) yield additional constraints on P_A. Summarizing, when M = 1 we get an allowed interval

P_A ∈ (P_A^-; P_A^+). (21)

Next, we consider M_M(P_A) = M(P_A, P_B(P_A)). The goal is to study its behavior versus P_A so as to determine its minimum. To this end we compute successive derivatives of M_M, with the help of some auxiliary functions. Noting that, for given c, dP_B/dP_A < 0, one finds that M′_M has the same sign as an auxiliary function E_M; obviously, setting E_M = 0 solves (17). In the sequel, let us restrict ourselves to the case M = 1 (we will confirm later that the cases M > 1 need not be considered). We now demonstrate that E_1 has only four types of behavior. Its derivative E′_1 has the opposite sign of an auxiliary function K. We will see that K always has the same behavior versus P_A, independently of c: it increases up to a maximum and then decreases. This behavior, together with the sign of the maximum of K, completely determines the shape of E_1. The derivative K′ has the same sign as a function N, whose derivative involves a function R. From the negativity of R(x) for x ∈ (1/2; 1) and of dP_B/dP_A, we conclude that N′ < 0. Summing up, for any c, N is continuous, strictly decreases on (P_A^-; P_A^+), and has only one root: N((1+c)/2) = 0. As a result, K increases with P_A in the interval (P_A^-; (1+c)/2) and decreases in ((1+c)/2; P_A^+).
Furthermore, we notice that lim_{P_A→P_A^-} K(P_A) = lim_{P_A→P_A^+} K(P_A), due to the fact that when P_A → P_A^- then P_B → P_A^+, and vice versa. With respect to the sign of K, we show that only three situations arise:

1. when c ∈ (0; 1/√2), the maximum of K, given by K((1+c)/2) = −2c ln[(1+c)/(1−c)] + 4, is positive. Besides, the common value of K at the end points decreases with c from 4 to −∞ and thus can have either sign.
2. when c ∈ (1/√2; c*), the maximum of K is also positive, while the value of K at the end points is negative.

3. when c ∈ (c*; 1), the maximum of K is negative and thus K < 0 for all P_A.
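The threshold c* at which the maximum of K changes sign can be located by bisection. The explicit form of Eq. (18) used here, c ln[(1+c)/(1−c)] = 2, is our inference from the vanishing of K((1+c)/2) = −2c ln[(1+c)/(1−c)] + 4; its root reproduces the numerical value c* ≈ 0.834 of Ref. [3]:

```python
import math

def f(c):
    # Eq. (18): c * ln((1+c)/(1-c)) = 2, i.e. the maximum of K vanishes
    return c * math.log((1 + c) / (1 - c)) - 2

lo, hi = 0.7, 0.99
assert f(lo) < 0 < f(hi)                  # sign change brackets the root
for _ in range(60):                       # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
assert abs(lo - 0.834) < 1e-3             # c* ~ 0.834, as in Ref. [3]
```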
Going back to the functions E_1 and M_1, we conclude that:

1. when c ∈ (0; 1/√2), if K(P_A^±) ≥ 0 then E′_1 ≤ 0 and thus E_1 is strictly decreasing; on the contrary, if K(P_A^±) < 0, E_1 increases, decreases, and again increases. In both cases the limits of E_1 at the end points have opposite signs, hence E_1 = 0 has only one solution, given by P_A = (1+c)/2. This justifies that the value of H_1 is computed from values of P_A and P_B that do not satisfy the constraints (9) in this range. Moreover, E_1 changes from positive to negative sign at (1+c)/2, implying that: (i) the extremum of M_1 at (1+c)/2 corresponds in fact to a maximum, as we previously claimed, and (ii) the minimum of M_1 would be attained at the end points P_A^±. As a conclusion, M_1 is lower-bounded by M_inf ≡ lim_{P_A→P_A^±} M_1(P_A), given by Eq. (22). We now define the difference between the MU bound and this infimum, Δ_{M_inf}(c) = B_MU(c) − M_inf(c), whose derivative can easily be proved analytically to be always negative in the range c ∈ (0; 1/√2). This implies that Δ_{M_inf}(c) is decreasing, with the lowest difference given by lim_{c→(1/√2)^-} Δ_{M_inf}(c) = 0. This analytically proves that M_inf < B_MU: it is impossible to improve the MU-EUR in the range 0 < c < 1/√2. This confirms the result of Ref. [3]. Notice that studying what happens for M > 1 or for P_I = 1/M_I is then not necessary.
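Both conclusions for this range — that the LPI-constrained infimum of M lies below B_MU = −2 ln c, and that the maximum F(c) = M((1+c)/2, (1+c)/2) exceeds −2 ln c only beyond c† ≈ 0.61 — can be probed numerically. The brute-force scan of the LPI-equality curve below is our own parametrization, not the Comment's equations:

```python
import math

def h_min(P):
    """Minimal Shannon entropy for a fixed maximum probability P (Ref. [3])."""
    M = math.floor(1 / P + 1e-12)
    r = 1 - M * P
    return -M * P * math.log(P) - (r * math.log(r) if r > 1e-12 else 0.0)

def scan_min(c, n=20000):
    """Brute-force minimum of M(P_A, P_B) on the saturated LPI curve
    arccos sqrt(P_A) + arccos sqrt(P_B) = arccos c."""
    theta = math.acos(c)
    best = float('inf')
    for i in range(1, n):
        a = theta * i / n                 # a = arccos sqrt(P_A) in (0, theta)
        PA, PB = math.cos(a) ** 2, math.cos(theta - a) ** 2
        best = min(best, h_min(PA) + h_min(PB))
    return best

for c in [0.2, 0.4, 0.6, 0.7]:            # sample overlaps below 1/sqrt(2)
    assert scan_min(c) < -2 * math.log(c) # constrained infimum below the MU bound

# The maximum at P_A = P_B = (1+c)/2 is F(c) = 2 h_min((1+c)/2); it crosses
# -2 ln c near c_dagger ~ 0.61:
F = lambda c: 2 * h_min((1 + c) / 2)
assert F(0.60) < -2 * math.log(0.60)      # below the MU bound before c_dagger
assert F(0.62) > -2 * math.log(0.62)      # above the MU bound after c_dagger
```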
2. when c ∈ (1/√2; c*), E_1 increases, decreases, and again increases, with limiting values at the end points such that E_1 = 0 now has three solutions: (1+c)/2, corresponding to a maximum of M_1 (E_1 locally decreases), and two others giving the same minimum for M_1 (by symmetry). The minimum value of M_1 in this range is denoted H_1 in Ref. [3], where it is obtained after solving numerically for α in Eqs. (5)-(6). The same result is obtained here directly from (17) and (20), taking care of the constraints (21) on P_A (notice that only M = 1 has to be considered, as (16) enforces M ≤ 1/c² < 2 when c > 1/√2). We also numerically confirm that H_1(c) > −2 ln c, thus giving the possibility of improving the MU-EUR in this range (see the cases P_I = 1/M_I below).
3. when c ∈ (c*; 1), E_1 always increases, the limiting values at the end points are the same as in case 2 above, but the unique root (1+c)/2 now corresponds to a minimum of M_1, whose value is the function F of Eq. (3) (again, only M = 1 has to be considered since M ≤ 1/c²). Consider now the difference Δ_F(c) = B_MU(c) − F(c), whose derivative is (1/c)[c ln((1+c)/(1−c)) − 2] > 0 in the range c > c*. Thus Δ_F(c) increases; since Δ_F(1) = 0, then Δ_F(c) < 0 in this range, and therefore we analytically prove that F > B_MU, as observed in [3]. Again the MU-EUR can possibly be improved in this range (see the cases P_I = 1/M_I below).
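Both steps of this argument can be checked numerically, taking F(c) to be the entropy sum at P_A = P_B = (1+c)/2 (our reading of Eq. (3)):

```python
import math

def F(c):
    p = (1 + c) / 2                       # symmetric extremum P_A = P_B = (1+c)/2
    return 2 * (-p * math.log(p) - (1 - p) * math.log(1 - p))

for k in range(1, 100):
    c = 0.834 + (1 - 0.834) * k / 100     # sample the range (c*, 1)
    # sign of the derivative of Delta_F: (1/c) * (c ln((1+c)/(1-c)) - 2) > 0
    assert c * math.log((1 + c) / (1 - c)) - 2 > 0
    assert F(c) > -2 * math.log(c)        # F exceeds the MU bound in this range
```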
What happens in the particular cases c = 1/√2 and c = c* follows from the continuity of the functions involved.
Cases P_A = 1/M_A and/or P_B = 1/M_B: In Ref. [3] the authors check the case P_I = 1/M_I for I = A or B, and find as a possible minimum for M(P_A, P_B) the function

G(c) = −⌊1/c²⌋ c² ln c² − (1 − ⌊1/c²⌋ c²) ln(1 − ⌊1/c²⌋ c²),

where ⌊·⌋ indicates the integer part (floor). They base their procedure on the equivalent of Eqs. (11) and claim that M must be either 1 or 2, assuming the nonnegativity of the Lagrange multipliers. As far as we understand, this reasoning is erroneous: the multiplier µ_I corresponding to the "equality constraint" P_I = 1/M_I is not necessarily nonnegative, as it would have to be for an "inequality constraint". Setting P_A = 1 in Eq. (20) gives P_B = c², which also means, from (9), that M_B = ⌊1/c²⌋. In turn, one arrives at the function G given above, valid only for c > 1/√2. This also entails that ⌊1/c²⌋ = 1, and then G = −c² ln c² − (1 − c²) ln(1 − c²), which corresponds, as it should, to the Shannon entropy of the probability distribution (c², 1 − c², 0, …, 0). One can prove numerically that G > H_1 and, analytically, that G > F for c ≥ c*; therefore G does not correspond to the minimal M.
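The claimed ordering G > F on [c*, 1) can be verified on a grid, with G the entropy of (c², 1 − c², 0, …, 0) as above and F the entropy sum at P_A = P_B = (1+c)/2 (the comparison with H_1 requires the numerical solution of Eq. (6) and is not reproduced here):

```python
import math

def G(c):
    # Shannon entropy of the distribution (c^2, 1 - c^2, 0, ..., 0)
    return -c * c * math.log(c * c) - (1 - c * c) * math.log(1 - c * c)

def F(c):
    p = (1 + c) / 2
    return 2 * (-p * math.log(p) - (1 - p) * math.log(1 - p))

for k in range(100):
    c = 0.834 + (1 - 0.834) * k / 100     # sample c in [c*, 1)
    assert G(c) > F(c)                    # G never improves on F in this range
```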
Finally, we study the case P_A = 1/M_A and P_B = 1/M_B.

Summing up, we revisit analytically the full resolution of the problem presented in Ref. [3], which deals with the uncertainty related to the measurement of two discrete quantum observables, using as a measure the sum of the Shannon entropies associated with both distributions, constrained by the Landau-Pollak inequality. De Vicente and Sánchez-Ruiz show in [3] that the Maassen-Uffink bound can be improved using this constraint when the overlap c between the observables is in the range (1/√2; 1); we confirm this result analytically. Our central contributions are to provide an analytical proof of the non-improvement of the bound when c is in the range (0; 1/√2), and an analytical proof that F is indeed a minimum of the entropy sum M for c in the range (c*; 1). Additionally, we obtained the value of c* from an analytical expression, given in Eq. (18). We detected a mistake in the VS treatment of the constrained extremization problem: the function H_1 was computed from solutions of Eq. (6) that do not take into account the whole set of restrictions on the pertinent probabilities. This seemed to open the possibility of improving the Maassen-Uffink bound in the range c ∈ (0; 1/√2), but in fact we rigorously show that it is impossible to improve the MU-EUR with the LPI in this range.
Moreover, let us remark that the function F(c) can be interpreted as half the Jensen-Shannon divergence between the pure states |a_i⟩⟨a_i| and |b_j⟩⟨b_j| for which the overlap is maximum [4]. An interesting avenue for future research is to exploit this relationship to establish new entropic uncertainty relations.