We have recently introduced a new method of searching a time series for periodic variability. The method uses the Shannon entropy to measure the amount of information provided by a set of observations that may contain an underlying periodic signal, as a function of the assumed period of this hypothetical periodic signal. Here we present the analytical arguments that support this algorithm within the broader framework of information theory. We also show that, in the absence of a periodic signal, the entropies follow a Gaussian distribution, which then provides an easy way of assessing the significance of a positive detection. We test this method using simulated data with non-sinusoidal variability, and we show that it is more sensitive than the classical periodograms or their variants adapted to handle harmonics. Finally, we show that this method is capable of resolving two almost identical frequencies present in a given time series, even in cases where the classical periodograms fail to do so.
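To illustrate the idea, the following is a minimal sketch of an entropy-based period search, not the authors' exact algorithm: for each trial period, the observations are phase-folded and binned on the (phase, magnitude) plane, and the Shannon entropy of that occupancy distribution is computed. Near a true period the folded points concentrate into fewer bins and the entropy dips. The function name `phase_entropy`, the bin counts, and the simulated square-wave signal are all illustrative assumptions.

```python
import numpy as np


def phase_entropy(t, y, period, n_phase_bins=10, n_mag_bins=5):
    """Shannon entropy of the phase-folded light curve at a trial period.

    Lower entropy means the folded points are more concentrated,
    i.e. the trial period is closer to a true period of the signal.
    (Sketch only; bin counts are illustrative choices.)
    """
    phase = (t / period) % 1.0
    # Normalize magnitudes to [0, 1] so the binning is scale-free.
    y_norm = (y - y.min()) / (y.max() - y.min() + 1e-12)
    # 2-D occupancy histogram over the (phase, magnitude) plane.
    hist, _, _ = np.histogram2d(
        phase, y_norm, bins=[n_phase_bins, n_mag_bins], range=[[0, 1], [0, 1]]
    )
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))


# Usage: scan a grid of trial periods and locate the entropy minimum.
# The signal is deliberately non-sinusoidal (a noisy square wave).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))
true_period = 3.7
y = np.sign(np.sin(2 * np.pi * t / true_period)) + 0.1 * rng.normal(size=t.size)

periods = np.linspace(3.0, 4.5, 1500)
entropies = np.array([phase_entropy(t, y, p) for p in periods])
best_period = periods[np.argmin(entropies)]
```

Under this sketch, `best_period` recovers a value close to `true_period`; in practice the depth of the entropy minimum would be compared against the Gaussian null distribution mentioned above to assess significance.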