In this work, we present a method for signal-to-noise ratio (SNR) maximization using a linear filter based on minor component analysis of the noise covariance matrix. As we will see, the greatest benefits are obtained when the filter and the signal are designed jointly, as a single problem.
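To fix ideas, consider a standard formulation of this problem; the notation below (a filter $\mathbf{w}$, a signal $\mathbf{s}$, and a noise covariance $\mathbf{R}_n$) is introduced here for illustration and is not taken from the original text. For a received vector $\mathbf{x} = \mathbf{s} + \mathbf{n}$, with $\mathbf{n}$ zero-mean Gaussian with covariance $\mathbf{R}_n$, the output SNR of the filter is
\[
\mathrm{SNR}(\mathbf{w}, \mathbf{s}) \;=\; \frac{|\mathbf{w}^H \mathbf{s}|^2}{\mathbf{w}^H \mathbf{R}_n \mathbf{w}}.
\]
For a fixed signal, the maximizer is the whitened matched filter $\mathbf{w} \propto \mathbf{R}_n^{-1}\mathbf{s}$, which attains $\mathrm{SNR} = \mathbf{s}^H \mathbf{R}_n^{-1} \mathbf{s}$. If the signal is also free to be designed under an energy constraint $\|\mathbf{s}\|^2 = E$, this quantity is maximized by aligning $\mathbf{s}$ with the minor component of $\mathbf{R}_n$, i.e., the eigenvector $\mathbf{v}_{\min}$ associated with its smallest eigenvalue $\lambda_{\min}$, yielding $\mathrm{SNR} = E/\lambda_{\min}$. This is what ties the joint filter-and-signal design to minor component analysis.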
This general problem is then related to the minimization of the probability of error of a digital communication system. In particular, the classical binary detection problem is considered when nonstationary and (possibly) nonwhite additive Gaussian noise is present. Two algorithms are given to solve the problem at hand, with quadratic and linear computational complexity, respectively, with respect to the dimension of the problem.
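As an illustration of this connection, in the textbook case of equiprobable antipodal signaling $\mathbf{x} = \pm\mathbf{s} + \mathbf{n}$ (which may differ from the exact setting treated here), the optimal detector achieves
\[
P_e \;=\; Q\!\left(\sqrt{\mathrm{SNR}}\right), \qquad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt,
\]
with $\mathrm{SNR} = \mathbf{s}^H \mathbf{R}_n^{-1} \mathbf{s}$ as above. Since $Q(\cdot)$ is monotonically decreasing, minimizing the probability of error is equivalent to maximizing the output SNR, which links the detection problem back to the joint filter and signal design problem.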