We study the existence of optimal strategies and the value function for nonstationary Markov decision processes under a variable discount criterion, when the state space is assumed to be Borel and the action space to be compact. With this way of defining the value of a policy, we show the existence of Markov deterministic optimal policies in the finite-horizon case and give a recursive method to compute them. For the infinite-horizon problem we characterize the value function and show the existence of stationary deterministic optimal policies. The approach is based on the use of suitable dynamic programming operators.
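As a concrete illustration of the kind of recursive scheme the abstract alludes to, the following is a minimal finite-state sketch of backward induction for a finite-horizon MDP in which the discount factor α(x, a) varies with the state-action pair. All data, names, and the specific form of the operator here are illustrative assumptions, not the paper's construction; the paper works in a general Borel-state, compact-action setting.

```python
import numpy as np

# Hypothetical finite model: 3 states, 2 actions, horizon 5 (illustrative only).
n_states, n_actions, horizon = 3, 2, 5

rng = np.random.default_rng(0)
reward = rng.uniform(0.0, 1.0, size=(n_states, n_actions))   # r(x, a) >= 0
alpha = rng.uniform(0.5, 0.9, size=(n_states, n_actions))    # variable discount a(x, a)
P = rng.uniform(size=(n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)                            # transition kernel Q(. | x, a)


def backward_induction(reward, alpha, P, horizon):
    """Finite-horizon dynamic programming with a state-action-dependent discount.

    Returns the value function at time 0 and a Markov deterministic policy,
    one decision rule per stage (a maximizer exists since actions are finite).
    """
    n_states, _ = reward.shape
    V = np.zeros(n_states)                        # terminal value V_N = 0
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        # One application of the dynamic programming operator:
        # Q_t(x, a) = r(x, a) + alpha(x, a) * E[ V_{t+1}(X') | x, a ]
        Q = reward + alpha * (P @ V)              # (S, A, S) @ (S,) -> (S, A)
        policy[t] = Q.argmax(axis=1)              # deterministic decision rule at stage t
        V = Q.max(axis=1)                         # V_t(x) = max_a Q_t(x, a)
    return V, policy


V0, policy = backward_induction(reward, alpha, P, horizon)
```

Because the discount factor enters inside the operator at every stage, the recursion is the same backward pass as in the constant-discount case; only the weighting of the continuation value changes.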