The design of intelligent agents is greatly influenced by the many different models that exist to represent knowledge. It is essential that such agents have computationally adequate mechanisms to manage their knowledge, which is often incomplete and/or inconsistent. It is also important for an agent to be able to derive new conclusions that allow it to reason about the state of the world in which it is embedded.
It has been shown that this problem cannot be solved within classical logic alone, since classical consequence is monotonic: a conclusion, once drawn, can never be retracted when new information arrives.
This situation has triggered the development of a series of logical formalisms that extend the classical ones. These proposals usually fall under the headings of Nonmonotonic Reasoning or Defeasible Reasoning. Some examples of such models are McDermott and Doyle's Nonmonotonic Logics, Reiter's Default Logic, Moore's Autoepistemic Logic, McCarthy's Circumscription Model, and Belief Revision (also called Belief Change). This last formalism was introduced by Gärdenfors and later extended by Alchourrón, Gärdenfors, and Makinson [1, 4].
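The nonmonotonic behavior these formalisms capture can be illustrated with the classic "birds fly" example. The sketch below is not an implementation of any of the cited formalisms; it is a minimal illustration, with hypothetical names, of how a default conclusion is withdrawn when more specific information arrives.

```python
def flies(facts):
    """Default rule: birds fly, unless an exception is known.

    `facts` is a set of known properties of an individual.
    Returns True/False for a drawn conclusion, None if nothing follows.
    """
    if "penguin" in facts:   # specific knowledge defeats the default
        return False
    if "bird" in facts:      # default rule applies in the absence of exceptions
        return True
    return None              # no conclusion either way

# In a monotonic (classical) logic, enlarging the premise set can never
# invalidate a previous conclusion; here it does:
print(flies({"bird"}))             # default conclusion: the bird flies
print(flies({"bird", "penguin"}))  # new knowledge defeats the conclusion
```

Note that the second call retracts the conclusion drawn by the first, which is precisely what classical consequence cannot do.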