Brain-Computer Interfaces (BCIs) provide new means of communication to people who have lost the ability to interact with their environment. Although several paradigms have led to substantial advances in the construction of BCIs, they often demand great effort from the patient or fail to produce natural and efficient interfaces. In this scenario, inner speech emerges as a promising paradigm for tackling these problems. Nevertheless, the lack of publicly available databases has largely hindered the analysis and development of methods based on this paradigm. In this work, we use a recently released database to show that inner speech signals can be classified and differentiated from signals acquired under two other well-known paradigms. This is undoubtedly a first step toward the construction of an inner-speech-based BCI.