This paper deals with the problem of multimodal emotion recognition. It presents a model for fusing emotional records obtained from EEG signals acquired through brain-machine interfaces, from facial image capture, and from users' recorded responses to stimuli induced by images (IAPS, OLAF). In particular, it proposes a multimodal fusion approach that validates the potentially subjective responses of users when facing those stimuli.