Face and facial expression recognition is a topic of continuing interest for researchers in machine vision.
The Viola-Jones algorithm is the most widely used algorithm for this task. Building a classification model for face recognition can take many years if the implementation of its training phase is not optimized.
In this study, we analyze different implementations of the training phase. The aim was to reduce the time needed for training when using a single computer with an inexpensive graphics processing unit (GPU). The execution times were analyzed and compared with those reported in previous studies. Results showed that by combining the C language, CUDA, and related optimizations, it is possible to reach acceptable training times. Further research may involve measuring the performance of our approach on computers with more capable GPUs and exploring a multi-GPU approach.
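
The study does not reproduce its source code here; purely as an illustration of the kind of data-parallel kernel a C/CUDA implementation of Viola-Jones training implies, the sketch below evaluates a single two-rectangle Haar-like feature over a batch of 24x24 integral images, one training sample per thread. All names (eval_feature, rect_sum, WIN) and the feature geometry are assumptions made for this example, not the authors' actual implementation.

/* Hypothetical sketch: evaluate one Haar-like feature on all training
 * samples in parallel. Not the paper's code; names and geometry are
 * illustrative assumptions. */
#include <cuda_runtime.h>
#include <stdio.h>

#define WIN 24  /* canonical Viola-Jones training window size */

/* Sum of pixels in a rectangle, given an inclusive integral image:
 * ii[y * stride + x] holds the sum over rows 0..y and columns 0..x. */
__device__ int rect_sum(const int *ii, int stride,
                        int x, int y, int w, int h) {
    int A = (x > 0 && y > 0) ? ii[(y - 1) * stride + (x - 1)] : 0;
    int B = (y > 0) ? ii[(y - 1) * stride + (x + w - 1)] : 0;
    int C = (x > 0) ? ii[(y + h - 1) * stride + (x - 1)] : 0;
    int D = ii[(y + h - 1) * stride + (x + w - 1)];
    return D - B - C + A;
}

/* One thread per sample: compute the response of a vertical
 * two-rectangle feature (left half minus right half; fw must be even). */
__global__ void eval_feature(const int *ii_batch, int n_samples,
                             int fx, int fy, int fw, int fh,
                             int *out) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= n_samples) return;
    const int *ii = ii_batch + s * WIN * WIN;
    int left  = rect_sum(ii, WIN, fx, fy, fw / 2, fh);
    int right = rect_sum(ii, WIN, fx + fw / 2, fy, fw / 2, fh);
    out[s] = left - right;  /* feature response for this sample */
}

int main(void) {
    const int n = 10000;  /* hypothetical number of training samples */
    int *d_ii, *d_out;
    cudaMalloc(&d_ii, (size_t)n * WIN * WIN * sizeof(int));
    cudaMalloc(&d_out, (size_t)n * sizeof(int));
    cudaMemset(d_ii, 0, (size_t)n * WIN * WIN * sizeof(int));  /* placeholder data */

    int threads = 256, blocks = (n + threads - 1) / threads;
    eval_feature<<<blocks, threads>>>(d_ii, n, 0, 0, 12, 12, d_out);
    cudaDeviceSynchronize();

    cudaFree(d_ii);
    cudaFree(d_out);
    return 0;
}

Because AdaBoost training repeats this evaluation for every candidate feature over every sample at each boosting round, moving this inner loop onto the GPU is where the bulk of the speedup reported in the study would plausibly come from.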