More information


Digital Processing and Pattern Recognition for Technical Aids to Functional Diversity (PREPEATE) is a research project, with reference number TEC2016-80326-R, belonging to the National Programme for Research Aimed at the Challenges of Society (2016 call) of the Ministry of Science, Innovation and Universities.

With PREPEATE we aim to address the challenge of attending to functional diversity by applying advanced computer vision and artificial intelligence techniques. Prepearse is a Spanish word which means: to be placed in a privileged place. That is precisely what we intend with this project: to develop technical solutions that place people with functional diversity in an advantageous position.

The concrete scientific objectives of the project are:

  1. Implement and evaluate new models of cognitive perception, based on computer vision and artificial intelligence techniques, capable of efficient and close human-machine interaction.
  2. Develop and configure new platforms that integrate the designed perception solutions and address specific situations of functional diversity.
  3. Create new mechanisms for jointly interpreting information originating from different sources, facilitating the subsequent steps of interpretation, recognition, classification, or requests for new information.
  4. Propose navigation solutions for mobile platforms that operate autonomously and without supervision, interacting with the environment and with people.
  5. As a demonstrator, implement and configure a friendly and autonomous mobile platform, focused on the sensory stimulation of children with cerebral palsy.


Rethinking Online Action Detection in Untrimmed Videos: A Novel Online Evaluation Protocol

M. Baptista-Ríos, R. J. López-Sastre, F. Caba Heilbron, Jan van Gemert, F. J. Acevedo-Rodríguez, S. Maldonado-Bascón.

IEEE Access, 2020. PDF Code

Unsupervised Action Proposals Using Support Vector Classifiers for Online Video Processing

M. Baptista-Ríos, R. J. López-Sastre, F. J. Acevedo-Rodríguez, P. Martín-Martín, S. Maldonado-Bascón.

Sensors, 2020. PDF Code

The Instantaneous Accuracy: a Novel Metric for the Problem of Online Human Behaviour Recognition in Untrimmed Videos

M. Baptista-Ríos, R. J. López-Sastre, F. Caba Heilbron, Jan van Gemert, F. J. Acevedo-Rodríguez, S. Maldonado-Bascón.

10th International Workshop on Human Behaviour Understanding, ICCV, 2019. PDF Code

Fallen People Detection Capabilities Using Assistive Robot

S. Maldonado-Bascón, C. Iglesias-Iglesias, P. Martín-Martín, S. Lafuente-Arroyo.

Electronics, 2019. PDF Data

A Novel Approach for a Leg-Based Stair-Climbing Wheelchair based on Electrical Linear Actuators

E. Pereira, H. Gómez-Moreno, C. Alen-Cordero, P. Gil-Jiménez, S. Maldonado-Bascón.


Segmentation in Corridor Environments: Combining floor and ceiling detection

S. Lafuente-Arroyo, S. Maldonado-Bascón, H. Gómez-Moreno, C. Alen-Cordero.


Combining Online Clustering and Rank Pooling Dynamics for Action Proposals

Nadjia Khatir, Roberto J. López-Sastre, Marcos Baptista-Ríos, Safia Nait-Bahloul, Francisco Javier Acevedo-Rodríguez.


Collision anticipation via deep reinforcement learning for visual navigation

E. Gutiérrez-Maestro, R. J. López-Sastre, S. Maldonado-Bascón.


On-Board Correction of Systematic Odometry Errors in Differential Robots

S. Maldonado-Bascón, R. J. López-Sastre, F. J. Acevedo-Rodríguez, P. Gil-Jiménez.

Journal of Sensors, 2019. PDF Data

Learning to Exploit the Prior Network Knowledge for Weakly-Supervised Semantic Segmentation

C. Redondo-Cabrera, M. Baptista-Ríos, R. J. López-Sastre.

IEEE Transactions on Image Processing, 2019. PDF Code

Unsupervised learning from videos using temporal coherency deep networks

C. Redondo-Cabrera and R. J. López-Sastre.

Computer Vision and Image Understanding, 2019. PDF Code

Embarrassingly Simple Model for Early Action Proposal

M. Baptista-Ríos, R. J. López-Sastre, F. J. Acevedo-Rodríguez, S. Maldonado-Bascón.

Anticipating Human Behavior Workshop, ECCV, 2018. PDF

The challenge of simultaneous object detection and pose estimation: a comparative study

D. Oñoro-Rubio, R. J. López-Sastre, C. Redondo-Cabrera, P. Gil-Jiménez.

Image and Vision Computing, 2018. PDF Code

In pixels we trust: From Pixel Labeling to Object Localization and Scene Categorization

C. Herranz-Perdiguero, C. Redondo-Cabrera and R. J. López-Sastre.

IROS, 2018. PDF Code Video

Low cost robot for indoor cognitive disorder people orientation

S. Maldonado-Bascón, F. J. Acevedo-Rodríguez, F. Montoya-Andugar and P. Gil-Jiménez.

International Conference on Industrial Technology, 2018. PDF

Data, Software and Results


This video shows our collision anticipation navigation module, based on reinforcement learning, in action.

Watch this video to see scene recognition, semantic segmentation and object localization results on our robotic platforms!

These videos show one of the robotic platforms developed in the PREPEATE project. It has been fully designed at the GRAM research group. For video processing we use a Jetson TX2 board, on which several deep learning models have been integrated. Using only visual information, the platform can navigate to track a particular person (first video) or to reach a specific target (second video; the target is the door between the elevators).

Watch this video to see our Online Action Detection system in action.




GRAM Research Group

Funded by

This project, with reference TEC2016-80326-R, has been funded by: