Hi! I'm a PhD student in the Ropert (Robotics, Computer Vision and Artificial Intelligence) group at the University of Zaragoza (Unizar), Spain, where I have been supervised by Dr. Jose J. Guerrero since 2022.
Previously, I earned a Bachelor's degree in Industrial Technologies Engineering and a Master's degree in Industrial Engineering, both at Unizar, where I started my career as a Computer Vision researcher.
My work revolves around Egocentric Vision, focusing on how it can enhance the way humans understand and interact with their surroundings.
PhD in Computer Vision, 2022-Present
University of Zaragoza
MSc in Industrial Engineering, specializing in Industrial Automation and Robotics, 2019-2021
University of Zaragoza
BSc in Industrial Technologies Engineering, 2015-2019
University of Zaragoza
Predoctoral Researcher in Computer Vision:
Teaching Assistant:
Real-time simulator of prosthetic vision (SPV) based on communication between a Windows computer and an Ubuntu computer over a TCP/IP socket. Supervised by Dr. Jesús Bermúdez Cameo and Dr. Alejandro Pérez Yus.
Action recognition is an essential task in egocentric vision due to its wide range of applications across many fields. While deep learning methods have been proposed to address this task, most rely on a single modality, typically video. However, including additional modalities may improve the robustness of the approaches to common issues in egocentric videos, such as blurriness and occlusions. Recent efforts in multimodal egocentric action recognition often assume the availability of all modalities, leading to failures or performance drops when any modality is missing. To address this, we introduce an efficient multimodal knowledge distillation approach for egocentric action recognition that is robust to missing modalities (KARMMA) while still benefiting when multiple modalities are available. Our method focuses on resource-efficient development by leveraging pre-trained models as unimodal feature extractors in our teacher model, which distills knowledge into a much smaller and faster student model. Experiments on the Epic-Kitchens and Something-Something datasets demonstrate that our student model effectively handles missing modalities while reducing its accuracy drop in this scenario.
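The core idea above, distilling knowledge from a multimodal teacher into a smaller student that tolerates missing modalities, can be sketched with a toy example. This is a minimal illustration, not the actual KARMMA implementation: the temperature-softened distillation loss and the `fuse_available` averaging strategy are generic, hypothetical choices assumed for clarity.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax; higher T yields softer distributions.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def fuse_available(features, present):
    # Hypothetical missing-modality handling: average only the features
    # of modalities that are actually available at inference time.
    avail = [f for f, ok in zip(features, present) if ok]
    return np.mean(avail, axis=0)
```

For instance, if the audio stream is missing, `fuse_available([rgb_feat, audio_feat], [True, False])` falls back to the video features alone, so the student still produces a prediction instead of failing outright.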