The VisLab group conducts research on computer and robot vision, focusing on video analysis, surveillance, learning, humanoid robotics and cognition. We strive to develop new methodologies and tools for computer and robot vision, and to demonstrate them in challenging applications. We take a multidisciplinary perspective, combining disciplines such as engineering, neuroscience and psychology, with the twin goals of drawing inspiration from biology to develop artificial systems and of modeling biological systems with computational and robotic tools.
Our main areas of application are:
- IMAGE ANALYSIS AND SURVEILLANCE: medical imaging, skin cancer detection, clinically inspired systems, human activity recognition in indoor/outdoor scenes, re-identification, aerial monitoring using drones.
- VISUAL NAVIGATION AND CALIBRATION: visual navigation based on the parsimonious use of hardware and computational resources; design of nonconventional cameras combining reflective surfaces with lenses, space-variant imaging, depth sensing; aggregation of multiple cameras.
- BIOINSPIRED VISION AND LEARNING: oculomotor control, visual attention, active vision, foveal vision, saliency, bottom-up and top-down attention, visual search, biomimetic visual systems.
- COGNITIVE ROBOTS: Sensorimotor coordination in humanoid robots, affordances learning and modeling, human-robot interaction, learning by demonstration, human action recognition and anticipation.
VisLab designed the head of the iCub, the first open-source humanoid robot, with 30+ copies worldwide. It also leads the Robotics, Brain and Cognition (RBCog) Lab, which was selected for the Portuguese Roadmap of Research Infrastructures and includes an iCub, the social robot Vizzy (built at VisLab), and eye-tracking and motion-capture systems. This infrastructure is key to developing frontier research in computer and robot vision, within a rich ecosystem of 100+ academic and corporate partners from around the world.