Deep learning algorithms are a key feature of artificial intelligence research. However, these algorithms have a drawback: they need large amounts of good-quality data to build effective models. We can’t just collect data and use it; we need to annotate it (i.e., interpret it for the machines) and prepare it for use. This is time-consuming and expensive, so researchers are trying to build models that cut down on this process. Many techniques reuse data already annotated for other purposes to avoid repeating the work; other approaches build models around lower-quality data that does not require such exhaustive preparation.
Eric’s research mixes these two approaches. His focus is on building deep learning models that can use almost raw data together with data already annotated for other domains. This will allow the deep learning community to take advantage of the large amounts of data that are collected daily through smartphones, computers, and the other devices integrated into our lives.
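One common way to combine a small annotated set with raw, unlabeled data is pseudo-labeling: a model trained on the labeled examples assigns provisional labels to unlabeled examples, keeping only the confident ones. The sketch below is a toy illustration of that idea using a nearest-centroid classifier on 2-D points; it is an assumption-laden simplification for intuition, not Eric's actual models.

```python
import math

def centroid(points):
    # Mean of a list of 2-D points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    # Euclidean distance between two 2-D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pseudo_label(labeled, unlabeled, margin=1.0):
    """Toy pseudo-labeling: label an unlabeled point with the class of
    its nearest centroid, but only when that centroid is clearly closer
    than the runner-up (a crude confidence threshold). Ambiguous points
    are left unlabeled rather than risk polluting the training set."""
    centroids = {cls: centroid(pts) for cls, pts in labeled.items()}
    accepted = {cls: [] for cls in labeled}
    for p in unlabeled:
        ranked = sorted((dist(p, c), cls) for cls, c in centroids.items())
        if ranked[1][0] - ranked[0][0] >= margin:  # confident prediction only
            accepted[ranked[0][1]].append(p)
    return accepted

# Two labeled clusters plus three raw points; the ambiguous middle
# point (2.5, 2.8) falls below the confidence margin and is skipped.
labeled = {"a": [(0, 0), (0, 1)], "b": [(5, 5), (5, 6)]}
unlabeled = [(0.2, 0.5), (5.1, 5.5), (2.5, 2.8)]
print(pseudo_label(labeled, unlabeled))
```

In practice the base classifier would be a deep network and the confidence test a softmax threshold, but the loop is the same: predict, filter, and fold the confident pseudo-labels back into training.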
His other deep learning interests concern applying models to visual data, such as videos and photographs: automatically generating captions for videos, classifying images, or segmenting images based on the objects they contain. All of these applications feed into the much larger research picture created by the computer vision community, which is currently developing impressive technology such as self-driving cars, better search engines, and enhanced techniques for medical diagnostics.