Research Challenge 4: Multimodal Data Analysis

Team Leads: Suzanne Little and Paul Buitelaar

Strand 4.1: Multimodal Sensor Fusion — Stephen Redmond (UCD)

Strand 4.2: Multimodal Data Interpretation — Suzanne Little (DCU)

Strand 4.3: Multimodal Interaction — Cathal Gurrin (DCU)

Data is available across a wide range of modalities: visual data in the form of images and video, language data in the form of text and speech, audio data such as music and sound, and other sensory data such as smell, taste or touch. Moreover, data often arrives in a form that spans several modalities. Video material, for example, typically combines images and sound (speech, music) with text in the form of subtitles, which may in turn be in a language different from the corresponding speech.

Other settings likewise provide data in multiple modalities. In human sensing, for instance, facial expression data in the form of images can be combined with auditory (speech, sound), haptic (touch) or other sensory data.

The research challenge on Multimodal Data Analysis is therefore concerned with the integration and interpretation of data within and across modalities, as well as with human interaction with multimodal data and with the knowledge and insights derived from it. The outcomes of our research will enable improved understanding and modelling of rich data sources in domains such as business, health, environment and education.
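As a concrete illustration of integration across modalities, the minimal sketch below shows feature-level fusion, one common pattern in multimodal analysis: each modality is encoded separately and the resulting vectors are concatenated into a single joint representation. The feature extractors, dimensions and toy inputs are illustrative placeholders only, not code from any Insight strand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature extractors. In practice these would be
# learned encoders (e.g. a CNN for video frames, a spectrogram model for
# audio); here they are simple stand-ins that map raw inputs to fixed-length
# vectors.
def image_features(frame: np.ndarray) -> np.ndarray:
    return frame.mean(axis=(0, 1))  # mean intensity per colour channel

def audio_features(waveform: np.ndarray) -> np.ndarray:
    return np.array([waveform.mean(), waveform.std()])  # crude summary stats

def fuse(frame: np.ndarray, waveform: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate the per-modality vectors into one
    joint representation for a downstream model (classifier, retrieval, ...)."""
    return np.concatenate([image_features(frame), audio_features(waveform)])

# Toy inputs: one 64x64 RGB video frame and one second of mono audio at 16 kHz.
frame = rng.random((64, 64, 3))
waveform = rng.standard_normal(16_000)

joint = fuse(frame, waveform)
print(joint.shape)  # (5,): 3 image dimensions + 2 audio dimensions
```

A complementary design, late fusion, instead trains one model per modality and combines their predictions rather than their features; which choice works better depends on how strongly the modalities interact.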
