Modern technology has facilitated enormous growth in both the amount and the variety of media with which we can communicate information. Text, images, video, 3D content and real-time data feeds, used in isolation or, more often, in combination, are now routinely used to encode information digitally. Add to this the myriad ways in which personal sensing can capture a person in real time, including their activities, behaviour and even physiology, and we have the opportunity to monitor the complex real-time interplay between people and the information they consume, both at work and at leisure.
In this strand we focus on multi-modal analysis and interaction tools that extract and leverage useful information from multimedia data in order to improve this interplay between people and information. As new and affordable data-capture technologies come on-stream, such as Microsoft's Kinect, brain-computer interfaces and low-cost eye-tracking, they present opportunities for us to exploit.
The emergence of the real-time web, with the new forms of media it provides and the new ways we might interact with them, is also a focus of our work, which can be summarised as developing analytics for both new and traditional forms of media.