SAGA 2013: 1st International Workshop on Solutions for Automatic Gaze Data Analysis
October 24-25, 2013, at Bielefeld University, Germany
With recent technical progress and the miniaturization of electronic devices, mobile eye tracking has become increasingly popular. Whereas the recording software of eye trackers is flexible and robust, permitting both indoor and outdoor studies, software and techniques for analyzing the recorded scene video and gaze data are still in their infancy. Existing software provides various analysis functions for still images, but lacks solutions for the analysis of complex dynamic scenes, which currently requires a cumbersome installation of markers in the scene. To analyze the data, the scene video has to be annotated manually, frame by frame – a very time-consuming and error-prone process that suffers from subjectivity, annotator fatigue, and changes in the annotation schemata.
To address these topics, we are organizing the international workshop SAGA 2013 (Solutions for Automatic Gaze-Data Analysis) at Bielefeld University, Germany. SAGA 2013 will focus on automatic solutions for analyzing gaze videos as a trailblazer for mobile eye-based interaction and eye-based context-awareness. We are providing a forum for researchers and industry practitioners from human-computer interaction, robotics, computer science, psychology, linguistics, context-aware computing, and eye tracking to discuss techniques and applications that go beyond classical eye tracking and stationary eye-based interaction. We want to stimulate and explore the creativity of these communities with respect to the implications, key research challenges, and new techniques and application areas. The long-term goal is to create a strong interdisciplinary research community linking these fields together and to establish the workshop as the premier forum for research on the automatic annotation of gaze videos.
We have four excellent keynotes covering the range from basic research to application, and from assistive systems to marketing research. A detailed description of the keynotes is available here.
Talks and Posters:
We have three research sessions and one session presenting available technical solutions. Some of the presented work can also be explored in the live sessions. Please see the complete SAGA workshop schedule for all talks.
In a live session, several researchers present technical solutions for gaze analysis, object tracking and recognition. The purpose of the live session is to:
- demonstrate different technical approaches that can be used for the automatic analysis of gaze videos
- encourage the community to work on a set of specific software solutions and research questions
- continuously improve on earlier results obtained for these problems over the years
To this end, the live session will feature several technical demonstrations. During the live session, researchers will give in-depth demonstrations of their solutions and will be happy to answer any further questions you may have.
The goals of the workshop are to:
- provide an overview of the state of the art
- provide an exchange and presentation platform for people interested in automatic annotation and in mobile eye-tracking techniques
- establish an exchange platform to coordinate future research
- define research specifications and application requirements for (semi-) automatic annotation solutions
- define standard setups as benchmarks (videos/systems)
- bring together cutting edge research on visual perception in natural environments
- unite keynote speakers from academia and industry