ICMI 2009

61 papers

Year  Title / Authors

2009  A framework for continuous multimodal sign language recognition.
Daniel Kelly, Jane Reilly Delannoy, John Mc Donald, Charles Markham
2009  A fusion framework for multimodal interactive applications.
Hildeberto Mendonça, Jean-Yves Lionel Lawson, Olga Vybornova, Benoît Macq, Jean Vanderdonckt
2009  A multimedia retrieval system using speech input.
Andrei Popescu-Belis, Peter Poller, Jonathan Kilgour
2009  A multimodal predictive-interactive application for computer assisted transcription and translation.
Vicente Alabau, Daniel Ortiz, Verónica Romero, Jorge Ocampo
2009  A speaker diarization method based on the probabilistic fusion of audio-visual location information.
Kentaro Ishizuka, Shoko Araki, Kazuhiro Otsuka, Tomohiro Nakatani, Masakiyo Fujimoto
2009  A speech mashup framework for multimodal mobile services.
Giuseppe Di Fabbrizio, Thomas Okken, Jay G. Wilpon
2009  Activity-aware ECG-based patient authentication for remote health monitoring.
Janani C. Sriram, Minho Shin, Tanzeem Choudhury, David Kotz
2009  Adaptation from partially supervised handwritten text transcriptions.
Nicolás Serrano, Daniel Pérez, Alberto Sanchís, Alfons Juan
2009  Agreement detection in multiparty conversation.
Sebastian Germesin, Theresa Wilson
2009  Are gesture-based interfaces the future of human computer interaction?
Frédéric Kaplan
2009  Augmented reality target finding based on tactile cues.
Teemu Tuomas Ahmaniemi, Vuokko Lantz
2009  Benchmarking fusion engines of multimodal interactive systems.
Bruno Dumas, Rolf Ingold, Denis Lalanne
2009  Between linguistic attention and gaze fixations in multimodal conversational interfaces.
Rui Fang, Joyce Y. Chai, Fernanda Ferreira
2009  Building multimodal applications with EMMA.
Michael Johnston
2009  Cache-based language model adaptation using visual attention for ASR in meeting scenarios.
Neil Cooke, Martin J. Russell
2009  Classification of patient case discussions through analysis of vocalisation graphs.
Saturnino Luz, Bridget Kane
2009  Communicative gestures in coreference identification in multiparty meetings.
Tyler Baldwin, Joyce Y. Chai, Katrin Kirchhoff
2009  Demonstration: first steps in emotional expression of the humanoid robot Nao.
Jérôme Monceaux, Joffrey Becker, Céline Boudier, Alexandre Mazel
2009  Detecting user engagement with a robot companion using task and social interaction-based features.
Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, Peter W. McOwan
2009  Detecting, tracking and interacting with people in a public space.
Sunsern Cheamanunkul, Evan Ettinger, Matthew Jacobsen, Patrick Lai, Yoav Freund
2009  Dialog in the open world: platform and applications.
Dan Bohus, Eric Horvitz
2009  Discovering group nonverbal conversational patterns with topics.
Dinesh Babu Jayagopi, Daniel Gatica-Perez
2009  Dynamic robot autonomy: investigating the effects of robot decision-making in a human-robot team task.
Paul W. Schermerhorn, Matthias Scheutz
2009  Evaluating the effect of temporal parameters for vibrotactile saltatory patterns.
Jukka Raisamo, Roope Raisamo, Veikko Surakka
2009  Formal description techniques to support the design, construction and evaluation of fusion engines for sure (safe, usable, reliable and evolvable) multimodal interfaces.
Jean-François Ladry, David Navarre, Philippe A. Palanque
2009  Fusion engines for multimodal input: a survey.
Denis Lalanne, Laurence Nigay, Philippe A. Palanque, Peter Robinson, Jean Vanderdonckt, Jean-François Ladry
2009  GaZIR: gaze-based zooming interface for image retrieval.
László Kozma, Arto Klami, Samuel Kaski
2009  Grounding spatial prepositions for video search.
Stefanie Tellex, Deb Roy
2009  Guiding hand: a teaching tool for handwriting.
Nalini Vishnoi, Cody Narber, Zoran Duric, Naomi Lynn Gerber
2009  Head-up interaction: can we break our addiction to the screen and keyboard?
Stephen A. Brewster
2009  HephaisTK: a toolkit for rapid prototyping of multimodal interfaces.
Bruno Dumas, Denis Lalanne, Rolf Ingold
2009  Learning and predicting multimodal daily life patterns from cell phones.
Katayoun Farrahi, Daniel Gatica-Perez
2009  Learning from preferences and selected multimodal features of players.
Georgios N. Yannakakis
2009  Living better with robots.
Cynthia Breazeal
2009  Mapping information to audio and tactile icons.
Eve E. Hoggan, Roope Raisamo, Stephen A. Brewster
2009  Mediated attention with multimodal augmented reality.
Angelika Dierker, Christian Mertes, Thomas Hermann, Marc Hanheide, Gerhard Sagerer
2009  MirrorTrack: tracking with reflection - comparison with top-down approach.
Yannick Verdie, Bing Fang, Francis K. H. Quek
2009  Modeling culturally authentic style shifting with virtual peers.
Justine Cassell, Kathleen Geraghty, Berto Gonzalez, John Borland
2009  Multi-modal and multi-camera attention in smart environments.
Boris Schauerte, Jan Richarz, Thomas Plötz, Christian Thurau, Gernot A. Fink
2009  Multi-modal communication system.
Victor S. Finomore, Dianne K. Popik, Douglas Brungart, Brian D. Simpson
2009  Multi-modal features for real-time detection of human-robot interaction categories.
Ian R. Fasel, Masahiro Shiomi, Philippe-Emmanuel Chadutaud, Takayuki Kanda, Norihiro Hagita, Hiroshi Ishiguro
2009  Multimodal end-of-turn prediction in multi-party meetings.
Iwan de Kok, Dirk Heylen
2009  Multimodal floor control shift detection.
Lei Chen, Mary P. Harper
2009  Multimodal inference for driver-vehicle interaction.
Tevfik Metin Sezgin, Ian Davies, Peter Robinson
2009  Multimodal integration of natural gaze behavior for intention recognition during object manipulation.
Thomas Bader, Matthias Vogelgesang, Edmund Klaus
2009  Navigation with a passive brain based interface.
Jan B. F. van Erp, Peter J. Werkhoven, Marieke E. Thurlings, Anne-Marie Brouwer
2009  Proceedings of the 11th International Conference on Multimodal Interfaces, ICMI 2009, Cambridge, Massachusetts, USA, November 2-4, 2009.
James L. Crowley, Yuri A. Ivanov, Christopher Richard Wren, Daniel Gatica-Perez, Michael Johnston, Rainer Stiefelhagen
2009  Providing expressive eye movement to virtual agents.
Zheng Li, Xia Mao, Lei Liu
2009  RVDT: a design space for multiple input devices, multiple views and multiple display surfaces combination.
Rami Ajaj, Christian Jacquemin, Frédéric Vernier
2009  Realtime meeting analysis and 3D meeting viewer based on omnidirectional multimodal sensors.
Kazuhiro Otsuka, Shoko Araki, Dan Mikami, Kentaro Ishizuka, Masakiyo Fujimoto, Junji Yamato
2009  Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings.
Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato
2009  Recognizing events with temporal random forests.
David Demirdjian, Chenna Varri
2009  Salience in the generation of multimodal referring acts.
Paul Piwek
2009  Speaker change detection with privacy-preserving audio cues.
Sree Hari Krishnan Parthasarathi, Mathew Magimai-Doss, Daniel Gatica-Perez, Hervé Bourlard
2009  State: an assisted document transcription system.
David Llorens, Andrés Marzal, Federico Prat, Juan Miguel Vilar
2009  Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities.
Stavros Petridis, Hatice Gunes, Sebastian Kaltwang, Maja Pantic
2009  Temporal aspects of CARE-based multimodal fusion: from a fusion mechanism to composition components and WoZ components.
Marcos Serrano, Laurence Nigay
2009  Towards adapting fantasy, curiosity and challenge in multimodal dialogue systems for preschoolers.
Theofanis Kannetis, Alexandros Potamianos
2009  Visual based picking supported by context awareness: comparing picking performance using paper-based lists versus lists presented on a head mounted display with contextual support.
Hendrik Iben, Hannes Baumann, Carmen Ruthenbeck, Tobias Klug
2009  Voice key board: multimodal indic text input.
Prasenjit Dey, Ramchandrula Sitaram, Rahul Ajmera, Kalika Bali
2009  WiiNote: multimodal application facilitating multi-user photo annotation activity.
Elena Mugellini, Maria Sokhn, Stefano Carrino, Omar Abou Khaled