ICMI 2005

48 papers

Year  Title / Authors
2005  A comparison of two methods of scaling on form perception via a haptic interface.
Mounia Ziat, Olivier Gapenne, John Stewart, Charles Lenay
2005  A first evaluation study of a database of kinetic facial expressions (DaFEx).
Alberto Battocchi, Fabio Pianesi, Dina Goren-Bar
2005  A joint particle filter for audio-visual speaker tracking.
Kai Nickel, Tobias Gehrig, Rainer Stiefelhagen, John W. McDonough
2005  A look under the hood: design and development of the first SmartWeb system demonstrator.
Norbert Reithinger, Simon Bergweiler, Ralf Engel, Gerd Herzog, Norbert Pfleger, Massimo Romanelli, Daniel Sonntag
2005  A multimodal perceptual user interface for video-surveillance environments.
Giancarlo Iannizzotto, Carlo Costanzo, Francesco La Rosa, Pietro Lanzafame
2005  A pattern mining method for interpretation of interaction.
Tomoyuki Morita, Yasushi Hirano, Yasuyuki Sumi, Shoji Kajita, Kenji Mase
2005  A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances.
Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato
2005  A study of manual gesture-based selection for the PEMMI multimodal transport management interface.
Fang Chen, Eric H. C. Choi, Julien Epps, Serge Lichman, Natalie Ruiz, Yu (David) Shi, Ronnie Taib, Mike Wu
2005  A transformational approach for multimodal web user interfaces based on UsiXML.
Adrian Stanciulescu, Quentin Limbourg, Jean Vanderdonckt, Benjamin Michotte, Francisco Montero Simarro
2005  A user interface framework for multimodal VR interactions.
Marc Erich Latoschik
2005  Agent-based architecture for implementing multimodal learning environments for visually impaired children.
Rami Saarinen, Janne Järvi, Roope Raisamo, Jouni Salo
2005  An initial usability assessment for symbolic haptic rendering of music parameters.
Meghan Allen, Jennifer Gluck, Karon E. MacLean, Erwin Tang
2005  Analyzing and predicting focus of attention in remote collaborative tasks.
Jiazhi Ou, Lui Min Oh, Susan R. Fussell, Tal Blum, Jie Yang
2005  Audio-visual cues distinguishing self- from system-directed speech in younger and older adults.
Rebecca Lunsford, Sharon L. Oviatt, Rachel Coulston
2005  Augmenting conversational dialogue by means of latent semantic googling.
Robin Senior, Roel Vertegaal
2005  Automatic detection of interaction groups.
Oliver Brdiczka, Jérôme Maisonnasse, Patrick Reignier
2005  Combining environmental cues & head gestures to interact with wearable devices.
Marc Hanheide, Christian Bauckhage, Gerhard Sagerer
2005  Contextual recognition of head gestures.
Louis-Philippe Morency, Candace L. Sidner, Christopher Lee, Trevor Darrell
2005  Distributed pointing for multimodal collaboration over sketched diagrams.
Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David Demirdjian
2005  Exploring multimodality in the laboratory and the field.
Lynne Baillie, Raimund Schatz
2005  Gaze-based selection of standard-size menu items.
Oleg Spakov, Darius Miniotas
2005  Gesture-driven American sign language phraselator.
Jose L. Hernandez-Rebollar
2005  Hapticat: exploration of affective touch.
Steve Yohanan, Mavis Chan, Jeremy Hopkins, Haibo Sun, Karon E. MacLean
2005  Human-style interaction with a robot for cooperative learning of scene objects.
Shuyin Li, Axel Haasch, Britta Wrede, Jannik Fritsch, Gerhard Sagerer
2005  Identifying the intended addressee in mixed human-human and human-computer interaction from non-verbal features.
Koen van Turnhout, Jacques M. B. Terken, Ilse Bakx, Berry Eggen
2005  Inferring body pose using speech content.
Sy Bor Wang, David Demirdjian
2005  Integrating sketch and speech inputs using spatial information.
Bee-Wah Lee, Alvin W. Yeo
2005  Interactive humanoids and androids as ideal interfaces for humans.
Hiroshi Ishiguro
2005  Interactive vision to detect target objects for helper robots.
Md. Altab Hossain, Rahmadi Kurnia, Akio Nakamura, Yoshinori Kuno
2005  Meeting room configuration and multiple camera calibration in meeting analysis.
Yingen Xiong, Francis K. H. Quek
2005  Migratory MultiModal interfaces in MultiDevice environments.
Silvia Berti, Fabio Paternò
2005  Multimodal multispeaker probabilistic tracking in meetings.
Daniel Gatica-Perez, Guillaume Lathoud, Jean-Marc Odobez, Iain McCowan
2005  Multimodal output specification / simulation platform.
Cyril Rousseau, Yacine Bellik, Frédéric Vernier
2005  Perceiving ordinal data haptically under workload.
Anthony Tang, Peter McLachlan, Karen Lowe, Chalapati Rao Saka, Karon E. MacLean
2005  Probabilistic grounding of situated speech using plan recognition and reference resolution.
Peter Gorniak, Deb Roy
2005  Proceedings of the 7th International Conference on Multimodal Interfaces, ICMI 2005, Trento, Italy, October 4-6, 2005
Gianni Lazzari, Fabio Pianesi, James L. Crowley, Kenji Mase, Sharon L. Oviatt
2005  Recognition of sign language subwords based on boosted hidden Markov models.
Liang-Guo Zhang, Xilin Chen, Chunli Wang, Yiqiang Chen, Wen Gao
2005  Region extraction of a gaze object using the gaze point and view image sequences.
Norimichi Ukita, Tomohisa Ono, Masatsugu Kidode
2005  Socially aware computation and communication.
Alex Pentland
2005  Synthetic characters as multichannel interfaces.
Elena Not, Koray Balci, Fabio Pianesi, Massimo Zancanaro
2005  Tangible user interfaces for 3D clipping plane interaction with volumetric data: a case study.
Wen Qi, Jean-Bernard Martens
2005  The "puzzle" of sensory perception: putting together multisensory information.
Marc O. Ernst
2005  The connector: facilitating context-aware communication.
Maria Danninger, G. Flaherty, Keni Bernardin, Hazim Kemal Ekenel, Thilo Köhler, Robert G. Malkin, Rainer Stiefelhagen, Alex Waibel
2005  The contrastive evaluation of unimodal and multimodal interfaces for voice output communication aids.
Melanie Baljko
2005  Understanding the effect of life-like interface agents through users' eye movements.
Helmut Prendinger, Chunling Ma, Jin Yingzi, Arturo Nakasone, Mitsuru Ishizuka
2005  Using observations of real designers at work to inform the development of a novel haptic modeling system.
Umberto Giraudo, Monica Bordegoni
2005  Virtual tangible widgets: seamless universal interaction with personal sensing devices.
Eiji Tokunaga, Hiroaki Kimura, Nobuyuki Kobayashi, Tatsuo Nakajima
2005  XfaceEd: authoring tool for embodied conversational agents.
Koray Balci