ICMI 2008

54 papers

Year  Title / Authors
2008  A Fitts Law comparison of eye tracking and manual input in the selection of visual targets.
Roel Vertegaal
2008  A Wizard of Oz study for an AR multimodal interface.
Minkyung Lee, Mark Billinghurst
2008  A browser-based multimodal interaction system.
Kouichi Katsurada, Teruki Kirihata, Masashi Kudo, Junki Takada, Tsuneo Nitta
2008  A high-performance dual-wizard infrastructure for designing speech, pen, and multimodal interfaces.
Philip R. Cohen, Colin Swindells, Sharon L. Oviatt, Alexander M. Arthur
2008  A multi-modal spoken dialog system for interactive TV.
Rajesh Balchandran, Mark E. Epstein, Gerasimos Potamianos, Ladislav Serédi
2008  A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization.
Kazuhiro Otsuka, Shoko Araki, Kentaro Ishizuka, Masakiyo Fujimoto, Martin Heinrich, Junji Yamato
2008  A three-dimensional characterization space of software components for rapidly developing multimodal interfaces.
Marcos Serrano, David Juras, Laurence Nigay
2008  AcceleSpell, a gestural interactive game to learn and practice finger spelling.
José Luis Hernandez-Rebollar, Ethar Ibrahim Elsakay, José D. Alanís-Urquieta
2008  An audio-haptic interface based on auditory depth cues.
Delphine Devallez, Federico Fontana, Davide Rocchesso
2008  An integrative recognition method for speech and gestures.
Madoka Miki, Chiyomi Miyajima, Takanori Nishino, Norihide Kitaoka, Kazuya Takeda
2008  As go the feet...: on the estimation of attentional focus from stance.
Francis K. H. Quek, Roger W. Ehrich, Thurmon E. Lockhart
2008  Audiovisual 3D rendering as a tool for multimodal interfaces.
George Drettakis
2008  Audiovisual laughter detection based on temporal features.
Stavros Petridis, Maja Pantic
2008  Automated sip detection in naturally-evoked video.
Rana El Kaliouby, Mina Mikhail
2008  Context-based recognition during human interactions: automatic feature selection and encoding dictionary.
Louis-Philippe Morency, Iwan de Kok, Jonathan Gratch
2008  Crossmodal congruence: the look, feel and sound of touchscreen widgets.
Eve E. Hoggan, Topi Kaaresoja, Pauli Laitinen, Stephen A. Brewster
2008  Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios.
Michael Voit, Rainer Stiefelhagen
2008  Designing and evaluating multimodal interaction for mobile contexts.
Saija Lemmelä, Akos Vetek, Kaj Mäkelä, Dari Trendafilov
2008  Designing context-aware multimodal virtual environments.
Lode Vanacken, Joan De Boeck, Chris Raymaekers, Karin Coninx
2008  Detection and localization of 3D audio-visual objects using unsupervised clustering.
Vasil Khalidov, Florence Forbes, Miles E. Hansard, Elise Arnaud, Radu Horaud
2008  Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition.
Mihai Gurban, Jean-Philippe Thiran, Thomas Drugman, Thierry Dutoit
2008  Effectiveness and usability of an online help agent embodied as a talking head.
Jérôme Simonin, Noëlle Carbonell, Danielle Pelé
2008  Embodied conversational agents for voice-biometric interfaces.
Álvaro Hernández Trapote, Beatriz López-Mencía, David Díaz Pardo de Vera, Rubén Fernández Pozo, Javier Caminero
2008  Evaluating talking heads for smart home systems.
Christine Kühnel, Benjamin Weiss, Ina Wechsung, Sascha Fagel, Sebastian Möller
2008  Explorative studies on multimodal interaction in a PDA- and desktop-based scenario.
Andreas Ratzka
2008  Feel-good touch: finding the most pleasant tactile feedback for a mobile touch screen button.
Emilia Koskinen, Topi Kaaresoja, Pauli Laitinen
2008  iGlasses: an automatic wearable speech supplement in face-to-face communication and classroom situations.
Dominic W. Massaro, Miguel Á. Carreira-Perpiñán, David J. Merrill, Cass Sterling, Stephanie Bigler, Elise Piazza, Marcus Perlman
2008  Innovative interfaces in MonAMI: the reminder.
Jonas Beskow, Jens Edlund, Teodore Gjermani, Björn Granström, Joakim Gustafson, Oskar Jonsson, Gabriel Skantze, Helena Tobiasson
2008  Interaction techniques for the analysis of complex data on high-resolution displays.
Chreston A. Miller, Ashley Robinson, Rongrong Wang, Pak Chung, Francis K. H. Quek
2008  Investigating automatic dominance estimation in groups from visual attention and speaking activity.
Hayley Hung, Dinesh Babu Jayagopi, Sileye O. Ba, Jean-Marc Odobez, Daniel Gatica-Perez
2008  Knowledge and data flow architecture for reference processing in multimodal dialog systems.
Ali Choumane, Jacques Siroux
2008  Manipulating trigonometric expressions encoded through electro-tactile signals.
Tatiana Evreinova
2008  MultiML: a general purpose representation language for multimodal human utterances.
Manuel Giuliani, Alois C. Knoll
2008  Multimodal presentation and browsing of music.
David Damm, Christian Fremerey, Frank Kurth, Meinard Müller, Michael Clausen
2008  Multimodal recognition of personality traits in social interactions.
Fabio Pianesi, Nadia Mana, Alessandro Cappelletti, Bruno Lepri, Massimo Zancanaro
2008  Multimodal slideshow: demonstration of the OpenInterface interaction development environment.
David Juras, Laurence Nigay, Michael Ortega, Marcos Serrano
2008  Multimodal system evaluation using modality efficiency and synergy metrics.
Manolis Perakakis, Alexandros Potamianos
2008  Natural interfaces in the field: the case of pen and paper.
Philip R. Cohen
2008  PHANTOM prototype: exploring the potential for learning with multimodal features in dentistry.
Jonathan Padilla San Diego, Alastair Barrow, Margaret J. Cox, William S. Harwin
2008  Perception of dynamic audiotactile feedback to gesture input.
Teemu Tuomas Ahmaniemi, Vuokko Lantz, Juha Marila
2008  Perception of low-amplitude haptic stimuli when biking.
Toni Pakkanen, Jani Lylykangas, Jukka Raisamo, Roope Raisamo, Katri Salminen, Jussi Rantala, Veikko Surakka
2008  Predicting two facets of social verticality in meetings from five-minute time slices and nonverbal cues.
Dinesh Babu Jayagopi, Sileye O. Ba, Jean-Marc Odobez, Daniel Gatica-Perez
2008  Proceedings of the 10th International Conference on Multimodal Interfaces, ICMI 2008, Chania, Crete, Greece, October 20-22, 2008.
Vassilios Digalakis, Alexandros Potamianos, Matthew Turk, Roberto Pieraccini, Yuri Ivanov
2008  Robust gesture processing for multimodal interaction.
Srinivas Bangalore, Michael Johnston
2008  Role recognition in multiparty recordings using social affiliation networks and discrete distributions.
Sarah Favre, Hugues Salamin, John Dines, Alessandro Vinciarelli
2008  Smoothing human-robot speech interactions by using a blinking-light as subtle expression.
Kotaro Funakoshi, Kazuki Kobayashi, Mikio Nakano, Seiji Yamada, Yasuhiko Kitamura, Hiroshi Tsujino
2008  Social signals, their function, and automatic analysis: a survey.
Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard, Alex Pentland
2008  TactiMote: a tactile remote control for navigating in long lists.
Muhammad Tahir, Gilles Bailly, Eric Lecolinet, Gérard Mouret
2008  The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements.
Elise Arnaud, Heidi Christensen, Yan-Chen Lu, Jon Barker, Vasil Khalidov, Miles E. Hansard, Bertrand Holveck, Hervé Mathieu, Ramya Narasimha, Elise Taillant, Florence Forbes, Radu Horaud
2008  The DIRAC AWEAR audio-visual platform for detection of unexpected and incongruent events.
Jörn Anemüller, Jörg-Hendrik Bach, Barbara Caputo, Michal Havlena, Jie Luo, Hendrik Kayser, Bastian Leibe, Petr Motlícek, Tomás Pajdla, Misha Pavel, Akihiko Torii, Luc Van Gool, Alon Zweig, Hynek Hermansky
2008  The WAMI toolkit for developing, deploying, and evaluating web-accessible multimodal interfaces.
Alexander Gruenstein, Ian McGraw, Ibrahim Badr
2008  The babbleTunes system: talk to your iPod!
Jan Schehl, Alexander Pfalzgraf, Norbert Pfleger, Jochen Steigner
2008  Towards a minimalist multimodal dialogue framework using recursive MVC pattern.
Li Li, Wu Chou
2008  VoiceLabel: using speech to label mobile sensor data.
Susumu Harada, Jonathan Lester, Kayur Patel, T. Scott Saponas, James Fogarty, James A. Landay, Jacob O. Wobbrock