ICMI 2007

61 papers

Year | Title / Authors
2007 | 3d augmented mirror: a multimodal interface for string instrument learning and teaching with gesture support.
Kia-Chuan Ng, Tillman Weyde, Oliver Larkin, Kerstin Neubarth, Thijs Koerselman, Bee Ong
2007 | A computational model for spatial expression resolution.
Andrea Corradini
2007 | A large-scale behavior corpus including multi-angle video data for observing infants' long-term developmental processes.
Shinya Kiriyama, Goh Yamamoto, Naofumi Otani, Shogo Ishikawa, Yoichi Takebayashi
2007 | A multi-modal mobile device for learning japanese kanji characters through mnemonic stories.
Norman Lin, Shoji Kajita, Kenji Mase
2007 | A study on the scalability of non-preferred hand mode manipulation.
Jaime Ruiz, Edward Lank
2007 | A survey of affect recognition methods: audio, visual and spontaneous expressions.
Zhihong Zeng, Maja Pantic, Glenn I. Roisman, Thomas S. Huang
2007 | A tactile language for intuitive human-robot communication.
Andreas J. Schmid, Martin Hoffmann, Heinz Wörn
2007 | Audiovisual recognition of spontaneous interest within conversations.
Björn W. Schuller, Ronald Müller, Benedikt Hörnler, Anja Höthker, Hitoshi Konosu, Gerhard Rigoll
2007 | Automated generation of non-verbal behavior for virtual embodied characters.
Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka
2007 | Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances.
Kazuhiro Otsuka, Hiroshi Sawada, Junji Yamato
2007 | Can you talk or only touch-talk: A VoIP-based phone feature for quick, quiet, and private communication.
Maria Danninger, Leila Takayama, Qianying Wang, Courtney Schultz, Jörg Beringer, Paul Hofmann, Frankie James, Clifford Nass
2007 | Designing audio and tactile crossmodal icons for mobile devices.
Eve E. Hoggan, Stephen A. Brewster
2007 | Detecting communication errors from visual cues during the system's conversational turn.
Sy Bor Wang, David Demirdjian, Trevor Darrell
2007 | Developing and analyzing intuitive modes for interactive object modeling.
Alexander Kasper, Regine Becher, Peter Steinhaus, Rüdiger Dillmann
2007 | Disambiguating speech commands using physical context.
Katherine Everitt, Susumu Harada, Jeff A. Bilmes, James A. Landay
2007 | Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder.
Rana El Kaliouby, Alea Teeters
2007 | Evaluation of haptically augmented touchscreen gui elements under cognitive load.
Rock Leung, Karon E. MacLean, Martin Bue Bertelsen, Mayukh Saubhasik
2007 | Extensible middleware framework for multimodal interfaces in distributed environments.
Vitor Fernandes, Tiago João Vieira Guerreiro, Bruno Araújo, Joaquim A. Jorge, João Pereira
2007 | Extraction of important interactions in medical interviews using nonverbal information.
Yuichi Sawamoto, Yuichi Koyama, Yasushi Hirano, Shoji Kajita, Kenji Mase, Kimiko Katsuyama, Kazunobu Yamauchi
2007 | Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain.
Gwen Littlewort, Marian Stewart Bartlett, Kang Lee
2007 | Five-key text input using rhythmic mappings.
Christine Szentgyorgyi, Edward Lank
2007 | Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking.
Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe
2007 | How to distinguish posed from spontaneous smiles using geometric features.
Michel François Valstar, Hatice Gunes, Maja Pantic
2007 | Influencing social dynamics in meetings through a peripheral display.
Janienke Sturm, Olga Houben-van Herwijnen, Anke Eyck, Jacques M. B. Terken
2007 | Interest estimation based on dynamic bayesian networks for visual attentive presentation agents.
Boris Brandherm, Helmut Prendinger, Mitsuru Ishizuka
2007 | Interfaces for musical activities and interfaces for musicians are not the same: the case for codes, a web-based environment for cooperative music prototyping.
Evandro Manara Miletto, Luciano Vargas Flores, Marcelo Soares Pimenta, Jérôme Rutily, Leonardo Santagada
2007 | Interfacing life: a year in the life of a research lab.
Yuri Ivanov
2007 | Just in time learning: implementing principles of multimodal processing and learning for education.
Dominic W. Massaro
2007 | Map navigation with mobile devices: virtual versus physical movement with and without visual context.
Michael Rohs, Johannes Schöning, Martin Raubal, Georg Essl, Antonio Krüger
2007 | Modeling human interaction resources to support the design of wearable multimodal systems.
Tobias Klug, Max Mühlhäuser
2007 | Multimodal interaction analysis in a smart house.
Pilar Manchón Portillo, Carmen del Solar, Gabriel Amores Carredano, Guillermo Pérez-García
2007 | Multimodal interfaces in semantic interaction.
Naoto Iwahashi, Mikio Nakano
2007 | Multimodal cues for addressee-hood in triadic communication with a human information retrieval agent.
Jacques M. B. Terken, Irene Joris, Linda De Valk
2007 | Natural multimodal dialogue systems: a configurable dialogue and presentation strategies component.
Meriam Horchani, Benjamin Caron, Laurence Nigay, Franck Panaget
2007 | On-line multi-modal speaker diarization.
Athanasios K. Noulas, Ben J. A. Kröse
2007 | Password management using doodles.
Naveen Sundar Govindarajulu, Sriganesh Madhvanath
2007 | Positional mapping: keyboard mapping based on characters writing positions for mobile devices.
Ye Kyaw Thu, Yoshiyori Urano
2007 | Presentation sensei: a presentation training system using speech and image processing.
Kazutaka Kurihara, Masataka Goto, Jun Ogata, Yosuke Matsusaka, Takeo Igarashi
2007 | Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI 2007, Nagoya, Aichi, Japan, November 12-15, 2007
Dominic W. Massaro, Kazuya Takeda, Deb Roy, Alexandros Potamianos
2007 | Real-time expression cloning using appearance models.
Barry-John Theobald, Iain A. Matthews, Jeffrey F. Cohn, Steven M. Boker
2007 | Reciprocal attentive communication in remote meeting with a humanoid robot.
Tomoyuki Morita, Kenji Mase, Yasushi Hirano, Shoji Kajita
2007 | Simultaneous prediction of dialog acts and address types in three-party conversations.
Yosuke Matsusaka, Mika Enomoto, Yasuharu Den
2007 | Speech-driven embodied entrainment character system with hand motion input in mobile environment.
Kouzi Osaki, Tomio Watanabe, Michiya Yamamoto
2007 | Speech-filtered bubble ray: improving target acquisition on display walls.
Edward Tse, Mark S. Hancock, Saul Greenberg
2007 | Statistical segmentation and recognition of fingertip trajectories for a gesture interface.
Kazuhiro Morimoto, Chiyomi Miyajima, Norihide Kitaoka, Katunobu Itou, Kazuya Takeda
2007 | Temporal filtering of visual speech for audio-visual speech recognition in acoustically and visually challenging environments.
Jong-Seok Lee, Cheol Hoon Park
2007 | The effect of input mode on inactivity and interaction times of multimodal systems.
Manolis Perakakis, Alexandros Potamianos
2007 | The great challenge of multimodal interfaces towards symbiosis of human and robots.
Norihiro Hagita
2007 | The micole architecture: multimodal support for inclusion of visually impaired children.
Thomas Pietrzak, Benoît Martin, Isabelle Pecci, Rami Saarinen, Roope Raisamo, Janne Järvi
2007 | The painful face: pain expression recognition using active appearance models.
Ahmed Bilal Ashraf, Simon Lucey, Jeffrey F. Cohn, Tsuhan Chen, Zara Ambadar, Kenneth M. Prkachin, Patty Solomon, Barry-John Theobald
2007 | The world of mushrooms: human-computer interaction prototype systems for ambient intelligence.
Yasuhiro Minami, Minako Sawaki, Kohji Dohsaka, Ryuichiro Higashinaka, Kentaro Ishizuka, Hideki Isozaki, Tatsushi Matsubayashi, Masato Miyoshi, Atsushi Nakamura, Takanobu Oba, Hiroshi Sawada, Takeshi Yamada, Eisaku Maeda
2007 | Totalrecall: visualization and semi-automatic annotation of very large audio-visual corpora.
Rony Kubat, Philip DeCamp, Brandon Roy
2007 | Toward content-aware multimodal tagging of personal photo collections.
Paulo Barthelmess, Edward C. Kaiser, David McGee
2007 | Towards smart meeting: enabling technologies and a real-world application.
Zhiwen Yu, Motoyuki Ozeki, Yohsuke Fujii, Yuichi Nakamura
2007 | User impressions of a stuffed doll robot's facing direction in animation systems.
Hiroko Tochigi, Kazuhiko Shinozawa, Norihiro Hagita
2007 | Using pen input features as indices of cognitive load.
Natalie Ruiz, Ronnie Taib, Yu (David) Shi, Eric H. C. Choi, Fang Chen
2007 | Using the influence model to recognize functional roles in meetings.
Wen Dong, Bruno Lepri, Alessandro Cappelletti, Alex Pentland, Fabio Pianesi, Massimo Zancanaro
2007 | Visual inference of human emotion and behaviour.
Shaogang Gong, Caifeng Shan, Tao Xiang
2007 | Voicepen: augmenting pen input with simultaneous non-linguistic vocalization.
Susumu Harada, T. Scott Saponas, James A. Landay
2007 | Workshop on massive datasets.
Christopher Richard Wren, Yuri A. Ivanov
2007 | Workshop on tagging, mining and retrieval of human related activity information.
Paulo Barthelmess, Edward C. Kaiser