ICMI 2006 (Banff, Alberta, Canada)

60 papers

Year  Title / Authors
2006  A 'need to know' system for group classification.
Wen Dong, Jonathan Gips, Alex Pentland
2006  A contextual multimodal integrator.
Péter Pál Boda
2006  A fast and robust 3D head pose and gaze estimation system.
Koichi Kinoshita, Yong Ma, Shihong Lao, Masato Kawade
2006  A new approach to haptic augmentation of the GUI.
Thomas N. Smyth, Arthur E. Kirkpatrick
2006  Audio-visual emotion recognition in adult attachment interview.
Zhihong Zeng, Yuxiao Hu, Yun Fu, Thomas S. Huang, Glenn I. Roisman, Zhen Wen
2006  Automatic detection of group functional roles in face to face interactions.
Massimo Zancanaro, Bruno Lepri, Fabio Pianesi
2006  Automatic speech recognition for webcasts: how good is good enough and what to do when it isn't.
Cosmin Munteanu, Gerald Penn, Ronald Baecker, Yuecheng Zhang
2006  CarDialer: multi-modal in-vehicle cellphone control application.
Vladimír Bergl, Martin Cmejrek, Martin Fanta, Martin Labský, Ladislav Serédi, Jan Sedivý, Lubos Ures
2006  Co-Adaptation of audio-visual speech and gesture classifiers.
C. Mario Christoudias, Kate Saenko, Louis-Philippe Morency, Trevor Darrell
2006  Collaborative multimodal photo annotation over digital paper.
Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David McGee, Philip R. Cohen
2006  Combining audio and video to predict helpers' focus of attention in multiparty remote collaboration on physical tasks.
Jiazhi Ou, Yanxin Shi, Jeffrey Wong, Susan R. Fussell, Jie Yang
2006  Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis.
Jennifer L. Burke, Matthew S. Prewett, Ashley A. Gray, Liuquin Yang, Frederick R. B. Stilson, Michael D. Coovert, Linda R. Elliott, Elizabeth S. Redden
2006  Computing human faces for human viewers: automated animation in photographs and paintings.
Volker Blanz
2006  Cross-modal coordination of expressive strength between voice and gesture for personified media.
Tomoko Yonezawa, Noriko Suzuki, Shinji Abe, Kenji Mase, Kiyoshi Kogure
2006  Detection and application of influence rankings in small group meetings.
Rutger Rienks, Dong Zhang, Daniel Gatica-Perez, Wilfried M. Post
2006  EM detection of common origin of multi-modal cues.
Athanasios K. Noulas, Ben J. A. Kröse
2006  Embodiment and multimodality.
Francis K. H. Quek
2006  Embrace system for remote counseling.
Osamu Morikawa, Sayuri Hashimoto, Tsunetsugu Munakata, Junzo Okunaka
2006  Enabling multimodal communications for enhancing the ability of learning for the visually impaired.
Francis K. H. Quek, David McNeill, Francisco Oliveira
2006  Evaluating usability based on multimodal information: an empirical study.
Tao Lin, Atsumi Imamiya
2006  Explorations in sound for tilting-based interfaces.
Matthias Rath, Michael Rohs
2006  Foundations of human computing: facial expression and emotion.
Jeffrey F. Cohn
2006  From vocal to multimodal dialogue management.
Miroslav Melichar, Pavel Cenek
2006  GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications.
Edward Tse, Saul Greenberg, Chia Shen
2006  Gaze-X: adaptive affective multimodal interface for single-user office scenarios.
Ludo Maat, Maja Pantic
2006  Gender and age estimation system robust to pose variations.
Erina Takikawa, Koichi Kinoshita, Shihong Lao, Masato Kawade
2006  HMM-based synthesis of emotional facial expressions during speech in synthetic talking heads.
Nadia Mana, Fabio Pianesi
2006  Haptic phonemes: basic building blocks of haptic communication.
Mario J. Enriquez, Karon E. MacLean, Christian Chita
2006  Human computing and machine understanding of human behavior: a survey.
Maja Pantic, Alex Pentland, Anton Nijholt, Thomas S. Huang
2006  Human computing, virtual humans and artificial imperfection.
Zsófia Ruttkay, Dennis Reidsma, Anton Nijholt
2006  Human perception of intended addressee during computer-assisted meetings.
Rebecca Lunsford, Sharon L. Oviatt
2006  Human-Robot dialogue for joint construction tasks.
Mary Ellen Foster, Tomas By, Markus Rickert, Alois C. Knoll
2006  Mixing virtual and actual.
Herbert H. Clark
2006  Modeling naturalistic affective states via facial and vocal expressions recognition.
George Caridakis, Lori Malatesta, Loïc Kessous, Noam Amir, Amaryllis Raouzaiou, Kostas Karpouzis
2006  Movement and music: designing gestural interfaces for computer-based musical instruments.
M. Sile O'Modhrain
2006  Multimodal estimation of user interruptibility for smart mobile telephones.
Robert G. Malkin, Datong Chen, Jie Yang, Alex Waibel
2006  Multimodal fusion: a new hybrid strategy for dialogue systems.
Pilar Manchón Portillo, Guillermo Pérez-García, Gabriel Amores Carredano
2006  MyConnector: analysis of context cues to predict human availability for communication.
Maria Danninger, Tobias Kluge, Rainer Stiefelhagen
2006  Proceedings of the 8th International Conference on Multimodal Interfaces, ICMI 2006, Banff, Alberta, Canada, November 2-4, 2006
Francis K. H. Quek, Jie Yang, Dominic W. Massaro, Abeer A. Alwan, Timothy J. Hazen
2006  Prototyping novel collaborative multimodal systems: simulation, data collection and analysis tools for the next decade.
Alexander M. Arthur, Rebecca Lunsford, Matt Wesson, Sharon L. Oviatt
2006  Recognizing gaze aversion gestures in embodied conversational discourse.
Louis-Philippe Morency, C. Mario Christoudias, Trevor Darrell
2006  Salience modeling based on non-verbal modalities for spoken language understanding.
Shaolin Qu, Joyce Y. Chai
2006  Short message dictation on Symbian series 60 mobile phones.
E. Karpov, Imre Kiss, Jussi Leppänen, Jesper Ø. Olsen, Daniela Oria, S. Sivadas, Jilei Tian
2006  Speaker localization for microphone array-based ASR: the effects of accuracy on overlapping speech.
Hari Krishna Maganti, Daniel Gatica-Perez
2006  Spontaneous vs. posed facial behavior: automatic analysis of brow actions.
Michel François Valstar, Maja Pantic, Zara Ambadar, Jeffrey F. Cohn
2006  The NIST smart data flow system II multimodal data transport infrastructure.
Antoine Fillinger, Stéphane Degré, Imad Hamchi, Vincent Stanford
2006  The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback.
Matthew S. Prewett, Liuquin Yang, Frederick R. B. Stilson, Ashley A. Gray, Michael D. Coovert, Jennifer L. Burke, Elizabeth S. Redden, Linda R. Elliott
2006  The role of psychological ownership and ownership markers in collaborative working environment.
Qianying Wang, Alberto Battocchi, Ilenia Graziola, Fabio Pianesi, Daniel Tomasini, Massimo Zancanaro, Clifford Nass
2006  Toward haptic rendering for a virtual dissection.
Nasim Melony Vafai, Shahram Payandeh, John Dill
2006  Toward open-microphone engagement for multiparty interactions.
Rebecca Lunsford, Sharon L. Oviatt, Alexander M. Arthur
2006  Towards the integration of shape-related information in 3-D gestures and speech.
Timo Sowa
2006  Tracking head pose and focus of attention with multiple far-field cameras.
Michael Voit, Rainer Stiefelhagen
2006  Tracking the multi person wandering visual focus of attention.
Kevin Smith, Sileye O. Ba, Daniel Gatica-Perez, Jean-Marc Odobez
2006  Using maximum entropy (ME) model to incorporate gesture cues for SU detection.
Lei Chen, Mary P. Harper, Zhongqiang Huang
2006  Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations.
Edward C. Kaiser
2006  VirtualHuman: dialogic and affective interaction with virtual characters.
Norbert Reithinger, Patrick Gebhard, Markus Löckelt, Alassane Ndiaye, Norbert Pfleger, Martin Klesen
2006  Weight, weight, don't tell me.
Ted Warburton
2006  Which one is better?: information navigation techniques for spatially aware handheld displays.
Michael Rohs, Georg Essl
2006  Word graph based speech recognition error correction by handwriting input.
Peng Liu, Frank K. Soong
2006  roBlocks: a robotic construction kit for mathematics and science education.
Eric Schweikardt, Mark D. Gross