ICMI 2017

119 papers

Year  Title / Authors
2017  "Stop over there": natural gesture and speech interaction for non-critical spontaneous intervention in autonomous driving.
Robert Tscharn, Marc Erich Latoschik, Diana Löffler, Jörn Hurtienne
2017  A decentralised multimodal integration of social signals: a bio-inspired approach.
Esma Mansouri-Benssassi
2017  A domain adaptation approach to improve speaker turn embedding using face representation.
Nam Le, Jean-Marc Odobez
2017  A modular, multimodal open-source virtual interviewer dialog agent.
Kirby Cofino, Vikram Ramanarayanan, Patrick L. Lange, David Pautler, David Suendermann-Oeft, Keelan Evanini
2017  A multimodal system to characterise melancholia: cascaded bag of words approach.
Shalini Bhatia, Munawar Hayat, Roland Goecke
2017  A new deep-learning framework for group emotion recognition.
Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu, Bo Sun
2017  AMHUSE: a multimodal dataset for HUmour SEnsing.
Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Raffaella Lanzarotti
2017  AQUBE: an interactive music reproduction system for aquariums.
Daisuke Sasaki, Musashi Nakajima, Yoshihiro Kanno
2017  An investigation of dynamic crossmodal instantiation in TUIs.
Feng Feng, Tony Stockman
2017  Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions.
Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud
2017  Analyzing gaze behavior during turn-taking for estimating empathy skill level.
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka
2017  Animating the Adelino robot with ERIK: the expressive robotics inverse kinematics.
Tiago Ribeiro, Ana Paiva
2017  Audio-visual emotion recognition using deep transfer learning and multiple temporal models.
Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming, Dong-Yan Huang
2017  Automatic assessment of communication skill in non-conventional interview settings: a comparative study.
Pooja Rao S. B., Sowmya Rasipuram, Rahul Das, Dinesh Babu Jayagopi
2017  Automatic classification of auto-correction errors in predictive text entry based on EEG and context information.
Felix Putze, Maik Schünemann, Tanja Schultz, Wolfgang Stuerzlinger
2017  Automatic detection of pain from spontaneous facial expressions.
Fatma Meawad, Su-Yin Yang, Fong Ling Loy
2017  Automatically predicting human knowledgeability through non-verbal cues.
Abdelwahab Bourai, Tadas Baltrusaitis, Louis-Philippe Morency
2017  Bimodal feedback for in-car mid-air gesture interaction.
Gözel Shakeri, John H. Williamson, Stephen A. Brewster
2017  Bot or not: exploring the fine line between cyber and human identity.
Mirjam Wester, Matthew P. Aylett, David A. Braude
2017  Boxer: a multimodal collision technique for virtual objects.
Byungjoo Lee, Qiao Deng, Eve E. Hoggan, Antti Oulasvirta
2017  Collaborative robots: from action and interaction to collaboration (keynote).
Danica Kragic
2017  Comparing human and machine recognition of children's touchscreen stroke gestures.
Alex Shaw, Jaime Ruiz, Lisa Anthony
2017  Computer vision based fall detection by a convolutional neural network.
Miao Yu, Liyun Gong, Stefanos D. Kollias
2017  Cross-modality interaction between EEG signals and facial expression.
Soheil Rayatdoost
2017  Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls.
Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, Keelan Evanini
2017  Cumulative attributes for pain intensity estimation.
Joy O. Egede, Michel F. Valstar
2017  Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks.
Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, Dana Kulic
2017  Demonstrating TouchScope: a hybrid multitouch oscilloscope interface.
Matthew Heinz, Sven Bertel, Florian Echtler
2017  Digitising a medical clerking system with multimodal interaction support.
Harrison South, Martin Taylor, Huseyin Dogan, Nan Jiang
2017  Do you speak to a human or a virtual agent? automatic analysis of user's social cues during mediated communication.
Magalie Ochs, Nathan Libermann, Axel Boidin, Thierry Chaminade
2017  Does serial memory of locations benefit from spatially congruent audiovisual stimuli? investigating the effect of adding spatial sound to visuospatial sequences.
Benjamin Stahl, Georgios N. Marentakis
2017  Emotion recognition in the wild using deep neural networks and Bayesian classifiers.
Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, Angelo Cangelosi
2017  Emotion recognition with multimodal features and temporal models.
Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, Yong Qin
2017  Estimating verbal expressions of task and social cohesion in meetings by quantifying paralinguistic mimicry.
Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, Hayley Hung
2017  Evaluating content-centric vs. user-centric ad affect recognition.
Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan S. Kankanhalli, Ramanathan Subramanian
2017  Evaluating engagement in digital narratives from facial data.
Rui Huan
2017  Evaluating robot facial expressions.
Ruth Aylett, Frank Broz, Ayan Ghosh, Peter E. McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, Alessandro Vinciarelli
2017  Evaluation of psychoacoustic sound parameters for sonification.
Jamie Iona Ferguson, Stephen A. Brewster
2017  Freehand grasping in mixed reality: analysing variation during transition phase of interaction.
Maadh Al Kalbani, Maite Frutos-Pascual, Ian Williams
2017  From individual to group-level emotion recognition: EmotiW 5.0.
Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, Tom Gedeon
2017  Gastrophysics: using technology to enhance the experience of food and drink (keynote).
Charles Spence
2017  GazeTap: towards hands-free interaction in the operating room.
Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, Christian Hansen
2017  GazeTouchPIN: protecting sensitive data on mobile devices using secure multimodal authentication.
Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, Florian Alt
2017  Gender and emotion recognition with implicit user signals.
Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, Ramanathan Subramanian
2017  Group emotion recognition in the wild by combining deep neural networks for facial expression classification and scene-context analysis.
Asad Abbas, Stephan K. Chalup
2017  Group emotion recognition with individual facial emotion CNNs and global image based CNNs.
Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng, Yu Qiao
2017  Group-level emotion recognition using deep models on image scene, faces, and skeletons.
Xin Guo, Luisa F. Polanía, Kenneth E. Barner
2017  Group-level emotion recognition using transfer learning from face identification.
Alexandr G. Rassadin, Alexey S. Gruzdev, Andrey V. Savchenko
2017  Hand-to-hand: an intermanual illusion of movement.
Dario Pittera, Marianna Obrist, Ali Israr
2017  Head and shoulders: automatic error detection in human-robot interaction.
Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, Manfred Tscheligi
2017  Head-mounted displays as opera glasses: using mixed-reality to deliver an egalitarian user experience during live events.
Carl Bishop, Augusto Esteves, Iain McGregor
2017  How may I help you? behavior and impressions in hospitality service encounters.
Skanda Muralidhar, Marianne Schmid Mast, Daniel Gatica-Perez
2017  Human-centered recognition of children's touchscreen gestures.
Alex Shaw
2017  Hybrid models for opinion analysis in speech interactions.
Valentin Barrière
2017  ISIAA 2017: 1st international workshop on investigating social interactions with artificial agents (workshop summary).
Thierry Chaminade, Fabrice Lefèvre, Noël Nguyen, Magalie Ochs
2017  Immersive virtual eating and conditioned food responses.
Nikita Mae B. Tuanquin
2017  IntelliPrompter: speech-based dynamic note display interface for oral presentations.
Reza Asadi, Ha Trinh, Harriet J. Fell, Timothy W. Bickmore
2017  Interactive narration with a child: impact of prosody and facial expressions.
Ovidiu Serban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, Émilie Chanoni
2017  Learning supervised scoring ensemble for emotion recognition in the wild.
Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao, Yurong Chen
2017  Low-intrusive recognition of expressive movement qualities.
Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, Antonio Camurri
2017  MHFI 2017: 2nd international workshop on multisensorial approaches to human-food interaction (workshop summary).
Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, Rick Schifferstein, Charles Spence
2017  MIE 2017: 1st international workshop on multimodal interaction for education (workshop summary).
Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno, Erica Volta
2017  MIRIAM: a multimodal chat-based interface for autonomous systems.
Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patrón, Atanas Laskov
2017  Markov reward models for analyzing group interaction.
Gabriel Murray
2017  Meyendtris: a hands-free, multimodal Tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming.
Laurens R. Krol, Sarah-Christin Freytag, Thorsten O. Zander
2017  Mining a multimodal corpus of doctor's training for virtual patient's feedbacks.
Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, Roxane Bertrand
2017  Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild.
Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, Benoit Huet
2017  Modelling fusion of modalities in multimodal interactive systems with MMMM.
Bruno Dumas, Jonathan Pirau, Denis Lalanne
2017  Modulating the non-verbal social signals of a humanoid robot.
Amol A. Deshmukh, Bart G. W. Craenen, Alessandro Vinciarelli, Mary Ellen Foster
2017  Multi-level feature fusion for group-level emotion recognition.
B. Balaji, Venkata Ramana Murthy Oruganti
2017  Multi-modal emotion recognition using semi-supervised learning and multiple neural networks in the wild.
Dae Ha Kim, Min Kyu Lee, Dong-Yoon Choi, Byung Cheol Song
2017  Multi-task learning of social psychology assessments and nonverbal features for automatic leadership identification.
Cigdem Beyan, Francesca Capozzi, Cristina Becchio, Vittorio Murino
2017  Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals.
Ashwaq Al-Hargan, Neil Cooke, Tareq Binjammaz
2017  Multimodal analysis of vocal collaborative search: a public corpus and results.
Daniel McDuff, Paul Thomas, Mary Czerwinski, Nick Craswell
2017  Multimodal gender detection.
Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, Mihai Burzo
2017  Multimodal interaction in classrooms: implementation of tangibles in integrated music and math lessons.
Jennifer Müller, Uwe Oestermeier, Peter Gerjets
2017  Multimodal language grounding for improved human-robot collaboration: exploring spatial semantic representations in the shared space of attention.
Dimosthenis Kontogiorgos
2017  Multimodal sentiment analysis with word-level fusion and reinforcement learning.
Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency
2017  Playlab: telling stories with technology (workshop summary).
Julie R. Williamson, Tom Flint, Chris Speed
2017  Pooling acoustic and lexical features for the prediction of valence.
Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost
2017  Pre-touch proxemics: moving the design space of touch targets from still graphics towards proxemic behaviors.
Ilhan Aslan, Elisabeth André
2017  Predicting meeting extracts in group discussions using multimodal convolutional neural networks.
Fumio Nihei, Yukiko I. Nakano, Yutaka Takase
2017  Predicting the distribution of emotion perception: capturing inter-rater variability.
Biqiao Zhang, Georg Essl, Emily Mower Provost
2017  Proceedings of the 19th ACM International Conference on Multimodal Interaction, ICMI 2017, Glasgow, United Kingdom, November 13 - 17, 2017.
Edward Lank, Alessandro Vinciarelli, Eve E. Hoggan, Sriram Subramanian, Stephen A. Brewster
2017  Rapid development of multimodal interactive systems: a demonstration of platform for situated intelligence.
Dan Bohus, Sean Andrist, Mihai Jalobeanu
2017  Real-time mixed-reality telepresence via 3D reconstruction with HoloLens and commodity depth sensors.
Michal Joachimczak, Juan Liu, Hiroshi Ando
2017  Rhythmic micro-gestures: discreet interaction on-the-go.
Euan Freeman, Gareth Griffiths, Stephen A. Brewster
2017  SAM: the school attachment monitor.
Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, Stephen A. Brewster
2017  Situated conceptualization: a framework for multimodal interaction (keynote).
Lawrence W. Barsalou
2017  Social robots for motivation and engagement in therapy.
Katie Winkle
2017  Social signal extraction from egocentric photo-streams.
Maedeh Aghaei
2017  Steps towards collaborative multimodal dialogue (sustained contribution award).
Phil Cohen
2017  Tablets, tabletops, and smartphones: cross-platform comparisons of children's touchscreen interactions.
Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, Lisa Anthony
2017  Temporal alignment using the incremental unit framework.
Casey Kennington, Ting Han, David Schlangen
2017  Temporal multimodal fusion for video emotion classification in the wild.
Valentin Vielzeuf, Stéphane Pateux, Frédéric Jurie
2017  Text based user comments as a signal for automatic language identification of online videos.
A. Seza Dogruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, Christoph Oehler
2017  Textured surfaces for ultrasound haptic displays.
Euan Freeman, Ross Anderson, Julie R. Williamson, Graham A. Wilson, Stephen A. Brewster
2017  The Boston Massacre history experience.
David G. Novick, Laura M. Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Iván Gris Sepulveda, Olivia Rodriguez-Herrera, Enrique Ponce
2017  The MULTISIMO multimodal corpus of collaborative interactions.
Maria Koutsombogera, Carl Vogel
2017  The NoXi database: multimodal recordings of mediated novice-expert interactions.
Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres, Catherine Pelachaud, Elisabeth André, Michel F. Valstar
2017  The relationship between task-induced stress, vocal changes, and physiological state during a dyadic team task.
Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, Stefan Scherer
2017  The reliability of non-verbal cues for situated reference resolution and their interplay with language: implications for human robot interaction.
Stephanie Gross, Brigitte Krenn, Matthias Scheutz
2017  Thermal in-car interaction for navigation.
Patrizia Di Campli San Vito, Stephen A. Brewster, Frank E. Pollick, Stuart White
2017  TouchScope: a hybrid multitouch oscilloscope interface.
Matthew Heinz, Sven Bertel, Florian Echtler
2017  Toward an efficient body expression recognition based on the synthesis of a neutral movement.
Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, Saïda Bouakaz
2017  Towards a computational model for first impressions generation.
Béatrice Biancardi
2017  Towards designing speech technology based assistive interfaces for children's speech therapy.
Revathy Nayar
2017  Towards edible interfaces: designing interactions with food.
Tom Gayler
2017  Towards the use of social interaction conventions as prior for gaze model adaptation.
Rémy Siegfried, Yu Yu, Jean-Marc Odobez
2017  Tracking liking state in brain activity while watching multiple movies.
Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, Satoshi Nakamura
2017  Trust triggers for multimodal command and control interfaces.
Helen F. Hastie, Xingkun Liu, Pedro Patrón
2017  UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions.
Atef Ben Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, Angelica Lim
2017  Using mobile virtual reality to empower people with hidden disabilities to overcome their barriers.
Matthieu Poyade, Glyn Morris, Ian Taylor, Victor Portela
2017  Utilising natural cross-modal mappings for visual control of feature-based sound synthesis.
Augoustinos Tsiros, Grégory Leplâtre
2017  Virtual debate coach design: assessing multimodal argumentation performance.
Volha Petukhova, Tobias Mayer, Andrei Malchanau, Harry Bunt
2017  WOCCI 2017: 6th international workshop on child computer interaction (workshop summary).
Keelan Evanini, Maryam Najafian, Saeid Safavi, Kay Berkling
2017  Wearable interactive display for the local positioning system (LPS).
Daniel M. Lofaro, Christopher Taylor, Ryan Tse, Donald Sofge
2017  Web-based interactive media authoring system with multimodal interaction.
Bok Deuk Song, Yeon Jun Choi, Jong Hyun Park
2017  ZSGL: zero shot gestural learning.
Naveen Madapana, Juan P. Wachs