| 2017 | "Stop over there": natural gesture and speech interaction for non-critical spontaneous intervention in autonomous driving. Robert Tscharn, Marc Erich Latoschik, Diana Löffler, Jörn Hurtienne |
| 2017 | A decentralised multimodal integration of social signals: a bio-inspired approach. Esma Mansouri-Benssassi |
| 2017 | A domain adaptation approach to improve speaker turn embedding using face representation. Nam Le, Jean-Marc Odobez |
| 2017 | A modular, multimodal open-source virtual interviewer dialog agent. Kirby Cofino, Vikram Ramanarayanan, Patrick L. Lange, David Pautler, David Suendermann-Oeft, Keelan Evanini |
| 2017 | A multimodal system to characterise melancholia: cascaded bag of words approach. Shalini Bhatia, Munawar Hayat, Roland Goecke |
| 2017 | A new deep-learning framework for group emotion recognition. Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu, Bo Sun |
| 2017 | AMHUSE: a multimodal dataset for HUmour SEnsing. Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Raffaella Lanzarotti |
| 2017 | AQUBE: an interactive music reproduction system for aquariums. Daisuke Sasaki, Musashi Nakajima, Yoshihiro Kanno |
| 2017 | An investigation of dynamic crossmodal instantiation in TUIs. Feng Feng, Tony Stockman |
| 2017 | Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions. Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud |
| 2017 | Analyzing gaze behavior during turn-taking for estimating empathy skill level. Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka |
| 2017 | Animating the Adelino robot with ERIK: the expressive robotics inverse kinematics. Tiago Ribeiro, Ana Paiva |
| 2017 | Audio-visual emotion recognition using deep transfer learning and multiple temporal models. Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming, Dong-Yan Huang |
| 2017 | Automatic assessment of communication skill in non-conventional interview settings: a comparative study. Pooja Rao S. B., Sowmya Rasipuram, Rahul Das, Dinesh Babu Jayagopi |
| 2017 | Automatic classification of auto-correction errors in predictive text entry based on EEG and context information. Felix Putze, Maik Schünemann, Tanja Schultz, Wolfgang Stuerzlinger |
| 2017 | Automatic detection of pain from spontaneous facial expressions. Fatma Meawad, Su-Yin Yang, Fong Ling Loy |
| 2017 | Automatically predicting human knowledgeability through non-verbal cues. Abdelwahab Bourai, Tadas Baltrusaitis, Louis-Philippe Morency |
| 2017 | Bimodal feedback for in-car mid-air gesture interaction. Gözel Shakeri, John H. Williamson, Stephen A. Brewster |
| 2017 | Bot or not: exploring the fine line between cyber and human identity. Mirjam Wester, Matthew P. Aylett, David A. Braude |
| 2017 | Boxer: a multimodal collision technique for virtual objects. Byungjoo Lee, Qiao Deng, Eve E. Hoggan, Antti Oulasvirta |
| 2017 | Collaborative robots: from action and interaction to collaboration (keynote). Danica Kragic |
| 2017 | Comparing human and machine recognition of children's touchscreen stroke gestures. Alex Shaw, Jaime Ruiz, Lisa Anthony |
| 2017 | Computer vision based fall detection by a convolutional neural network. Miao Yu, Liyun Gong, Stefanos D. Kollias |
| 2017 | Cross-modality interaction between EEG signals and facial expression. Soheil Rayatdoost |
| 2017 | Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls. Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, Keelan Evanini |
| 2017 | Cumulative attributes for pain intensity estimation. Joy O. Egede, Michel F. Valstar |
| 2017 | Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks. Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, Dana Kulic |
| 2017 | Demonstrating TouchScope: a hybrid multitouch oscilloscope interface. Matthew Heinz, Sven Bertel, Florian Echtler |
| 2017 | Digitising a medical clerking system with multimodal interaction support. Harrison South, Martin Taylor, Huseyin Dogan, Nan Jiang |
| 2017 | Do you speak to a human or a virtual agent? automatic analysis of user's social cues during mediated communication. Magalie Ochs, Nathan Libermann, Axel Boidin, Thierry Chaminade |
| 2017 | Does serial memory of locations benefit from spatially congruent audiovisual stimuli? investigating the effect of adding spatial sound to visuospatial sequences. Benjamin Stahl, Georgios N. Marentakis |
| 2017 | Emotion recognition in the wild using deep neural networks and Bayesian classifiers. Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, Angelo Cangelosi |
| 2017 | Emotion recognition with multimodal features and temporal models. Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, Yong Qin |
| 2017 | Estimating verbal expressions of task and social cohesion in meetings by quantifying paralinguistic mimicry. Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, Hayley Hung |
| 2017 | Evaluating content-centric vs. user-centric ad affect recognition. Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan S. Kankanhalli, Ramanathan Subramanian |
| 2017 | Evaluating engagement in digital narratives from facial data. Rui Huan |
| 2017 | Evaluating robot facial expressions. Ruth Aylett, Frank Broz, Ayan Ghosh, Peter E. McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, Alessandro Vinciarelli |
| 2017 | Evaluation of psychoacoustic sound parameters for sonification. Jamie Iona Ferguson, Stephen A. Brewster |
| 2017 | Freehand grasping in mixed reality: analysing variation during transition phase of interaction. Maadh Al Kalbani, Maite Frutos-Pascual, Ian Williams |
| 2017 | From individual to group-level emotion recognition: EmotiW 5.0. Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, Tom Gedeon |
| 2017 | Gastrophysics: using technology to enhance the experience of food and drink (keynote). Charles Spence |
| 2017 | GazeTap: towards hands-free interaction in the operating room. Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, Christian Hansen |
| 2017 | GazeTouchPIN: protecting sensitive data on mobile devices using secure multimodal authentication. Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, Florian Alt |
| 2017 | Gender and emotion recognition with implicit user signals. Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, Ramanathan Subramanian |
| 2017 | Group emotion recognition in the wild by combining deep neural networks for facial expression classification and scene-context analysis. Asad Abbas, Stephan K. Chalup |
| 2017 | Group emotion recognition with individual facial emotion CNNs and global image based CNNs. Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng, Yu Qiao |
| 2017 | Group-level emotion recognition using deep models on image scene, faces, and skeletons. Xin Guo, Luisa F. Polanía, Kenneth E. Barner |
| 2017 | Group-level emotion recognition using transfer learning from face identification. Alexandr G. Rassadin, Alexey S. Gruzdev, Andrey V. Savchenko |
| 2017 | Hand-to-hand: an intermanual illusion of movement. Dario Pittera, Marianna Obrist, Ali Israr |
| 2017 | Head and shoulders: automatic error detection in human-robot interaction. Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, Manfred Tscheligi |
| 2017 | Head-mounted displays as opera glasses: using mixed-reality to deliver an egalitarian user experience during live events. Carl Bishop, Augusto Esteves, Iain McGregor |
| 2017 | How may I help you? behavior and impressions in hospitality service encounters. Skanda Muralidhar, Marianne Schmid Mast, Daniel Gatica-Perez |
| 2017 | Human-centered recognition of children's touchscreen gestures. Alex Shaw |
| 2017 | Hybrid models for opinion analysis in speech interactions. Valentin Barrière |
| 2017 | ISIAA 2017: 1st international workshop on investigating social interactions with artificial agents (workshop summary). Thierry Chaminade, Fabrice Lefèvre, Noël Nguyen, Magalie Ochs |
| 2017 | Immersive virtual eating and conditioned food responses. Nikita Mae B. Tuanquin |
| 2017 | IntelliPrompter: speech-based dynamic note display interface for oral presentations. Reza Asadi, Ha Trinh, Harriet J. Fell, Timothy W. Bickmore |
| 2017 | Interactive narration with a child: impact of prosody and facial expressions. Ovidiu Serban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, Émilie Chanoni |
| 2017 | Learning supervised scoring ensemble for emotion recognition in the wild. Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao, Yurong Chen |
| 2017 | Low-intrusive recognition of expressive movement qualities. Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, Antonio Camurri |
| 2017 | MHFI 2017: 2nd international workshop on multisensorial approaches to human-food interaction (workshop summary). Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, Rick Schifferstein, Charles Spence |
| 2017 | MIE 2017: 1st international workshop on multimodal interaction for education (workshop summary). Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno, Erica Volta |
| 2017 | MIRIAM: a multimodal chat-based interface for autonomous systems. Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patrón, Atanas Laskov |
| 2017 | Markov reward models for analyzing group interaction. Gabriel Murray |
| 2017 | Meyendtris: a hands-free, multimodal tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming. Laurens R. Krol, Sarah-Christin Freytag, Thorsten O. Zander |
| 2017 | Mining a multimodal corpus of doctor's training for virtual patient's feedbacks. Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, Roxane Bertrand |
| 2017 | Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild. Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, Benoit Huet |
| 2017 | Modelling fusion of modalities in multimodal interactive systems with MMMM. Bruno Dumas, Jonathan Pirau, Denis Lalanne |
| 2017 | Modulating the non-verbal social signals of a humanoid robot. Amol A. Deshmukh, Bart G. W. Craenen, Alessandro Vinciarelli, Mary Ellen Foster |
| 2017 | Multi-level feature fusion for group-level emotion recognition. B. Balaji, Venkata Ramana Murthy Oruganti |
| 2017 | Multi-modal emotion recognition using semi-supervised learning and multiple neural networks in the wild. Dae Ha Kim, Min Kyu Lee, Dong-Yoon Choi, Byung Cheol Song |
| 2017 | Multi-task learning of social psychology assessments and nonverbal features for automatic leadership identification. Cigdem Beyan, Francesca Capozzi, Cristina Becchio, Vittorio Murino |
| 2017 | Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals. Ashwaq Al-Hargan, Neil Cooke, Tareq Binjammaz |
| 2017 | Multimodal analysis of vocal collaborative search: a public corpus and results. Daniel McDuff, Paul Thomas, Mary Czerwinski, Nick Craswell |
| 2017 | Multimodal gender detection. Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, Mihai Burzo |
| 2017 | Multimodal interaction in classrooms: implementation of tangibles in integrated music and math lessons. Jennifer Müller, Uwe Oestermeier, Peter Gerjets |
| 2017 | Multimodal language grounding for improved human-robot collaboration: exploring spatial semantic representations in the shared space of attention. Dimosthenis Kontogiorgos |
| 2017 | Multimodal sentiment analysis with word-level fusion and reinforcement learning. Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency |
| 2017 | Playlab: telling stories with technology (workshop summary). Julie R. Williamson, Tom Flint, Chris Speed |
| 2017 | Pooling acoustic and lexical features for the prediction of valence. Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost |
| 2017 | Pre-touch proxemics: moving the design space of touch targets from still graphics towards proxemic behaviors. Ilhan Aslan, Elisabeth André |
| 2017 | Predicting meeting extracts in group discussions using multimodal convolutional neural networks. Fumio Nihei, Yukiko I. Nakano, Yutaka Takase |
| 2017 | Predicting the distribution of emotion perception: capturing inter-rater variability. Biqiao Zhang, Georg Essl, Emily Mower Provost |
| 2017 | Proceedings of the 19th ACM International Conference on Multimodal Interaction, ICMI 2017, Glasgow, United Kingdom, November 13 - 17, 2017. Edward Lank, Alessandro Vinciarelli, Eve E. Hoggan, Sriram Subramanian, Stephen A. Brewster |
| 2017 | Rapid development of multimodal interactive systems: a demonstration of platform for situated intelligence. Dan Bohus, Sean Andrist, Mihai Jalobeanu |
| 2017 | Real-time mixed-reality telepresence via 3D reconstruction with HoloLens and commodity depth sensors. Michal Joachimczak, Juan Liu, Hiroshi Ando |
| 2017 | Rhythmic micro-gestures: discreet interaction on-the-go. Euan Freeman, Gareth Griffiths, Stephen A. Brewster |
| 2017 | SAM: the school attachment monitor. Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, Stephen A. Brewster |
| 2017 | Situated conceptualization: a framework for multimodal interaction (keynote). Lawrence W. Barsalou |
| 2017 | Social robots for motivation and engagement in therapy. Katie Winkle |
| 2017 | Social signal extraction from egocentric photo-streams. Maedeh Aghaei |
| 2017 | Steps towards collaborative multimodal dialogue (sustained contribution award). Phil Cohen |
| 2017 | Tablets, tabletops, and smartphones: cross-platform comparisons of children's touchscreen interactions. Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, Lisa Anthony |
| 2017 | Temporal alignment using the incremental unit framework. Casey Kennington, Ting Han, David Schlangen |
| 2017 | Temporal multimodal fusion for video emotion classification in the wild. Valentin Vielzeuf, Stéphane Pateux, Frédéric Jurie |
| 2017 | Text based user comments as a signal for automatic language identification of online videos. A. Seza Dogruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, Christoph Oehler |
| 2017 | Textured surfaces for ultrasound haptic displays. Euan Freeman, Ross Anderson, Julie R. Williamson, Graham A. Wilson, Stephen A. Brewster |
| 2017 | The Boston Massacre history experience. David G. Novick, Laura M. Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Iván Gris Sepulveda, Olivia Rodriguez-Herrera, Enrique Ponce |
| 2017 | The MULTISIMO multimodal corpus of collaborative interactions. Maria Koutsombogera, Carl Vogel |
| 2017 | The NoXi database: multimodal recordings of mediated novice-expert interactions. Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres, Catherine Pelachaud, Elisabeth André, Michel F. Valstar |
| 2017 | The relationship between task-induced stress, vocal changes, and physiological state during a dyadic team task. Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, Stefan Scherer |
| 2017 | The reliability of non-verbal cues for situated reference resolution and their interplay with language: implications for human robot interaction. Stephanie Gross, Brigitte Krenn, Matthias Scheutz |
| 2017 | Thermal in-car interaction for navigation. Patrizia Di Campli San Vito, Stephen A. Brewster, Frank E. Pollick, Stuart White |
| 2017 | TouchScope: a hybrid multitouch oscilloscope interface. Matthew Heinz, Sven Bertel, Florian Echtler |
| 2017 | Toward an efficient body expression recognition based on the synthesis of a neutral movement. Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, Saïda Bouakaz |
| 2017 | Towards a computational model for first impressions generation. Béatrice Biancardi |
| 2017 | Towards designing speech technology based assistive interfaces for children's speech therapy. Revathy Nayar |
| 2017 | Towards edible interfaces: designing interactions with food. Tom Gayler |
| 2017 | Towards the use of social interaction conventions as prior for gaze model adaptation. Rémy Siegfried, Yu Yu, Jean-Marc Odobez |
| 2017 | Tracking liking state in brain activity while watching multiple movies. Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, Satoshi Nakamura |
| 2017 | Trust triggers for multimodal command and control interfaces. Helen F. Hastie, Xingkun Liu, Pedro Patrón |
| 2017 | UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions. Atef Ben Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, Angelica Lim |
| 2017 | Using mobile virtual reality to empower people with hidden disabilities to overcome their barriers. Matthieu Poyade, Glyn Morris, Ian Taylor, Victor Portela |
| 2017 | Utilising natural cross-modal mappings for visual control of feature-based sound synthesis. Augoustinos Tsiros, Grégory Leplâtre |
| 2017 | Virtual debate coach design: assessing multimodal argumentation performance. Volha Petukhova, Tobias Mayer, Andrei Malchanau, Harry Bunt |
| 2017 | WOCCI 2017: 6th international workshop on child computer interaction (workshop summary). Keelan Evanini, Maryam Najafian, Saeid Safavi, Kay Berkling |
| 2017 | Wearable interactive display for the local positioning system (LPS). Daniel M. Lofaro, Christopher Taylor, Ryan Tse, Donald Sofge |
| 2017 | Web-based interactive media authoring system with multimodal interaction. Bok Deuk Song, Yeon Jun Choi, Jong Hyun Park |
| 2017 | ZSGL: zero shot gestural learning. Naveen Madapana, Juan P. Wachs |