| 2016 | 1st international workshop on embodied interaction with smart environments (workshop summary). Patrick Holthaus, Thomas Hermann, Sebastian Wrede, Sven Wachsmuth, Britta Wrede |
| 2016 | 1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary). Anton Nijholt, Carlos Velasco, Kasun Karunanayaka, Gijs Huisman |
| 2016 | A deep look into group happiness prediction from images. Aleksandra Cerekovic |
| 2016 | A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission. Helen F. Hastie, Xingkun Liu, Pedro Patrón |
| 2016 | A telepresence system using a flexible textile display. Kana Kushida, Hideyuki Nakanishi |
| 2016 | ASSP4MI2016: 2nd international workshop on advancements in social signal processing for multimodal interaction (workshop summary). Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, Mohamed Chetouani |
| 2016 | Active speaker detection with audio-visual co-training. Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, Hugo Van hamme |
| 2016 | Adaptive review for mobile MOOC learning via implicit physiological signal sensing. Phuong Pham, Jingtao Wang |
| 2016 | An IDE for multimodal controls in smart buildings. Sebastian Peters, Jan Ole Johanssen, Bernd Bruegge |
| 2016 | Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka |
| 2016 | Analyzing the articulation features of children's touchscreen gestures. Alex Shaw, Lisa Anthony |
| 2016 | Ask Alice: an artificial retrieval of information agent. Michel F. Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew P. Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn W. Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, Jelte van Waterschoot |
| 2016 | Assessing symptoms of excessive SNS usage based on user behavior and emotion. Ploypailin Intapong, Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura |
| 2016 | Asynchronous video interviews vs. face-to-face interviews for communication skill measurement: a systematic study. Sowmya Rasipuram, Pooja Rao S. B., Dinesh Babu Jayagopi |
| 2016 | AttentiveVideo: quantifying emotional responses to mobile video advertisements. Phuong Pham, Jingtao Wang |
| 2016 | Audio and face video emotion recognition in the wild using deep neural networks and small datasets. Wan Ding, Mingyu Xu, Dong-Yan Huang, Weisi Lin, Minghui Dong, Xinguo Yu, Haizhou Li |
| 2016 | Automated recognition of facial expressions authenticity. Krystian Radlak, Bogdan Smolka |
| 2016 | Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm. Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle P. Martin-Raugh, Harrison Kell, Chong Min Lee, Su-Youn Yoon |
| 2016 | Automatic detection of very early stage of dementia through multimodal interaction with computer avatars. Hiroki Tanaka, Hiroyoshi Adachi, Norimichi Ukita, Takashi Kudo, Satoshi Nakamura |
| 2016 | Automatic emotion recognition in the wild using an ensemble of static and dynamic representations. Mostafa Mehdipour-Ghazi, Hazim Kemal Ekenel |
| 2016 | Automatic recognition of self-reported and perceived emotion: does joint modeling help? Biqiao Zhang, Georg Essl, Emily Mower Provost |
| 2016 | Bimanual input for multiscale navigation with pressure and touch gestures. Sébastien Pelurson, Laurence Nigay |
| 2016 | Comparison of three implementations of HeadTurn: a multimodal interaction technique with gaze and head turns. Oleg Spakov, Poika Isokoski, Jari Kangas, Jussi Rantala, Deepak Akkil, Roope Raisamo |
| 2016 | Computational model for interpersonal attitude expression. Soumia Dermouche |
| 2016 | Context and cognitive state triggered interventions for mobile MOOC learning. Xiang Xiao, Jingtao Wang |
| 2016 | Deep learning driven hypergraph representation for image-based emotion recognition. Yuchi Huang, Hanqing Lu |
| 2016 | Deep multimodal fusion for persuasiveness prediction. Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrusaitis, Louis-Philippe Morency |
| 2016 | Design of multimodal instructional tutoring agents using augmented reality and smart learning objects. Anmol Srivastava, Pradeep Yammiyavar |
| 2016 | Detecting emergent leader in a meeting environment using nonverbal visual features only. Cigdem Beyan, Nicolò Carissimi, Francesca Capozzi, Sebastiano Vascon, Matteo Bustreo, Antonio Pierro, Cristina Becchio, Vittorio Murino |
| 2016 | Discovering facial expressions for states of amused, persuaded, informed, sentimental and inspired. Daniel McDuff |
| 2016 | Do speech features for detecting cognitive load depend on specific languages? Rui Chen, Tiantian Xie, Yingtao Xie, Tao Lin, Ningjiu Tang |
| 2016 | Driving maneuver prediction using car sensor and driver physiological signals. Nanxiang Li, Teruhisa Misu, Ashish Tawari, Alexandre Miranda Añon, Chihiro Suga, Kikuo Fujimura |
| 2016 | ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary). Kim Hartmann, Ingo Siegert, Albert Ali Salah, Khiet P. Truong |
| 2016 | Effects of multimodal cues on children's perception of uncanniness in a social robot. Maike Paetzel, Christopher Peters, Ingela Nyström, Ginevra Castellano |
| 2016 | Embodied media: expanding human capacity via virtual reality and telexistence (keynote). Susumu Tachi |
| 2016 | EmoReact: a multimodal approach and dataset for recognizing emotional responses in children. Behnaz Nojavanasghari, Tadas Baltrusaitis, Charles E. Hughes, Louis-Philippe Morency |
| 2016 | EmotiW 2016: video and group-level emotion recognition challenges. Abhinav Dhall, Roland Göcke, Jyoti Joshi, Jesse Hoey, Tom Gedeon |
| 2016 | Emotion recognition in the wild challenge 2016. Abhinav Dhall, Roland Göcke, Jyoti Joshi, Tom Gedeon |
| 2016 | Emotion recognition in the wild from videos using images. Sarah Adel Bargal, Emad Barsoum, Cristian Canton-Ferrer, Cha Zhang |
| 2016 | Emotion spotting: discovering regions of evidence in audio-visual emotion expressions. Yelin Kim, Emily Mower Provost |
| 2016 | Engaging children with autism in a shape perception task using a haptic force feedback interface. Alix Pérusseau-Lambert |
| 2016 | Enriching student learning experience using augmented reality and smart learning objects. Anmol Srivastava |
| 2016 | Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets. Shogo Okada, Yoshihiko Ohtake, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Yutaka Takase, Katsumi Nitta |
| 2016 | Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios. Laura Cabrera Quiros, Ekin Gedik, Hayley Hung |
| 2016 | Exploration of virtual environments on tablet: comparison between tactile and tangible interaction techniques. Adrien Arnaud, Jean-Baptiste Corrégé, Céline Clavel, Michèle Gouiffès, Mehdi Ammi |
| 2016 | Exploring multimodal biosignal features for stress detection during indoor mobility. Kyriaki Kalimeri, Charalampos Saitis |
| 2016 | Getting to know you: a multimodal investigation of team behavior and resilience to stress. Catherine Neubauer, Joshua Woolley, Peter Khooshabeh, Stefan Scherer |
| 2016 | Group happiness assessment using geometric features and dataset balancing. Vassilios Vonikakis, Yasin Yazici, Viet Dung Nguyen, Stefan Winkler |
| 2016 | Happiness level prediction with sequential inputs via multiple regressions. Jianshu Li, Sujoy Roy, Jiashi Feng, Terence Sim |
| 2016 | Help me if you can: towards multiadaptive interaction platforms (ICMI awardee talk). Wolfgang Wahlster |
| 2016 | HoloNet: towards robust emotion recognition in the wild. Anbang Yao, Dongqi Cai, Ping Hu, Shandong Wang, Liang Sha, Yurong Chen |
| 2016 | Immersive virtual reality with multimodal interaction and streaming technology. Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, Min-Chun Hu |
| 2016 | Improving the generalizability of emotion recognition systems: towards emotion recognition in the wild. Biqiao Zhang |
| 2016 | International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary). Ronald Böck, Francesca Bonin, Nick Campbell, Ronald Poppe |
| 2016 | International workshop on multimodal virtual and augmented reality (workshop summary). Wolfgang Hürst, Daisuke Iwai, Prabhakaran Balakrishnan |
| 2016 | International workshop on social learning and multimodal interaction for designing artificial agents (workshop summary). Mohamed Chetouani, Salvatore Maria Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, Gentiane Venture |
| 2016 | Intervention-free selection using EEG and eye tracking. Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, Tanja Schultz |
| 2016 | Investigating the impact of automated transcripts on non-native speakers' listening comprehension. Xun Cao, Naomi Yamashita, Toru Ishida |
| 2016 | Kawaii feeling estimation by product attributes and biological signals. Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura |
| 2016 | LSTM for dynamic emotion and group emotion recognition in the wild. Bo Sun, Qinglan Wei, Liandong Li, Qihua Xu, Jun He, Lejun Yu |
| 2016 | Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language. Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, Björn W. Schuller |
| 2016 | Large-scale multimodal movie dialogue corpus. Ryu Yasuhara, Masashi Inoue, Ikuya Suga, Tetsuo Kosaka |
| 2016 | Laughter detection in the wild: demonstrating a tool for mobile social signal processing and visualization. Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André |
| 2016 | Learning to generate images and their descriptions (keynote). Richard S. Zemel |
| 2016 | Measuring the impact of multimodal behavioural feedback loops on social interactions. Ionut Damian, Tobias Baur, Elisabeth André |
| 2016 | Meeting extracts for discussion summarization based on multimodal nonverbal information. Fumio Nihei, Yukiko I. Nakano, Yutaka Takase |
| 2016 | Metering "black holes": networking stand-alone applications for distributed multimodal synchronization. Michael Cohen, Yousuke Nagayama, Bektur Ryskeldiev |
| 2016 | MobileSSI: asynchronous fusion for social signal interpretation in the wild. Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André |
| 2016 | Modeling user's decision process through gaze behavior. Kei Shimonishi |
| 2016 | Multi-clue fusion for emotion recognition in the wild. Jingwei Yan, Wenming Zheng, Zhen Cui, Chuangao Tang, Tong Zhang, Yuan Zong, Ning Sun |
| 2016 | Multi-sensor modeling of teacher instructional segments in live classrooms. Patrick J. Donnelly, Nathaniel Blanchard, Borhan Samei, Andrew McGregor Olney, Xiaoyi Sun, Brooke Ward, Sean Kelly, Martin Nystrand, Sidney K. D'Mello |
| 2016 | Multi-view common space learning for emotion recognition in the wild. Jianlong Wu, Zhouchen Lin, Hongbin Zha |
| 2016 | Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals. Graham A. Wilson, Euan Freeman, Stephen A. Brewster |
| 2016 | Multimodal biofeedback system integrating low-cost easy sensing devices. Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, Mayu Yokoya |
| 2016 | Multimodal feedback for finger-based interaction in mobile augmented reality. Wolfgang Hürst, Kevin Vriens |
| 2016 | Multimodal interaction with the autonomous Android ERICA. Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, Tatsuya Kawahara |
| 2016 | Multimodal positive computing system for public speaking with real-time feedback. Fiona Dermody |
| 2016 | Multimodal sensing of affect intensity. Shalini Bhatia |
| 2016 | Multimodal system for public speaking with real time feedback: a positive computing perspective. Fiona Dermody, Alistair Sutherland |
| 2016 | Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech. Xinzhou Xu, Jun Deng, Maryna Gavryukova, Zixing Zhang, Li Zhao, Björn W. Schuller |
| 2016 | Native vs. non-native language fluency implications on multimodal interaction for interpersonal skills training. Mathieu Chollet, Helmut Prendinger, Stefan Scherer |
| 2016 | Niki and Julie: a robot and virtual human for studying multimodal social interaction. Ron Artstein, David R. Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, Mikio Nakano |
| 2016 | On leveraging crowdsourced data for automatic perceived stress detection. Jonathan Aigrain, Arnaud Dapogny, Kevin Bailly, Séverine Dubuisson, Marcin Detyniecki, Mohamed Chetouani |
| 2016 | Personality classification and behaviour interpretation: an approach based on feature categories. Sheng Fang, Catherine Achard, Séverine Dubuisson |
| 2016 | Personalized unknown word detection in non-native language reading using eye gaze. Rui Hiraoka, Hiroki Tanaka, Sakriani Sakti, Graham Neubig, Satoshi Nakamura |
| 2016 | Player/Avatar body relations in multimodal augmented reality games. Nina Rosa |
| 2016 | Prediction/Assessment of communication skill using multimodal cues in social interactions. Sowmya Rasipuram |
| 2016 | Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI 2016, Tokyo, Japan, November 12-16, 2016. Yukiko I. Nakano, Elisabeth André, Toyoaki Nishida, Louis-Philippe Morency, Carlos Busso, Catherine Pelachaud |
| 2016 | Reach out and touch me: effects of four distinct haptic technologies on affective touch in virtual reality. Imtiaj Ahmed, Ville J. Harjunen, Giulio Jacucci, Eve E. Hoggan, Niklas Ravaja, Michiel M. A. Spapé |
| 2016 | Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction. Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, Jill Fain Lehman |
| 2016 | Semi-supervised model personalization for improved detection of learner's emotional engagement. Nese Alyüz, Eda Okur, Ece Oktay, Utku Genc, Sinem Aslan, Sinem Emine Mete, Bert Arnrich, Asli Arslan Esme |
| 2016 | Sequence-based multimodal behavior modeling for social agents. Soumia Dermouche, Catherine Pelachaud |
| 2016 | Smooth eye movement interaction using EOG glasses. Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, Woontack Woo |
| 2016 | Social signal processing for dummies. Ionut Damian, Michael Dietz, Frank Gaibler, Elisabeth André |
| 2016 | Sound emblems for affective multimodal output of a robotic tutor: a perception study. Helen F. Hastie, Pasquale Dente, Dennis Küster, Arvid Kappas |
| 2016 | Speaker impact on audience comprehension for academic presentations. Keith Curtis, Gareth J. F. Jones, Nick Campbell |
| 2016 | Stressful first impressions in job interviews. Ailbhe N. Finnerty, Skanda Muralidhar, Laurent Son Nguyen, Fabio Pianesi, Daniel Gatica-Perez |
| 2016 | The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans. Maike Paetzel |
| 2016 | Towards a listening agent: a system generating audiovisual laughs and smiles to show interest. Kevin El Haddad, Hüseyin Çakmak, Emer Gilmartin, Stéphane Dupont, Thierry Dutoit |
| 2016 | Towards a multimodal adaptive lighting system for visually impaired children. Euan Freeman, Graham A. Wilson, Stephen A. Brewster |
| 2016 | Towards building an attentive artificial listener: on the perception of attentiveness in audio-visual feedback tokens. Catharine Oertel, José Lopes, Yu Yu, Kenneth Alberto Funes Mora, Joakim Gustafson, Alan W. Black, Jean-Marc Odobez |
| 2016 | Training deep networks for facial expression recognition with crowd-sourced label distribution. Emad Barsoum, Cha Zhang, Cristian Canton-Ferrer, Zhengyou Zhang |
| 2016 | Training on the job: behavioral analysis of job interviews in hospitality. Skanda Muralidhar, Laurent Son Nguyen, Denise Frauendorfer, Jean-Marc Odobez, Marianne Schmid Mast, Daniel Gatica-Perez |
| 2016 | Trust me: multimodal signals of trustworthiness. Gale M. Lucas, Giota Stratou, Shari Lieblich, Jonathan Gratch |
| 2016 | Understanding people by tracking their word use (keynote). James W. Pennebaker |
| 2016 | Understanding the impact of personal feedback on face-to-face interactions in the workplace. Afra J. Mashhadi, Akhil Mathur, Marc Van den Broeck, Geert Vanderhulst, Fahim Kawsar |
| 2016 | Using touchscreen interaction data to predict cognitive workload. Philipp Mock, Peter Gerjets, Maike Tibus, Ulrich Trautwein, Korbinian Möller, Wolfgang Rosenstiel |
| 2016 | Video emotion recognition in the wild based on fusion of multimodal features. Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, Yong Qin |
| 2016 | Video-based emotion recognition using CNN-RNN and C3D hybrid networks. Yin Fan, Xiangju Lu, Dian Li, Yuanliu Liu |
| 2016 | Viewing support system for multi-view videos. Xueting Wang |
| 2016 | Visuotactile integration for depth perception in augmented reality. Nina Rosa, Wolfgang Hürst, Peter J. Werkhoven, Remco C. Veltkamp |
| 2016 | Wild wild emotion: a multimodal ensemble approach. John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, Emily Mower Provost |
| 2016 | Young Merlin: an embodied conversational agent in virtual reality. Iván Gris, Diego A. Rivera, Alex Rayón, Adriana I. Camacho, David G. Novick |