ICMI 2016

113 papers

Year  Title / Authors
2016  1st international workshop on embodied interaction with smart environments (workshop summary).
Patrick Holthaus, Thomas Hermann, Sebastian Wrede, Sven Wachsmuth, Britta Wrede
2016  1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary).
Anton Nijholt, Carlos Velasco, Kasun Karunanayaka, Gijs Huisman
2016  A deep look into group happiness prediction from images.
Aleksandra Cerekovic
2016  A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission.
Helen F. Hastie, Xingkun Liu, Pedro Patrón
2016  A telepresence system using a flexible textile display.
Kana Kushida, Hideyuki Nakanishi
2016  ASSP4MI2016: 2nd international workshop on advancements in social signal processing for multimodal interaction (workshop summary).
Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, Mohamed Chetouani
2016  Active speaker detection with audio-visual co-training.
Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, Hugo Van hamme
2016  Adaptive review for mobile MOOC learning via implicit physiological signal sensing.
Phuong Pham, Jingtao Wang
2016  An IDE for multimodal controls in smart buildings.
Sebastian Peters, Jan Ole Johanssen, Bernd Bruegge
2016  Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings.
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka
2016  Analyzing the articulation features of children's touchscreen gestures.
Alex Shaw, Lisa Anthony
2016  Ask Alice: an artificial retrieval of information agent.
Michel F. Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew P. Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn W. Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, Jelte van Waterschoot
2016  Assessing symptoms of excessive SNS usage based on user behavior and emotion.
Ploypailin Intapong, Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura
2016  Asynchronous video interviews vs. face-to-face interviews for communication skill measurement: a systematic study.
Sowmya Rasipuram, Pooja Rao S. B., Dinesh Babu Jayagopi
2016  AttentiveVideo: quantifying emotional responses to mobile video advertisements.
Phuong Pham, Jingtao Wang
2016  Audio and face video emotion recognition in the wild using deep neural networks and small datasets.
Wan Ding, Mingyu Xu, Dong-Yan Huang, Weisi Lin, Minghui Dong, Xinguo Yu, Haizhou Li
2016  Automated recognition of facial expressions authenticity.
Krystian Radlak, Bogdan Smolka
2016  Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm.
Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle P. Martin-Raugh, Harrison Kell, Chong Min Lee, Su-Youn Yoon
2016  Automatic detection of very early stage of dementia through multimodal interaction with computer avatars.
Hiroki Tanaka, Hiroyoshi Adachi, Norimichi Ukita, Takashi Kudo, Satoshi Nakamura
2016  Automatic emotion recognition in the wild using an ensemble of static and dynamic representations.
Mostafa Mehdipour-Ghazi, Hazim Kemal Ekenel
2016  Automatic recognition of self-reported and perceived emotion: does joint modeling help?
Biqiao Zhang, Georg Essl, Emily Mower Provost
2016  Bimanual input for multiscale navigation with pressure and touch gestures.
Sébastien Pelurson, Laurence Nigay
2016  Comparison of three implementations of HeadTurn: a multimodal interaction technique with gaze and head turns.
Oleg Spakov, Poika Isokoski, Jari Kangas, Jussi Rantala, Deepak Akkil, Roope Raisamo
2016  Computational model for interpersonal attitude expression.
Soumia Dermouche
2016  Context and cognitive state triggered interventions for mobile MOOC learning.
Xiang Xiao, Jingtao Wang
2016  Deep learning driven hypergraph representation for image-based emotion recognition.
Yuchi Huang, Hanqing Lu
2016  Deep multimodal fusion for persuasiveness prediction.
Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrusaitis, Louis-Philippe Morency
2016  Design of multimodal instructional tutoring agents using augmented reality and smart learning objects.
Anmol Srivastava, Pradeep Yammiyavar
2016  Detecting emergent leader in a meeting environment using nonverbal visual features only.
Cigdem Beyan, Nicolò Carissimi, Francesca Capozzi, Sebastiano Vascon, Matteo Bustreo, Antonio Pierro, Cristina Becchio, Vittorio Murino
2016  Discovering facial expressions for states of amused, persuaded, informed, sentimental and inspired.
Daniel McDuff
2016  Do speech features for detecting cognitive load depend on specific languages?
Rui Chen, Tiantian Xie, Yingtao Xie, Tao Lin, Ningjiu Tang
2016  Driving maneuver prediction using car sensor and driver physiological signals.
Nanxiang Li, Teruhisa Misu, Ashish Tawari, Alexandre Miranda Añon, Chihiro Suga, Kikuo Fujimura
2016  ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary).
Kim Hartmann, Ingo Siegert, Albert Ali Salah, Khiet P. Truong
2016  Effects of multimodal cues on children's perception of uncanniness in a social robot.
Maike Paetzel, Christopher Peters, Ingela Nyström, Ginevra Castellano
2016  Embodied media: expanding human capacity via virtual reality and telexistence (keynote).
Susumu Tachi
2016  EmoReact: a multimodal approach and dataset for recognizing emotional responses in children.
Behnaz Nojavanasghari, Tadas Baltrusaitis, Charles E. Hughes, Louis-Philippe Morency
2016  EmotiW 2016: video and group-level emotion recognition challenges.
Abhinav Dhall, Roland Göcke, Jyoti Joshi, Jesse Hoey, Tom Gedeon
2016  Emotion recognition in the wild challenge 2016.
Abhinav Dhall, Roland Goecke, Jyoti Joshi, Tom Gedeon
2016  Emotion recognition in the wild from videos using images.
Sarah Adel Bargal, Emad Barsoum, Cristian Canton-Ferrer, Cha Zhang
2016  Emotion spotting: discovering regions of evidence in audio-visual emotion expressions.
Yelin Kim, Emily Mower Provost
2016  Engaging children with autism in a shape perception task using a haptic force feedback interface.
Alix Pérusseau-Lambert
2016  Enriching student learning experience using augmented reality and smart learning objects.
Anmol Srivastava
2016  Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets.
Shogo Okada, Yoshihiko Ohtake, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Yutaka Takase, Katsumi Nitta
2016  Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios.
Laura Cabrera Quiros, Ekin Gedik, Hayley Hung
2016  Exploration of virtual environments on tablet: comparison between tactile and tangible interaction techniques.
Adrien Arnaud, Jean-Baptiste Corrégé, Céline Clavel, Michèle Gouiffès, Mehdi Ammi
2016  Exploring multimodal biosignal features for stress detection during indoor mobility.
Kyriaki Kalimeri, Charalampos Saitis
2016  Getting to know you: a multimodal investigation of team behavior and resilience to stress.
Catherine Neubauer, Joshua Woolley, Peter Khooshabeh, Stefan Scherer
2016  Group happiness assessment using geometric features and dataset balancing.
Vassilios Vonikakis, Yasin Yazici, Viet Dung Nguyen, Stefan Winkler
2016  Happiness level prediction with sequential inputs via multiple regressions.
Jianshu Li, Sujoy Roy, Jiashi Feng, Terence Sim
2016  Help me if you can: towards multiadaptive interaction platforms (ICMI awardee talk).
Wolfgang Wahlster
2016  HoloNet: towards robust emotion recognition in the wild.
Anbang Yao, Dongqi Cai, Ping Hu, Shandong Wang, Liang Sha, Yurong Chen
2016  Immersive virtual reality with multimodal interaction and streaming technology.
Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, Min-Chun Hu
2016  Improving the generalizability of emotion recognition systems: towards emotion recognition in the wild.
Biqiao Zhang
2016  International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary).
Ronald Böck, Francesca Bonin, Nick Campbell, Ronald Poppe
2016  International workshop on multimodal virtual and augmented reality (workshop summary).
Wolfgang Hürst, Daisuke Iwai, Prabhakaran Balakrishnan
2016  International workshop on social learning and multimodal interaction for designing artificial agents (workshop summary).
Mohamed Chetouani, Salvatore Maria Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, Gentiane Venture
2016  Intervention-free selection using EEG and eye tracking.
Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, Tanja Schultz
2016  Investigating the impact of automated transcripts on non-native speakers' listening comprehension.
Xun Cao, Naomi Yamashita, Toru Ishida
2016  Kawaii feeling estimation by product attributes and biological signals.
Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura
2016  LSTM for dynamic emotion and group emotion recognition in the wild.
Bo Sun, Qinglan Wei, Liandong Li, Qihua Xu, Jun He, Lejun Yu
2016  Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language.
Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, Björn W. Schuller
2016  Large-scale multimodal movie dialogue corpus.
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, Tetsuo Kosaka
2016  Laughter detection in the wild: demonstrating a tool for mobile social signal processing and visualization.
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André
2016  Learning to generate images and their descriptions (keynote).
Richard S. Zemel
2016  Measuring the impact of multimodal behavioural feedback loops on social interactions.
Ionut Damian, Tobias Baur, Elisabeth André
2016  Meeting extracts for discussion summarization based on multimodal nonverbal information.
Fumio Nihei, Yukiko I. Nakano, Yutaka Takase
2016  Metering "black holes": networking stand-alone applications for distributed multimodal synchronization.
Michael Cohen, Yousuke Nagayama, Bektur Ryskeldiev
2016  MobileSSI: asynchronous fusion for social signal interpretation in the wild.
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André
2016  Modeling user's decision process through gaze behavior.
Kei Shimonishi
2016  Multi-clue fusion for emotion recognition in the wild.
Jingwei Yan, Wenming Zheng, Zhen Cui, Chuangao Tang, Tong Zhang, Yuan Zong, Ning Sun
2016  Multi-sensor modeling of teacher instructional segments in live classrooms.
Patrick J. Donnelly, Nathaniel Blanchard, Borhan Samei, Andrew McGregor Olney, Xiaoyi Sun, Brooke Ward, Sean Kelly, Martin Nystrand, Sidney K. D'Mello
2016  Multi-view common space learning for emotion recognition in the wild.
Jianlong Wu, Zhouchen Lin, Hongbin Zha
2016  Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals.
Graham A. Wilson, Euan Freeman, Stephen A. Brewster
2016  Multimodal biofeedback system integrating low-cost easy sensing devices.
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, Mayu Yokoya
2016  Multimodal feedback for finger-based interaction in mobile augmented reality.
Wolfgang Hürst, Kevin Vriens
2016  Multimodal interaction with the autonomous Android ERICA.
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, Tatsuya Kawahara
2016  Multimodal positive computing system for public speaking with real-time feedback.
Fiona Dermody
2016  Multimodal sensing of affect intensity.
Shalini Bhatia
2016  Multimodal system for public speaking with real time feedback: a positive computing perspective.
Fiona Dermody, Alistair Sutherland
2016  Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech.
Xinzhou Xu, Jun Deng, Maryna Gavryukova, Zixing Zhang, Li Zhao, Björn W. Schuller
2016  Native vs. non-native language fluency implications on multimodal interaction for interpersonal skills training.
Mathieu Chollet, Helmut Prendinger, Stefan Scherer
2016  Niki and Julie: a robot and virtual human for studying multimodal social interaction.
Ron Artstein, David R. Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, Mikio Nakano
2016  On leveraging crowdsourced data for automatic perceived stress detection.
Jonathan Aigrain, Arnaud Dapogny, Kevin Bailly, Séverine Dubuisson, Marcin Detyniecki, Mohamed Chetouani
2016  Personality classification and behaviour interpretation: an approach based on feature categories.
Sheng Fang, Catherine Achard, Séverine Dubuisson
2016  Personalized unknown word detection in non-native language reading using eye gaze.
Rui Hiraoka, Hiroki Tanaka, Sakriani Sakti, Graham Neubig, Satoshi Nakamura
2016  Player/Avatar body relations in multimodal augmented reality games.
Nina Rosa
2016  Prediction/Assessment of communication skill using multimodal cues in social interactions.
Sowmya Rasipuram
2016  Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI 2016, Tokyo, Japan, November 12-16, 2016
Yukiko I. Nakano, Elisabeth André, Toyoaki Nishida, Louis-Philippe Morency, Carlos Busso, Catherine Pelachaud
2016  Reach out and touch me: effects of four distinct haptic technologies on affective touch in virtual reality.
Imtiaj Ahmed, Ville J. Harjunen, Giulio Jacucci, Eve E. Hoggan, Niklas Ravaja, Michiel M. A. Spapé
2016  Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction.
Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, Jill Fain Lehman
2016  Semi-supervised model personalization for improved detection of learner's emotional engagement.
Nese Alyüz, Eda Okur, Ece Oktay, Utku Genc, Sinem Aslan, Sinem Emine Mete, Bert Arnrich, Asli Arslan Esme
2016  Sequence-based multimodal behavior modeling for social agents.
Soumia Dermouche, Catherine Pelachaud
2016  Smooth eye movement interaction using EOG glasses.
Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, Woontack Woo
2016  Social signal processing for dummies.
Ionut Damian, Michael Dietz, Frank Gaibler, Elisabeth André
2016  Sound emblems for affective multimodal output of a robotic tutor: a perception study.
Helen F. Hastie, Pasquale Dente, Dennis Küster, Arvid Kappas
2016  Speaker impact on audience comprehension for academic presentations.
Keith Curtis, Gareth J. F. Jones, Nick Campbell
2016  Stressful first impressions in job interviews.
Ailbhe N. Finnerty, Skanda Muralidhar, Laurent Son Nguyen, Fabio Pianesi, Daniel Gatica-Perez
2016  The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans.
Maike Paetzel
2016  Towards a listening agent: a system generating audiovisual laughs and smiles to show interest.
Kevin El Haddad, Hüseyin Çakmak, Emer Gilmartin, Stéphane Dupont, Thierry Dutoit
2016  Towards a multimodal adaptive lighting system for visually impaired children.
Euan Freeman, Graham A. Wilson, Stephen A. Brewster
2016  Towards building an attentive artificial listener: on the perception of attentiveness in audio-visual feedback tokens.
Catharine Oertel, José Lopes, Yu Yu, Kenneth Alberto Funes Mora, Joakim Gustafson, Alan W. Black, Jean-Marc Odobez
2016  Training deep networks for facial expression recognition with crowd-sourced label distribution.
Emad Barsoum, Cha Zhang, Cristian Canton-Ferrer, Zhengyou Zhang
2016  Training on the job: behavioral analysis of job interviews in hospitality.
Skanda Muralidhar, Laurent Son Nguyen, Denise Frauendorfer, Jean-Marc Odobez, Marianne Schmid Mast, Daniel Gatica-Perez
2016  Trust me: multimodal signals of trustworthiness.
Gale M. Lucas, Giota Stratou, Shari Lieblich, Jonathan Gratch
2016  Understanding people by tracking their word use (keynote).
James W. Pennebaker
2016  Understanding the impact of personal feedback on face-to-face interactions in the workplace.
Afra J. Mashhadi, Akhil Mathur, Marc Van den Broeck, Geert Vanderhulst, Fahim Kawsar
2016  Using touchscreen interaction data to predict cognitive workload.
Philipp Mock, Peter Gerjets, Maike Tibus, Ulrich Trautwein, Korbinian Möller, Wolfgang Rosenstiel
2016  Video emotion recognition in the wild based on fusion of multimodal features.
Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, Yong Qin
2016  Video-based emotion recognition using CNN-RNN and C3D hybrid networks.
Yin Fan, Xiangju Lu, Dian Li, Yuanliu Liu
2016  Viewing support system for multi-view videos.
Xueting Wang
2016  Visuotactile integration for depth perception in augmented reality.
Nina Rosa, Wolfgang Hürst, Peter J. Werkhoven, Remco C. Veltkamp
2016  Wild wild emotion: a multimodal ensemble approach.
John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, Emily Mower Provost
2016  Young Merlin: an embodied conversational agent in virtual reality.
Iván Gris, Diego A. Rivera, Alex Rayón, Adriana I. Camacho, David G. Novick