ICMI 2018

107 papers

Year: Title / Authors
2018: !FTL, an Articulation-Invariant Stroke Gesture Recognizer with Controllable Position, Scale, and Rotation Invariances.
Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina
2018: "Honey, I Learned to Talk": Multimodal Fusion for Behavior Analysis.
Shao-Yen Tseng, Haoqi Li, Brian R. Baucom, Panayiotis G. Georgiou
2018: 3rd International Workshop on Multisensory Approaches to Human-Food Interaction.
Anton Nijholt, Carlos Velasco, Marianna Obrist, Katsunori Okajima, Charles Spence
2018: A Generative Approach for Dynamically Varying Photorealistic Facial Expressions in Human-Agent Interactions.
Yuchi Huang, Saad M. Khan
2018: A Multimodal Approach for Predicting Changes in PTSD Symptom Severity.
Adria Mallol-Ragolta, Svati Dhamija, Terrance E. Boult
2018: A Multimodal Approach to Understanding Human Vocal Expressions and Beyond.
Shrikanth S. Narayanan
2018: A Multimodal-Sensor-Enabled Room for Unobtrusive Group Meeting Analysis.
Indrani Bhattacharya, Michael Foley, Ni Zhang, Tongtao Zhang, Christine Ku, Cameron Mine, Heng Ji, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke
2018: Adaptive Review for Mobile MOOC Learning via Multimodal Physiological Signal Sensing - A Longitudinal Study.
Phuong Pham, Jingtao Wang
2018: An Attention Model for Group-Level Emotion Recognition.
Aarush Gupta, Dakshit Agrawal, Hardik Chauhan, Jose Dolz, Marco Pedersoli
2018: An Ensemble Model Using Face and Body Tracking for Engagement Detection.
Cheng Chang, Cheng Zhang, Lei Chen, Yang Liu
2018: An Occam's Razor View on Learning Audiovisual Emotion Recognition with Small Training Sets.
Valentin Vielzeuf, Corentin Kervadec, Stéphane Pateux, Alexis Lechervy, Frédéric Jurie
2018: Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita
2018: Attention Network for Engagement Prediction in the Wild.
Amanjot Kaur
2018: Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition.
George Sterpu, Christian Saam, Naomi Harte
2018: Automated Affect Detection in Deep Brain Stimulation for Obsessive-Compulsive Disorder: A Pilot Study.
Jeffrey F. Cohn, László A. Jeni, Itir Önal Ertugrul, Donald Malone, Michael S. Okun, David A. Borton, Wayne K. Goodman
2018: Automatic Engagement Prediction with GAP Feature.
Xuesong Niu, Hu Han, Jiabei Zeng, Xuran Sun, Shiguang Shan, Yan Huang, Songfan Yang, Xilin Chen
2018: Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals.
Reshmashree B. Kantharaju, Fabien Ringeval, Laurent Besacier
2018: Cascade Attention Networks For Group Emotion Recognition with Face, Body and Image Cues.
Kai Wang, Xiaoxing Zeng, Jianfei Yang, Debin Meng, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao
2018: Data Driven Non-Verbal Behavior Generation for Humanoid Robots.
Taras Kucherenko
2018: Deep End-to-End Representation Learning for Food Type Recognition from Speech.
Benjamin Sertolli, Nicholas Cummins, Abdulkadir Sengür, Björn W. Schuller
2018: Deep Recurrent Multi-instance Learning with Spatio-temporal Features for Engagement Intensity Prediction.
Jianfei Yang, Kai Wang, Xiaojiang Peng, Yu Qiao
2018: Detecting Deception and Suspicion in Dyadic Game Interactions.
Jan Ondras, Hatice Gunes
2018: Detecting User's Likes and Dislikes for a Virtual Negotiating Agent.
Caroline Langlet, Chloé Clavel
2018: Dozing Off or Thinking Hard?: Classifying Multi-dimensional Attentional States in the Classroom from Video.
Felix Putze, Dennis Küster, Sonja Annerer-Walcher, Mathias Benedek
2018: EAT -: The ICMI 2018 Eating Analysis and Tracking Challenge.
Simone Hantke, Maximilian Schmitt, Panagiotis Tzirakis, Björn W. Schuller
2018: EEG-based Evaluation of Cognitive Workload Induced by Acoustic Parameters for Data Sonification.
Maneesh Bilalpur, Mohan S. Kankanhalli, Stefan Winkler, Ramanathan Subramanian
2018: EVA: A Multimodal Argumentative Dialogue System.
Niklas Rach, Klaus Weber, Louisa Pragst, Elisabeth André, Wolfgang Minker, Stefan Ultes
2018: EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction.
Abhinav Dhall, Amanjot Kaur, Roland Goecke, Tom Gedeon
2018: End-to-end Learning for 3D Facial Animation from Speech.
Hai Xuan Pham, Yuting Wang, Vladimir Pavlovic
2018: Enhancing Multiparty Cooperative Movements: A Robotic Wheelchair that Assists in Predicting Next Actions.
Hisato Fukuda, Keiichi Yamazaki, Akiko Yamazaki, Yosuke Saito, Emi Iiyama, Seiji Yamazaki, Yoshinori Kobayashi, Yoshinori Kuno, Keiko Ikeda
2018: Estimating Head Motion from Egocentric Vision.
Satoshi Tsutsui, Sven Bambach, David J. Crandall, Chen Yu
2018: Estimating Visual Focus of Attention in Multiparty Meetings using Deep Convolutional Neural Networks.
Kazuhiro Otsuka, Keisuke Kasuga, Martina Köhler
2018: Evaluation of Real-time Deep Learning Turn-taking Models for Multiple Dialogue Scenarios.
Divesh Lala, Koji Inoue, Tatsuya Kawahara
2018: Exploring A New Method for Food Likability Rating Based on DT-CWT Theory.
Ya'nan Guo, Jing Han, Zixing Zhang, Björn W. Schuller, Yide Ma
2018: Exploring the Design of Audio-Kinetic Graphics for Education.
Annika Muehlbradt, Madhur Atreya, Darren Guinness, Shaun K. Kane
2018: EyeLinks: A Gaze-Only Click Alternative for Heterogeneous Clickables.
Pedro Figueiredo, Manuel J. Fonseca
2018: Floor Apportionment and Mutual Gazes in Native and Second-Language Conversation.
Ichiro Umata, Koki Ijuin, Tsuneo Kato, Seiichi Yamamoto
2018: Functional-Based Acoustic Group Feature Selection for Automatic Recognition of Eating Condition.
Dara Pir
2018: Gazeover - Exploring the UX of Gaze-triggered Affordance Communication for GUI Elements.
Ilhan Aslan, Michael Dietz, Elisabeth André
2018: Generating fMRI-Enriched Acoustic Vectors using a Cross-Modality Adversarial Network for Emotion Recognition.
Gao-Yi Chao, Chun-Min Chang, Jeng-Lin Li, Ya-Tse Wu, Chi-Chun Lee
2018: Group Interaction Frontiers in Technology.
Gabriel Murray, Hayley Hung, Joann Keyton, Catherine Lai, Nale Lehmann-Willenbrock, Catharine Oertel
2018: Group-Level Emotion Recognition Using Hybrid Deep Models Based on Faces, Scenes, Skeletons and Visual Attentions.
Xin Guo, Bin Zhu, Luisa F. Polanía, Charles Boncelet, Kenneth E. Barner
2018: Group-Level Emotion Recognition using Deep Models with A Four-stream Hybrid Network.
Ahmed-Shehab Khan, Zhiyuan Li, Jie Cai, Zibo Meng, James O'Reilly, Yan Tong
2018: Hand, Foot or Voice: Alternative Input Modalities for Touchless Interaction in the Medical Domain.
Benjamin Hatscher, Christian Hansen
2018: How to Shape the Humor of a Robot - Social Behavior Adaptation Based on Reinforcement Learning.
Klaus Weber, Hannes Ritschel, Ilhan Aslan, Florian Lingenfelser, Elisabeth André
2018: Human, Chameleon or Nodding Dog?
Leshao Zhang, Patrick G. T. Healey
2018: Human-Habitat for Health (H3): Human-habitat Multimodal Interaction for Promoting Health and Well-being in the Internet of Things Era.
Theodora Chaspari, Angeliki Metallinou, Leah I. Stein Duker, Amir H. Behzadan
2018: I Smell Trouble: Using Multiple Scents To Convey Driving-Relevant Information.
Dmitrijs Dmitrenko, Emanuela Maggioni, Marianna Obrist
2018: If You Ask Nicely: A Digital Assistant Rebuking Impolite Voice Commands.
Michael Bonfert, Maximilian Spliethöver, Roman Arzaroli, Marvin Lange, Martin Hanci, Robert Porzel
2018: Improving Object Disambiguation from Natural Language using Empirical Models.
Daniel Prendergast, Daniel Szafir
2018: Inferring User Intention using Gaze in Vehicles.
Yu-Sian Jiang, Garrett Warnell, Peter Stone
2018: International Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction (Workshop Summary).
Ronald Böck, Francesca Bonin, Nick Campbell, Ronald Poppe
2018: Interpretable Multimodal Deception Detection in Videos.
Hamid Karimi
2018: Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection.
Philip Schmidt, Attila Reiss, Robert Dürichen, Claus Marberger, Kristof Van Laerhoven
2018: Joint Discrete and Continuous Emotion Prediction Using Ensemble and End-to-End Approaches.
Ehab AlBadawy, Yelin Kim
2018: Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface.
David A. Robb, Francisco Javier Chiyah Garcia, Atanas Laskov, Xingkun Liu, Pedro Patrón, Helen F. Hastie
2018: Large Vocabulary Continuous Audio-Visual Speech Recognition.
George Sterpu
2018: Listening Skills Assessment through Computer Agents.
Hiroki Tanaka, Hideki Negoro, Hidemi Iwasaka, Satoshi Nakamura
2018: Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements.
Abhinav Shukla, Harish Katti, Mohan S. Kankanhalli, Ramanathan Subramanian
2018: MIRIAM: A Multimodal Interface for Explaining the Reasoning Behind Actions of Remote Autonomous Systems.
Helen F. Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Atanas Laskov, Pedro Patrón
2018: Modeling Cognitive Processes from Multimodal Signals.
Felix Putze, Jutta Hild, Akane Sano, Enkelejda Kasneci, Erin Solovey, Tanja Schultz
2018: Modeling Empathy in Embodied Conversational Agents: Extended Abstract.
Özge Nilay Yalçin
2018: Multi-Feature Based Emotion Recognition for Video Clips.
Chuanhe Liu, Tianhao Tang, Kui Lv, Minghao Wang
2018: Multi-Modal Multi sensor Interaction between Human and Heterogeneous Multi-Robot System.
S. M. al Mahi
2018: Multimodal Analysis of Client Behavioral Change Coding in Motivational Interviewing.
Chanuwas Aswamenakul, Lixing Liu, Kate B. Carey, Joshua Woolley, Stefan Scherer, Brian Borsari
2018: Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs.
Matthew Roddy, Gabriel Skantze, Naomi Harte
2018: Multimodal Control of Lighter-Than-Air Agents.
Daniel M. Lofaro, Donald Sofge
2018: Multimodal Dialogue Management for Multiparty Interaction with Infants.
Setareh Nasihati Gilani, David R. Traum, Arcangelo Merla, Eugenia Hee, Zoey Walker, Barbara Manini, Grady Gallagher, Laura-Ann Petitto
2018: Multimodal Interaction Modeling of Child Forensic Interviewing.
Victor Ardulov, Madelyn Mendlen, Manoj Kumar, Neha Anand, Shanna Williams, Thomas D. Lyon, Shrikanth S. Narayanan
2018: Multimodal Local-Global Ranking Fusion for Emotion Recognition.
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
2018: Multimodal Modeling of Coordination and Coregulation Patterns in Speech Rate during Triadic Collaborative Problem Solving.
Angela E. B. Stewart, Zachary A. Keirn, Sidney K. D'Mello
2018: Multimodal Representation of Advertisements Using Segment-level Autoencoders.
Krishna Somandepalli, Victor R. Martinez, Naveen Kumar, Shrikanth S. Narayanan
2018: Multimodal Teaching and Learning Analytics for Classroom and Online Educational Settings.
Chinchu Thomas
2018: Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance.
Damien Brun
2018: Multiple Spatio-temporal Feature Learning for Video-based Emotion Recognition in the Wild.
Cheng Lu, Wenming Zheng, Chaolong Li, Chuangao Tang, Suyuan Liu, Simeng Yan, Yuan Zong
2018: Olfactory Display Prototype for Presenting and Sensing Authentic and Synthetic Odors.
Katri Salminen, Jussi Rantala, Poika Isokoski, Marko Lehtonen, Philipp Müller, Markus Karjalainen, Jari Väliaho, Anton Kontunen, Ville Nieminen, Joni Leivo, Anca A. Telembeci, Jukka Lekkala, Pasi Kallio, Veikko Surakka
2018: Online Privacy-Safe Engagement Tracking System.
Cheng Zhang, Cheng Chang, Lei Chen, Yang Liu
2018: Path Word: A Multimodal Password Entry Method for Ad-hoc Authentication Based on Digits' Shape and Smooth Pursuit Eye Movements.
Almoctar Hassoumi, Pourang Irani, Vsevolod Peysakhovich, Christophe Hurter
2018: Pen + Mid-Air Gestures: Eliciting Contextual Gestures.
Ilhan Aslan, Tabea Schmidt, Jens Woehrle, Lukas Vogel, Elisabeth André
2018: Population-specific Detection of Couples' Interpersonal Conflict using Multi-task Learning.
Aditya Gujral, Theodora Chaspari, Adela C. Timmons, Yehsong Kim, Sarah Barrett, Gayla Margolin
2018: Predicting ADHD Risk from Touch Interaction Data.
Philipp Mock, Maike Tibus, Ann-Christine Ehlis, R. Harald Baayen, Peter Gerjets
2018: Predicting Engagement Intensity in the Wild Using Temporal Convolutional Network.
Chinchu Thomas, Nitin Nair, Dinesh Babu Jayagopi
2018: Predicting Group Performance in Task-Based Interaction.
Gabriel Murray, Catharine Oertel
2018: Proceedings of the 2018 on International Conference on Multimodal Interaction, ICMI 2018, Boulder, CO, USA, October 16-20, 2018
Sidney K. D'Mello, Panayiotis G. Georgiou, Stefan Scherer, Emily Mower Provost, Mohammad Soleymani, Marcelo Worsley
2018: Put That There: 20 Years of Research on Multimodal Interaction.
James L. Crowley
2018: RainCheck: Overcoming Capacitive Interference Caused by Rainwater on Smartphones.
Ying-Chao Tung, Mayank Goel, Isaac Zinda, Jacob O. Wobbrock
2018: Reinforcing, Reassuring, and Roasting: The Forms and Functions of the Human Smile.
Paula M. Niedenthal
2018: Responding with Sentiment Appropriate for the User's Current Sentiment in Dialog as Inferred from Prosody and Gaze Patterns.
Anindita Nath
2018: SAAMEAT: Active Feature Transformation and Selection Methods for the Recognition of User Eating Conditions.
Fasih Haider, Senja Pollak, Eleni Zarogianni, Saturnino Luz
2018: Sensing Arousal and Focal Attention During Visual Interaction.
Oludamilare Matthews, Markel Vigo, Simon Harper
2018: Simultaneous Multimodal Access to Wheelchair and Computer for People with Tetraplegia.
Md. Nazmus Sahadat, Nordine Sebkhi, Maysam Ghovanloo
2018: Smart Arse: Posture Classification with Textile Sensors in Trousers.
Sophie Skach, Rebecca Stewart, Patrick G. T. Healey
2018: Smell-O-Message: Integration of Olfactory Notifications into a Messaging Application to Improve Users' Performance.
Emanuela Maggioni, Robert Cobden, Dmitrijs Dmitrenko, Marianna Obrist
2018: Strike A Pose: Capturing Non-Verbal Behaviour with Textile Sensors.
Sophie Skach
2018: Survival at the Museum: A Cooperation Experiment with Emotionally Expressive Virtual Characters.
Ilaria Torre, Emma Carrigan, Killian McCabe, Rachel McDonnell, Naomi Harte
2018: Tactile Sensitivity to Distributed Patterns in a Palm.
Bukun Son, Jaeyoung Park
2018: TapTag: Assistive Gestural Interactions in Social Media on Touchscreens for Older Adults.
Shraddha Pandya, Yasmine N. El-Glaly
2018: Ten Opportunities and Challenges for Advancing Student-Centered Multimodal Learning Analytics.
Sharon L. Oviatt
2018: The Multimodal Dataset of Negative Affect and Aggression: A Validation Study.
Iulia Lefter, Siska Fitrianie
2018: Toward Objective, Multifaceted Characterization of Psychotic Disorders: Lexical, Structural, and Disfluency Markers of Spoken Language.
Alexandria K. Vail, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency
2018: Towards Attentive Speed Reading on Small Screen Wearable Devices.
Wei Guo, Jingtao Wang
2018: Understanding Mobile Reading via Camera Based Gaze Tracking and Kinematic Touch Modeling.
Wei Guo, Jingtao Wang
2018: Unobtrusive Analysis of Group Interactions without Cameras.
Indrani Bhattacharya
2018: Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language.
Sedeeq Al-khazraji
2018: Using Interlocutor-Modulated Attention BLSTM to Predict Personality Traits in Small Group Interaction.
Yun-Shao Lin, Chi-Chun Lee
2018: Using Technology for Health and Wellbeing.
Mary Czerwinski
2018: Video-based Emotion Recognition Using Deeply-Supervised Neural Networks.
Yingruo Fan, Jacqueline C. K. Lam, Victor O. K. Li