ICMI 2024

86 papers

Year | Title / Authors
2024 | "Uh, This One?": Leveraging Behavioral Signals for Detecting Confusion during Physical Tasks.
Maia Stiber, Dan Bohus, Sean Andrist
2024 | 3D Calling with Codec Avatars.
Yaser Sheikh
2024 | A Model of Factors Contributing to the Success of Dialogical Explanations.
Meisam Booshehri, Hendrik Buschmeier, Philipp Cimiano
2024 | A Multimodal Understanding of the Eye-Mind Link.
Megan Caruso
2024 | A Time Series Classification Pipeline for Detecting Interaction Ruptures in HRI Based on User Reactions.
Lennart Wachowiak, Peter Tisnikar, Andrew Coles, Gerard Canal, Oya Çeliktutan
2024 | A human-centered approach to design multimodal conversational systems.
Heloisa Candello
2024 | A multimodal analysis of environmental stress experienced by older adults during outdoor walking trips: Implications for designing new intelligent technologies to enhance walkability in low-income Latino communities.
Raquel Yupanqui, John Sohn, Yoojun Kim, Raquel Flores, Hanwool Lee, Jinwoo Kim, Sanghyun Lee, Youngjib Ham, Chanam Lee, Theodora Chaspari
2024 | A musical Robot for People with Dementia.
Paul Raingeard de la Bletiere
2024 | AI as Modality in Human Augmentation: Toward New Forms of Multimodal Interaction with AI-Embodied Modalities.
Radu-Daniel Vatavu
2024 | Across Trials vs Subjects vs Contexts: A Multi-Reservoir Computing Approach for EEG Variations in Emotion Recognition.
Anubhav, Kantaro Fujiwara
2024 | Automatic mild cognitive impairment estimation from the group conversation of coimagination method.
Sixia Li, Kazumi Kumagai, Mihoko Otake-Matsuura, Shogo Okada
2024 | Can Text-to-image Model Assist Multi-modal Learning for Visual Recognition with Visual Modality Missing?
Tiantian Feng, Daniel Yang, Digbalay Bose, Shrikanth Narayanan
2024 | Decoding Contact: Automatic Estimation of Contact Signatures in Parent-Infant Free Play Interactions.
Metehan Doyran, Albert Ali Salah, Ronald Poppe
2024 | Design Digital Multisensory Textile Experiences.
Shu Zhong
2024 | Detecting Autism from Head Movements using Kinesics.
Muhittin Gokmen, Evangelos Sariyanidi, Lisa Yankowitz, Casey J. Zampella, Robert T. Schultz, Birkan Tunç
2024 | Detecting Aware and Unaware Mind Wandering During Lecture Viewing: A Multimodal Machine Learning Approach Using Eye Tracking, Facial Videos and Physiological Data.
Babette Bühler, Efe Bozkir, Hannah Deininger, Patricia Goldberg, Peter Gerjets, Ulrich Trautwein, Enkelejda Kasneci
2024 | Detecting Deception in Natural Environments Using Incremental Transfer Learning.
Muneeb Imtiaz Ahmad, Abdullah S. Alzahrani, Sunbul M. Ahmad
2024 | Distinguishing Target and Non-Target Fixations with EEG and Eye Tracking in Realistic Visual Scenes.
Mansi Sharma, Camilo Andrés Martínez Martínez, Benedikt Emanuel Wirth, Antonio Krüger, Philipp Müller
2024 | Do We Need To Watch It All? Efficient Job Interview Video Processing with Differentiable Masking.
Hung Le, Sixia Li, Candy Olivia Mawalim, Hung-Hsuan Huang, Chee Wee Leong, Shogo Okada
2024 | DoubleDistillation: Enhancing LLMs for Informal Text Analysis using Multistage Knowledge Distillation from Speech and Text.
Fatema Hasan, Yulong Li, James R. Foulds, Shimei Pan, Bishwaranjan Bhattacharjee
2024 | ERR@HRI 2024 Challenge: Multimodal Detection of Errors and Failures in Human-Robot Interactions.
Micol Spitale, Maria Teresa Parreira, Maia Stiber, Minja Axelsson, Neval Kara, Garima Kankariya, Chien-Ming Huang, Malte F. Jung, Wendy Ju, Hatice Gunes
2024 | EVAC 2024 - Empathic Virtual Agent Challenge: Appraisal-based Recognition of Affective States.
Fabien Ringeval, Björn W. Schuller, Gérard Bailly, Safaa Azzakhnini, Hippolyte Fournier
2024 | Emotion Recognition for Multimodal Recognition of Attachment in School-Age Children.
Areej Buker, Alessandro Vinciarelli
2024 | Enhancing Collaboration and Performance among EMS Students through Multimodal Learning Analytics.
Vasundhara Joshi
2024 | Envisioning Futures: How the Modality of AI Recommendations Impacts Conversation Flow in AR-enhanced Dialogue.
Steeven Villa, Yannick Weiss, Mei-Yi Lu, Moritz Ziarko, Albrecht Schmidt, Jasmin Niess
2024 | Everything We Hear: Towards Tackling Misinformation in Podcasts.
Sachin Pathiyan Cherumanal, Ujwal Gadiraju, Damiano Spina
2024 | Exploring Interlocutor Gaze Interactions in Conversations based on Functional Spectrum Analysis.
Ayane Tashiro, Mai Imamura, Shiro Kumano, Kazuhiro Otsuka
2024 | Exploring the Alteration and Masking of Everyday Noise Sounds using Auditory Augmented Reality.
Isna Alfi Bustoni, Mark McGill, Stephen Anthony Brewster
2024 | Feeling Textiles through AI: An exploration into Multimodal Language Models and Human Perception Alignment.
Shu Zhong, Elia Gatti, Youngjun Cho, Marianna Obrist
2024 | First Multimodal Banquet: Exploring Innovative Technology for Commensality and Human-Food Interaction (CoFI2024).
Radoslaw Niewiadomski, Ferran Altarriba Bertran, Christopher Dawes, Marianna Obrist, Maurizio Mancini
2024 | First-Person Perspective Induces Stronger Feelings of Awe and Presence Compared to Third-Person Perspective in Virtual Reality.
Hiromu Otsubo, Alexander Marquardt, Melissa Steininger, Marvin Lehnort, Felix Dollack, Yutaro Hirao, Monica Perusquía-Hernández, Hideaki Uchiyama, Ernst Kruijff, Bernhard E. Riecke, Kiyoshi Kiyokawa
2024 | GENEA Workshop 2024: The 5th Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents.
Youngwoo Yoon, Taras Kucherenko, Alice Delbosc, Rajmund Nagy, Teodor Nikolov, Gustav Eje Henter
2024 | Generalization Boost in Bimodal Classification via Data Fusion Trained on Sparse Datasets.
Wentao Yu, Dorothea Kolossa, Robert M. Nickel
2024 | Generating Facial Expression Sequences of Complex Emotions with Generative Adversarial Networks.
Zakariae Belmekki, David Antonio Gómez Jáuregui, Patrick Reuter, Jun Li, Jean-Claude Martin, Karl Jenkins, Nadine Couture
2024 | Greta, what else? Our research towards building socially interactive agents.
Catherine Pelachaud
2024 | HumanEYEze 2024: Workshop on Eye Tracking for Multimodal Human-Centric Computing.
Michael Barz, Roman Bednarik, Andreas Bulling, Cristina Conati, Daniel Sonntag
2024 | Improving Usability of Data Charts in Multimodal Documents for Low Vision Users.
Yash Prakash, Akshay Kolgar Nayak, Shoaib Mohammed Alyaan, Pathan Aseef Khan, Hae Na Lee, Vikas Ashok
2024 | Integrating Multimodal Affective Signals for Stress Detection from Audio-Visual Data.
Debasmita Ghose, Oz Gitelson, Brian Scassellati
2024 | Investigating Multi-Reservoir Computing for EEG-based Emotion Recognition.
Anubhav
2024 | Is Distance a Modality? Multi-Label Learning for Speech-Based Joint Prediction of Attributed Traits and Perceived Distances in 3D Audio Immersive Environments.
Eva Fringi, Nesreen Alshubaily, Lorenzo Picinali, Stephen Anthony Brewster, Tanaya Guha, Alessandro Vinciarelli
2024 | Juicy Text: Onomatopoeia and Semantic Text Effects for Juicy Player Experiences.
Émilie Fabre, Katie Seaborn, Adrien Verhulst, Yuta Itoh, Jun Rekimoto
2024 | LLM-powered Multimodal Insight Summarization for UX Testing.
Kelsey Turbeville, Jennarong Muengtaweepongsa, Samuel Stevens, Jason Moss, Amy Pon, Kyra Lee, Charu Mehra, Jenny Gutierrez Villalobos, Ranjitha Kumar
2024 | Learning Co-Speech Gesture Representations in Dialogue through Contrastive Learning: An Intrinsic Evaluation.
Esam Ghaleb, Bulat Khaertdinov, Wim T. J. L. Pouw, Marlou Rasenberg, Judith Holler, Asli Özyürek, Raquel Fernández
2024 | Leveraging Prosody as an Informative Teaching Signal for Agent Learning: Exploratory Studies and Algorithmic Implications.
Matilda Knierim, Sahil Jain, Murat Han Aydogan, Kenneth Mitra, Kush Desai, Akanksha Saran, Kim Baraka
2024 | Lip Abnormality Detection for Patients with Repaired Cleft Lip and Palate: A Lip Normalization Approach.
Karen Rosero, Ali N. Salman, Rami R. Hallac, Carlos Busso
2024 | Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain Modality Forecasting.
Divij Gupta, Anubhav Bhatti, Suraj Parmar, Chen Dan, Yuwei Liu, Bingjie Shen, San Lee
2024 | M2RL: A Multimodal Multi-Interface Dataset for Robot Learning from Human Demonstrations.
Shaid Hasan, Mohammad Samin Yasar, Tariq Iqbal
2024 | MR-Driven Near-Future Realities: Previewing Everyday Life Real-World Experiences Using Mixed Reality.
Florian Mathis, Brad A. Myers, Ben Lafreniere, Michael Glueck, David P. S. Marques
2024 | MSP-GEO Corpus: A Multimodal Database for Understanding Video-Learning Experience.
Ali N. Salman, Ning Wang, Luz Martinez-Lucas, Andrea Vidal, Carlos Busso
2024 | Mitigation of gender bias in automatic facial non-verbal behaviors generation.
Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache
2024 | Modelling Social Intentions in Complex Conversational Settings.
Ivan Kondyurin
2024 | Multilingual Dyadic Interaction Corpus NoXi+J: Toward Understanding Asian-European Non-verbal Cultural Characteristics and their Influences on Engagement.
Marius Funk, Shogo Okada, Elisabeth André
2024 | Multimodal Co-Construction of Explanations with XAI Workshop.
Hendrik Buschmeier, Teena Hassan, Stefan Kopp
2024 | Multimodal Emotion Recognition Harnessing the Complementarity of Speech, Language, and Vision.
Thomas Thebaud, Anna Favaro, Yaohan Guan, Yuchen Yang, Prabhav Singh, Jesús Villalba, Laureano Moro-Velázquez, Najim Dehak
2024 | Multimodal User Enjoyment Detection in Human-Robot Conversation: The Power of Large Language Models.
André Pereira, Lubos Marcinek, Jura Miniota, Sofia Thunberg, Erik Lagerstedt, Joakim Gustafson, Gabriel Skantze, Bahar Irfan
2024 | NapTune: Efficient Model Tuning for Mood Classification using Previous Night's Sleep Measures along with Wearable Time-series.
Debaditya Shome, Nasim Montazeri Ghahjaverestan, Ali Etemad
2024 | NearFetch: Enhancing Touch-Based Mobile Interaction on Public Displays with an Embedded Programmable NFC Array.
Qijun Cao, Junqi Zhang, Shengtao Fan, Jiaqi Rong, Menghao Qi, Zhuowen Duan, Peikun Zhao, Ling Liu, Zihao Zhou, Wenjie Chen
2024 | Nonverbal Dynamics in Dyadic Videoconferencing Interaction: The Role of Video Resolution and Conversational Quality.
Chenyao Diao, Stephanie Arévalo Arboleda, Alexander Raake
2024 | On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild.
Nikola Kovacevic, Christian Holz, Markus Gross, Rafael Wampfler
2024 | Online Multimodal End-of-Turn Prediction for Three-party Conversations.
Meng-Chen Lee, Zhigang Deng
2024 | PRISCA at ERR@HRI 2024: Multimodal Representation Learning for Detecting Interaction Ruptures in HRI.
Pradip Pramanick, Silvia Rossi
2024 | Participation Role-Driven Engagement Estimation of ASD Individuals in Neurodiverse Group Discussions.
Kalin Stefanov, Yukiko I. Nakano, Chisa Kobayashi, Ibuki Hoshina, Tatsuya Sakato, Fumio Nihei, Chihiro Takayama, Ryo Ishii, Masatsugu Tsujii
2024 | Perceived Text Relevance Estimation Using Scanpaths and GNNs.
Abdulrahman Mohamed Selim, Omair Shahzad Bhatti, Michael Barz, Daniel Sonntag
2024 | Perception of Stress: A Comparative Multimodal Analysis of Time-Continuous Stress Ratings from Self and Observers.
Ehsanul Haque Nirjhar, Winfred Arthur, Theodora Chaspari
2024 | Poke Typing: Effects of Hand-Tracking Input and Key Representation on Mid-Air Text Entry Performance in Virtual Reality.
Mehmet Akhoroz, Caglar Yildirim
2024 | Predictability of Understanding in Explanatory Interactions Based on Multimodal Cues.
Olcay Türk, Stefan Lazarov, Yu Wang, Hendrik Buschmeier, Angela Grimminger, Petra Wagner
2024 | Predicting Errors and Failures in Human-Robot Interaction from Multi-Modal Temporal Data.
Ruben Janssens, Eva Verhelst, Mathieu De Coster
2024 | Predicting Human Intent to Interact with a Public Robot: The People Approaching Robots Database (PAR-D).
Sydney Thompson, Alexander Lew, Yifan Li, Elizabeth Stanish, Alex Huang, Rohan Phanse, Marynel Vázquez
2024 | Proceedings of the 26th International Conference on Multimodal Interaction, ICMI 2024, San Jose, Costa Rica, November 4-8, 2024.
Hayley Hung, Catharine Oertel, Mohammad Soleymani, Theodora Chaspari, Hamdi Dibeklioglu, Jainendra Shukla, Khiet P. Truong
2024 | Putting the "Brain" Back in the Eye-Mind Link: Aligning Eye Movements and Brain Activations During Naturalistic Reading.
Megan Caruso, Rosy Southwell, Leanne M. Hirshfield, Sidney D'Mello
2024 | Real-Time Trust Measurement in Human-Robot Interaction: Insights from Physiological Behaviours.
Abdullah S. Alzahrani, Muneeb Imtiaz Ahmad
2024 | RealSeal: Revolutionizing Media Authentication with Real-Time Realism Scoring.
Bhaktipriya Radharapu, Harish Krishna
2024 | Relating Students Cognitive Processes and Learner-Centered Emotions: An Advanced Deep Learning Approach.
Ashwin T. S., Gautam Biswas
2024 | SEMPI: A Database for Understanding Social Engagement in Video-Mediated Multiparty Interaction.
Maksim Siniukov, Yufeng Yin, Eli Fast, Yingshan Qi, Aarav Monga, Audrey Kim, Mohammad Soleymani
2024 | SMURF: Statistical Modality Uniqueness and Redundancy Factorization.
Torsten Wörtwein, Nicholas B. Allen, Jeffrey F. Cohn, Louis-Philippe Morency
2024 | ScentHaptics: Augmenting the Haptic Experiences of Digital Mid-Air Textiles with Scent.
Christopher Dawes, Jing Xue, Giada Brianza, Patricia Ivette Cornelio-Martínez, Roberto A. Montaño-Murillo, Emanuela Maggioni, Marianna Obrist
2024 | SemanticTap: A Haptic Toolkit for Vibration Semantic Design of Smartphone.
Rui Zhang, Yixuan Li, Zihuang Wu, Yong Zhang, Jie Zhao, Yang Jiao
2024 | Stressor Type Matters! - Exploring Factors Influencing Cross-Dataset Generalizability of Physiological Stress Detection.
Pooja Prajod, Bhargavi Mahesh, Elisabeth André
2024 | The Impact of Auditory Warning Types and Emergency Obstacle Avoidance Takeover Scenarios on Takeover Behavior.
Xuenan Li, Zhaoyang Xu
2024 | The Plausibility Paradox on Interactions with Complex Virtual Objects in Virtual Environments.
Daniel Alvarado-Chou, Yuen C. Law
2024 | Towards Automated Annotation of Infant-Caregiver Engagement Phases with Multimodal Foundation Models.
Daksitha Senel Withanage Don, Dominik Schiller, Tobias Hallmen, Silvan Mertes, Tobias Baur, Florian Lingenfelser, Mitho Müller, Lea Kaubisch, Corinna Reck, Elisabeth André
2024 | Towards Automatic Social Involvement Estimation.
Zonghuan Li
2024 | Towards Trustworthy and Efficient Diffusion Models.
Jayneel Vora
2024 | Understanding Non-Verbal Irony Markers: Machine Learning Insights Versus Human Judgment.
Micol Spitale, Fabio Catania, Francesca Panzeri
2024 | Video Game Technologies Applied for Teaching Assembly Language Programming.
Ernesto Rivera-Alvarado
2024 | Whispering Wearables: Multimodal Approach to Silent Speech Recognition with Head-Worn Devices.
Tanmay Srivastava, R. Michael Winters, Thomas M. Gable, Yu-Te Wang, Teresa LaScala, Ivan J. Tashev