In conjunction with CVPR 2025.
Nashville, TN
June 11th - 12th 2025 (Half day)
In recent years, the utilization of big data has greatly advanced Computer Vision and Machine Learning applications. However, the majority of these tasks have focused on a single modality, typically the visual one, with only a few incorporating additional modalities such as audio or thermal imaging. Moreover, the handling of multimodal datasets remains a challenge, particularly in the areas of data acquisition, synchronization, and annotation. As a result, many research investigations have been limited to a single modality, and even when multiple modalities are considered, they are often processed independently, yielding lower performance than an integrated multimodal learning approach.
Recently, there has been a growing focus on leveraging the synchronization of multimodal streams to enhance the transfer of semantic information. Various works have successfully utilized combinations such as audio/video, RGB/depth, RGB/Lidar, visual/text, and text/audio, achieving exceptional outcomes. Intriguing applications have also emerged that employ self-supervised methodologies, enabling multiple modalities to learn associations without manual labeling and yielding richer feature representations than individual modality processing. Moreover, researchers have explored training paradigms that allow neural networks to perform well even when one modality is absent due to sensor failure, impaired functioning, or unfavorable environmental conditions. These topics have garnered significant interest in the computer vision community, particularly in the field of autonomous driving. Furthermore, recent attention has been directed towards the fusion of language (including Large Language Models) and vision, such as the generation of images/videos from text (e.g., DALL-E, text2video) or audio (wav2clip), and vice versa (image2speech). Exploiting multimodal scenarios, diffusion models have also emerged as a fascinating framework to explore.
The fusion of information from multiple sensors is also a topic of major interest in industry; the exponential growth of companies working on automotive, drone vision, surveillance, or robotics is just one example. Many companies are trying to automate processes by using a large variety of control signals from different sources. The aim of this workshop is to generate momentum around this topic of growing interest and to encourage interdisciplinary interaction and collaboration among the computer vision, multimedia, remote sensing, and robotics communities, serving as a forum for research groups from academia and industry.
We expect contributions involving, but not limited to, image, video, audio, depth, IR, IMU, laser, text, drawings, and synthetic data. Position papers with feasibility studies and cross-modality issues with a highly applicative flair are also encouraged. Multimodal data analysis is an important bridge among vision, multimedia, remote sensing, and robotics; therefore, we expect a positive response from these communities.
Potential topics include, but are not limited to:
Papers will be limited to 8 pages according to the CVPR format (cf. the main conference author guidelines, also for what concerns dual/double submission). All papers will be reviewed by at least two reviewers under a double-blind policy. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Accepted papers will be published in the CVPR 2025 workshop proceedings.
All papers should be submitted via the CMT website: https://cmt3.research.microsoft.com/MULA2025.
Tentative
08:30-08:45 - Welcome from organizers and opening remarks
08:45-09:30 - Keynote 1
09:30-10:15 - Keynote 2
10:15-10:45 - Coffee Break
10:45-11:30 - Keynote 3
11:30-12:15 - Keynote 4
12:15-13:00 - Oral Session
13:00-13:15 - Closing Remarks
TBA - Poster Session (all papers)
Elisa Ricci is a Professor at the Department of Information Engineering and Computer Science (DISI) at the University of Trento and the Head of the Deep Visual Learning research unit at Fondazione Bruno Kessler. Elisa is also the Coordinator of the Doctoral Program in Information Engineering and Computer Science at the University of Trento. She is an ELLIS and an IAPR Fellow. Her research lies at the intersection of computer vision, deep learning, and robotic perception. She is interested in developing novel approaches for learning from visual and multi-modal data in an open world, with particular emphasis on methods for domain adaptation, continual learning, and self-supervised learning.
Georgia Gkioxari is an Assistant Professor of Computing + Mathematical Sciences at Caltech and a William H. Hurt scholar. She is also a visiting researcher at Meta AI in the Embodied AI team. From 2016 to 2022, she was a research scientist at Meta's FAIR team. She received her PhD from UC Berkeley, where she was advised by Jitendra Malik. She did her bachelor's degree in ECE at NTUA in Athens, Greece, where she worked with Petros Maragos. She is the recipient of the PAMI Young Researcher Award (2021).
Stéphane Lathuilière is a research scientist in the RobotLearn team at Inria Grenoble. His research interests include machine learning for computer vision problems (e.g., adaptation of foundation models, continual learning), generative models for image and video generation, and multimodal learning. Until December 2024, he was an associate professor (maître de conférences) at Telecom Paris, France, where he led the multimedia research team. Previously, he was a post-doctoral fellow at the University of Trento in the Multimedia and Human Understanding Group, led by Prof. Nicu Sebe and Prof. Elisa Ricci. He worked towards his Ph.D. in mathematics and computer science in the Perception Team at Inria under the supervision of Dr. Radu Horaud, and obtained it from Université Grenoble Alpes (France) in 2018.
Katerina Fragkiadaki is a JPMorgan Chase Associate Professor of Computer Science in the Machine Learning Department at Carnegie Mellon University. She works in Artificial Intelligence at the intersection of Computer Vision, Machine Learning, Language Understanding, and Robotics. Prior to joining MLD's faculty, she spent three years as a postdoctoral researcher, first at UC Berkeley working with Jitendra Malik and then at Google Research in Mountain View working with the video group. She completed her Ph.D. in GRASP at UPenn with Jianbo Shi. She did her undergraduate studies at the National Technical University of Athens, and before that she was in Crete.
We gratefully acknowledge our reviewers
For additional info please contact us here