In conjunction with CVPR 2019. Long Beach, CA - June 16th, 2019 (Morning)
Papers are limited to 8 pages in the CVPR format (cf. the main conference author guidelines). All papers will be reviewed by at least two reviewers under a double-blind policy. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Accepted papers will be published in the CVPR 2019 proceedings.
All papers should be submitted via the CMT website.
08:20 - Initial remarks and workshop introduction
08:30 - WiFi and Vision Multimodal Learning for Accurate and Robust Device-Free Human Activity Recognition - Han Zou; Jianfei Yang; Hari Prasanna Das; Huihan Liu; Yuxun Zhou; Costas Spanos.
08:50 - Invited Speaker: Kristen Grauman - Disentangling Object Sounds in Video
09:40 - Two Stream 3D Semantic Scene Completion - Martin Garbade; Yueh-Tung Chen; Johann Sawatzky; Jürgen Gall.
10:00 - Co-compressing and Unifying Deep CNN Models for Efficient Human Face and Speaker Recognition - Timmy S. T. Wan; Jia-Hong Lee; Yi-Ming Chan; Chu-Song Chen.
10:20 - Coffee Break
10:30 - Invited Speaker: Alexei (Alyosha) Efros - Title TBA
11:20 - Spotlight session (3-minute presentation per poster)
12:00 - Poster Session
Kristen Grauman is a Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist at Facebook AI Research (FAIR). Her research in computer vision and machine learning focuses on visual recognition and search. Before joining UT Austin in 2007, she received her Ph.D. from MIT. She is an Alfred P. Sloan Research Fellow, a Microsoft Research New Faculty Fellow, and a recipient of the NSF CAREER and ONR Young Investigator awards, the PAMI Young Researcher Award in 2013, the 2013 Computers and Thought Award from the International Joint Conference on Artificial Intelligence (IJCAI), the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2013, and the Helmholtz Prize (computer vision test-of-time award) in 2017. She and her collaborators were recognized with the CVPR Best Student Paper Award in 2008 for their work on hashing algorithms for large-scale image retrieval, the Marr Prize at ICCV in 2011 for their work on modeling relative visual attributes, the ACCV Best Application Paper Award in 2016 for their work on automatic cinematography for 360-degree video, and a Best Paper Honorable Mention at CHI in 2017 for work on crowds and visual question answering.
Alexei (Alyosha) Efros joined UC Berkeley in 2013. Prior to that, he spent nine years on the faculty of Carnegie Mellon University, and has also been affiliated with École Normale Supérieure/INRIA and the University of Oxford. His research is in the area of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems where large quantities of unlabeled visual data are readily available. Efros received his PhD in 2003 from UC Berkeley. He is a recipient of the CVPR Best Paper Award (2006), the NSF CAREER award (2006), a Sloan Fellowship (2008), a Guggenheim Fellowship (2008), an Okawa Grant (2008), the Finmeccanica Career Development Chair (2010), the SIGGRAPH Significant New Researcher Award (2010), an ECCV Best Paper Honorable Mention (2010), three Helmholtz Test-of-Time Prizes (1999, 2003, 2005), and the ACM Prize in Computing (2016).