BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20241119T133000
DTEND;TZID=Asia/Hong_Kong:20241119T143000
DTSTAMP:20260512T075047Z
CREATED:20241030T022835Z
LAST-MODIFIED:20250114T032727Z
UID:19361-1732023000-1732026600@ece.hku.hk
SUMMARY:Towards Provable Unaligned Multimodal Learning: A Model Identification Perspective
DESCRIPTION:The seminar scheduled for November 19\, 2024 (Tue) will start earlier at 1:30 pm\, while the date and venue will remain unchanged.\nAbstract\n2023 was “the year of AI”\, highlighted by the release of numerous AI models with remarkable capabilities. Multimodal learning is at the forefront of AI advancements\, with state-of-the-art models like GPT-4 and Gemini emphasizing multimodal functionalities as their defining features. Despite its importance\, many aspects of multimodal learning\, and AI developments in general\, still lack a concrete and comprehensive understanding—which is essential for building resilient and trustworthy systems. Our research focuses on the understanding of AI/ML systems to drive theory-backed advancements. From this perspective\, this presentation revisits a core component of multimodal learning—Unsupervised Domain Translation (UDT). Many UDT systems\, such as CycleGAN\, use Distribution Matching (DM) modules\, which often fail in content-aligned translations due to measure-preserving automorphism (MPA). Existing remedies fall short of guaranteed performance. In my talk\, I will introduce a model identification perspective for UDT\, overcoming the MPA issues and ensuring identifiability of the desired translation functions. This is the first proven identification result in UDT under CycleGAN’s settings\, to our knowledge. We have also broadened these concepts\, providing solutions for various translation challenges\, enabling provable content-style disentanglement\, and offering more versatile cross-domain data generation. These advancements promise significant theoretically supported enhancements for UDT applications\, particularly in annotation-limited fields such as medicine and biology. \nSpeaker\nProf. 
 Xiao Fu\nAssociate Professor\,\nOregon State University \nBiography of the Speaker\nXiao Fu has been with the School of Electrical Engineering and Computer Science\, Oregon State University since 2017\, where he is currently an Associate Professor. He received the Ph.D. degree in Electronic Engineering from The Chinese University of Hong Kong in 2014. He was a Postdoctoral Associate with the Department of Electrical and Computer Engineering\, University of Minnesota – Twin Cities\, from 2014 to 2017. His research interests include the broad area of machine learning and signal processing\, especially theory and algorithms. Dr. Fu received the Best Student Paper Award at ICASSP 2014\, the 2022 IEEE Signal Processing Society (SPS) Best Paper Award\, and the 2022 IEEE SPS Donald G. Fink Overview Paper Award. He also received the Outstanding Postdoctoral Scholar Award at the University of Minnesota in 2016\, the Engelbrecht Early Career Faculty Award from the College of Engineering at Oregon State University in 2023\, and the National Science Foundation (NSF) CAREER Award in 2022. \nOrganiser\nProf. Kaibin Huang\nHead of Department\,\nDepartment of Electrical and Electronic Engineering\,\nThe University of Hong Kong \nAll are welcome!
URL:https://ece.hku.hk/events/20241119-1/
LOCATION:Room CB-603\, 6/F\, Chow Yei Ching Building\, The University of Hong Kong
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/10/NEW-02_2.jpg
END:VEVENT
END:VCALENDAR