BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20240101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20251217T100000
DTEND;TZID=Asia/Hong_Kong:20251217T110000
DTSTAMP:20260511T134136Z
CREATED:20251215T071917Z
LAST-MODIFIED:20251215T072002Z
UID:114421-1765965600-1765969200@ece.hku.hk
SUMMARY:Seminar on Efficient Generative Modelling\, Multi-agent Systems Based on Knowledge Graphs and LLMs
DESCRIPTION:Abstract\nI will give an overview of our recent results on diffusion generative modelling and how to make inference faster\, in just a few steps; I will also introduce some new concepts of Engineering AI and discuss how we can construct efficient multi-agent systems based on knowledge graphs and LLMs to solve complex engineering problems.\nSpeaker\nProf. Evgeny BURNAEV\nVice President for AI Development & Professor\,\nSkolkovo Institute of Science and Technology\nVisiting Chair Professor\,\nHarbin Institute of Technology\nSpeaker’s Biography\nEvgeny BURNAEV is Vice President for AI Development and Professor at the Skolkovo Institute of Science and Technology (Skoltech)\, where he also directs the Skoltech AI Center. His research focuses on engineering AI\, generative modelling\, optimal transport\, physics-informed machine learning\, and topological data analysis for reliable\, efficient\, and interpretable AI systems. At the AI Center\, Burnaev leads interdisciplinary projects that bridge theoretical foundations and large-scale applications in energy\, transport\, materials\, and climate modelling.\nHe has authored more than 200 peer-reviewed publications in leading international venues (NeurIPS\, ICML\, ICLR\, IEEE\, Nature Scientific Reports) and collaborates with global industry leaders such as Sber\, Huawei\, and Gazprom Neft. His achievements have been recognised with the Russian Government Prize in Science and Technology (2024)\, the Sber Science Award (2024)\, and inclusion in the Elsevier–Stanford global Top-2% scientists list (2023–2025). He also serves as Visiting Chair Professor at the Harbin Institute of Technology and contributes to international expert communities and program committees advancing transparent and trustworthy AI worldwide.\nOrganiser\nProf. Ngai WONG\nDepartment of Electrical and Electronic Engineering\,\nThe University of Hong Kong\n\nAll are welcome!
URL:https://ece.hku.hk/events/20251217-1/
LOCATION:Room CB-601J\, 6/F\, Chow Yei Ching Building\, The University of Hong Kong
CATEGORIES:Highlights,Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2025/12/1280-5.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20251217T110000
DTEND;TZID=Asia/Hong_Kong:20251217T120000
DTSTAMP:20260511T134136Z
CREATED:20251211T021448Z
LAST-MODIFIED:20251211T021448Z
UID:114396-1765969200-1765972800@ece.hku.hk
SUMMARY:RPG Seminar – Bridging Visual Generation and Understanding in Native MLLMs with a Unified Visual Tokenizer
DESCRIPTION:Zoom Link: https://hku.zoom.us/j/98499142544?pwd=zvVs3BqWzIzCA071Dqq2rYW7vIAqj7.1\nAbstract\nThe advent of GPT-4o highlights the immense potential of Multimodal Large Language Models (MLLMs) with native visual generation capabilities. These unified models offer precise control in multimodal interactions\, enabling exceptional fluency in tasks such as multi-turn image editing and visual in-context learning. However\, a fundamental dilemma remains in the choice of visual tokenizers for unified MLLMs: semantic tokenizers like CLIP excel at multimodal understanding but complicate generative modelling due to their high-dimensional\, continuous feature space; conversely\, VQVAE tokenizers fit autoregressive generation but struggle to capture the semantics essential for understanding.\nThis seminar explores how to design a unified visual tokenizer that bridges the gap between multimodal generation and understanding. Recent studies attempt to address this by connecting the training of VQVAE (for autoregressive generation) and CLIP (for understanding). However\, directly combining these training objectives has been observed to cause severe loss conflicts. We will show that reconstruction and semantic supervision do not inherently conflict; instead\, the underlying bottleneck stems from the limited representational capacity of the discrete token space. Building on these insights\, we introduce UniTok\, a unified tokenizer featuring a novel multi-codebook quantization mechanism that effectively scales up the vocabulary size and bottleneck dimension.\nSpeaker\nMr. Chuofan Ma\nDepartment of Electrical and Electronic Engineering\nThe University of Hong Kong\nBiography of the Speaker\nMr. Chuofan Ma is currently pursuing the Ph.D. degree with the Department of Electrical and Electronic Engineering\, supervised by Professor Xiaojuan Qi. He received his bachelor’s degree in computer science from The University of Hong Kong in 2021. His research interests primarily lie in open-world visual intelligence and multi-modal foundation models.\nOrganiser\nProf. Xiaojuan Qi\nDepartment of Electrical and Electronic Engineering\, The University of Hong Kong\nAll are welcome.
URL:https://ece.hku.hk/events/20251217/
LOCATION:Online via Zoom
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
END:VCALENDAR