BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系 - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20260324T140000
DTEND;TZID=Asia/Hong_Kong:20260324T150000
DTSTAMP:20260511T074929Z
CREATED:20260320T093741Z
LAST-MODIFIED:20260320T093741Z
UID:115337-1774360800-1774364400@ece.hku.hk
SUMMARY:RPG Seminar – LLMs for Social Good: Addressing Data Scarcity and Opacity for Alzheimer’s Diagnosis and Prognosis
DESCRIPTION:Zoom Link:\nhttps://hku.zoom.us/j/94842355191?pwd=02bHCUfep3119O1jbeDHbnZNKaKUJ8.1 \nAbstract\nEarly detection of Alzheimer’s Disease (AD) through non-invasive speech analysis offers a highly promising diagnostic avenue. However\, the development of robust computational models is severely hindered by the fundamental imperfections of real-world clinical data. Spontaneous patient speech is often noisy and highly variable\, while longitudinal clinical records suffer from severe data scarcity\, temporal sparsity\, and missing values. Consequently\, traditional deep learning models act as opaque “black boxes\,” and this inherent opacity undermines the clinical trust required for real-world deployment. Furthermore\, while Large Language Models (LLMs) show revolutionary potential\, they too struggle to robustly model individualized disease progression from sparse data without specialized architectural integration. This leads to the central research question: How can an LLM-driven framework be systematically designed to extract clinically meaningful features and synthesize high-fidelity multi-modal data\, thereby overcoming the intertwined limitations of data incompleteness and black-box opacity? \nTo address these challenges\, this seminar proposes an LLM-driven spatio-temporal multi-modal framework. The overarching objective is to develop theoretically grounded methodologies that leverage LLMs to robustly distill raw patient speech into structured Cognitive-Linguistic (CL) atoms and interpretable linguistic markers. Concurrently\, the framework integrates qualitative medical knowledge and synthesizes rich\, realistic training samples to effectively enrich decision boundaries in data-deficient environments. This research significantly advances AI for Social Good by providing a scalable\, low-cost methodology for early dementia screening that reduces the reliance on invasive and expensive traditional diagnostics. \nSpeaker\nMr. 
 Tingyu MO\nDepartment of Electrical and Computer Engineering\nThe University of Hong Kong \nBiography of the Speaker\nMr. Tingyu MO is a Ph.D. candidate with the Advanced Well-being and Society Research Platform (AI-WiSe) at The University of Hong Kong\, under the supervision of Prof. Victor O.K. Li\, Prof. Jacqueline C.K. Lam\, and Prof. Yunhe Hou. He received his B.S. degree in Intelligence Science and Technology from the University of Science and Technology Beijing in 2021\, and his M.Eng. degree in Electronic and Information Engineering from Beihang University. His research interests include AI for Social Good\, with a specific focus on Alzheimer’s diagnosis and prognosis. \nOrganiser\nProf. Victor O.K. Li\, Prof. Jacqueline C.K. Lam\, Prof. Yunhe Hou\nDepartment of Electrical and Computer Engineering\, The University of Hong Kong \nAll are welcome.
URL:https://ece.hku.hk/events/20260324/
LOCATION:Room CB-603\, 6/F\, Chow Yei Ching Building\, The University of Hong Kong
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
END:VCALENDAR