BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE) - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE)
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20240101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20250409T110000
DTEND;TZID=Asia/Hong_Kong:20250409T120000
DTSTAMP:20260512T004410Z
CREATED:20250410T070435Z
LAST-MODIFIED:20250410T070813Z
UID:111088-1744196400-1744200000@ece.hku.hk
SUMMARY:Efficient Fine-Tuning and Compression of Large Language Models: Towards Low-bit and Ultra-Low Parameter Solutions
DESCRIPTION:Zoom Link: https://hku.zoom.us/j/5995074181?omn=91841905345 \nAbstract\nEfficient fine-tuning of Large Language Models (LLMs) is crucial due to their substantial memory and computational demands. This seminar discusses recent advances in techniques aimed at significantly reducing these costs\, enabling effective adaptation of large-scale models even on resource-constrained hardware. The talk will begin with an overview of current challenges and mainstream approaches to compressing and fine-tuning LLMs\, highlighting trade-offs between model size\, accuracy\, and efficiency. The speaker will then introduce novel approaches that enable fine-tuning at extremely low precision and in ultra-low parameter regimes\, significantly reducing memory requirements without compromising performance. Finally\, the discussion will cover recent progress and future directions for the efficient deployment of LLMs in real-world applications. \nSpeaker\nMr. Jiajun Zhou\nDepartment of Electrical and Electronic Engineering\nThe University of Hong Kong \nBiography of the Speaker\nJiajun Zhou is currently a Ph.D. student in the Department of Electrical and Electronic Engineering at the University of Hong Kong (HKU)\, supervised by Prof. Ngai Wong\, and a visiting scholar at the University of California\, Santa Barbara (UCSB). He received his Master’s degree in IC Design Engineering from the Hong Kong University of Science and Technology (HKUST) in 2019 and a Bachelor’s degree in Integrated Circuit Design and Integrated Systems from National Huaqiao University\, China\, in 2018. He previously worked as a Research Assistant at the Chinese University of Hong Kong (CUHK). His research primarily focuses on developing innovative frameworks for efficient training and inference of Large Language Models (LLMs)\, particularly through quantization\, low-bit optimization\, and tensor decomposition. He has published extensively in AI and hardware-acceleration venues\, including ACL\, NAACL\, IEEE FCCM\, and IEEE TCAD. \nOrganiser\nProf. Ngai Wong\nDepartment of Electrical and Electronic Engineering\, The University of Hong Kong \nAll are welcome!
URL:https://ece.hku.hk/events/20250409-1/
LOCATION:Online via Zoom
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
END:VCALENDAR