BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系 - ECPv6.16.0//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20240506T150000
DTEND;TZID=Asia/Hong_Kong:20240506T160000
DTSTAMP:20260512T175220Z
CREATED:20240429T093019Z
LAST-MODIFIED:20250114T063840Z
UID:18470-1715007600-1715011200@ece.hku.hk
SUMMARY:RPG Seminar – On-the-fly communication-and-computing for distributed data analytics and edge intelligence
DESCRIPTION:Abstract:\nEnormous amounts of data are generated by billions of edge devices in mobile networks. Distributed data analytics can support a broad range of mobile applications\, from edge AI to IoT sensing. Enabling such analytics while improving their effectiveness has triggered a paradigm shift from the separate optimization of communication techniques and computation algorithms to a joint design.\nConventionally\, the wireless implementation of computation algorithms\, such as statistical data analytics and AI models\, has followed a one-shot approach. This approach first computes local results at devices using local data and then aggregates them at a server with communication-efficient techniques. However\, this implementation is confronted with issues such as limited on-device storage and computation capacities\, link interruption\, and coarse efficiency-accuracy trade-offs.\nIn this seminar\, I will introduce a novel framework of on-the-fly communication-and-computing (FlyCom2). FlyCom2 exploits streaming low-complexity computation and progressive transmission to realize demanding computation algorithms in a mobile network\, such as distributed data analytics and device-server fine-tuning of large language models (LLMs). I will elaborate on the distinct features and advantages of FlyCom2\, as well as the challenges in materializing it. Furthermore\, I will introduce two use cases explored in my studies on FlyCom2.\nSpeaker:\nMr. Xu CHEN\nDepartment of Electrical and Electronic Engineering\,\nThe University of Hong Kong\nBiography of the speaker:\nMr. Xu CHEN received the B.Eng. and M.Eng. degrees from Harbin Institute of Technology (HIT)\, Harbin\, China\, in 2018 and 2020\, respectively. He is currently pursuing the Ph.D. degree with the Department of Electrical and Electronic Engineering\, The University of Hong Kong\, Hong Kong. His research interests include MIMO communications\, distributed computing\, and integrated sensing and edge AI.\nOrganizer:\nProf. Kaibin HUANG\nAll are welcome.
URL:https://ece.hku.hk/events/20240506-2/
LOCATION:Online via Zoom
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20240506T160000
DTEND;TZID=Asia/Hong_Kong:20240506T170000
DTSTAMP:20260512T175220Z
CREATED:20240429T092539Z
LAST-MODIFIED:20250114T063806Z
UID:18469-1715011200-1715014800@ece.hku.hk
SUMMARY:RPG Seminar – Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
DESCRIPTION:Abstract:\nAlthough considerable progress has been made in neural network quantization for efficient inference\, existing methods are not scalable to heterogeneous devices\, as one dedicated model needs to be trained\, transmitted\, and stored for each specific hardware setting\, incurring considerable costs in model training and maintenance. In this seminar\, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models in a single one. It represents weights as a group of bits (i.e.\, vertical layers) organized from the most significant bit (also called the basic layer) to less significant bits (i.e.\, enhance layers). Hence\, a neural network with arbitrary quantization precision can be obtained by adding the corresponding enhance layers to the basic layer. However\, we empirically find that models obtained with existing quantization methods suffer severe performance degradation when adapted to the vertical-layered weight representation. To this end\, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism\, with multi-objective optimization employed to train the shared source model weights so that they can be updated simultaneously\, accounting for the performance of all networks. After the model is trained\, to construct a vertical-layered network\, the lowest bit-width quantized weights become the basic layer\, and every bit dropped along the downsampling process acts as an enhance layer. Experiments show that the proposed vertical-layered representation and the developed once QAT scheme are effective in embodying multiple quantized networks in a single one while allowing one-time training\, delivering performance comparable to that of quantized models tailored to any specific bit-width.\nSpeaker:\nMr. Hai WU\nDepartment of Electrical and Electronic Engineering\,\nThe University of Hong Kong\nBiography of the speaker:\nMr. Hai WU (Graduate Student Member\, IEEE) received the B.Eng. degree from the Department of Electronic and Electrical Engineering\, Southern University of Science and Technology\, China\, in 2020. He is currently working toward the Ph.D. degree with the Department of Electrical and Electronic Engineering\, The University of Hong Kong\, Hong Kong.\nOrganizer:\nProf. Kaibin HUANG\nAll are welcome.
URL:https://ece.hku.hk/events/20240506-1/
LOCATION:Online via Zoom
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
END:VCALENDAR