BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE)//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE)
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20240430T140000
DTEND;TZID=Asia/Hong_Kong:20240430T150000
DTSTAMP:20260512T175356Z
CREATED:20240411T072951Z
LAST-MODIFIED:20250114T064121Z
UID:18266-1714485600-1714489200@ece.hku.hk
SUMMARY:RPG Seminar – Communication-Efficient Joint Signal Compression and Activity Detection in Cell-Free Massive MIMO
DESCRIPTION:Speaker:\nMr. Qingfeng LIN\nDepartment of Electrical and Electronic Engineering\,\nThe University of Hong Kong\nAbstract:\nA great deal of effort has recently been devoted to device activity detection in massive machine-type communications. This seminar addresses a practical issue: communication-efficient joint signal compression and activity detection in cell-free massive MIMO with capacity-limited fronthauls. To this end\, we propose a novel deep learning framework that jointly optimizes the compression and quantization modules at the access points together with the decompression and detection modules at the central processing unit. Specifically\, deep unfolding is leveraged to design the detection module so that it inherits the domain knowledge of the underlying optimization algorithm\, while the other modules are constructed from generic layers to increase learning capability. A joint training strategy is proposed to optimize all the modules in an end-to-end manner. Numerical results demonstrate that the proposed end-to-end learning framework outperforms classical optimization methods.\nBiography of the speaker:\nMr. Qingfeng LIN received the B.Eng. degree in communication engineering and the M.Eng. degree in information and communication engineering from the Harbin Institute of Technology in 2018 and 2020\, respectively. He is currently pursuing the Ph.D. degree with the Department of Electrical and Electronic Engineering\, The University of Hong Kong\, Hong Kong.\nOrganizer: Prof. Yik-Chung Wu\nAll are welcome.
URL:https://ece.hku.hk/events/20240430-2/
LOCATION:Online via Zoom
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20240430T140000
DTEND;TZID=Asia/Hong_Kong:20240430T150000
DTSTAMP:20260512T175356Z
CREATED:20240416T065435Z
LAST-MODIFIED:20250114T064217Z
UID:18274-1714485600-1714489200@ece.hku.hk
SUMMARY:RPG Seminar – Towards Parameter-free Ultrasound Localization Microscopy by Vision Transformer
DESCRIPTION:Speaker:\nMr. Renxian WANG\nDepartment of Electrical and Electronic Engineering\,\nThe University of Hong Kong\nAbstract:\nUltrasound localization microscopy (ULM) has emerged as an unprecedented noninvasive microvascular imaging technique that breaks the acoustic diffraction limit. However\, the current ULM workflow relies heavily on prior knowledge\, including the impulse response and empirical parameters (e.g.\, the number of microbubbles (MBs) per frame\, M)\, or on training-test dataset consistency in deep learning (DL)-based studies. In this seminar\, a general ULM pipeline that is free from such priors will be presented. Specifically\, a channel-attention vision transformer (ViT) model was trained with a progressive learning strategy to simultaneously distill microbubble signals and suppress speckle from a single frame\, without estimating the impulse response or M. Training on progressively larger patch sizes enables the model to learn global information. Ample synthetic ultrasound data were generated with the k-Wave toolbox to provide diverse MB patterns\, overcoming the scarcity of labeled data. The ViT output was further processed by a standard radial-symmetry method for sub-pixel localization. The method performed well on public datasets unseen by the model: one in silico flow dataset with ground truth and four in vivo datasets (mouse tumor\, rat brain\, rat brain bolus\, and rat kidney)\, achieving high precision and accuracy on the in silico dataset and recovering more vessels on the diverse in vivo datasets while preserving comparable resolution. The proposed ViT-based model\, seamlessly integrated with state-of-the-art downstream ULM steps\, improved overall ULM performance with no priors.\nBiography of the speaker:\nMr. Renxian WANG received the B.S. degree in Materials Physics from Northwestern Polytechnical University in 2019 and the M.Phil. degree from the Department of Physics\, The Chinese University of Hong Kong\, in 2021. He is currently pursuing the Ph.D. degree in the Department of Electrical and Electronic Engineering at The University of Hong Kong\, Hong Kong.\nAll are welcome.
URL:https://ece.hku.hk/events/20240430-1/
LOCATION:Room CB-603\, 6/F\, Chow Yei Ching Building\, The University of Hong Kong
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Asia/Hong_Kong:20240430T160000
DTEND;TZID=Asia/Hong_Kong:20240430T170000
DTSTAMP:20260512T175356Z
CREATED:20240422T075807Z
LAST-MODIFIED:20250114T064040Z
UID:18359-1714492800-1714496400@ece.hku.hk
SUMMARY:Towards Human-enabled Intelligent Robots: Perception\, Imitation and Morphology
DESCRIPTION:Meeting ID: 972 6774 1607\nAbstract:\nThe robotics industry has manufactured many successful robots that are deployed across domains and play a significant role in the modern economy. How can we efficiently build\, train\, and deploy different robots for diverse tasks\, at scale\, with improved cost and operational safety? I argue that efficiently using the human intelligence embedded in humans’ daily activities is the key\, and in this talk I will introduce my research toward this goal.\nI will first introduce my research on extracting useful state information about humans and objects via visual perception\, focusing on efficient training-data collection and annotation that best utilize human capability. I will then introduce my research on human-to-robot imitation\, specifically a new methodology that leverages continuous transformation of robot embodiments to co-develop robot hardware and skills\, gradually transforming a human agent into a robot agent and transferring human skills along the way. As an application\, I show how this methodology can be used to efficiently control\, design\, and optimize robots with new morphologies while drawing on human experience. I conclude with a discussion of my future research plans for improving human-enabled\, safe\, and low-cost robot systems\, as well as their broader impact on science\, engineering\, and society.\nSpeaker:\nDr. Xingyu LIU\nPostdoctoral Associate\,\nCarnegie Mellon University (CMU)\nBiography of the Speaker:\nDr. Xingyu LIU is currently a Postdoctoral Associate at Carnegie Mellon University (CMU)\, where he works with Professor Ding Zhao in the CMU SafeAI Lab. He received his Ph.D. degree from Stanford University\, where he was advised by Professor Jeannette Bohg. During his Ph.D.\, he spent time in research labs including Google Brain Robotics and Adobe Research. Prior to his Ph.D.\, he received an M.S. degree from Stanford University and a B.Eng. degree from Tsinghua University. His research interests lie at the intersection of robotics\, machine learning\, and computer vision\, and he regularly reviews for conferences such as RSS\, NeurIPS\, and CVPR. His work has been recognized as a Best Paper Award finalist at CVPR 2022 and a Best Demo Award finalist at RoboSoft 2024\, has received multiple (long) oral presentation honors at top AI conferences\, and has been covered by media outlets including Scientific American\, ACM Tech News\, and O’Reilly.\nOrganizer:\nProf. Kaibin HUANG\nAll are welcome! We look forward to seeing you!
URL:https://ece.hku.hk/events/20240430-3/
LOCATION:Online via Zoom
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/04/1280.jpg
END:VEVENT
END:VCALENDAR