BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系 - ECPv6.16.0//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
X-ORIGINAL-URL:https://ece.hku.hk
X-WR-CALDESC:Events for Department of Electrical and Computer Engineering (HKUECE) 電機與計算機工程系
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Hong_Kong
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:HKT
DTSTART:20220101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231213
DTEND;VALUE=DATE:20231214
DTSTAMP:20260513T073708Z
CREATED:20231206T082433Z
LAST-MODIFIED:20250114T081120Z
UID:17871-1702425600-1702511999@ece.hku.hk
SUMMARY:RPG Seminar – Unsupervised Domain Adaptation for 3D Object Detection from Point Cloud
DESCRIPTION:3D object detection aims to categorize and localize objects from 3D sensor data\, with many applications in autonomous driving\, robotics\, virtual reality\, etc. Recently\, this field has seen remarkable advances driven by deep neural networks and large-scale human-annotated datasets. However\, 3D detectors developed on one specific domain (i.e.\, the source domain) might not generalize well to novel testing domains (i.e.\, target domains) due to unavoidable domain shifts arising from different types of 3D sensors\, weather conditions\, geographical locations\, etc. Though collecting more training data from different domains could alleviate this problem\, doing so is often infeasible given the variety of real-world scenarios and the enormous cost of 3D annotation. Therefore\, approaches that effectively adapt a 3D detector trained on a labeled source domain to a new unlabeled target domain are in high demand in practical applications. This task is known as unsupervised domain adaptation (UDA) for 3D object detection. We propose a self-training pipeline\, ST3D\, to tackle this problem. \nZoom Link:\nhttps://hku.zoom.us/j/9122109333\nMeeting ID: 912 210 9333 \nBiography of the speaker:\n\nJihan Yang is a PhD candidate at EEE\, The University of Hong Kong\, advised by Dr. Xiaojuan Qi. Before that\, he obtained his Bachelor’s degree from Sun Yat-sen University\, supervised by Prof. Liang Lin and Prof. Guanbin Li. \nAll are welcome.
URL:https://ece.hku.hk/events/rpg-seminar-unsupervised-domain-adaptation-for-3d-object-detection-from-point-cloud/
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231213
DTEND;VALUE=DATE:20231214
DTSTAMP:20260513T073708Z
CREATED:20231206T083214Z
LAST-MODIFIED:20250114T075814Z
UID:17872-1702425600-1702511999@ece.hku.hk
SUMMARY:RPG Seminar – Language-driven Open-vocabulary 3D Scene Understanding
DESCRIPTION:Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough in 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However\, this success cannot be directly transferred to 3D scenarios. In this seminar\, we will introduce the challenges faced in open-vocabulary 3D scene understanding and present our proposed solution\, which harnesses powerful vision-language models to tackle this challenge. \nZoom Link:\nhttps://hku.zoom.us/j/3533656068\nMeeting ID: 353 365 6068 \nBiography of the speaker:\n\nRunyu Ding received her B.Eng. degree from Tsinghua University. Currently\, she is pursuing a Ph.D. degree at the University of Hong Kong. Her research interests focus on 3D vision and embodied intelligence. \nAll are welcome.
URL:https://ece.hku.hk/events/rpg-seminar-language-driven-open-vocabulary-3d-scene-understanding/
CATEGORIES:Seminar
ATTACH;FMTTYPE=image/jpeg:https://ece.hku.hk/wp-content/uploads/2024/11/rpg-seminar.jpg
END:VEVENT
END:VCALENDAR