- research-article, July 2024
RTG-SLAM: Real-time 3D Reconstruction at Scale using Gaussian Splatting
SIGGRAPH '24: ACM SIGGRAPH 2024 Conference Papers, Article No.: 30, Pages 1–11. https://doi.org/10.1145/3641519.3657455
We present Real-time Gaussian SLAM (RTG-SLAM), a real-time 3D reconstruction system with an RGBD camera for large-scale environments using Gaussian splatting. The system features a compact Gaussian representation and a highly efficient on-the-fly ...
- demonstration, June 2024
Using Depth to Enhance Video-centric Applications
IMX '24: Proceedings of the 2024 ACM International Conference on Interactive Media Experiences, Pages 439–442. https://doi.org/10.1145/3639701.3661088
Acquiring depth data has become easily achievable with advancements in depth sensing and depth estimation technologies. As a result, obtaining a depth stream to describe the topology of a corresponding video stream has been considerably simplified. ...
- review-article, May 2024
Autonomous driving system: A comprehensive survey
- Jingyuan Zhao,
- Wenyi Zhao,
- Bo Deng,
- Zhenghong Wang,
- Feng Zhang,
- Wenxiang Zheng,
- Wanke Cao,
- Jinrui Nan,
- Yubo Lian,
- Andrew F. Burke
Expert Systems with Applications: An International Journal (EXWA), Volume 242, Issue C. https://doi.org/10.1016/j.eswa.2023.122836
Abstract: Automation is increasingly at the forefront of transportation research, with the potential to bring fully autonomous vehicles to our roads in the coming years. This comprehensive survey provides a holistic look at the essential components and ...
- research-article, July 2024
A guided-based approach for deepfake detection: RGB-depth integration via features fusion
Pattern Recognition Letters (PTRL), Volume 181, Issue C, Pages 99–105. https://doi.org/10.1016/j.patrec.2024.03.025
Abstract: Deepfake technology paves the way for a new generation of super-realistic artificial content. While this opens the door to extraordinary new applications, the malicious use of deepfakes allows for far more realistic disinformation attacks than ...
Highlights:
- Integrating depth and RGB improves the accuracy and robustness of deepfake detectors.
- Late fusion is the best fusion strategy for integrating RGB and depth features.
- We guide the integration of depth information via a self-...
- article, May 2024
RGBD Synergetic Model for Image Enhancement in Animation Advertisements
International Journal of Intelligent Information Technologies (IJIIT-IGI), Volume 20, Issue 1, Pages 1–17. https://doi.org/10.4018/IJIIT.342478
This paper proposes a depth image symbiosis model to solve the problem of insufficient depth image quality in animated advertising. The model uses image surface information and image edge cues as the main guidance information to obtain image symbiosis ...
- research-article, June 2023
"You AR' right in front of me": RGBD-based capture and rendering for remote training
MMSys '23: Proceedings of the 14th ACM Multimedia Systems Conference, Pages 307–311. https://doi.org/10.1145/3587819.3593936
Immersive technologies such as virtual reality have enabled novel forms of education and training, where students can learn new skills in simulated environments. But some specialized training procedures, e.g. ESA-certified soldering, still involve real-...
- research-article, February 2024
Development of 3D Scanner Application with Stereo Camera for 3D Object Reconstruction
Procedia Computer Science (PROCS), Volume 227, Issue C, Pages 422–431. https://doi.org/10.1016/j.procs.2023.10.542
Abstract: Visual perception in RGBD (Red-Green-Blue and Depth) camera technology allows the camera to provide color image and depth structure information simultaneously from the surface of the scanned object. Based on this technology, data from an RGBD ...
- research-article, September 2022
SL-Net: self-learning and mutual attention-based distinguished window for RGBD complex salient object detection
Neural Computing and Applications (NCAA), Volume 35, Issue 1, Pages 595–609. https://doi.org/10.1007/s00521-022-07772-7
Abstract: Significant improvement has been achieved in salient object detection by multi-modal cross-complementary fusion of depth and RGB features. The multi-modal feature-extracting backbones of existing networks cannot extract complex RGB and color ...
- research-article, August 2022
3D real-time human reconstruction with a single RGBD camera
Applied Intelligence (KLU-APIN), Volume 53, Issue 8, Pages 8735–8745. https://doi.org/10.1007/s10489-022-03969-4
Abstract: 3D human reconstruction is an important technology connecting the real world and the virtual world, but most previous work needs expensive computing resources, making it difficult to use in real-time scenarios. We propose a lightweight human body ...
- research-article, August 2022
Virtual visits: life-size immersive communication
MMSys '22: Proceedings of the 13th ACM Multimedia Systems Conference, Pages 310–314. https://doi.org/10.1145/3524273.3532903
Elderly people in care homes face a great lack of contact with their families and loved ones. Social isolation and loneliness are detrimental to older people's health, cognitive function and quality of life. We previously ...
- research-article, March 2022
RGBD mapping solution for low-cost robot
Machine Vision and Applications (MVAA), Volume 33, Issue 2. https://doi.org/10.1007/s00138-022-01275-0
Abstract: This paper focuses on the proposal and verification of an RGBD mapping system for a small, low-cost mobile robot. The solution was required to be easy to replicate and easy to extend for further development on commonly available ...
- research-article, October 2020
Is Depth Really Necessary for Salient Object Detection?
MM '20: Proceedings of the 28th ACM International Conference on Multimedia, Pages 1745–1754. https://doi.org/10.1145/3394171.3413855
Salient object detection (SOD) is a crucial and preliminary task for many computer vision applications, and has made progress with deep CNNs. Most existing methods mainly rely on RGB information to distinguish the salient objects, which ...
- research-article, September 2020
The Blackbird UAV dataset
International Journal of Robotics Research (RBRS), Volume 39, Issue 10-11, Pages 1346–1364. https://doi.org/10.1177/0278364920908331
This article describes the Blackbird unmanned aerial vehicle (UAV) Dataset, a large-scale suite of sensor data and corresponding ground truth from a custom-built quadrotor platform equipped with an inertial measurement unit (IMU), rotor tachometers, and ...
- Article, August 2020
The Eighth Visual Object Tracking VOT2020 Challenge Results
- Matej Kristan,
- Aleš Leonardis,
- Jiří Matas,
- Michael Felsberg,
- Roman Pflugfelder,
- Joni-Kristian Kämäräinen,
- Martin Danelljan,
- Luka Čehovin Zajc,
- Alan Lukežič,
- Ondrej Drbohlav,
- Linbo He,
- Yushan Zhang,
- Song Yan,
- Jinyu Yang,
- Gustavo Fernández,
- Alexander Hauptmann,
- Alireza Memarmoghadam,
- Álvaro García-Martín,
- Andreas Robinson,
- Anton Varfolomieiev,
- Awet Haileslassie Gebrehiwot,
- Bedirhan Uzun,
- Bin Yan,
- Bing Li,
- Chen Qian,
- Chi-Yi Tsai,
- Christian Micheloni,
- Dong Wang,
- Fei Wang,
- Fei Xie,
- Felix Järemo Lawin,
- Fredrik Gustafsson,
- Gian Luca Foresti,
- Goutam Bhat,
- Guangqi Chen,
- Haibin Ling,
- Haitao Zhang,
- Hakan Cevikalp,
- Haojie Zhao,
- Haoran Bai,
- Hari Chandana Kuchibhotla,
- Hasan Saribas,
- Heng Fan,
- Hossein Ghanei-Yakhdan,
- Houqiang Li,
- Houwen Peng,
- Huchuan Lu,
- Hui Li,
- Javad Khaghani,
- Jesus Bescos,
- Jianhua Li,
- Jianlong Fu,
- Jiaqian Yu,
- Jingtao Xu,
- Josef Kittler,
- Jun Yin,
- Junhyun Lee,
- Kaicheng Yu,
- Kaiwen Liu,
- Kang Yang,
- Kenan Dai,
- Li Cheng,
- Li Zhang,
- Lijun Wang,
- Linyuan Wang,
- Luc Van Gool,
- Luca Bertinetto,
- Matteo Dunnhofer,
- Miao Cheng,
- Mohana Murali Dasari,
- Ning Wang,
- Ning Wang,
- Pengyu Zhang,
- Philip H. S. Torr,
- Qiang Wang,
- Radu Timofte,
- Rama Krishna Sai Gorthi,
- Seokeon Choi,
- Seyed Mojtaba Marvasti-Zadeh,
- Shaochuan Zhao,
- Shohreh Kasaei,
- Shoumeng Qiu,
- Shuhao Chen,
- Thomas B. Schön,
- Tianyang Xu,
- Wei Lu,
- Weiming Hu,
- Wengang Zhou,
- Xi Qiu,
- Xiao Ke,
- Xiao-Jun Wu,
- Xiaolin Zhang,
- Xiaoyun Yang,
- Xuefeng Zhu,
- Yingjie Jiang,
- Yingming Wang,
- Yiwei Chen,
- Yu Ye,
- Yuezhou Li,
- Yuncon Yao,
- Yunsung Lee,
- Yuzhang Gu,
- Zezhou Wang,
- Zhangyong Tang,
- Zhen-Hua Feng,
- Zhijun Mai,
- Zhipeng Zhang,
- Zhirong Wu,
- Ziang Ma
Abstract: The Visual Object Tracking challenge VOT2020 is the eighth annual tracker benchmarking activity organized by the VOT initiative. Results of 58 trackers are presented; many are state-of-the-art trackers published at major computer vision ...
- research-article, March 2020
Attention-guided RGBD saliency detection using appearance information
Abstract: Most of the deep convolutional neural network (CNN) based RGBD saliency models either treat the RGB and depth cues as having the same status or trust the depth information excessively. However, they ignore that a low-quality depth map is ...
- research-article, July 2019
RGBD Semantic Segmentation Based on Global Convolutional Network
ICRCA 2019: Proceedings of the 2019 4th International Conference on Robotics, Control and Automation, Pages 192–197. https://doi.org/10.1145/3351180.3351182
Convolutional neural networks have gradually come to dominate the field of image semantic segmentation and have achieved good results on 2D semantic segmentation tasks. However, CNN-based 2D semantic segmentation is still unsatisfactory ...
- article, May 2019
Self-organizing background subtraction using color and depth data
Multimedia Tools and Applications (MTAA), Volume 78, Issue 9, Pages 11927–11948. https://doi.org/10.1007/s11042-018-6741-7
Background subtraction from color and depth data is a fundamental task for video surveillance applications that use data acquired by RGBD sensors. We present a method that adopts a self-organizing neural background model previously adopted for RGB videos ...
- Article, November 2018
Co-saliency Detection for RGBD Images Based on Multi-constraint Superpixels Matching and Co-cellular Automata
Abstract: Co-saliency detection aims at extracting the common salient regions from an image group containing two or more relevant images. It is a newly emerging topic in the computer vision community. Different from the existing co-saliency methods focusing on ...
- research-article, September 2018
Dense 3D Optical Flow Co-occurrence Matrices for Human Activity Recognition
iWOAR '18: Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction, Article No.: 16, Pages 1–8. https://doi.org/10.1145/3266157.3266220
In this paper, a new activity recognition technique is introduced based on gray-level co-occurrence matrices (GLCM) computed from a 3D dense optical flow of the input RGB and depth videos. These matrices are one of the earliest techniques used for image ...