Search Results (1,623)

Search Parameters:
Keywords = 3D LiDAR

17 pages, 3301 KiB  
Article
Stereo and LiDAR Loosely Coupled SLAM Constrained Ground Detection
by Tian Sun, Lei Cheng, Ting Zhang, Xiaoping Yuan, Yanzheng Zhao and Yong Liu
Sensors 2024, 24(21), 6828; https://doi.org/10.3390/s24216828 - 24 Oct 2024
Abstract
In many robotic applications, creating a map is crucial, and 3D maps provide a means of estimating the positions of other objects and obstacles. Most previous research processes 3D point clouds through projection-based or voxel-based models, but both approaches have limitations. This paper proposes a hybrid localization and mapping method using stereo vision and LiDAR. Unlike traditional single-sensor systems, we construct a pose optimization model by matching ground information between LiDAR maps and visual images. We use stereo vision to extract ground information and fuse it with LiDAR tensor voting data to establish coplanarity constraints. Pose optimization is achieved through a graph-based optimization algorithm and a local window optimization method. The proposed method is evaluated on the KITTI dataset and compared against ORB-SLAM3, F-LOAM, LOAM, and LeGO-LOAM. Additionally, we generate 3D point cloud maps for the corresponding sequences and high-definition point cloud maps of the streets in sequence 00. The experimental results demonstrate significant improvements in trajectory accuracy and robustness, enabling the construction of clear, dense 3D maps.
(This article belongs to the Section Navigation and Positioning)
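The coplanarity constraint described above is, generically, a point-to-plane residual fed into the pose-graph optimizer. A minimal sketch, assuming the map's ground plane is stored as a unit normal n and offset d and that ground_pts are stereo-derived ground points in the sensor frame (illustrative names, not the authors' code):

import numpy as np

def coplanarity_cost(T, ground_pts, n, d):
    # Transform stereo ground points into the map frame with pose T (4x4),
    # then sum squared signed distances to the LiDAR map's ground plane.
    pts_h = np.hstack([ground_pts, np.ones((len(ground_pts), 1))])  # N x 4
    pts_map = (T @ pts_h.T).T[:, :3]
    residuals = pts_map @ n + d
    return float(residuals @ residuals)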

18 pages, 3251 KiB  
Article
Impacts of Digital Elevation Model Elevation Error on Terrain Gravity Field Calculations: A Case Study in the Wudalianchi Airborne Gravity Gradiometer Test Site, China
by Lehan Wang, Meng Yang, Zhiyong Huang, Wei Feng, Xingyuan Yan and Min Zhong
Remote Sens. 2024, 16(21), 3948; https://doi.org/10.3390/rs16213948 - 23 Oct 2024
Abstract
Accurate Digital Elevation Models (DEMs) are essential for precise terrain gravity field calculations, which are critical in gravity field modeling, airborne gravimeter and gradiometer calibration, and geophysical inversion. This study evaluates the accuracy of various satellite DEMs by comparing them with a LiDAR DEM at the Wudalianchi test site, a location requiring ultra-accurate terrain gravity fields. Major DEM error sources, particularly those related to vegetation, were identified and corrected using a least squares method that integrates canopy height, vegetation cover, NDVI, and airborne LiDAR DEM data. The impact of DEM vegetation errors on terrain gravity anomalies and gravity gradients was quantified using a partitioned adaptive gravity forward-modeling method at different measurement heights. The results indicate that the TanDEM-X DEM and AW3D30 DEM exhibit the highest vertical accuracy among the satellite DEMs evaluated in the Wudalianchi area. Vegetation significantly affects DEM accuracy, with vegetation-related errors contributing approximately 0.17 mGal (RMS) to surface gravity anomalies. This effect is more pronounced in densely vegetated and volcanic regions. At 100 m above the surface and at an altitude of 1 km, vegetation height affects gravity anomalies by approximately 0.12 mGal and 0.07 mGal, respectively. Additionally, vegetation height affects the vertical gravity gradient at 100 m above the surface by approximately 4.20 E (RMS), with errors up to 48.84 E over vegetation-covered areas. The findings underscore the critical importance of using DEMs with vegetation errors removed for high-precision terrain gravity and gravity gradient modeling, particularly in applications such as airborne gravimeter and gradiometer calibration.
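As an illustration of the least-squares correction described above, one can regress the satellite-minus-LiDAR elevation difference on the vegetation predictors and subtract the fitted signal. A hedged sketch with illustrative variable names (the authors' exact model is not reproduced here):

import numpy as np

def fit_vegetation_error(dem, lidar_dem, canopy_h, veg_cover, ndvi):
    # Least-squares fit of the DEM elevation error against vegetation terms.
    dz = (dem - lidar_dem).ravel()
    A = np.column_stack([canopy_h.ravel(), veg_cover.ravel(),
                         ndvi.ravel(), np.ones(dz.size)])
    coef, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coef

def correct_dem(dem, canopy_h, veg_cover, ndvi, coef):
    # Remove the fitted vegetation signal from the satellite DEM.
    A = np.column_stack([canopy_h.ravel(), veg_cover.ravel(),
                         ndvi.ravel(), np.ones(dem.size)])
    return dem - (A @ coef).reshape(dem.shape)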

17 pages, 4394 KiB  
Article
Real-Time Semantic Segmentation of 3D LiDAR Point Clouds for Aircraft Engine Detection in Autonomous Jetbridge Operations
by Ihnsik Weon, Soongeul Lee and Juhan Yoo
Appl. Sci. 2024, 14(21), 9685; https://doi.org/10.3390/app14219685 - 23 Oct 2024
Abstract
This paper presents a study on aircraft engine identification using real-time 3D LiDAR point cloud segmentation technology, a key element for the development of automated docking systems in airport boarding facilities, known as jetbridges. To achieve this, 3D LiDAR sensors utilizing a spinning method were employed to gather surrounding environmental 3D point cloud data. The raw 3D environmental data were then filtered using the 3D RANSAC technique, excluding ground data and irrelevant apron areas. Segmentation was subsequently conducted based on the filtered data, focusing on aircraft sections. For the segmented aircraft engine parts, the centroid of the grouped data was computed to determine the 3D position of the aircraft engine. Additionally, PointNet was applied to identify aircraft engines from the segmented data. Dynamic tests were conducted in various weather and environmental conditions, evaluating the detection performance across different jetbridge movement speeds and object-to-object distances. The study achieved a mean intersection over union (mIoU) of 81.25% in detecting aircraft engines, despite experiencing challenging conditions such as low-frequency vibrations and changes in the field of view during jetbridge maneuvers. This research provides a strong foundation for enhancing the robustness of jetbridge autonomous docking systems by reducing the sensor noise and distortion in real-time applications. Our future research will focus on optimizing sensor configurations, especially in environments where sea fog, snow, and rain are frequent, by combining RGB image data with 3D LiDAR information. The ultimate goal is to further improve the system’s reliability and efficiency, not only in jetbridge operations but also in broader autonomous vehicle and robotics applications, where precision and reliability are critical. The methodologies and findings of this study hold the potential to significantly advance the development of autonomous technologies across various industrial sectors.
(This article belongs to the Section Mechanical Engineering)
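The ground-filtering step can be prototyped with an off-the-shelf RANSAC plane fit. A minimal Open3D sketch (the file name and thresholds are placeholders; the paper's own 3D RANSAC stage may differ):

import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.pcd")  # hypothetical input scan
plane_model, inliers = pcd.segment_plane(distance_threshold=0.05,
                                         ransac_n=3,
                                         num_iterations=1000)
non_ground = pcd.select_by_index(inliers, invert=True)  # drop ground points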

26 pages, 28365 KiB  
Article
Three-Dimensional Geometric-Physical Modeling of an Environment with an In-House-Developed Multi-Sensor Robotic System
by Su Zhang, Minglang Yu, Haoyu Chen, Minchao Zhang, Kai Tan, Xufeng Chen, Haipeng Wang and Feng Xu
Remote Sens. 2024, 16(20), 3897; https://doi.org/10.3390/rs16203897 - 20 Oct 2024
Abstract
Environment 3D modeling is critical for the development of future intelligent unmanned systems. This paper proposes a multi-sensor robotic system for environmental geometric-physical modeling and the corresponding data processing methods. The system is primarily equipped with a millimeter-wave cascaded radar and a multispectral camera to acquire the electromagnetic characteristics and material categories of the target environment and simultaneously employs light detection and ranging (LiDAR) and an optical camera to achieve a three-dimensional spatial reconstruction of the environment. Specifically, the millimeter-wave radar sensor adopts a multiple input multiple output (MIMO) array and obtains 3D synthetic aperture radar images through 1D mechanical scanning perpendicular to the array, thereby capturing the electromagnetic properties of the environment. The multispectral camera, equipped with nine channels, provides rich spectral information for material identification and clustering. Additionally, LiDAR is used to obtain a 3D point cloud, combined with the RGB images captured by the optical camera, enabling the construction of a three-dimensional geometric model. By fusing the data from four sensors, a comprehensive geometric-physical model of the environment can be constructed. Experiments conducted in indoor environments demonstrated excellent spatial-geometric-physical reconstruction results. This system can play an important role in various applications, such as environment modeling and planning.
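One concrete fusion step implied above is projecting the LiDAR cloud into the optical camera so that each 3D point picks up an RGB value. A sketch assuming calibrated intrinsics K (3x3) and extrinsics T_cam_lidar (4x4), which are not given in the abstract:

import numpy as np

def colorize(points, image, K, T_cam_lidar):
    # Transform LiDAR points into the camera frame and project with K.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    front = cam[:, 2] > 0                      # keep points in front of camera
    uv = (K @ cam[front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)  # perspective divide
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return cam[front][ok], image[uv[ok, 1], uv[ok, 0]]  # xyz + rgb pairs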

12 pages, 6298 KiB  
Article
A CMOS Optoelectronic Transimpedance Amplifier Using Concurrent Automatic Gain Control for LiDAR Sensors
by Yeojin Chon, Shinhae Choi and Sung-Min Park
Photonics 2024, 11(10), 974; https://doi.org/10.3390/photonics11100974 - 17 Oct 2024
Abstract
This paper presents a novel optoelectronic transimpedance amplifier (OTA) for short-range LiDAR sensors, implemented in 180 nm CMOS technology. It consists of a main transimpedance amplifier (m-TIA) with an on-chip P+/N-well/Deep N-well avalanche photodiode (P+/NW/DNW APD) and a replica TIA with another on-chip APD, not only to achieve circuit symmetry but also to provide a concurrent automatic gain control (AGC) function within a narrow single pulse-width duration. In particular, for concurrent AGC operation, 3-bit PMOS switches with series resistors are added in parallel with the passive feedback resistor in the m-TIA. The PMOS switches are then turned on or off according to the DC output voltage amplitude of the replica TIA. Post-layout simulations reveal that the OTA extends the dynamic range to 74.8 dB (i.e., 1 µApp to 5.5 mApp) and achieves a 67 dBΩ transimpedance gain, an 830 MHz bandwidth, a 16 pA/√Hz noise current spectral density, a −31 dBm optical sensitivity for a 10⁻¹² bit error rate, and 6 mW power dissipation from a single 1.8 V supply. The chip occupies a core area of 200 × 120 µm².
(This article belongs to the Section Optoelectronics and Optical Materials)
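As a quick consistency check on the quoted dynamic range: 20·log10(5.5 mApp / 1 µApp) = 20·log10(5500) ≈ 74.8 dB, matching the figure reported above.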

13 pages, 7469 KiB  
Article
An 8 × 8 CMOS Optoelectronic Readout Array of Short-Range LiDAR Sensors
by Yeojin Chon, Shinhae Choi, Jieun Joo and Sung-Min Park
Sensors 2024, 24(20), 6686; https://doi.org/10.3390/s24206686 - 17 Oct 2024
Abstract
This paper presents an 8 × 8 channel optoelectronic readout array (ORA) realized in the TSMC 180 nm 1P6M RF CMOS process for short-range light detection and ranging (LiDAR) sensor applications. We propose several circuit techniques in this work, including an amplitude-to-voltage (A2V) converter that reduces the notorious walk errors by intensity compensation and a time-to-voltage (T2V) converter that acquires the linear slope of the output signals by exploiting a charging circuit, thus extending the input dynamic range significantly from 5 μApp to 1.1 mApp, i.e., 46.8 dB. These results correspond to a maximum detection range of 8.2 m via the action of the A2V converter and a minimum detection range of 56 cm with the aid of the proposed T2V converter. Optical measurements utilizing an 850 nm laser diode confirm that the proposed 8 × 8 ORA with 64 on-chip avalanche photodiodes (APDs) can successfully recover the narrow 5 ns light pulses even at the shortest distance of 56 cm. Hence, this work provides a potential CMOS solution for low-cost, low-power, short-range LiDAR sensors.
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)
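For scale, the round-trip time of flight at the 56 cm minimum range is 2 × 0.56 m / (3 × 10⁸ m/s) ≈ 3.7 ns, shorter than the 5 ns pulse itself, which gives a sense of why recovering echoes at the shortest distances is demanding.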

23 pages, 7971 KiB  
Article
Three-Dimensional Outdoor Object Detection in Quadrupedal Robots for Surveillance Navigations
by Muhammad Hassan Tanveer, Zainab Fatima, Hira Mariam, Tanazzah Rehman and Razvan Cristian Voicu
Actuators 2024, 13(10), 422; https://doi.org/10.3390/act13100422 - 16 Oct 2024
Abstract
Quadrupedal robots are confronted with the intricate challenge of navigating dynamic environments fraught with diverse and unpredictable scenarios. Effectively identifying and responding to obstacles is paramount for ensuring safe and reliable navigation. This paper introduces a pioneering method for 3D object detection, termed viewpoint feature histograms, which leverages the established paradigm of 2D detection in projection. By translating 2D bounding boxes into 3D object proposals, this approach not only enables the reuse of existing 2D detectors but also significantly improves performance while requiring less computation, allowing for real-time detection. Our method is versatile, targeting both bird’s-eye-view objects (e.g., cars) and frontal-view objects (e.g., pedestrians), and accommodates various types of 2D object detectors. We showcase the efficacy of our approach through the integration of YOLO3D, utilizing LiDAR point clouds on the KITTI dataset, to achieve real-time efficiency aligned with the demands of autonomous vehicle navigation. Our model selection process, tailored to the specific needs of quadrupedal robots, emphasizes considerations such as model complexity, inference speed, and customization flexibility, achieving an accuracy of up to 99.93%. This research represents a significant advancement in enabling quadrupedal robots to navigate complex and dynamic environments with heightened precision and safety.
(This article belongs to the Section Actuators for Robotics)
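In its generic form, the 2D-to-3D lifting named above is a frustum construction: a 2D box plus the camera intrinsics bounds a 3D search volume in which the object proposal is refined. A hedged sketch of that generic step (not the authors' exact proposal mechanism; the depth limits are placeholders):

import numpy as np

def frustum_corners(box, K, d_near=1.0, d_far=60.0):
    # Back-project the four 2D box corners at two depths to get the
    # eight corners of the 3D frustum in the camera frame.
    u1, v1, u2, v2 = box
    Kinv = np.linalg.inv(K)
    corners = []
    for d in (d_near, d_far):
        for u, v in ((u1, v1), (u2, v1), (u2, v2), (u1, v2)):
            corners.append(Kinv @ np.array([u * d, v * d, d]))
    return np.array(corners)  # 8 x 3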

18 pages, 15258 KiB  
Article
Vibration Position Detection of Robot Arm Based on Feature Extraction of 3D Lidar
by Jinchao Hu, Xiaobin Xu, Chenfei Cao, Zhenghong Tian, Yuanshan Ma, Xiao Sun and Jian Yang
Sensors 2024, 24(20), 6584; https://doi.org/10.3390/s24206584 - 12 Oct 2024
Abstract
In the construction process, pouring and vibrating concrete on existing reinforced structures is a necessary step. This paper presents an automatic vibration position detection method based on feature extraction from 3D lidar point clouds. Compared with image-based methods, this approach is more robust to lighting interference and has lower computational cost. First, lidar scans are used to capture multiple frames of local steel bar point clouds. The clouds are then stitched using the Normal Distributions Transform (NDT) for preliminary matching and Iterative Closest Point (ICP) for fine matching. The Graph-Based Optimization (g2o) method further refines the precision of the 3D registration. Afterwards, the 3D point clouds are projected onto a 2D image. Finally, the locations of concrete vibration points and concrete casting points are discerned through point cloud and image processing techniques. Experiments demonstrate that the proposed automatic method outperforms the ICP and NDT algorithms, reducing the mean square error (MSE) by 11.5% and 11.37%, respectively. The maximum discrepancies in identifying concrete vibration points and concrete casting points are 0.059 ± 0.031 m and 0.089 ± 0.0493 m, respectively, fulfilling the requirements for concrete vibration detection.
(This article belongs to the Section Radar Sensors)
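The coarse-to-fine registration described above can be sketched with Open3D, which ships ICP but not NDT; here the NDT stage is assumed to have produced an initial guess T0 (e.g., via PCL), which ICP then refines:

import open3d as o3d

def fine_register(src, dst, T0, max_dist=0.2):
    # Point-to-point ICP starting from the coarse NDT alignment T0.
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, T0, est)
    return result.transformation  # refined 4x4 transform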

18 pages, 19487 KiB  
Article
Dense 3D Point Cloud Environmental Mapping Using Millimeter-Wave Radar
by Zhiyuan Zeng, Jie Wen, Jianan Luo, Gege Ding and Xiongfei Geng
Sensors 2024, 24(20), 6569; https://doi.org/10.3390/s24206569 - 12 Oct 2024
Abstract
To address the challenges of sparse point clouds in current MIMO millimeter-wave radar environmental mapping, this paper proposes a dense 3D millimeter-wave radar point cloud environmental mapping algorithm. In the preprocessing phase, a radar SLAM-based approach is introduced to construct local submaps, which replaces the direct use of radar point cloud frames. This not only reduces data dimensionality but also enables the proposed method to handle scenarios involving vehicle motion with varying speeds. Building on this, a 3D-RadarHR cross-modal learning network is proposed, which uses LiDAR as the target output to train the radar submaps, thereby generating a dense millimeter-wave radar point cloud map. Experimental results across multiple scenarios, including outdoor environments and underground tunnels, demonstrate that the proposed method can increase the point cloud density of millimeter-wave radar environmental maps by over 50 times, with a point cloud accuracy better than 0.1 m. Compared to existing algorithms, the proposed method achieves superior environmental map reconstruction performance while maintaining a real-time processing rate of 15 Hz.
(This article belongs to the Section Radar Sensors)

17 pages, 8191 KiB  
Technical Note
See the Unseen: Grid-Wise Drivable Area Detection Dataset and Network Using LiDAR
by Christofel Rio Goenawan, Dong-Hee Paek and Seung-Hyun Kong
Remote Sens. 2024, 16(20), 3777; https://doi.org/10.3390/rs16203777 - 11 Oct 2024
Abstract
Drivable Area (DA) detection is crucial for autonomous driving. Camera-based methods rely heavily on illumination conditions and often fail to capture accurate 3D information, while LiDAR-based methods offer accurate 3D data and are less susceptible to illumination conditions. However, existing LiDAR-based methods focus on point-wise detection and are thus prone to occlusion and limited by point cloud sparsity, which degrades performance in motion planning and localization. We propose Argoverse-grid, a grid-wise DA detection dataset derived from Argoverse 1, comprising over 20K frames with fine-grained BEV DA labels across various scenarios. We also introduce Grid-DATrNet, the first grid-wise DA detection model utilizing global attention through transformers. Our experiments demonstrate the superiority of Grid-DATrNet over various methods, including both LiDAR- and camera-based approaches, in detecting grid-wise DA on the proposed Argoverse-grid dataset. Grid-DATrNet achieves state-of-the-art results with an accuracy of 93.28% and an F1-score of 0.8328. We show that Grid-DATrNet can detect grids even in occluded and unmeasured areas by leveraging contextual and semantic information through global attention, unlike CNN-based DA detection methods. The preprocessing code for Argoverse-grid, experiment code, Grid-DATrNet implementation, and result visualization code are available in the AVE Laboratory official GitHub repository.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)

12 pages, 1744 KiB  
Article
DGRO: Doppler Velocity and Gyroscope-Aided Radar Odometry
by Chao Guo, Bangguo Wei, Bin Lan, Lunfei Liang and Houde Liu
Sensors 2024, 24(20), 6559; https://doi.org/10.3390/s24206559 - 11 Oct 2024
Abstract
A stable and robust odometry system is essential for autonomous robot navigation. The 4D millimeter-wave radar, known for its resilience in harsh weather conditions, has attracted considerable attention. As the latest generation of FMCW radar, 4D millimeter-wave radar provides point clouds with both position and Doppler velocity information. However, the increased uncertainty and noise in 4D radar point clouds pose challenges that prevent the direct application of LiDAR-based SLAM algorithms. To address this, we propose a SLAM framework that fuses 4D radar data with gyroscope readings using graph optimization techniques. Initially, Doppler velocity is employed to estimate the radar’s ego velocity, with dynamic points being removed accordingly. Building on this, we introduce a pre-integration factor that combines ego-velocity and gyroscope data. Additionally, leveraging the stable RCS characteristics of radar, we design a corresponding point selection method based on normal direction and propose a scan-to-submap point cloud registration technique weighted by RCS intensity. Finally, we validate the reliability and localization accuracy of our framework using both our own dataset and the NTU dataset. Experimental results show that the proposed DGRO system outperforms traditional 4D radar odometry methods, especially in environments with slow speeds and fewer dynamic objects.
(This article belongs to the Section Radar Sensors)
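The ego-velocity step has a standard closed form: for static scatterers, the measured Doppler equals the negative projection of the ego velocity onto each point's unit direction, so a least-squares solve recovers the velocity, and large residuals flag dynamic points. A sketch under that convention (the threshold is a placeholder; the paper's estimator may differ):

import numpy as np

def ego_velocity(points, doppler, thresh=0.5):
    # Solve doppler ≈ -u · v_ego in the least-squares sense.
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    v, *_ = np.linalg.lstsq(-dirs, doppler, rcond=None)
    residual = np.abs(dirs @ v + doppler)
    return v, residual < thresh  # ego velocity, static-point mask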

16 pages, 20799 KiB  
Article
Path Tracing-Inspired Modeling of Non-Line-of-Sight SPAD Data
by Stirling Scholes and Jonathan Leach
Sensors 2024, 24(20), 6522; https://doi.org/10.3390/s24206522 - 10 Oct 2024
Abstract
Non-Line-of-Sight (NLOS) imaging has gained attention for its ability to detect and reconstruct objects beyond the direct line of sight using scattered light, with applications in surveillance and autonomous navigation. This paper presents a versatile framework for modeling the temporal distribution of photon detections in direct Time-of-Flight (dToF) Lidar NLOS systems. Our approach accurately accounts for key factors such as material reflectivity, object distance, and occlusion by utilizing a proof-of-principle simulation realized with the Unreal Engine. By generating likelihood distributions for photon detections over time, we propose a mechanism for the simulation of NLOS imaging data, facilitating the optimization of NLOS systems and the development of novel reconstruction algorithms. The framework allows for the analysis of individual components of photon return distributions, yielding results consistent with prior experimental data and providing insights into the effects of extended surfaces and multi-path scattering. We introduce an optimized secondary scattering approach that captures critical multi-path information with reduced computational cost. This work provides a robust tool for the design and improvement of dToF SPAD Lidar-based NLOS imaging systems.
(This article belongs to the Section Sensing and Imaging)
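A toy version of the photon-timing model described above: signal photons arrive in a jittered peak at the round-trip time while background counts are uniform, and each histogram bin is Poisson-sampled. The rates, bin width, and Gaussian instrument response are illustrative assumptions, not the paper's parameters:

import numpy as np

C = 3e8  # speed of light, m/s

def dtof_histogram(distance, n_bins=2000, bin_w=50e-12,
                   signal=200.0, background=2.0, jitter=100e-12):
    # Expected counts per bin: uniform background plus a Gaussian peak
    # centered on the round-trip time, then a Poisson draw per bin.
    t = (np.arange(n_bins) + 0.5) * bin_w
    t0 = 2 * distance / C
    peak = np.exp(-0.5 * ((t - t0) / jitter) ** 2)
    rate = background + signal * peak / peak.sum()
    return np.random.poisson(rate)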

22 pages, 7672 KiB  
Article
ALS-Based, Automated, Single-Tree 3D Reconstruction and Parameter Extraction Modeling
by Hong Wang, Dan Li, Jiaqi Duan and Peng Sun
Forests 2024, 15(10), 1776; https://doi.org/10.3390/f15101776 - 9 Oct 2024
Abstract
The 3D reconstruction of point cloud trees and the acquisition of stand factors are key to supporting forestry regulation and urban planning. However, the two are usually independent modules in existing studies. In this work, we extended the AdTree method for 3D modeling of trees by adding a quantitative analysis capability to acquire stand factors. We used airborne LiDAR (ALS) data from an unmanned aircraft as the raw data for this study. After denoising the data and segmenting the single trees, we obtained the single-tree samples needed for this study and produced our own single-tree sample dataset. The scanned tree point cloud was reconstructed in three dimensions in terms of geometry and topology, and important stand parameters in forestry were extracted. This improvement in the quantification of model parameters significantly improves the utility of the original point cloud tree reconstruction algorithm and increases its capacity for quantitative analysis. The tree parameters obtained by this improved model were validated on 82 camphor pine trees sampled from the Northeast Forestry University forest. In a controlled experiment against the same field-measured parameters, the root mean square errors (RMSEs) and coefficients of determination (R²) were 4.1 cm and 0.63 for diameter at breast height (DBH), 0.61 m and 0.74 for crown width (CW), 0.55 m and 0.85 for tree height (TH), and 1.02 m and 0.88 for crown base height (CBH). The canopy volume extracted with the alpha-shape method matches the original point cloud most closely and is best estimated when alpha = 0.3.
(This article belongs to the Special Issue Forest Parameter Detection and Modeling Using Remote Sensing Data)
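The accuracy figures quoted above are the usual RMSE and R² statistics; for reference, a minimal implementation over any measured/estimated parameter pair:

import numpy as np

def rmse_r2(measured, estimated):
    # Root mean square error and coefficient of determination.
    err = estimated - measured
    rmse = float(np.sqrt(np.mean(err ** 2)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((measured - measured.mean()) ** 2))
    return rmse, 1.0 - ss_res / ss_tot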

16 pages, 29393 KiB  
Article
Switchable Dual-Wavelength Fiber Laser with Narrow-Linewidth Output Based on Parity-Time Symmetry System and the Cascaded FBG
by Kaiwen Wang, Bin Yin, Chao Lv, Yanzhi Lv, Yiming Wang, Hao Liang, Qun Wang, Shiyang Wang, Fengjie Yu, Zhong Zhang, Ziwang Li and Songhua Wu
Photonics 2024, 11(10), 946; https://doi.org/10.3390/photonics11100946 - 8 Oct 2024
Abstract
In this paper, a dual-wavelength narrow-linewidth fiber laser based on parity-time (PT) symmetry theory is proposed and experimentally demonstrated. The PT-symmetric filter system consists of two optical couplers (OCs), four polarization controllers (PCs), a polarization beam splitter (PBS), and cascaded fiber Bragg gratings (FBGs), enabling stable switchable dual-wavelength output and single longitudinal-mode (SLM) operation. The realization of single-frequency oscillation requires precise tuning of the PCs to match gain, loss, and coupling coefficients to ensure that the PT-broken phase occurs. During single-wavelength operation at 1548.71 nm (λ1) over a 60-min period, power and wavelength fluctuations were observed to be 0.94 dB and 0.01 nm, respectively, while for the other wavelength at 1550.91 nm (λ2), fluctuations were measured at 0.76 dB and 0.01 nm. The linewidths of each wavelength were 1.01 kHz and 0.89 kHz, with a relative intensity noise (RIN) lower than −117 dB/Hz. Under dual-wavelength operation, the maximum wavelength fluctuations for λ1 and λ2 were 0.03 nm and 0.01 nm, respectively, with maximum power fluctuations of 3.23 dB and 2.38 dB. The SLM laser source is suitable for applications in long-distance fiber-optic sensing and coherent LiDAR detection.
(This article belongs to the Special Issue Single Frequency Fiber Lasers and Their Applications)

21 pages, 10278 KiB  
Article
Three-Dimensional Reconstruction of Zebra Crossings in Vehicle-Mounted LiDAR Point Clouds
by Zhenfeng Zhao, Shu Gan, Bo Xiao, Xinpeng Wang and Chong Liu
Remote Sens. 2024, 16(19), 3722; https://doi.org/10.3390/rs16193722 - 7 Oct 2024
Abstract
In the production of high-definition maps, it is necessary to achieve the three-dimensional instantiation of road furniture that is difficult to depict on traditional maps. The development of mobile laser measurement technology provides a new means for acquiring road furniture data. To address the issue of traffic marking extraction accuracy in practical production, which is affected by degradation, occlusion, and non-standard variations, this paper proposes a 3D reconstruction method based on energy functions and template matching, using zebra crossings in vehicle-mounted LiDAR point clouds as an example. First, regions of interest (RoIs) containing zebra crossings are obtained through manual selection. Candidate point sets are then obtained at fixed distances, and their neighborhood intensity features are calculated to determine the number of zebra stripes using non-maximum suppression. Next, the slice intensity feature of each zebra stripe is calculated, followed by outlier filtering to determine the optimized length. Finally, a matching template is selected, and an energy function composed of the average intensity of the point cloud within the template, the intensity information entropy, and the intensity gradient at the template boundary is constructed. The 3D reconstruction result is obtained by solving the energy function, performing mode statistics, and normalization. This method enables the complete 3D reconstruction of zebra stripes within the RoI, maintaining an average planar corner accuracy within 0.05 m and an elevation accuracy within 0.02 m. The matching and reconstruction time does not exceed 1 s, and it has been applied in practical production.
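A loose sketch of an energy of the kind described above, combining the mean intensity inside the template, the intensity information entropy, and the intensity gradient at the template boundary with placeholder weights (the paper's exact terms, signs, and weights are not given in the abstract):

import numpy as np

def zebra_energy(intensity_in, intensity_boundary, w=(1.0, 1.0, 1.0)):
    # High mean intensity (paint), low entropy (uniform stripe), and a
    # strong boundary gradient all favor a good template placement.
    mean_i = intensity_in.mean()
    hist, _ = np.histogram(intensity_in, bins=32)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log(p))
    grad = np.abs(np.diff(intensity_boundary)).mean()
    return w[0] * mean_i - w[1] * entropy + w[2] * grad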
