Search Results (175)

Search Parameters:
Keywords = unsupervised domain adaptation

25 pages, 5900 KiB  
Article
Progressive Unsupervised Domain Adaptation for Radio Frequency Signal Attribute Recognition across Communication Scenarios
by Jing Xiao, Hang Zhang, Zeqi Shao, Yikai Zheng and Wenrui Ding
Remote Sens. 2024, 16(19), 3696; https://doi.org/10.3390/rs16193696 - 4 Oct 2024
Abstract
As the development of low-altitude economies and aerial countermeasures continues, the safety of unmanned aerial vehicles becomes increasingly critical, making emitter identification in remote sensing practices more essential. Effective recognition of radio frequency (RF) signal attributes is a prerequisite for identifying emitters. However, due to diverse wireless communication environments, RF signals often face challenges from complex and time-varying wireless channel conditions. These challenges lead to difficulties in data collection and annotation, as well as disparities in data distribution across different communication scenarios. To address this issue, this paper proposes a progressive maximum similarity-based unsupervised domain adaptation (PMS-UDA) method for RF signal attribute recognition. First, we introduce a noise perturbation consistency optimization method to enhance the robustness of the PMS-UDA method under low signal-to-noise conditions. Subsequently, a progressive label alignment training method is proposed, combining sample-level maximum correlation with distribution-level maximum similarity optimization techniques to enhance the similarity of cross-domain features. Finally, a domain adversarial optimization method is employed to extract domain-independent features, reducing the impact of channel scenarios. The experimental results demonstrate that the PMS-UDA method achieves superior recognition performance in automatic modulation recognition and RF fingerprint identification tasks, as well as across both ground-to-ground and air-to-ground scenarios, compared to baseline methods. Full article
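The noise-perturbation consistency idea above can be sketched in a few lines; the moving-average `model` and the 1-D `signal` below are toy stand-ins (not the paper's network or RF data), assuming only that a robust feature extractor should respond similarly to a signal and its noisy copy:

```python
import random

def consistency_loss(model, signal, noise_std=0.1, seed=0):
    """MSE between model outputs on a clean signal and a noise-perturbed copy.

    A simplified stand-in for a noise-perturbation consistency term: the
    smaller this loss, the less the extractor's response changes under noise.
    """
    rng = random.Random(seed)
    perturbed = [x + rng.gauss(0.0, noise_std) for x in signal]
    clean_out = model(signal)
    noisy_out = model(perturbed)
    return sum((c - n) ** 2 for c, n in zip(clean_out, noisy_out)) / len(clean_out)

# Toy "feature extractor": a fixed moving-average filter.
def model(xs):
    return [(xs[i] + xs[i + 1]) / 2 for i in range(len(xs) - 1)]

loss = consistency_loss(model, [0.0, 1.0, 0.0, -1.0], noise_std=0.05)
```

Minimizing this term over the extractor's parameters is what pushes the learned features toward robustness at low signal-to-noise ratios.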

18 pages, 2584 KiB  
Article
Robust Remote Sensing Scene Interpretation Based on Unsupervised Domain Adaptation
by Linjuan Li, Haoxue Zhang, Gang Xie and Zhaoxiang Zhang
Electronics 2024, 13(18), 3709; https://doi.org/10.3390/electronics13183709 - 19 Sep 2024
Abstract
Deep learning models excel in interpreting the exponentially growing amounts of remote sensing data; however, they are susceptible to deception and spoofing by adversarial samples, posing catastrophic threats. The existing methods to combat adversarial samples have limited performance in robustness and efficiency, particularly in complex remote sensing scenarios. To tackle these challenges, an unsupervised domain adaptation algorithm is proposed for the accurate identification of clean images and adversarial samples by exploring a robust generative adversarial classification network that can harmonize the features between clean images and adversarial samples to minimize distribution discrepancies. Furthermore, linear polynomial loss as a replacement for cross-entropy loss is integrated to guide robust representation learning. Additionally, we leverage the fast gradient sign method (FGSM) and projected gradient descent (PGD) algorithms to generate adversarial samples with varying perturbation amplitudes to assess model robustness. A series of experiments was performed on the RSSCN7 dataset and SIRI-WHU dataset. Our experimental results illustrate that the proposed algorithm performs exceptionally well in classifying clean images while demonstrating robustness against adversarial perturbations. Full article
(This article belongs to the Section Artificial Intelligence)
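FGSM, which this abstract uses to generate adversarial samples, is simple enough to sketch without a deep-learning framework; the quadratic loss with an analytic gradient below stands in for a network's loss under autograd:

```python
def fgsm_attack(x, grad, epsilon):
    """Fast gradient sign method: one step of size epsilon along sign(grad),
    which maximally increases a first-order approximation of the loss."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Toy setting with an analytic gradient: L(x) = sum((x - t)^2), dL/dx = 2(x - t).
target = [1.0, -1.0, 0.5]
x = [0.0, 0.0, 0.0]
grad = [2 * (xi - ti) for xi, ti in zip(x, target)]
x_adv = fgsm_attack(x, grad, epsilon=0.1)
```

PGD, the abstract's second attack, simply iterates this step with a projection back into the epsilon-ball after each update.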

20 pages, 2849 KiB  
Article
Towards Discriminative Class-Aware Domain Alignment via Coding Rate Reduction for Unsupervised Adversarial Domain Adaptation
by Jiahua Wu and Yuchun Fang
Symmetry 2024, 16(9), 1216; https://doi.org/10.3390/sym16091216 - 16 Sep 2024
Abstract
Unsupervised domain adaptation (UDA) methods, based on adversarial learning, employ the means of implicit global and class-aware domain alignment to learn the symmetry between source and target domains and facilitate the transfer of knowledge from a labeled source domain to an unlabeled target domain. However, these methods still face misalignment and poor target generalization due to small inter-class domain discrepancy and large intra-class discrepancy of target features. To tackle these challenges, we introduce a novel adversarial learning-based UDA framework named Coding Rate Reduction Adversarial Domain Adaptation (CR2ADA) to better learn the symmetry between source and target domains. Integrating conditional domain adversarial networks with domain-specific batch normalization, CR2ADA learns robust domain-invariant features to implement global domain alignment. For discriminative class-aware domain alignment, we propose the global and local coding rate reduction methods in CR2ADA to maximize inter-class domain discrepancy and minimize intra-class discrepancy of target features. Additionally, CR2ADA combines minimum class confusion and mutual information to further regularize the diversity and discriminability of the learned features. The effectiveness of CR2ADA is demonstrated through experiments on four UDA datasets. The code can be obtained through email or GitHub. Full article
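The coding-rate quantity behind the global and local reduction terms is the usual rate-distortion estimate R(Z) = ½ log det(I + d/(nε²)·ZZᵀ); a minimal 2-D sketch (not the CR2ADA training loop) shows that spread-out features earn a higher rate than collapsed ones:

```python
import math

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T) for a d x n data matrix Z.
    Here d = 2, so the determinant has a closed form."""
    d, n = len(Z), len(Z[0])
    a = d / (n * eps * eps)
    # G = I + a * Z Z^T  (2x2)
    g = [[(1.0 if i == j else 0.0) + a * sum(Z[i][k] * Z[j][k] for k in range(n))
          for j in range(d)] for i in range(d)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return 0.5 * math.log(det)

# Spread-out features occupy more volume, hence a higher coding rate.
spread = [[1.0, -1.0, 0.0, 0.0], [0.0, 0.0, 1.0, -1.0]]
collapsed = [[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]
```

Maximizing the rate of the whole feature set while minimizing the summed rates of the per-class subsets is what drives large inter-class and small intra-class discrepancy.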

15 pages, 12772 KiB  
Article
Learning Unsupervised Cross-Domain Model for TIR Target Tracking
by Xiu Shu, Feng Huang, Zhaobing Qiu, Xinming Zhang and Di Yuan
Mathematics 2024, 12(18), 2882; https://doi.org/10.3390/math12182882 - 15 Sep 2024
Abstract
The limited availability of thermal infrared (TIR) training samples leads to suboptimal target representation by convolutional feature extraction networks, which adversely impacts the accuracy of TIR target tracking methods. To address this issue, we propose an unsupervised cross-domain model (UCDT) for TIR tracking. Our approach leverages labeled training samples from the RGB domain (source domain) to train a general feature extraction network. We then employ a cross-domain model to adapt this network for effective target feature extraction in the TIR domain (target domain). This cross-domain strategy addresses the challenge of limited TIR training samples effectively. Additionally, we utilize an unsupervised learning technique to generate pseudo-labels for unlabeled training samples in the source domain, which helps overcome the limitations imposed by the scarcity of annotated training data. Extensive experiments demonstrate that our UCDT tracking method outperforms existing tracking approaches on the PTB-TIR and LSOTB-TIR benchmarks. Full article
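Confidence-thresholded pseudo-labeling, the unsupervised technique this abstract relies on, can be illustrated directly; the threshold value below is an assumption, not the paper's setting:

```python
def pseudo_label(probs, threshold=0.9):
    """Keep only predictions whose top-class probability clears the threshold.

    Returns (sample_index, label) pairs for the confident samples; the rest
    stay unlabeled, which limits the noise fed back into training.
    """
    keep = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            keep.append((i, p.index(conf)))
    return keep

preds = [[0.95, 0.05], [0.55, 0.45], [0.02, 0.98]]
labels = pseudo_label(preds, threshold=0.9)
```

The threshold trades coverage against label noise: lowering it labels more samples but admits more mistakes.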

19 pages, 10886 KiB  
Article
Advancing Nighttime Object Detection through Image Enhancement and Domain Adaptation
by Chenyuan Zhang and Deokwoo Lee
Appl. Sci. 2024, 14(18), 8109; https://doi.org/10.3390/app14188109 - 10 Sep 2024
Abstract
Due to the lack of annotations for nighttime low-light images, object detection in low-light images has always been a challenging problem. Achieving high-precision results at night is also an issue. Additionally, we aim to use a single nighttime dataset to complete the knowledge distillation task while improving the detection accuracy of object detection models under nighttime low-light conditions and reducing the computational cost of the model, especially for small targets and objects contaminated by special nighttime lighting. This paper proposes a Nighttime Unsupervised Domain Adaptation Network (NUDN) based on knowledge distillation to address these issues. To improve the detection accuracy of nighttime images, high-confidence bounding box predictions from the teacher and region proposals from the student are first fused, allowing the teacher to perform better in subsequent training, thus generating a combination of high-confidence and low-confidence pseudo-labels. This combination of feature information is used to guide model training, enabling the model to extract feature information similar to that of source images in nighttime low-light images. Nighttime images and pseudo-labels undergo random size transformations before being used as input for the student, enhancing the model’s generalization across different scales. To address the scarcity of nighttime datasets, we propose a nighttime-specific augmentation pipeline called LightImg. This pipeline enhances nighttime features, transforming them into daytime features and reducing issues such as backlighting, uneven illumination, and dim nighttime light, enabling cross-domain research using existing nighttime datasets. Our experimental results show that NUDN can significantly improve nighttime low-light object detection accuracy on the SHIFT and ExDark datasets. We conduct extensive experiments and ablation studies to demonstrate the effectiveness and efficiency of our work. Full article
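A hedged sketch of the kind of teacher-student pseudo-label fusion described here: teacher boxes above a high threshold are kept outright, while mid-confidence boxes survive only if a student proposal overlaps them. The thresholds and the IoU criterion are illustrative assumptions, not NUDN's exact rule:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_pseudo_labels(teacher, proposals, hi=0.8, lo=0.3, min_iou=0.5):
    """Split teacher detections into confident and tentative pseudo-labels,
    using student proposals to rescue mid-confidence boxes."""
    confident, tentative = [], []
    for box, score in teacher:
        if score >= hi:
            confident.append(box)
        elif score >= lo and any(iou(box, p) >= min_iou for p in proposals):
            tentative.append(box)
    return confident, tentative

teacher = [((0, 0, 2, 2), 0.9), ((0, 0, 2, 2), 0.5), ((5, 5, 6, 6), 0.5)]
confident, tentative = fuse_pseudo_labels(teacher, proposals=[(0, 0, 2, 2)])
```

The resulting mix of hard (confident) and soft (tentative) labels is what guides the student during distillation.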

13 pages, 43293 KiB  
Article
Masked Style Transfer for Source-Coherent Image-to-Image Translation
by Filippo Botti, Tomaso Fontanini, Massimo Bertozzi and Andrea Prati
Appl. Sci. 2024, 14(17), 7876; https://doi.org/10.3390/app14177876 - 4 Sep 2024
Abstract
The goal of image-to-image translation (I2I) is to translate images from one domain to another while maintaining the content representations. A popular method for I2I translation involves the use of a reference image to guide the transformation process. However, most architectures fail to maintain the input’s main characteristics and produce images that are too similar to the reference during style transfer. In order to avoid this problem, we propose a novel architecture that is able to perform source-coherent translation between multiple domains. Our goal is to preserve the input details during I2I translation by weighting the style code obtained from the reference images before applying it to the source image. Therefore, we choose to mask the reference images in an unsupervised way before extracting the style from them. By doing so, the input characteristics are better maintained while performing the style transfer. As a result, we also increase the diversity in the generated images by extracting the style from the same reference. Additionally, adaptive normalization layers, which are commonly used to inject styles into a model, are substituted with an attention mechanism for the purpose of increasing the quality of the generated images. Several experiments are performed on the CelebA-HQ and AFHQ datasets in order to prove the efficacy of the proposed system. Quantitative results measured using the LPIPS and FID metrics demonstrate the superiority of the proposed architecture compared to the state-of-the-art methods. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

30 pages, 11567 KiB  
Article
Gini Coefficient-Based Feature Learning for Unsupervised Cross-Domain Classification with Compact Polarimetric SAR Data
by Xianyu Guo, Junjun Yin, Kun Li and Jian Yang
Agriculture 2024, 14(9), 1511; https://doi.org/10.3390/agriculture14091511 - 3 Sep 2024
Abstract
Remote sensing image classification usually needs many labeled samples so that the target nature can be fully described. For synthetic aperture radar (SAR) images, variations of the target scattering always happen to some extent due to the imaging geometry, weather conditions, and system parameters. Therefore, labeled samples in one image may not be suitable to represent the same target in other images. The domain distribution shift of different images reduces the reusability of the labeled samples. Thus, exploring cross-domain interpretation methods is of great potential for SAR images to improve the reuse rate of existing labels from historical images. In this study, an unsupervised cross-domain classification method is proposed that utilizes the Gini coefficient to rank the robust and stable polarimetric features in both the source and target domains (GRFST) such that an unsupervised domain adaptation (UDA) can be achieved. This method selects the optimal features from both the source and target domains to alleviate the domain distribution shift. Both fully polarimetric (FP) and compact polarimetric (CP) SAR features are explored for cross-domain terrain type classification. Specifically, the CP mode refers to the hybrid dual-pol mode with an arbitrary transmitting ellipse wave. This is the first attempt in the open literature to investigate the representing abilities of different CP modes for cross-domain terrain classification. Experiments are conducted from four aspects to demonstrate the performance of CP modes for cross-data, cross-scene, and cross-crop type classification. Results show that the GRFST-UDA method yields a classification accuracy of 2% to 12% higher than the traditional UDA methods. The degree of scene similarity has a certain impact on the accuracy of cross-domain crop classification. It was also found that when both the FP and circular CP SAR data are used, stable, promising results can be achieved. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
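The Gini coefficient at the core of GRFST is standard; the ranking criterion below (prefer features whose Gini shifts least between domains) is an assumed reading of "robust and stable", not the paper's exact formula:

```python
def gini(values):
    """Gini coefficient: mean absolute pairwise difference over twice the mean.
    0 means the feature is identical across samples; values near 1 mean it is
    concentrated in a few samples."""
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values) / (n * n)
    return mad / (2 * mean) if mean else 0.0

def rank_stable_features(src, tgt):
    """Order feature names by how little their Gini coefficient shifts between
    the source and target domains (a hypothetical stand-in for GRFST ranking)."""
    diff = {f: abs(gini(src[f]) - gini(tgt[f])) for f in src}
    return sorted(diff, key=diff.get)

src = {"a": [1.0, 1.0], "b": [1.0, 3.0]}
tgt = {"a": [2.0, 2.0], "b": [2.0, 2.0]}
```

Here feature "a" keeps the same Gini coefficient in both domains, so it ranks first as the more domain-stable feature.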

15 pages, 1114 KiB  
Article
Cross-Domain Object Detection through Consistent and Contrastive Teacher with Fourier Transform
by Longfei Jia, Xianlong Tian, Mengmeng Jing, Lin Zuo and Wen Li
Electronics 2024, 13(16), 3292; https://doi.org/10.3390/electronics13163292 - 19 Aug 2024
Abstract
The teacher–student framework has been employed in unsupervised domain adaptation, which transfers knowledge learned from a labeled source domain to an unlabeled target domain. However, this framework suffers from two serious challenges: the domain gap, causing performance degradation, and noisy teacher pseudo-labels, which tend to mislead students. In this paper, we propose a Consistent and Contrastive Teacher with Fourier Transform (CCTF) method to address these challenges for high-performance cross-domain object detection. To mitigate the negative impact of domain shifts, we use the Fourier transform to exchange the low-frequency components of the source and target domain images, replacing the source domain inputs with the transformed image, thereby reducing domain gaps. In addition, we encourage the localization and classification branches of the teacher to make consistent predictions to minimize the noise in the generated pseudo-labels. Finally, contrastive learning is employed to resist the impact of residual noise in pseudo-labels. After extensive experiments, we show that our method achieves the best performance. For example, our model outperforms previous methods by 3.0% on FoggyCityscapes. Full article
(This article belongs to the Special Issue Neuromorphic Computing: Devices, Chips, and Algorithm)
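The low-frequency exchange this abstract describes is easy to demonstrate in 1-D (real images would use a 2-D FFT): swapping only the lowest frequency bins transfers the target's coarse appearance while the source keeps its detail. The DFT here is a naive stdlib implementation for illustration only:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def swap_low_freq(src, tgt, radius=1):
    """Replace the lowest `radius` frequency bins of the source spectrum
    (plus their conjugate mirrors) with the target's, then invert: the source
    keeps its high-frequency detail but inherits the target's coarse content."""
    S, T = dft(src), dft(tgt)
    n = len(S)
    for k in list(range(radius)) + list(range(n - radius + 1, n)):
        S[k] = T[k]
    return idft(S)

out = swap_low_freq([1.0, 2.0, 3.0, 4.0], [10.0, 10.0, 10.0, 10.0], radius=1)
```

With `radius=1` only the DC bin is swapped, so the source signal simply adopts the target's mean level while its shape is preserved.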

19 pages, 400 KiB  
Review
Person Re-Identification in Special Scenes Based on Deep Learning: A Comprehensive Survey
by Yanbing Chen, Ke Wang, Hairong Ye, Lingbing Tao and Zhixin Tie
Mathematics 2024, 12(16), 2495; https://doi.org/10.3390/math12162495 - 13 Aug 2024
Abstract
Person re-identification (ReID) refers to the task of retrieving target persons from image libraries captured by various distinct cameras. Over the years, person ReID has yielded favorable recognition outcomes under typical visible light conditions, yet there remains considerable scope for enhancement in challenging conditions. The challenges and research gaps include the following: multi-modal data fusion, semi-supervised and unsupervised learning, domain adaptation, ReID in 3D space, fast ReID, decentralized learning, and end-to-end systems. The main problems to be solved, which are the occlusion problem, viewpoint problem, illumination problem, background problem, resolution problem, openness problem, etc., remain challenges. For the first time, this paper uses person ReID in special scenarios as a basis for classification to categorize and analyze the related research in recent years. Starting from the perspectives of person ReID methods and research directions, we explore the current research status in special scenarios. In addition, this work conducts a detailed experimental comparison of person ReID methods employing deep learning, encompassing both system development and comparative methodologies. In addition, we offer a prospective analysis of forthcoming research approaches in person ReID and address unresolved concerns within the field. Full article

18 pages, 2400 KiB  
Article
Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning
by Mingjing Yang, Zhicheng Wu, Hanyu Zheng, Liqin Huang, Wangbin Ding, Lin Pan and Lei Yin
Diagnostics 2024, 14(16), 1751; https://doi.org/10.3390/diagnostics14161751 - 12 Aug 2024
Abstract
Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross modality analysis. These methods typically utilize generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through the transformation and reconstruction of images, assuming the features between domains are well-aligned. However, this assumption falters with significant gaps between different medical image modalities, such as MRI and CT. These gaps hinder the effective training of segmentation networks with cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross pseudo supervised dual-stream segmentation sub-network. These components work together to bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network is designed for the bidirectional alignment of features between the source and target domains, incorporating a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the output of the two segmentation networks, assessing pseudo-distances between domains to improve the pseudo-label quality and thus enhancing the overall learning efficiency of the framework. This method’s success is demonstrated by notable advancements in segmentation precision across target domains for abdomen and brain tasks. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
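Cross pseudo supervision, the mechanism named in the title, reduces to each segmentation stream being trained on the other's hard labels; a per-position sketch, ignoring the paper's pseudo-distance weighting:

```python
import math

def cross_entropy(probs, label):
    """Negative log-probability of the given class."""
    return -math.log(probs[label])

def cps_loss(preds_a, preds_b):
    """Cross pseudo supervision: each stream learns from the other's hard
    pseudo-labels (per-position argmax), averaged over positions."""
    total = 0.0
    for pa, pb in zip(preds_a, preds_b):
        total += cross_entropy(pa, pb.index(max(pb)))  # A learns from B
        total += cross_entropy(pb, pa.index(max(pa)))  # B learns from A
    return total / (2 * len(preds_a))
```

When the two streams agree confidently, the loss is small; disagreement or low confidence raises it, pushing the outputs to harmonize.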

17 pages, 5686 KiB  
Article
A Comprehensive Study on Unsupervised Transfer Learning for Structural Health Monitoring of Bridges Using Joint Distribution Adaptation
by Laura Souza, Marcus Omori Yano, Samuel da Silva and Eloi Figueiredo
Infrastructures 2024, 9(8), 131; https://doi.org/10.3390/infrastructures9080131 - 8 Aug 2024
Abstract
Bridges are crucial transportation infrastructures with significant socioeconomic impacts, necessitating continuous assessment to ensure safe operation. However, the vast number of bridges and the technical and financial challenges of maintaining permanent monitoring systems in every single bridge make the implementation of structural health monitoring (SHM) difficult for authorities. Unsupervised transfer learning, which reuses experimental or numerical data from well-known bridges to detect damage on other bridges with limited monitoring response data, has emerged as a promising solution. This solution can reduce SHM costs while ensuring the safety of bridges with similar characteristics. This paper investigates the limitations, challenges, and opportunities of unsupervised transfer learning via domain adaptation across datasets from various prestressed concrete bridges under distinct operational and environmental conditions. A feature-based transfer learning approach is proposed, where the joint distribution adaptation method is used for domain adaptation. As the main advantage, this study leverages the generalization of SHM for damage detection in prestressed concrete bridges with limited long-term monitoring data. Full article
(This article belongs to the Special Issue Bridge Modeling, Monitoring, Management and Beyond)
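Joint distribution adaptation aligns marginal and class-conditional feature distributions. The marginal term, with a linear kernel, is just the squared distance between domain means; the conditional term repeats the same computation per class using pseudo-labels on the target. A minimal sketch:

```python
def mmd_linear(src, tgt):
    """Squared distance between the source and target feature means --
    the marginal-distribution discrepancy that joint distribution
    adaptation minimizes (kernelized variants replace the mean embedding)."""
    dim = len(src[0])
    mean = lambda data, j: sum(row[j] for row in data) / len(data)
    return sum((mean(src, j) - mean(tgt, j)) ** 2 for j in range(dim))
```

A discrepancy of zero means the two domains are indistinguishable at the level of first moments; damage-sensitive features from the monitored bridge can then be compared against the reference bridge's model.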

21 pages, 2094 KiB  
Article
Unsupervised Domain Adaptation for Inter-Session Re-Calibration of Ultrasound-Based HMIs
by Antonios Lykourinas, Xavier Rottenberg, Francky Catthoor and Athanassios Skodras
Sensors 2024, 24(15), 5043; https://doi.org/10.3390/s24155043 - 4 Aug 2024
Abstract
Human–Machine Interfaces (HMIs) have gained popularity as they allow for an effortless and natural interaction between the user and the machine by processing information gathered from a single or multiple sensing modalities and transcribing user intentions to the desired actions. Their operability depends on frequent periodic re-calibration using newly acquired data due to their adaptation needs in dynamic environments, where test-time data continuously change in unforeseen ways, a cause that significantly contributes to their abandonment and remains unexplored by the Ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms for the re-calibration of US-based HMIs during within-day sessions, which utilize unlabeled data for re-calibration. Our experimentation led us to the proposal of a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves comparable performance with the state-of-the-art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% classification accuracy performance enhancement when compared to the no re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration may differ, observed enhancements would be rather small or even unnoticeable. Full article
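DANN's key component is the gradient reversal layer: identity on the forward pass, sign-flipped and scaled gradient on the backward pass, plus a schedule that ramps the scaling factor from 0 to 1 over training. Sketched here without an autograd framework:

```python
import math

class GradReverse:
    """Gradient reversal layer: forward is the identity; backward multiplies
    the incoming gradient by -lambda, so the feature extractor is pushed to
    *confuse* the domain classifier sitting behind this layer."""
    def __init__(self, lam):
        self.lam = lam
    def forward(self, x):
        return x
    def backward(self, grad):
        return [-self.lam * g for g in grad]

def dann_lambda(p):
    """DANN's schedule for lambda as a function of training progress p in [0, 1]:
    starts at 0 (no adversarial signal) and saturates near 1."""
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

grl = GradReverse(dann_lambda(0.5))
```

In a framework like PyTorch the same effect is obtained with a custom autograd function; the class above just makes the two passes explicit.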

13 pages, 853 KiB  
Article
SL: Stable Learning in Source-Free Domain Adaptation for Medical Image Segmentation
by Yan Wang, Yixin Chen, Tingyang Yang and Haogang Zhu
Electronics 2024, 13(14), 2878; https://doi.org/10.3390/electronics13142878 - 22 Jul 2024
Abstract
Deep learning techniques for medical image analysis often encounter domain shifts between source and target data. Most existing approaches focus on unsupervised domain adaptation (UDA). However, in practical applications, many source domain data are often inaccessible due to issues such as privacy concerns. For instance, data from different hospitals exhibit domain shifts due to equipment discrepancies, and data from both domains cannot be accessed simultaneously because of privacy issues. This challenge, known as source-free UDA, limits the effectiveness of previous UDA medical methods. Despite the introduction of various medical source-free unsupervised domain adaptation (MSFUDA) methods, they tend to suffer from an over-fitting problem described as “longer training, worse performance”. To address this issue, we proposed the Stable Learning (SL) strategy. SL is a method that can be integrated with other approaches and consists of weight consolidation and entropy increase. Weight consolidation helps retain domain-invariant knowledge, while entropy increase prevents over-learning. We validated our strategy through experiments on three MSFUDA methods and two public datasets. For the abdominal dataset, the application of the SL strategy enables the MSFUDA method to effectively address the domain shift issue. This results in an improvement in the Dice coefficient from 0.5167 to 0.7006 for the adaptation from CT to MRI, and from 0.6474 to 0.7188 for the adaptation from MRI to CT. The same improvement is observed with the cardiac dataset. Additionally, we conducted ablation studies on the two involved modules, and the results demonstrated the effectiveness of the SL strategy. Full article
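The two SL ingredients can be written down directly. Note the plain L2 anchor below is a simplification: EWC-style weight consolidation typically weights each parameter by its importance, and the paper's exact penalty may differ:

```python
import math

def consolidation_penalty(theta, theta_src, lam=1.0):
    """L2 anchor to the source-model weights: discourages drifting away from
    domain-invariant knowledge during source-free adaptation on the target."""
    return lam * sum((t - s) ** 2 for t, s in zip(theta, theta_src))

def prediction_entropy(probs):
    """Mean entropy of the predicted distributions; *raising* it counteracts
    the over-confident collapse behind 'longer training, worse performance'."""
    h = lambda p: -sum(pi * math.log(pi) for pi in p if pi > 0)
    return sum(h(p) for p in probs) / len(probs)
```

The total SL regularizer would combine the two: penalize parameter drift while rewarding prediction entropy, keeping adaptation stable over long training runs.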

15 pages, 4651 KiB  
Article
Hydroelectric Unit Vibration Signal Feature Extraction Based on IMF Energy Moment and SDAE
by Dong Liu, Lijun Kong, Bing Yao, Tangming Huang, Xiaoqin Deng and Zhihuai Xiao
Water 2024, 16(14), 1956; https://doi.org/10.3390/w16141956 - 11 Jul 2024
Cited by 1
Abstract
Aiming at the problem that it is difficult to effectively characterize the operation status of hydropower units with a single vibration signal feature under the influence of multiple factors such as water–machine–electricity coupling, a multidimensional fusion feature extraction method for hydroelectric units based on time–frequency analysis and unsupervised learning models is proposed. Firstly, the typical time–domain and frequency–domain characteristics of vibration signals are calculated through amplitude domain analysis and Fourier transform. Secondly, the time–frequency characteristics of vibration signals are obtained by combining the complementary ensemble empirical mode decomposition and energy moment calculation methods to supplement the traditional time–domain and frequency–domain characteristics, which have difficulty in comprehensively reflecting the correlation between nonlinear non–stationary signals and the state of the unit. Finally, in order to overcome the limitations of shallow feature extraction relying on artificial experience, a Stacked Denoising Autoencoder is used to adaptively mine the deep features of vibration signals, and the extracted features are fused to construct a multidimensional feature vector of vibration signals. The proposed multidimensional information fusion feature extraction method is verified to realize the multidimensional complementarity of feature attributes, which helps to accurately distinguish equipment state types and provides the foundation for subsequent state identification and trend prediction. Full article
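One common definition of an IMF energy moment (the time-weighted energy of each decomposed component) is sketched below; the paper's exact normalization may differ, so treat this as an assumed form:

```python
def energy_moment(imf, dt=1.0):
    """Time-weighted energy of one intrinsic mode function:
    E = sum_t (t * dt) * c(t)^2, describing where in time the
    component's energy is concentrated."""
    return sum((t * dt) * c * c for t, c in enumerate(imf))

def energy_moment_features(imfs, dt=1.0):
    """Normalized energy-moment vector across a decomposition's IMFs,
    usable as one slice of the multidimensional feature vector."""
    moments = [energy_moment(imf, dt) for imf in imfs]
    total = sum(moments)
    return [m / total for m in moments] if total else moments
```

These time-frequency features are then concatenated with the time-domain, frequency-domain, and autoencoder-derived features to form the fused vector.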

26 pages, 12605 KiB  
Article
Active Bidirectional Self-Training Network for Cross-Domain Segmentation in Remote-Sensing Images
by Zhujun Yang, Zhiyuan Yan, Wenhui Diao, Yihang Ma, Xinming Li and Xian Sun
Remote Sens. 2024, 16(13), 2507; https://doi.org/10.3390/rs16132507 - 8 Jul 2024
Abstract
Semantic segmentation with cross-domain adaptation in remote-sensing images (RSIs) is crucial and mitigates the expense of manually labeling target data. However, the performance of existing unsupervised domain adaptation (UDA) methods is still significantly impacted by domain bias, leading to a considerable gap compared to supervised trained models. To address this, our work focuses on semi-supervised domain adaptation, selecting a small subset of target annotations through active learning (AL) that maximize information to improve domain adaptation. Overall, we propose a novel active bidirectional self-training network (ABSNet) for cross-domain semantic segmentation in RSIs. ABSNet consists of two sub-stages: a multi-prototype active region selection (MARS) stage and a source-weighted class-balanced self-training (SCBS) stage. The MARS approach captures the diversity in labeled source data by introducing multi-prototype density estimation based on Gaussian mixture models. We then measure inter-domain similarity to select complementary and representative target samples. Through fine-tuning with the selected active samples, we propose an enhanced self-training strategy SCBS, designed for weighted training on source data, aiming to avoid the negative effects of interfering samples. We conduct extensive experiments on the LoveDA and ISPRS datasets to validate the superiority of our method over existing state-of-the-art domain-adaptive semantic segmentation methods. Full article
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
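The multi-prototype density estimate underlying MARS can be illustrated with a 1-D Gaussian mixture; the paper works in feature space with prototypes fitted to labeled source data, so the components below are hypothetical:

```python
import math

def gmm_density(x, components):
    """Density of a 1-D Gaussian mixture: sum of pi_k * N(x; mu_k, sigma_k).

    A minimal stand-in for the multi-prototype density estimate used to judge
    how well a target sample is covered by the labeled source distribution.
    """
    total = 0.0
    for pi, mu, sigma in components:
        z = (x - mu) / sigma
        total += pi * math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
    return total

# Two hypothetical source prototypes with equal mixing weights.
protos = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]
```

Target samples falling in low-density regions (far from every prototype) are the informative ones an active-learning selector would prioritize for annotation.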
