Search Results (141)

Search Parameters:
Keywords = visibility graph

26 pages, 3704 KiB  
Article
Deep Unsupervised Homography Estimation for Single-Resolution Infrared and Visible Images Using GNN
by Yanhao Liao, Yinhui Luo, Qiang Fu, Chang Shu, Yuezhou Wu, Qijian Liu and Yuanqing He
Electronics 2024, 13(21), 4173; https://doi.org/10.3390/electronics13214173 - 24 Oct 2024
Abstract
Single-resolution homography estimation of infrared and visible images is a significant and challenging research area within the field of computing, which has attracted a great deal of attention. However, due to the large modal differences between infrared and visible images, existing methods struggle to stably and accurately extract and match features between the two image types at a single resolution, which results in poor performance on the homography estimation task. To address this issue, this paper proposes homoViG, an end-to-end unsupervised single-resolution infrared and visible image homography estimation method based on a graph neural network (GNN). Firstly, the method employs a triple-attention shallow feature extractor to capture cross-dimensional feature dependencies and effectively enhance feature representation. Secondly, Vision GNN (ViG) is utilized as the backbone network to transform the feature point matching problem into a graph node matching problem. Finally, this paper proposes a new homography estimator, the residual fusion vision graph neural network (RFViG), to reduce the feature redundancy caused by the frequent residual operations of ViG; RFViG replaces the residual connections with an attention feature fusion module, highlighting the important features in the low-level feature graph. Furthermore, the model introduces a detail feature loss and a feature identity loss in the optimization phase, facilitating network optimization. Through extensive experimentation, we demonstrate the efficacy of all proposed components, and the results show that homoViG outperforms existing methods on synthetic benchmark datasets in both qualitative and quantitative comparisons. Full article
(This article belongs to the Section Computer Science & Engineering)
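Deep homography pipelines like the one described above typically predict corner displacements rather than the 3x3 matrix directly. The snippet below is a minimal, hypothetical sketch of that common 4-point parameterization using OpenCV; it is not the authors' network, and the patch size and offsets are placeholders.

```python
# Minimal sketch (not the authors' code): the common 4-point parameterization used in
# unsupervised deep homography pipelines. A network predicts the displacement of the
# four patch corners; the 3x3 homography follows from the four correspondences.
import cv2
import numpy as np

def homography_from_corner_offsets(patch_w, patch_h, offsets):
    """offsets: (4, 2) array of predicted corner displacements (hypothetical network output)."""
    src = np.array([[0, 0], [patch_w, 0], [patch_w, patch_h], [0, patch_h]], dtype=np.float32)
    dst = src + np.asarray(offsets, dtype=np.float32)
    return cv2.getPerspectiveTransform(src, dst)  # exact solution for 4 correspondences

# Example: warp a (placeholder) infrared patch with the recovered homography.
H = homography_from_corner_offsets(128, 128, np.random.uniform(-8, 8, size=(4, 2)))
patch = np.zeros((128, 128), dtype=np.uint8)
warped = cv2.warpPerspective(patch, H, (128, 128))
```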

33 pages, 6528 KiB  
Article
TVGeAN: Tensor Visibility Graph-Enhanced Attention Network for Versatile Multivariant Time Series Learning Tasks
by Mohammed Baz
Mathematics 2024, 12(21), 3320; https://doi.org/10.3390/math12213320 - 23 Oct 2024
Abstract
This paper introduces the Tensor Visibility Graph-enhanced Attention Network (TVGeAN), a novel graph autoencoder model specifically designed for multivariate time series (MTS) learning tasks. The underlying approach of TVGeAN is to combine the power of complex networks in representing time series as graphs with the strengths of Graph Neural Networks (GNNs) in learning from graph data. TVGeAN consists of two new main components. The first is the Tensor Visibility Graph (TVG), which extends the capabilities of visibility graph algorithms in representing MTSs by converting them into weighted temporal graphs in which both the nodes and the edges are tensors: each node in the TVG represents the MTS observations at a particular time, while the weights of the edges are defined based on the visibility angle algorithm. The second main component is GeAN, a novel graph attention mechanism developed to seamlessly integrate the temporal interactions represented in the nodes and edges of the graphs into the core learning process. GeAN achieves this by using the outer product to quantify the pairwise interactions of nodes and edges at a fine-grained level and a bilinear model to effectively distil the knowledge interwoven in these representations. From an architectural point of view, TVGeAN builds on the autoencoder approach complemented by sparse and variational learning units: the sparse learning unit promotes inductive learning in TVGeAN, and the variational learning unit endows TVGeAN with generative capabilities. The performance of TVGeAN is extensively evaluated against four widely cited MTS benchmarks for both supervised and unsupervised learning tasks. The results show the high performance of TVGeAN across various MTS learning tasks. In particular, TVGeAN achieves an average root mean square error of 6.8 on the C-MPASS dataset (i.e., a regression learning task) and a precision close to one on the SMD, MSL, and SMAP datasets (i.e., anomaly detection learning tasks), which are better results than most published works. Full article
(This article belongs to the Section Mathematics and Computer Science)
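For readers unfamiliar with the underlying mapping, the following is a minimal sketch of the scalar natural visibility graph construction that TVG-style representations extend; the paper's tensor-valued nodes and edges and its visibility-angle weights are not reproduced here.

```python
# Minimal sketch of the natural visibility graph (NVG): two samples are connected if the
# straight line between them stays strictly above every intermediate sample (scalar case only).
import networkx as nx

def natural_visibility_graph(series):
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            # visibility holds if every sample between i and j lies below the chord i-j
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                g.add_edge(i, j)
    return g

g = natural_visibility_graph([3.0, 1.0, 2.5, 0.5, 4.0, 2.0])
print(sorted(g.edges()))
```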

19 pages, 5199 KiB  
Article
Geometry-Aware Enhanced Mutual-Supervised Point Elimination with Overlapping Mask Contrastive Learning for Partial Point Cloud Registration
by Yue Dai, Shuilin Wang, Chunfeng Shao, Heng Zhang and Fucang Jia
Electronics 2024, 13(20), 4074; https://doi.org/10.3390/electronics13204074 - 16 Oct 2024
Viewed by 385
Abstract
Point cloud registration is one of the fundamental tasks in computer vision, but it faces challenges under low-overlap conditions. Recent approaches use transformers and overlapping masks to improve perception, but mask learning only considers Euclidean distances between features, ignores mismatches caused by fuzzy geometric structures, and is often computationally inefficient. To address these issues, we introduce a novel matching framework. Firstly, we fuse adaptive graph convolution with PPF features to obtain rich feature perception. Subsequently, we construct a PGT framework that uses GeoTransformer combined with location information encoding to enhance the geometry perception between the source and target clouds. In addition, we improve the visibility of overlapping regions through information exchange and the AIS module for subsequent keypoint extraction, preserving points with distinct geometric structures while suppressing the influence of non-overlapping regions to improve computational efficiency. Finally, the mask is refined through contrastive learning to preserve geometric and distance similarity, which helps to compute the transformation parameters more accurately. We have conducted comprehensive experiments on synthetic and real-world scene datasets, demonstrating superior registration performance compared to recent deep learning methods. Our approach shows remarkable improvements of 68.21% in RRMSE and 76.31% in tRMSE on synthetic data, while also excelling in real-world scenarios with enhancements of 76.46% in RRMSE and 45.16% in tRMSE. Full article
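As a point of reference for the final transformation step the abstract mentions, the sketch below shows the standard SVD-based (Kabsch) closed-form solution for a rigid transform from already-matched points; it assumes correspondences are given and is not the proposed network.

```python
# Minimal sketch: closed-form rigid alignment (Kabsch/Procrustes) from matched Nx3 points.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src[i] + t ~= dst[i]."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

src = np.random.rand(100, 3)
t_true = np.array([0.1, -0.2, 0.3])
R, t = rigid_transform(src, src + t_true)
print(np.allclose(t, t_true, atol=1e-8))
```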

30 pages, 6684 KiB  
Article
Investigating System Dynamics of Vegetable Prices Using Complex Network Analysis and Temporal Variation Methods
by Sofia Karakasidou, Avraam Charakopoulos and Loukas Zachilas
AppliedMath 2024, 4(4), 1328-1357; https://doi.org/10.3390/appliedmath4040071 - 16 Oct 2024
Viewed by 243
Abstract
In the present study, we analyze the price time series behavior of selected vegetable products using complex network analysis in two approaches: (a) correlation complex networks and (b) visibility complex networks based on transformed time series. Additionally, we apply time variability methods, including Hurst exponent and Hjorth parameter analysis. We have chosen products available throughout the year from the Central Market of Thessaloniki (Greece) as a case study. To the best of our knowledge, this kind of study is applied for the first time, both as a type of analysis and on the given dataset. Our aim was to investigate alternative ways of classifying products into groups that could be useful for management and policy issues. The results show that the formed groups present similarities related to their use in dishes as well as to their price variation mode and variability, depending on the type of analysis performed. The results could inform government policy in various directions, such as identifying products that could develop greater price stability, detecting strongly fluctuating prices, etc. This work could be extended in the future by including data from other central markets, as well as series with missing values, as is the case for products not available throughout the year. Full article
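A minimal sketch of approach (a) is given below: products become nodes of a correlation network, with an edge whenever the absolute price correlation exceeds a threshold. The threshold and the toy data are illustrative assumptions, not the study's settings.

```python
# Minimal sketch of a correlation-based complex network over price series.
import numpy as np
import networkx as nx

def correlation_network(price_matrix, labels, threshold=0.8):
    """price_matrix: (n_products, n_timesteps) array of price series."""
    corr = np.corrcoef(price_matrix)
    g = nx.Graph()
    g.add_nodes_from(labels)
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                g.add_edge(labels[i], labels[j], weight=corr[i, j])
    return g

rng = np.random.default_rng(0)
prices = rng.random((4, 52)).cumsum(axis=1)        # four synthetic weekly price series
g = correlation_network(prices, ["tomato", "cucumber", "lettuce", "pepper"])
print(g.number_of_edges(), nx.number_connected_components(g))
```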

15 pages, 1956 KiB  
Article
Information–Theoretic Analysis of Visibility Graph Properties of Extremes in Time Series Generated by a Nonlinear Langevin Equation
by Luciano Telesca and Zbigniew Czechowski
Mathematics 2024, 12(20), 3197; https://doi.org/10.3390/math12203197 - 12 Oct 2024
Viewed by 299
Abstract
In this study, we examined how the nonlinearity α of the Langevin equation influences the behavior of extremes in a generated time series. The extremes, defined according to run theory, result in two types of series, run lengths and surplus magnitudes, whose complex structure was investigated using the visibility graph (VG) method. For both types of series, the Shannon entropy and the Fisher Information Measure were utilized to illustrate the influence of the nonlinearity α on the distribution of the degree, which is the main topological parameter describing the graph constructed by the VG method. The main finding of our study was that the Shannon entropy of the degree of the run lengths and the surplus magnitudes of the extremes is mostly influenced by the nonlinearity and decreases with an increase in α. This result suggests that the run lengths and surplus magnitudes of extremes are characterized by a sort of order that increases with increasing nonlinearity. Full article
(This article belongs to the Special Issue Recent Advances in Time Series Analysis)
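The sketch below illustrates the run-theory bookkeeping and the entropy computation referred to above: excursions over a threshold yield run lengths and surplus magnitudes, and the Shannon entropy summarizes a degree distribution. The threshold and data are illustrative assumptions, not the study's settings.

```python
# Minimal sketch: run-theory extremes and Shannon entropy of a degree sequence.
import numpy as np

def runs_above_threshold(x, u):
    lengths, surpluses = [], []
    run_len, run_surplus = 0, 0.0
    for v in x:
        if v > u:
            run_len += 1
            run_surplus += v - u
        elif run_len:
            lengths.append(run_len); surpluses.append(run_surplus)
            run_len, run_surplus = 0, 0.0
    if run_len:
        lengths.append(run_len); surpluses.append(run_surplus)
    return np.array(lengths), np.array(surpluses)

def degree_shannon_entropy(degrees):
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

x = np.random.default_rng(1).normal(size=1000)
lengths, surpluses = runs_above_threshold(x, u=1.0)
print(len(lengths), round(surpluses.mean(), 3))
print(degree_shannon_entropy([2, 3, 3, 4, 2, 5]))   # entropy of an example degree sequence
```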

26 pages, 7501 KiB  
Article
Remote Sensing-Based Drought Monitoring in Iran’s Sistan and Balouchestan Province
by Kamal Omidvar, Masoume Nabavizadeh, Iman Rousta and Haraldur Olafsson
Atmosphere 2024, 15(10), 1211; https://doi.org/10.3390/atmos15101211 - 10 Oct 2024
Viewed by 297
Abstract
Drought is a natural phenomenon that has adverse effects on agriculture, the economy, and human well-being. The primary objective of this research was to comprehensively understand the drought conditions in Sistan and Balouchestan Province from 2002 to 2017 from two perspectives: vegetation cover and hydrology. To achieve this goal, the study utilized MODIS satellite data in the first part to monitor vegetation cover as an indicator of agricultural drought. In the second part, GRACE satellite data were employed to analyze changes in groundwater resources as an indicator of hydrological drought. To assess vegetation drought, four indices were used: the Vegetation Health Index (VHI), Vegetation Drought Index (VDI), Visible Infrared Drought Index (VSDI), and Temperature Vegetation Drought Index (TVDI). To validate the vegetation drought indices, they were compared with Global Land Data Assimilation System (GLDAS) precipitation data. The vegetation indices showed a strong, statistically significant correlation with GLDAS precipitation data in most regions of the province. Among all indices, the VHI showed the highest correlation with precipitation (moderate (0.3–0.7) in 51.7% and strong (≥0.7) in 45.82% of lands). The output of the vegetation indices revealed that the study province has experienced widespread drought in recent years, and the southern and central regions of the province have faced more severe drought classes. In the second part of this research, hydrological drought monitoring was conducted in fifty third-order sub-basins located within the study province using the Total Water Storage (TWS) deficit, drought severity, and the Total Storage Deficit Index (TSDI). Annual average calculations of the TWS deficit over the period from April 2012 to 2016 indicated a substantial depletion of groundwater reserves in the province, amounting to a cumulative loss of 12.2 km3. Analysis results indicate that drought severity continuously increased in all study basins until the end of the study period, and all the studied basins are facing severe and prolonged water scarcity. Among the 50 studied basins, the Rahmatabad basin, located in the semi-arid northern regions of the province, has experienced the most severe drought. This basin has experienced five drought events, particularly one lasting 89 consecutive months and causing a reduction of more than 665.99 km3 of water in month 1, placing it in a critical condition. On the other hand, the Niskoofan Chabahar basin, located in the tropical southern part of the province near the Sea of Oman, has experienced the lowest reduction in water volume, with 10 drought events and a decrease of approximately 111.214 km3 in month 1. However, even this basin has not been spared from prolonged droughts. Analysis of drought index graphs across different severity classes confirmed that all watersheds experienced drought conditions, particularly in the later years of this period. Data analysis revealed a severe water crisis in the province, and urgent, coordinated actions are needed to address this challenge. Transitioning to drought-resistant crops, enhancing irrigation efficiency, and securing water rights are essential steps towards a sustainable future. Full article
(This article belongs to the Section Meteorology)
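For context, the VHI mentioned above is conventionally computed from vegetation and temperature condition indices (Kogan's formulation). The sketch below assumes per-pixel NDVI and land surface temperature (LST) stacks and the usual alpha = 0.5 weighting, which is not necessarily the weighting used in this study.

```python
# Minimal sketch of the standard Vegetation Health Index (VHI) computation.
import numpy as np

def vegetation_health_index(ndvi, lst, alpha=0.5):
    """ndvi, lst: (time, rows, cols) arrays for the same months across years."""
    ndvi_min, ndvi_max = ndvi.min(axis=0), ndvi.max(axis=0)
    lst_min, lst_max = lst.min(axis=0), lst.max(axis=0)
    vci = 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min + 1e-9)   # Vegetation Condition Index
    tci = 100.0 * (lst_max - lst) / (lst_max - lst_min + 1e-9)       # Temperature Condition Index
    return alpha * vci + (1.0 - alpha) * tci                          # low VHI indicates drought stress

rng = np.random.default_rng(0)
vhi = vegetation_health_index(rng.uniform(0.1, 0.8, (16, 4, 4)), rng.uniform(290, 320, (16, 4, 4)))
print(vhi.shape)
```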

23 pages, 5405 KiB  
Article
Iterative Removal of G-PCC Attribute Compression Artifacts Based on a Graph Neural Network
by Zhouyan He, Wenming Yang, Lijun Li and Rui Bai
Electronics 2024, 13(18), 3768; https://doi.org/10.3390/electronics13183768 - 22 Sep 2024
Viewed by 626
Abstract
As a compression standard, Geometry-based Point Cloud Compression (G-PCC) can effectively reduce data volume by compressing both geometric and attribute information. Even so, due to coding errors and data loss, point clouds (PCs) still face distortion challenges; for example, the encoding of attribute information may lead to spatial detail loss and visible artifacts, which negatively impact visual quality. To address these challenges, this paper proposes an iterative removal method for attribute compression artifacts based on a graph neural network. First, the geometric coordinates of the PCs are used to construct a graph that accurately reflects the spatial structure, with the PC attributes treated as signals on the graph’s vertices. Adaptive graph convolution is then employed to dynamically focus on the areas most affected by compression, while a bi-branch attention block is used to restore high-frequency details. To maintain overall visual quality, a spatial consistency mechanism is applied to the recovered PCs. Additionally, an iterative strategy is introduced to correct systematic distortions, such as additive bias, introduced during compression. The experimental results demonstrate that the proposed method produces finer and more realistic visual details compared to state-of-the-art techniques for PC attribute compression artifact removal. Furthermore, the proposed method significantly reduces the network runtime, enhancing processing efficiency. Full article
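A minimal sketch of the graph-construction step described above is shown below: a k-nearest-neighbour graph over decoded point coordinates, on whose vertices the attribute signals live. The choice of k and the use of SciPy are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: k-NN graph over point coordinates as a substrate for graph convolution.
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(points, k=8):
    """Return (edges, distances) for a k-NN graph over an (N, 3) coordinate array."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)      # first neighbour is the point itself
    edges = [(i, int(j)) for i, row in enumerate(idx) for j in row[1:]]
    return edges, dists[:, 1:]

pts = np.random.rand(500, 3)
edges, dists = knn_graph(pts)
print(len(edges), round(float(dists.mean()), 4))
```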

29 pages, 27855 KiB  
Article
The Influence of Urban Design Performance on Walkability in Cultural Heritage Sites of Isfahan, Iran
by Hessameddin Maniei, Reza Askarizad, Maryam Pourzakarya and Dietwald Gruehn
Land 2024, 13(9), 1523; https://doi.org/10.3390/land13091523 - 19 Sep 2024
Viewed by 1096
Abstract
This research explores the impact of urban design performance qualities on pedestrian behavior in a cultural heritage site designated by UNESCO. The study employs a multi-method approach, including a questionnaire survey, empirical observation of pedestrian activities, and empirical axial line and visibility graph analysis using the space syntax technique. The first part of the study involved a questionnaire formatted as a polling sheet to gather expert assessments of spatial performance measures. The second part used a pilot survey to capture the perspectives of end users regarding the study’s objectives and their perceptions of the site. Pedestrian flow was observed using a technique called “gate counts”, with observations recorded as video clips during specific morning and afternoon periods across three pedestrian zones. The study also examined the behavioral patterns of pedestrians, including their movement patterns. Finally, the ArcGIS 10.3.1 software was employed to evaluate the reliability of the results. The main finding of this research is that pedestrian behavior and walkability in the historical areas are significantly influenced by landmark integration, wayfinding behavior, and the socio-economic functions of heritage sites. This study highlights the importance of using cognitive and syntactic analysis, community engagement, and historical preservation to enhance walkability, accessibility, and social interaction in heritage contexts. In addition, it identifies the need for improvements in urban design to address inconsistencies between syntactic maps and actual pedestrian flow, emphasizing the role of imageability and the impact of environmental and aesthetic factors on pedestrian movement. This research provides valuable insights for urban designers and planners, environmental psychologists, architects, and policymakers by highlighting the key elements that make urban spaces walkable, aiming to enhance the quality of public spaces. Full article
(This article belongs to the Special Issue Urban Landscape Transformation vs. Heritage)
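As a simplified illustration of the visibility graph analysis used in space syntax, the sketch below tests mutual visibility between grid cells with a sampled line-of-sight check; it is a toy stand-in, not the depthmap-style toolchain used in the study.

```python
# Minimal sketch: mutual-visibility test on an occupancy grid (1 = obstacle).
import numpy as np

def line_of_sight(grid, a, b, samples=200):
    """a, b: (row, col) cell centres; True if the segment avoids all obstacle cells."""
    for t in np.linspace(0.0, 1.0, samples):
        r = int(round(a[0] + t * (b[0] - a[0])))
        c = int(round(a[1] + t * (b[1] - a[1])))
        if grid[r, c] == 1:
            return False
    return True

grid = np.zeros((20, 20), dtype=int)
grid[5:15, 10] = 1                               # a wall
print(line_of_sight(grid, (10, 2), (10, 18)))    # blocked by the wall -> False
print(line_of_sight(grid, (2, 2), (2, 18)))      # clear row -> True
```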

23 pages, 23211 KiB  
Article
Efficient Path Planning Algorithm Based on Laser SLAM and an Optimized Visibility Graph for Robots
by Yunjie Hu, Fei Xie, Jiquan Yang, Jing Zhao, Qi Mao, Fei Zhao and Xixiang Liu
Remote Sens. 2024, 16(16), 2938; https://doi.org/10.3390/rs16162938 - 10 Aug 2024
Viewed by 1314
Abstract
Mobile robots’ efficient path planning has long been a challenging task due to the complexity and dynamism of environments. If an occupancy grid map is used in path planning, the number of grids is determined by grid resolution and the size of the actual environment. Excessively high resolution increases the number of traversed grid nodes and thus prolongs path planning time. To address this challenge, this paper proposes an efficient path planning algorithm based on laser SLAM and an optimized visibility graph for mobile robots, which achieves faster computation of the shortest path using the optimized visibility graph. Firstly, the laser SLAM algorithm is used to acquire the undistorted LiDAR point cloud data, which are converted into a visibility graph. Secondly, a bidirectional A* path search algorithm is combined with the Minimal Construct algorithm, enabling the robot to only compute heuristic paths to the target node during path planning in order to reduce search time. Thirdly, a filtering method based on edge length and the number of vertices of obstacles is proposed to reduce redundant vertices and edges in the visibility graph. Additionally, the bidirectional A* search method is implemented for pathfinding in the efficient path planning algorithm proposed in this paper to reduce unnecessary space searches. Finally, simulation and field tests are conducted to validate the algorithm and compare its performance with classic algorithms. The test results indicate that the method proposed in this paper exhibits superior performance in terms of path search time, navigation time, and distance compared to D* Lite, FAR, and FPS algorithms. Full article
(This article belongs to the Special Issue Advances in Applications of Remote Sensing GIS and GNSS)
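The sketch below illustrates the search stage on an already-built visibility graph, using networkx's A* with a Euclidean heuristic as a stand-in for the paper's bidirectional A* and Minimal Construct combination; the tiny graph is hand-made for illustration.

```python
# Minimal sketch: shortest path over a small visibility graph with an A*-style search.
import math
import networkx as nx

coords = {                       # node -> (x, y); a tiny hand-made visibility graph
    "start": (0, 0), "a": (2, 1), "b": (2, -1), "goal": (4, 0),
}
edges = [("start", "a"), ("start", "b"), ("a", "goal"), ("b", "goal"), ("a", "b")]

g = nx.Graph()
for u, v in edges:
    g.add_edge(u, v, weight=math.dist(coords[u], coords[v]))

heuristic = lambda u, v: math.dist(coords[u], coords[v])   # admissible Euclidean heuristic
print(nx.astar_path(g, "start", "goal", heuristic=heuristic, weight="weight"))
```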

17 pages, 7301 KiB  
Article
Vision-Based Situational Graphs Exploiting Fiducial Markers for the Integration of Semantic Entities
by Ali Tourani, Hriday Bavle, Deniz Işınsu Avşar, Jose Luis Sanchez-Lopez, Rafael Munoz-Salinas and Holger Voos
Robotics 2024, 13(7), 106; https://doi.org/10.3390/robotics13070106 - 16 Jul 2024
Cited by 1 | Viewed by 985
Abstract
Situational Graphs (S-Graphs) merge geometric models of the environment generated by Simultaneous Localization and Mapping (SLAM) approaches with 3D scene graphs into a multi-layered jointly optimizable factor graph. As an advantage, S-Graphs not only offer a more comprehensive robotic situational awareness by combining geometric maps with diverse, hierarchically organized semantic entities and their topological relationships within one graph, but they also lead to improved performance of localization and mapping on the SLAM level by exploiting semantic information. In this paper, we introduce a vision-based version of S-Graphs where a conventional Visual SLAM (VSLAM) system is used for low-level feature tracking and mapping. In addition, the framework exploits the potential of fiducial markers (both visible and our recently introduced transparent or fully invisible markers) to encode comprehensive information about environments and the objects within them. The markers aid in identifying and mapping structural-level semantic entities, including walls and doors in the environment, with reliable poses in the global reference, subsequently establishing meaningful associations with higher-level entities, including corridors and rooms. However, in addition to including semantic entities, the semantic and geometric constraints imposed by the fiducial markers are also utilized to improve the reconstructed map’s quality and reduce localization errors. Experimental results on a real-world dataset collected using legged robots show that our framework excels in crafting a richer, multi-layered hierarchical map and enhances robot pose accuracy at the same time. Full article
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)
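For readers new to fiducial markers, the sketch below shows a typical marker-detection and pose-recovery step, assuming OpenCV 4.7 or newer with the aruco module; the camera intrinsics and marker size are placeholders, and this is not the S-Graphs integration itself.

```python
# Minimal sketch: detect ArUco markers and recover each marker's pose with a PnP solve
# (assumes OpenCV >= 4.7 with the aruco module; intrinsics and marker size are placeholders).
import cv2
import numpy as np

def detect_marker_poses(gray, camera_matrix, dist_coeffs, marker_len=0.15):
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    half = marker_len / 2.0
    obj = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    poses = []
    if ids is not None:
        for marker_id, c in zip(ids.ravel(), corners):
            ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2), camera_matrix, dist_coeffs)
            if ok:
                poses.append((int(marker_id), rvec, tvec))
    return poses
```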

21 pages, 3130 KiB  
Article
Large-Scale Indoor Camera Positioning Using Fiducial Markers
by Pablo García-Ruiz, Francisco J. Romero-Ramirez, Rafael Muñoz-Salinas, Manuel J. Marín-Jiménez and Rafael Medina-Carnicer
Sensors 2024, 24(13), 4303; https://doi.org/10.3390/s24134303 - 2 Jul 2024
Viewed by 923
Abstract
Estimating the pose of a large set of fixed indoor cameras is a requirement for certain applications in augmented reality, autonomous navigation, video surveillance, and logistics. However, accurately mapping the positions of these cameras remains an unsolved problem. While providing partial solutions, existing alternatives are limited by their dependence on distinct environmental features, the requirement for large overlapping camera views, and specific conditions. This paper introduces a novel approach to estimating the pose of a large set of cameras using a small subset of fiducial markers printed on regular pieces of paper. By placing the markers in areas visible to multiple cameras, we can obtain an initial estimation of the pair-wise spatial relationship between them. The markers can be moved throughout the environment to obtain the relationship between all cameras, thus creating a graph connecting all cameras. In the final step, our method performs a full optimization, minimizing the reprojection errors of the observed markers and enforcing physical constraints, such as camera and marker coplanarity and control points. We validated our approach using novel artificial and real datasets with varying levels of complexity. Our experiments demonstrated superior performance over existing state-of-the-art techniques and increased effectiveness in real-world applications. Accompanying this paper, we provide the research community with access to our code, tutorials, and an application framework to support the deployment of our methodology. Full article
(This article belongs to the Special Issue Sensor Fusion Applications for Navigation and Indoor Positioning)
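A minimal sketch of the chaining idea is given below: cameras are graph nodes, edges hold marker-derived relative poses, and an initial pose per camera is obtained by composing transforms along paths from a reference camera. The 4x4 matrices are placeholders, and the final joint optimization described in the abstract is omitted.

```python
# Minimal sketch: chain pairwise camera-camera transforms through a graph of observations.
import numpy as np
import networkx as nx

def initial_camera_poses(relative_poses, reference):
    """relative_poses: dict {(cam_i, cam_j): T_ij}, with T_ij mapping cam_j coords to cam_i coords."""
    g = nx.Graph()
    for (i, j), T in relative_poses.items():
        g.add_edge(i, j, T=T, forward=(i, j))
    poses = {reference: np.eye(4)}
    for cam in g.nodes:
        if cam == reference:
            continue
        path = nx.shortest_path(g, reference, cam)
        T = np.eye(4)
        for a, b in zip(path, path[1:]):
            T_ab = g.edges[a, b]["T"]
            if g.edges[a, b]["forward"] != (a, b):      # stored in the opposite direction
                T_ab = np.linalg.inv(T_ab)
            T = T @ T_ab
        poses[cam] = T
    return poses

T01 = np.eye(4); T01[:3, 3] = [1.0, 0.0, 0.0]
T12 = np.eye(4); T12[:3, 3] = [0.0, 2.0, 0.0]
print(initial_camera_poses({(0, 1): T01, (1, 2): T12}, reference=0)[2][:3, 3])
```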

36 pages, 57800 KiB  
Article
Advanced Image Stitching Method for Dual-Sensor Inspection
by Sara Shahsavarani, Fernando Lopez, Clemente Ibarra-Castanedo and Xavier P. V. Maldague
Sensors 2024, 24(12), 3778; https://doi.org/10.3390/s24123778 - 11 Jun 2024
Cited by 1 | Viewed by 1233
Abstract
Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures. An essential challenge in the NDE of infrastructures is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative. It is worth noting that some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, visible and infrared data fusion strategies prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method appropriate for infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method appropriate for dual-sensor inspections. The proposed image stitching technique employs self-supervised feature detection to enhance the quality and quantity of feature detection. Subsequently, a graph neural network is employed for robust feature matching. Ultimately, the proposed method results in image stitching that effectively eliminates perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection. Comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach. Full article
(This article belongs to the Section Sensing and Imaging)
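As a baseline for comparison, the sketch below shows the classical stitching pipeline (feature detection, matching, RANSAC homography, warping), with ORB and brute-force matching standing in for the paper's self-supervised detector and GNN matcher.

```python
# Minimal sketch of a classical stitching baseline (not the proposed method).
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(2000)
    kpa, desa = orb.detectAndCompute(img_a, None)
    kpb, desb = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desa, desb)
    matches = sorted(matches, key=lambda m: m.distance)[:300]
    src = np.float32([kpa[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kpb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust homography estimate
    h, w = img_b.shape[:2]
    return cv2.warpPerspective(img_a, H, (w * 2, h))        # naive canvas; blending omitted
```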

10 pages, 2555 KiB  
Article
Alterations in pH of Coffee Bean Extract and Properties of Chlorogenic Acid Based on the Roasting Degree
by Yi Kyeoung Kim, Jae-Min Lim, Young Jae Kim and Wook Kim
Foods 2024, 13(11), 1757; https://doi.org/10.3390/foods13111757 - 3 Jun 2024
Cited by 2 | Viewed by 1237
Abstract
Factors influencing the sour taste of coffee and the properties of chlorogenic acid are not yet fully understood. This study aimed to evaluate the impact of roasting degree on pH-associated changes in coffee bean extract and the thermal stability of chlorogenic acid. Coffee bean extract pH decreased up to a chromaticity value of 75 but increased with higher chromaticity values. Ultraviolet–visible spectrophotometry and structural analysis attributed this effect to chlorogenic and caffeic acids. Moreover, liquid chromatography-mass spectrometry analysis identified four chlorogenic acid types in green coffee bean extract. Chlorogenic acid isomers were eluted broadly on HPLC, and a chlorogenic acid fraction graph with two peaks, fractions 5 and 9, was obtained. Among the various fractions, the isomer in fraction 5 had significantly lower thermal stability, indicating that thermal stability differs between chlorogenic acid isomers. Full article
(This article belongs to the Section Food Physics and (Bio)Chemistry)

22 pages, 5093 KiB  
Article
Rapeseed Seed Coat Color Classification Based on the Visibility Graph Algorithm and Hyperspectral Technique
by Chaojun Zou, Xinghui Zhu, Fang Wang, Jinran Wu and You-Gan Wang
Agronomy 2024, 14(5), 941; https://doi.org/10.3390/agronomy14050941 - 30 Apr 2024
Viewed by 940
Abstract
Information technology and statistical modeling have made significant contributions to smart agriculture. Machine vision and hyperspectral technologies, with their non-destructive and real-time capabilities, have been extensively utilized in the non-destructive diagnosis and quality monitoring of crops and seeds, becoming essential tools in traditional agriculture. This work applies these techniques to address the color classification of rapeseed, which is of great significance in the field of rapeseed growth diagnosis research. To bridge the gap between machine vision and hyperspectral technology, a framework is developed that includes seed color calibration, spectral feature extraction and fusion, and the recognition modeling of three seed colors using four machine learning methods. Three categories of rapeseed coat colors are calibrated based on visual perception and vector-square distance methods. A fast-weighted visibility graph method is employed to map the spectral reflectance sequences to complex networks, and five global network attributes are extracted to fuse the full-band reflectance as model input. The experimental results demonstrate that the classification recognition rate of the fused feature reaches 0.943 under the XGBoost model, confirming the effectiveness of the network features as a complement to the spectral reflectance. The high recognition accuracy and simple operation process of the framework support the further application of hyperspectral technology to analyze the quality of rapeseed. Full article
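The sketch below illustrates the feature-fusion idea: global attributes of a (visibility) graph are computed with networkx and appended to the full-band reflectance vector before classification. The particular attributes and the toy data are assumptions, not necessarily the five used in the paper.

```python
# Minimal sketch: fuse global graph attributes with a reflectance vector for classification.
import numpy as np
import networkx as nx

def global_graph_features(g):
    return np.array([
        g.number_of_nodes(),
        nx.density(g),
        nx.average_clustering(g),
        nx.average_shortest_path_length(g),        # assumes a connected graph
        nx.degree_assortativity_coefficient(g),
    ])

reflectance = np.random.rand(256)                  # toy full-band reflectance for one seed
g = nx.path_graph(50)                              # stand-in for the seed's visibility graph
fused = np.concatenate([reflectance, global_graph_features(g)])
print(fused.shape)                                 # fused vector fed to, e.g., an XGBoost model
```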

32 pages, 7180 KiB  
Article
Identifying Characteristic Fire Properties with Stationary and Non-Stationary Fire Alarm Systems
by Michał Wiśnios, Sebastian Tatko, Michał Mazur, Jacek Paś, Jarosław Mateusz Łukasiak and Tomasz Klimczak
Sensors 2024, 24(9), 2772; https://doi.org/10.3390/s24092772 - 26 Apr 2024
Viewed by 1020
Abstract
The article reviews issues associated with the operation of stationary and non-stationary electronic fire alarm systems (FASs). These systems are employed for the fire protection of selected buildings (stationary) or to monitor vast areas, e.g., forests, airports, logistics hubs, etc. (non-stationary). An FAS is operated under various environmental conditions, indoor and outdoor, favourable or unfavourable to the operation process. Therefore, an FAS has to exhibit a reliable structure in terms of power supply and operation. To this end, the paper discusses a representative FAS monitoring a facility and presents basic tactical and technical assumptions for a non-stationary system. The authors reviewed fire detection methods in terms of fire characteristic values (FCVs) impacting detector sensors. Another part of the article focuses on false alarm causes. Assumptions behind the use of unmanned aerial vehicles (UAVs) with visible-range cameras (e.g., Aviotec) and thermal imaging were presented for non-stationary FASs. The FAS operation process model was defined and a computer simulation related to its operation was conducted. Analysing the FAS operation process in the form of models and graphs, and the conducted computer simulation enabled conclusions to be drawn. They may be applied for the design, ongoing maintenance and operation of an FAS. As part of the paper, the authors conducted a reliability analysis of a selected FAS based on the original performance tests of an actual system in operation. They formulated basic technical and tactical requirements applicable to stationary and mobile FASs detecting the so-called vast fires. Full article
(This article belongs to the Section Environmental Sensing)
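For context on the reliability side, the sketch below shows the standard steady-state availability relation for a two-state (up/down) model with constant failure and repair rates; the rates are illustrative assumptions, not the tested system's figures.

```python
# Minimal sketch: stationary availability of a two-state repairable system, A = mu / (lambda + mu).
def steady_state_availability(failure_rate_per_h, repair_rate_per_h):
    return repair_rate_per_h / (failure_rate_per_h + repair_rate_per_h)

lam = 1.0 / 8760.0        # one failure per year on average (assumed)
mu = 1.0 / 12.0           # 12 h mean time to repair (assumed)
print(f"A = {steady_state_availability(lam, mu):.6f}")
```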
