Algorithms, Volume 17, Issue 8 (August 2024) – 55 articles

Cover Story (view full-size image): This article introduces Lester, a novel method with which to automatically synthesize retro-style 2D animations from videos. The method approaches the challenge mainly as an object segmentation and tracking problem. Video frames are processed with the Segment Anything Model (SAM), and the resulting masks are tracked through subsequent frames with DeAOT, a method for semi-supervised video object segmentation. The geometry of the masks' contours is simplified with the Douglas–Peucker algorithm. Finally, facial traits, pixelation, and a basic rim light effect can be optionally added. The results show that the method exhibits an excellent temporal consistency and can correctly process videos with different poses and appearances, dynamic shots, partial shots, and diverse backgrounds. View this paper
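The contour-simplification step named in the cover story, the Douglas–Peucker algorithm, is standard and easy to sketch. Below is a minimal NumPy version for illustration only (the epsilon value and sample contour are arbitrary), not the authors' implementation:

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline: keep the endpoints and drop every
    point that lies within `epsilon` of the chord joining them."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of each point to the start-end chord.
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])   # drop the duplicated split point
    return np.vstack([start, end])

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(contour, epsilon=1.0))
```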
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
13 pages, 4381 KiB  
Article
Extended General Malfatti’s Problem
by Ching-Shoei Chiang
Algorithms 2024, 17(8), 374; https://doi.org/10.3390/a17080374 - 22 Aug 2024
Viewed by 347
Abstract
Malfatti’s problem involves three circles (called Malfatti circles) that are tangent to each other and to two sides of a triangle. In this study, our objective is to extend the problem to find 6, 10, …, n(n+1)/2 (n > 2) circles inside the triangle so that the three corner circles are tangent to two sides of the triangle, the boundary circles are tangent to one side of the triangle and to four other circles (at least two of them being boundary or corner circles), and the inner circles are tangent to six other circles. We call this problem the extended general Malfatti’s problem, or the Tri(Tn) problem, where Tri means that the boundary of these circles is a triangle, and Tn is the number of circles inside the triangle. In this paper, we propose an algorithm to solve the Tri(Tn) problem. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
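For orientation, the circle counts 6, 10, … in the abstract are the triangular numbers; assuming Tn denotes the n-th of these, a quick worked formula:

```latex
% Circle count for the Tri(T_n) problem: the n-th triangular number
T_n = \binom{n+1}{2} = \frac{n(n+1)}{2}, \qquad
T_3 = 6, \quad T_4 = 10, \quad T_5 = 15, \quad \dots
```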
46 pages, 501 KiB  
Article
Algorithms for Various Trigonometric Power Sums
by Victor Kowalenko
Algorithms 2024, 17(8), 373; https://doi.org/10.3390/a17080373 - 22 Aug 2024
Viewed by 288
Abstract
In this paper, algorithms for different types of trigonometric power sums are developed and presented. Although interesting in their own right, these trigonometric power sums arise during the creation of an algorithm for the four types of twisted trigonometric power sums defined in the introduction. The primary aim in evaluating these sums is to obtain exact results in a rational form, as opposed to standard or direct evaluation, which often results in machine-dependent decimal values that can be affected by round-off errors. Moreover, since the variable m, which appears in the denominators of the arguments of the trigonometric functions in these sums, can remain algebraic in the algorithms/codes, one can also obtain polynomial solutions in powers of m and the variable r that appears in the cosine factor accompanying the trigonometric power. The degrees of these polynomials are found to depend on v, the value of the trigonometric power in the sum, which must always be specified. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
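The contrast between exact rational evaluation and round-off-prone decimal evaluation can be reproduced with a computer algebra system. A hedged SymPy sketch (not the paper's algorithm) for one classical power sum, which equals (3m − 8)/8 for every integer m ≥ 3:

```python
import sympy as sp

m, v = 7, 4   # sum of cos^4(k*pi/7) over k = 1..6, evaluated exactly
expr = sum(sp.cos(sp.Rational(k, m) * sp.pi) ** v for k in range(1, m))
exact = sp.nsimplify(expr)          # recover the rational value from the symbolic sum
print(exact)                        # 13/8 -- no machine-dependent round-off
assert exact == sp.Rational(3 * m - 8, 8)   # classical closed form for v = 4, m >= 3
print(float(exact))                 # decimal value for comparison
```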
20 pages, 5263 KiB  
Article
Correlation Analysis of Railway Track Alignment and Ballast Stiffness: Comparing Frequency-Based and Machine Learning Algorithms
by Saeed Mohammadzadeh, Hamidreza Heydari, Mahdi Karimi and Araliya Mosleh
Algorithms 2024, 17(8), 372; https://doi.org/10.3390/a17080372 - 22 Aug 2024
Viewed by 376
Abstract
One of the primary challenges in the railway industry revolves around achieving a comprehensive and insightful understanding of track conditions. The geometric parameters and stiffness of railway tracks play a crucial role in condition monitoring as well as maintenance work. Hence, this study investigated the relationship between vertical ballast stiffness and the track longitudinal level. Initially, the ballast stiffness and track longitudinal level data were acquired through a series of experimental measurements conducted on a reference test track along the Tehran–Mashhad railway line, utilizing recording cars for geometric track and stiffness recordings. Subsequently, the correlation between the track longitudinal level and ballast stiffness was surveyed using both frequency-based techniques and machine learning (ML) algorithms. The power spectrum density (PSD) as a frequency-based technique was employed, alongside ML algorithms, including linear regression, decision trees, and random forests, for correlation mining analyses. The results showed a robust and statistically significant relationship between the vertical ballast stiffness and longitudinal levels of railway tracks. Specifically, the PSD data exhibited a considerable correlation, especially within the 1–4 rad/m wave number range. Furthermore, the data analyses conducted using ML methods indicated that the values of the root mean square error (RMSE) were about 0.05, 0.07, and 0.06 for the linear regression, decision tree, and random forest algorithms, respectively, demonstrating the adequate accuracy of ML-based approaches. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
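As a rough illustration of the frequency-based side of the analysis, the sketch below computes Welch PSDs for two synthetic stand-in signals and correlates them inside the 1–4 rad/m wave number band mentioned in the abstract; the sampling interval and signal models are assumptions, not the paper's measurement data:

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-ins for measured longitudinal level and ballast stiffness,
# sampled every 0.25 m along the track (assumed spacing, not the paper's).
dx = 0.25
x = np.arange(0, 2000) * dx
rng = np.random.default_rng(0)
level = np.sin(1.5 * x) + 0.3 * rng.standard_normal(x.size)
stiffness = 0.8 * np.sin(1.5 * x + 0.2) + 0.3 * rng.standard_normal(x.size)

# PSDs over spatial frequency f (cycles/m); wave number k = 2*pi*f (rad/m).
f, psd_level = welch(level, fs=1 / dx, nperseg=512)
_, psd_stiff = welch(stiffness, fs=1 / dx, nperseg=512)
k = 2 * np.pi * f

# Correlate the two PSDs inside the 1-4 rad/m band highlighted in the abstract.
band = (k >= 1) & (k <= 4)
r = np.corrcoef(psd_level[band], psd_stiff[band])[0, 1]
print(f"PSD correlation in 1-4 rad/m band: {r:.2f}")
```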
28 pages, 1897 KiB  
Article
Bi-Objective, Dynamic, Multiprocessor Open-Shop Scheduling: A Hybrid Scatter Search–Tabu Search Approach
by Tamer F. Abdelmaguid 
Algorithms 2024, 17(8), 371; https://doi.org/10.3390/a17080371 - 21 Aug 2024
Viewed by 320
Abstract
This paper presents a novel, multi-objective scatter search algorithm (MOSS) for a bi-objective, dynamic, multiprocessor open-shop scheduling problem (Bi-DMOSP). The considered objectives are the minimization of the maximum completion time (makespan) and the minimization of the mean weighted flow time. Both are particularly important for improving machines’ utilization and customer satisfaction level in maintenance and healthcare diagnostic systems, in which the studied Bi-DMOSP is mostly encountered. Since the studied problem is NP-hard for both objectives, fast algorithms are needed to fulfill the requirements of real-life circumstances. Previous attempts have included the development of an exact algorithm and two metaheuristic approaches based on the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective gray wolf optimizer (MOGWO). The exact algorithm is limited to small-sized instances; meanwhile, NSGA-II was found to produce better results compared to MOGWO in both small- and large-sized test instances. The proposed MOSS in this paper attempts to provide more efficient non-dominated solutions for the studied Bi-DMOSP. This is achievable via its hybridization with a novel, bi-objective tabu search approach that utilizes a set of efficient neighborhood search functions. Parameter tuning experiments are conducted first using a subset of small-sized benchmark instances for which the optimal Pareto front solutions are known. Then, detailed computational experiments on small- and large-sized instances are conducted. Comparisons with the previously developed NSGA-II metaheuristic demonstrate the superiority of the proposed MOSS approach for small-sized instances. For large-sized instances, it proves its capability of producing competitive results for instances with low and medium density. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Real-World Applications)
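Bi-objective comparisons of this kind rest on Pareto dominance over (makespan, mean weighted flow time). A minimal sketch of the dominance test and non-dominated filtering, illustrative only, with made-up objective vectors:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b; both objectives are minimized:
    (makespan, mean weighted flow time)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [a for a in front if not any(dominates(b, a) for b in front if b != a)]

solutions = [(42.0, 10.5), (40.0, 12.0), (45.0, 9.0), (41.0, 12.5)]
print(non_dominated(solutions))   # (41.0, 12.5) is dominated by (40.0, 12.0)
```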
23 pages, 1362 KiB  
Article
Joint Optimization of Service Migration and Resource Allocation in Mobile Edge–Cloud Computing
by Zhenli He, Liheng Li, Ziqi Lin, Yunyun Dong, Jianglong Qin and Keqin Li
Algorithms 2024, 17(8), 370; https://doi.org/10.3390/a17080370 - 21 Aug 2024
Viewed by 467
Abstract
In the rapidly evolving domain of mobile edge–cloud computing (MECC), the proliferation of Internet of Things (IoT) devices and mobile applications poses significant challenges, particularly in dynamically managing computational demands and user mobility. Current research has partially addressed aspects of service migration and resource allocation, yet it often falls short in thoroughly examining the nuanced interdependencies between migration strategies and resource allocation, the consequential impacts of migration delays, and the intricacies of handling incomplete tasks during migration. This study advances the discourse by introducing a sophisticated framework optimized through a deep reinforcement learning (DRL) strategy, underpinned by a Markov decision process (MDP) that dynamically adapts service migration and resource allocation strategies. This refined approach facilitates continuous system monitoring, adept decision making, and iterative policy refinement, significantly enhancing operational efficiency and reducing response times in MECC environments. By meticulously addressing these previously overlooked complexities, our research not only fills critical gaps in the literature but also enhances the practical deployment of edge computing technologies, contributing profoundly to both theoretical insights and practical implementations in contemporary digital ecosystems. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
16 pages, 439 KiB  
Article
On the Complexity of the Bipartite Polarization Problem: From Neutral to Highly Polarized Discussions
by Teresa Alsinet, Josep Argelich, Ramón Béjar and Santi Martínez
Algorithms 2024, 17(8), 369; https://doi.org/10.3390/a17080369 - 21 Aug 2024
Viewed by 281
Abstract
The bipartite polarization problem is an optimization problem where the goal is to find the highest polarized bipartition on a weighted and labeled graph that represents a debate developed through some social network, where nodes represent users’ opinions and edges represent agreement or disagreement between users. This problem can be seen as a generalization of the maxcut problem, and in previous work, approximate and exact solutions have been obtained for real instances from Reddit discussions, showing that such real instances seem to be very easy to solve. In this paper, we further investigate the complexity of this problem by introducing an instance generation model where a single parameter controls the polarization of the instances in such a way that it correlates with the average complexity of solving those instances. The average complexity results we obtain are consistent with our hypothesis: the higher the polarization of the instance, the easier it is to find the corresponding polarized bipartition. In view of the experimental results, it is computationally feasible to implement transparent mechanisms to monitor polarization in online discussions and to inform about solutions for creating healthier social media environments. Full article
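The paper's generator is not reproduced here, but a single-parameter polarized-instance generator can be sketched as follows; the edge density, sign model, and function name are illustrative assumptions:

```python
import random

def polarized_instance(n, p, seed=0):
    """Generate a signed debate graph on n nodes whose polarization is
    controlled by p in [0, 1]: p = 1 yields two cleanly opposed camps,
    p = 0 yields random (neutral) agreement/disagreement labels.
    This generator is an illustrative assumption, not the paper's model."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]        # ground-truth camp
    edges = []
    for u in range(n):
        for w in range(u + 1, n):
            if rng.random() < 0.3:                      # edge density (assumed)
                aligned = side[u] == side[w]
                if rng.random() < p:                    # polarized edge
                    sign = +1 if aligned else -1
                else:                                   # neutral/noisy edge
                    sign = rng.choice([-1, +1])
                edges.append((u, w, sign))
    return side, edges

side, edges = polarized_instance(n=30, p=0.9)
print(len(edges), edges[:3])
```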
22 pages, 2634 KiB  
Article
Identification of Crude Distillation Unit: A Comparison between Neural Network and Koopman Operator
by Abdulrazaq Nafiu Abubakar, Mustapha Kamel Khaldi, Mujahed Aldhaifallah, Rohit Patwardhan and Hussain Salloum
Algorithms 2024, 17(8), 368; https://doi.org/10.3390/a17080368 - 21 Aug 2024
Viewed by 329
Abstract
In this paper, we aimed to identify the dynamics of a crude distillation unit (CDU) using closed-loop data with NARX−NN and the Koopman operator in both linear (KL) and bilinear (KB) forms. A comparative analysis was conducted to assess the performance of each method under different experimental conditions, such as gain, delay, and time constant mismatches, tight constraints, nonlinearities, and poor tuning. Although NARX−NN showed good training performance with the lowest Mean Squared Error (MSE), the KB demonstrated better generalization and robustness, outperforming the other methods. The KL suffered a significant decline in performance in the presence of nonlinearities in the inputs, yet it remained competitive with the KB under other circumstances. The use of the bilinear form proved to be crucial, as it offered a more accurate representation of the CDU dynamics, resulting in enhanced performance. Full article
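The linear Koopman (KL) idea can be illustrated in a few lines: fit a one-step linear operator to snapshot data by least squares. This is a DMD-style sketch with identity observables and synthetic data, not the paper's identification pipeline:

```python
import numpy as np

# Synthetic snapshot data standing in for closed-loop plant measurements.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
X = [rng.standard_normal(2)]
for _ in range(199):
    X.append(A_true @ X[-1] + 0.01 * rng.standard_normal(2))
X = np.array(X).T
X0, X1 = X[:, :-1], X[:, 1:]              # x_k and x_{k+1} snapshot matrices

# Linear Koopman operator: K = argmin ||X1 - K X0||_F, via the pseudo-inverse.
K = X1 @ np.linalg.pinv(X0)
print(np.round(K, 2))                     # close to A_true

# One-step prediction with the identified linear model.
x_next = K @ X0[:, 0]
```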
16 pages, 8528 KiB  
Article
Augmented Dataset for Vision-Based Analysis of Railroad Ballast via Multi-Dimensional Data Synthesis
by Kelin Ding, Jiayi Luo, Haohang Huang, John M. Hart, Issam I. A. Qamhia and Erol Tutumluer
Algorithms 2024, 17(8), 367; https://doi.org/10.3390/a17080367 - 21 Aug 2024
Viewed by 328
Abstract
Ballast serves a vital structural function in supporting railroad tracks under continuous loading. The degradation of ballast can result in issues such as inadequate drainage, lateral instability, excessive settlement, and potential service disruptions, necessitating efficient evaluation methods to ensure safe and reliable railroad operations. The incorporation of computer vision techniques into ballast inspection processes has proven effective in enhancing accuracy and robustness. Given the data-driven nature of deep learning approaches, the efficacy of these models is intrinsically linked to the quality of the training datasets, thereby emphasizing the need for a comprehensive and meticulously annotated ballast aggregate dataset. This paper presents the development of a multi-dimensional ballast aggregate dataset, constructed using empirical data collected from field and laboratory environments, supplemented with synthetic data generated by a proprietary ballast particle generator. The dataset comprises both two-dimensional (2D) data, consisting of ballast images annotated with 2D masks for particle localization, and three-dimensional (3D) data, including heightmaps, point clouds, and 3D annotations for particle localization. The data collection process encompassed various environmental lighting conditions and degradation states, ensuring extensive coverage and diversity within the training dataset. A previously developed 2D ballast particle segmentation model was trained on this augmented dataset, demonstrating high accuracy in field ballast inspections. This comprehensive database will be utilized in subsequent research to advance 3D ballast particle segmentation and shape completion, thereby facilitating enhanced inspection protocols and the development of effective ballast maintenance methodologies. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
19 pages, 7973 KiB  
Article
Determining Thresholds for Optimal Adaptive Discrete Cosine Transformation
by Alexander Khanov, Anastasija Shulzhenko, Anzhelika Voroshilova, Alexander Zubarev, Timur Karimov and Shakeeb Fahmi
Algorithms 2024, 17(8), 366; https://doi.org/10.3390/a17080366 - 21 Aug 2024
Viewed by 347
Abstract
The discrete cosine transform (DCT) is widely used for image and video compression. Lossy algorithms such as JPEG, WebP, BPG and many others are based on it. Multiple modifications of DCT have been developed to improve its performance. One of them is adaptive DCT (ADCT), designed to deal with heterogeneous image structure; it may be found, for example, in the HEVC video codec. Adaptivity means that the image is divided into an uneven grid of squares: smaller ones retain information about details better, while larger squares are efficient for homogeneous backgrounds. The practical use of adaptive DCT algorithms is complicated by the lack of optimal threshold search algorithms for image partitioning procedures. In this paper, we propose a novel method for optimal threshold search in ADCT using a metric based on tonal distribution. We define two thresholds: pm, the threshold defining solid mean coloring, and ps, defining the quadtree fragment splitting. In our algorithm, the values of these thresholds are calculated via polynomial functions of the tonal distribution of a particular image or fragment. The polynomial coefficients are determined using a dedicated optimization procedure on a dataset containing images from a specific domain, urban road scenes in our case. In the experimental part of the study, we show that ADCT allows a higher compression ratio than non-adaptive DCT at the same level of quality loss, up to 66% for acceptable quality. The proposed algorithm may be used directly for image compression, or as the core of a video compression framework in traffic-demanding applications, such as urban video surveillance systems. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
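A hedged sketch of the threshold-driven partitioning: blocks flatter than pm are painted with their mean, blocks busier than ps are split quadtree-style, and the rest would be DCT-coded. The constant thresholds here stand in for the paper's tonal-distribution polynomials:

```python
import numpy as np

def adaptive_partition(block, x, y, p_m, p_s, out):
    """Recursively split an image block: nearly uniform blocks (std < p_m)
    are encoded as their mean; high-variance blocks (std > p_s) are split
    into a quadtree; the rest would go to a DCT of the block's size.
    p_m and p_s stand in for the paper's tonal-distribution polynomials."""
    std = float(np.std(block))
    h, w = block.shape
    if std < p_m or min(h, w) <= 4:
        out.append((x, y, h, w, "mean", float(np.mean(block))))
    elif std > p_s:
        h2, w2 = h // 2, w // 2
        for dy, dx in ((0, 0), (0, w2), (h2, 0), (h2, w2)):
            adaptive_partition(block[dy:dy + h2, dx:dx + w2],
                               x + dx, y + dy, p_m, p_s, out)
    else:
        out.append((x, y, h, w, "dct", None))

img = np.random.rand(64, 64)
img[:32, :32] = 0.5                       # a homogeneous region
leaves = []
adaptive_partition(img, 0, 0, p_m=0.05, p_s=0.2, out=leaves)
print(len(leaves), leaves[0])
```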
16 pages, 5082 KiB  
Article
An Image Processing-Based Correlation Method for Improving the Characteristics of Brillouin Frequency Shift Extraction in Distributed Fiber Optic Sensors
by Yuri Konstantinov, Anton Krivosheev and Fedor Barkov
Algorithms 2024, 17(8), 365; https://doi.org/10.3390/a17080365 - 20 Aug 2024
Viewed by 531
Abstract
This paper demonstrates how the processing of Brillouin gain spectra (BGS) by two-dimensional correlation methods improves the accuracy of Brillouin frequency shift (BFS) extraction in distributed fiber optic sensor systems based on the BOTDA/BOTDR (Brillouin optical time domain analysis/reflectometry) principles. First, the spectra corresponding to different spatial coordinates of the fiber sensor are resampled. Subsequently, the resampled spectra are aligned by the position of the maximum by shifting in frequency relative to each other. The spectra aligned by the position of the maximum are then averaged, which effectively increases the signal-to-noise ratio (SNR). Finally, the Lorentzian curve fitting (LCF) method is applied to the spectrum with improved characteristics, including a reduced scanning step and an increased SNR. Simulations and experiments have demonstrated that the method is particularly efficacious when the signal-to-noise ratio does not exceed 8 dB and the frequency scanning step is coarser than 4 MHz. This is particularly relevant when designing high-speed sensors, as well as when using non-standard laser sources, such as a self-scanning frequency laser, for distributed fiber-optic sensing. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
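The align-average-fit pipeline is straightforward to sketch (the resampling step is omitted; the scan grid, linewidth, and noise level are assumptions, not the experimental values):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, w, a, c):
    return a / (1.0 + ((f - f0) / (w / 2)) ** 2) + c

rng = np.random.default_rng(1)
f = np.arange(10600.0, 11000.0, 4.0)             # MHz, coarse 4 MHz scan step
true_bfs = [10805.0, 10807.0, 10803.0, 10806.0]  # neighbouring fibre points
spectra = [lorentzian(f, f0, 60.0, 1.0, 0.0) + 0.2 * rng.standard_normal(f.size)
           for f0 in true_bfs]                   # noisy Brillouin gain spectra

# Align each spectrum on its maximum, then average to raise the SNR.
ref = int(np.argmax(spectra[0]))
aligned = [np.roll(s, ref - int(np.argmax(s))) for s in spectra]
avg = np.mean(aligned, axis=0)

# Lorentzian curve fitting on the improved spectrum extracts the BFS.
popt, _ = curve_fit(lorentzian, f, avg, p0=[f[np.argmax(avg)], 50.0, 1.0, 0.0])
print(f"BFS estimate: {popt[0]:.1f} MHz")
```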
29 pages, 8768 KiB  
Article
HRIDM: Hybrid Residual/Inception-Based Deeper Model for Arrhythmia Detection from Large Sets of 12-Lead ECG Recordings
by Syed Atif Moqurrab, Hari Mohan Rai and Joon Yoo
Algorithms 2024, 17(8), 364; https://doi.org/10.3390/a17080364 - 19 Aug 2024
Cited by 2 | Viewed by 336
Abstract
Heart diseases such as cardiovascular disease and myocardial infarction are the foremost causes of death in the world. The timely, accurate, and effective prediction of heart diseases is crucial for saving lives. Electrocardiography (ECG) is a primary non-invasive method to identify cardiac abnormalities. However, manual interpretation of ECG recordings for heart disease diagnosis is a time-consuming and error-prone process. For the accurate and efficient detection of heart diseases from the 12-lead ECG dataset, we have proposed a hybrid residual/inception-based deeper model (HRIDM). In this study, we have utilized multi-institutional large ECG datasets from various sources. The proposed model is trained on 12-lead ECG data from over 10,000 patients. We have compared the proposed model with several state-of-the-art (SOTA) models, such as LeNet-5, AlexNet, VGG-16, ResNet-50, Inception, and LSTM, on the same training and test datasets. To show the computational efficiency of the proposed model, we trained it for only 20 epochs without GPU support and achieved an accuracy of 50.87% on the test dataset for 27 categories of heart abnormalities. We found that our proposed model outperformed the previous studies which participated in the official PhysioNet/CinC Challenge 2020, ranking fourth among the 41 officially ranked teams. The results of this study indicate that the proposed model is a promising new method for predicting heart diseases using 12-lead ECGs. Full article
15 pages, 7315 KiB  
Article
Computer Vision Algorithms on a Raspberry Pi 4 for Automated Depalletizing
by Danilo Greco, Majid Fasihiany, Ali Varasteh Ranjbar, Francesco Masulli, Stefano Rovetta and Alberto Cabri
Algorithms 2024, 17(8), 363; https://doi.org/10.3390/a17080363 - 18 Aug 2024
Viewed by 468
Abstract
The primary objective of a depalletizing system is to automate the process of detecting and locating specific variable-shaped objects on a pallet, allowing a robotic system to accurately unstack them. Although many solutions exist for the problem in industrial and manufacturing settings, its application to small-scale scenarios such as retail vending machines and small warehouses has not received much attention so far. This paper presents a comparative analysis of four different computer vision algorithms for the depalletizing task, implemented on a Raspberry Pi 4, a very popular single-board computer with low computing power suitable for IoT and edge computing. The algorithms evaluated include the following: pattern matching, scale-invariant feature transform, Oriented FAST and Rotated BRIEF, and the Haar cascade classifier. Each technique is described, and its implementation is outlined. Their evaluation is performed on the task of box detection and localization in test images to assess their suitability in a depalletizing system. The performance of the algorithms is given in terms of accuracy, robustness to variability, computational speed, detection sensitivity, and resource consumption. The results reveal the strengths and limitations of each algorithm, providing valuable insights for selecting the most appropriate technique based on the specific requirements of a depalletizing system. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
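Of the four techniques, Oriented FAST and Rotated BRIEF is the most idiomatic to show on a Pi-class device. A hedged OpenCV sketch of template-to-scene matching; the image paths, feature budget, and ratio-test threshold are placeholders:

```python
import cv2

# Paths are placeholders: a grayscale template of a box face and a pallet scene.
template = cv2.imread("box_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("pallet_scene.png", cv2.IMREAD_GRAYSCALE)

# Oriented FAST and Rotated BRIEF: binary descriptors, cheap enough for a Pi 4.
orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Hamming-distance brute-force matching with a ratio test to drop weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_t, des_s, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
print(f"{len(good)} good matches")   # enough matches -> box located in the scene
```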
24 pages, 4114 KiB  
Systematic Review
Utilization of Machine Learning Algorithms for the Strengthening of HIV Testing: A Systematic Review
by Musa Jaiteh, Edith Phalane, Yegnanew A. Shiferaw, Karen Alida Voet and Refilwe Nancy Phaswana-Mafuya
Algorithms 2024, 17(8), 362; https://doi.org/10.3390/a17080362 - 17 Aug 2024
Viewed by 886
Abstract
Several machine learning (ML) techniques have demonstrated efficacy in precisely forecasting HIV risk and identifying the most eligible individuals for HIV testing in various countries. Nevertheless, there is a data gap on the utility of ML algorithms in strengthening HIV testing worldwide. This systematic review aimed to evaluate how effectively ML algorithms can enhance the efficiency and accuracy of HIV testing interventions and to identify key outcomes, successes, gaps, opportunities, and limitations in their implementation. This review was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines. A comprehensive literature search was conducted via PubMed, Google Scholar, Web of Science, Science Direct, Scopus, and Gale OneFile databases. Out of the 845 identified articles, 51 studies were eligible. More than 75% of the articles included in this review were conducted in the Americas and various parts of Sub-Saharan Africa, and a few were from Europe, Asia, and Australia. The most common algorithms applied were logistic regression, deep learning, support vector machine, random forest, extreme gradient booster, decision tree, and the least absolute shrinkage selection operator model. The findings demonstrate that ML techniques exhibit higher accuracy in predicting HIV risk/testing compared to traditional approaches. Machine learning models enhance early prediction of HIV transmission, facilitate viable testing strategies to improve the efficiency of testing services, and optimize resource allocation, ultimately leading to improved HIV testing. This review points to the positive impact of ML in enhancing early prediction of HIV spread, optimizing HIV testing approaches, improving efficiency, and eventually enhancing the accuracy of HIV diagnosis. We strongly recommend the integration of ML into HIV testing programs for efficient and accurate HIV testing. Full article
28 pages, 5276 KiB  
Article
Frequency-Domain and Spatial-Domain MLMVN-Based Convolutional Neural Networks
by Igor Aizenberg and Alexander Vasko
Algorithms 2024, 17(8), 361; https://doi.org/10.3390/a17080361 - 17 Aug 2024
Viewed by 327
Abstract
This paper presents a detailed analysis of a convolutional neural network based on multi-valued neurons (CNNMVN) and a fully connected multilayer neural network based on multi-valued neurons (MLMVN), employed here as a convolutional neural network in the frequency domain. We begin by providing an overview of the fundamental concepts underlying CNNMVN, focusing on the organization of convolutional layers and the CNNMVN learning algorithm. The error backpropagation rule for this network is justified and presented in detail. Subsequently, we consider how MLMVN can be used as a convolutional neural network in the frequency domain. It is shown that each neuron in the first hidden layer of MLMVN may work as a frequency-domain convolutional kernel, utilizing the Convolution Theorem. Essentially, these neurons create Fourier transforms of the feature maps that would have resulted from the convolutions in the spatial domain performed in regular convolutional neural networks. Furthermore, we discuss optimization techniques for both networks and compare the resulting convolutions to explore which features they extract from images. Finally, we present experimental results showing that both approaches can achieve high accuracy in image recognition. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
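The Convolution Theorem that lets first-hidden-layer neurons act as frequency-domain kernels is easy to verify numerically: an element-wise product of 2D DFTs equals a circular convolution in the spatial domain. A small NumPy check (not the MLMVN code):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kernel = np.zeros((8, 8))
kernel[:3, :3] = rng.standard_normal((3, 3))   # 3x3 kernel zero-padded to image size

# Spatial circular convolution, computed directly...
direct = np.zeros_like(img)
for u in range(8):
    for v in range(8):
        acc = 0.0
        for i in range(8):
            for j in range(8):
                acc += img[i, j] * kernel[(u - i) % 8, (v - j) % 8]
        direct[u, v] = acc

# ...equals an element-wise product in the frequency domain (Convolution Theorem).
via_fft = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
print(np.allclose(direct, via_fft))   # True
```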
12 pages, 6087 KiB  
Article
Detection of Subtle ECG Changes Despite Superimposed Artifacts by Different Machine Learning Algorithms
by Matthias Noitz, Christoph Mörtl, Carl Böck, Christoph Mahringer, Ulrich Bodenhofer, Martin W. Dünser and Jens Meier
Algorithms 2024, 17(8), 360; https://doi.org/10.3390/a17080360 - 16 Aug 2024
Viewed by 353
Abstract
Analyzing electrocardiographic (ECG) signals is crucial for evaluating heart function and diagnosing cardiac pathology. Traditional methods for detecting ECG changes often rely on offline analysis or subjective visual inspection, which may overlook subtle variations, particularly in the case of artifacts. In this theoretical, proof-of-concept study, we investigated the potential of five different machine learning algorithms [random forests (RFs), gradient boosting methods (GBMs), deep neural networks (DNNs), an ensemble learning technique, as well as logistic regression] to detect subtle changes in the morphology of synthetically generated ECG beats despite artifacts. Following the generation of a synthetic ECG beat using the standardized McSharry algorithm, the baseline ECG signal was modified by changing the amplitude of different ECG components by 0.01–0.06 mV. In addition, a Gaussian jitter of 0.1–0.3 mV was overlaid to simulate artifacts. The five machine learning algorithms were then applied to detect differences between the modified ECG beats. The highest discriminatory potency, as assessed by the discriminatory accuracy, was achieved by RFs and GBMs (accuracy of up to 1.0), whereas the least accurate results were obtained by logistic regression (accuracy approximately 10% lower). In a second step, a feature importance algorithm (Boruta) was used to determine which signal parts were responsible for difference detection. For all comparisons, only signal components that had been modified in advance were used for discrimination, demonstrating that the RF model focused on the appropriate signal elements. Our findings highlight the potential of RFs and GBMs as valuable tools for detecting subtle ECG changes despite artifacts, with implications for enhancing clinical diagnosis and monitoring. Further studies are needed to validate our findings with clinical data. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
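A stripped-down version of the experiment is sketched below with a toy Gaussian-bump beat in place of the McSharry model; the amplitude shift and jitter are chosen inside the abstract's stated ranges, and no particular accuracy is implied:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

def beat(r_amp):
    """Toy ECG beat (Gaussian bumps for P, R, T), a stand-in for McSharry."""
    p = 0.1 * np.exp(-((t - 0.2) / 0.03) ** 2)
    r = r_amp * np.exp(-((t - 0.5) / 0.01) ** 2)
    tw = 0.2 * np.exp(-((t - 0.75) / 0.05) ** 2)
    return p + r + tw

# Class 0: baseline R amplitude; class 1: R raised by 0.06 mV (within 0.01-0.06).
X, y = [], []
for label, amp in ((0, 1.00), (1, 1.06)):
    for _ in range(300):
        X.append(beat(amp) + 0.1 * rng.standard_normal(t.size))  # 0.1 mV jitter
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```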
21 pages, 347 KiB  
Article
Exploring Clique Transversal Variants on Distance-Hereditary Graphs: Computational Insights and Algorithmic Approaches
by Chuan-Min Lee
Algorithms 2024, 17(8), 359; https://doi.org/10.3390/a17080359 - 16 Aug 2024
Viewed by 303
Abstract
The clique transversal problem is a critical concept in graph theory, focused on identifying a minimum subset of vertices that intersects all maximal cliques in a graph. This problem and its variations—such as the k-fold clique, {k}-clique, minus clique, and signed clique transversal problems—have received significant interest due to their theoretical importance and practical applications. This paper examines the k-fold clique, {k}-clique, minus clique, and signed clique transversal problems on distance-hereditary graphs. Known for their distinctive structural properties, distance-hereditary graphs provide an ideal framework for studying these problem variants. By exploring these problems in the context of distance-hereditary graphs, this research enhances the understanding of the computational challenges and the potential for developing efficient algorithms to address them. Full article
21 pages, 3425 KiB  
Article
Directed Clustering of Multivariate Data Based on Linear or Quadratic Latent Variable Models
by Yingjuan Zhang and Jochen Einbeck
Algorithms 2024, 17(8), 358; https://doi.org/10.3390/a17080358 - 16 Aug 2024
Viewed by 326
Abstract
We consider situations in which the clustering of some multivariate data is desired, which establishes an ordering of the clusters with respect to an underlying latent variable. As our motivating example for a situation where such a technique is desirable, we consider scatterplots of traffic flow and speed, where a pattern of consecutive clusters can be thought to be linked by a latent variable, which is interpretable as traffic density. We focus on latent structures of linear or quadratic shapes, and present an estimation methodology based on expectation–maximization, which estimates both the latent subspace and the clusters along it. The directed clustering approach is summarized in two algorithms and applied to the traffic example outlined. Connections to related methodology, including principal curves, are briefly drawn. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
24 pages, 4557 KiB  
Article
A System Design Perspective for Business Growth in a Crowdsourced Data Labeling Practice
by Vahid Hajipour, Sajjad Jalali, Francisco Javier Santos-Arteaga, Samira Vazifeh Noshafagh and Debora Di Caprio
Algorithms 2024, 17(8), 357; https://doi.org/10.3390/a17080357 - 15 Aug 2024
Viewed by 269
Abstract
Data labeling systems are designed to facilitate the training and validation of machine learning algorithms under the umbrella of crowdsourcing practices. The current paper presents a novel approach for designing a customized data labeling system, emphasizing two key aspects: an innovative payment mechanism for users and an efficient configuration of output results. The main problem addressed is the labeling of datasets where golden items are utilized to verify user performance and assure the quality of the annotated outputs. Our proposed payment mechanism is enhanced through a modified skip-based golden-oriented function that balances user penalties and prevents spam activities. Additionally, we introduce a comprehensive reporting framework to measure aggregated results and accuracy levels, ensuring the reliability of the labeling output. Our findings indicate that the proposed solutions are pivotal in incentivizing user participation, thereby reinforcing the applicability and profitability of newly launched labeling systems. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
33 pages, 14331 KiB  
Article
A Virtual Machine Platform Providing Machine Learning as a Programmable and Distributed Service for IoT and Edge On-Device Computing: Architecture, Transformation, and Evaluation of Integer Discretization
by Stefan Bosse
Algorithms 2024, 17(8), 356; https://doi.org/10.3390/a17080356 - 15 Aug 2024
Viewed by 349
Abstract
Data-driven models used for predictive classification and regression tasks are commonly computed using floating-point arithmetic and powerful computers. We address the constraints of distributed sensor networks like the IoT, edge, and material-integrated computing, which provide only low-resource embedded computers with sensor data that are acquired and processed locally. Sensor networks are characterized by strongly heterogeneous systems. This work introduces and evaluates a virtual machine architecture that provides ML as a service layer (MLaaS) on the node level and addresses very low-resource distributed embedded computers (with less than 20 kB of RAM). The VM provides a unified ML instruction set architecture that can be programmed to implement decision tree, ANN, and CNN model architectures using scaled integer arithmetic only. Models are trained primarily offline using floating-point arithmetic and finally converted by an iterative scaling and transformation process, demonstrated in this work by two tests based on simulated and synthetic data. This paper is an extended version of the FedCSIS 2023 conference paper, providing new algorithms and ML applications, including ANN/CNN-based regression and classification tasks studying the effects of discretization on classification and regression accuracy. Full article
(This article belongs to the Special Issue Algorithms for Network Systems and Applications)
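The float-to-scaled-integer conversion at the heart of such a VM can be illustrated with a fixed-point dense layer; the Q-format width and layer shapes here are assumptions, not the VM's actual instruction set:

```python
import numpy as np

FRAC_BITS = 8                      # Q-format fractional bits (assumed, not the paper's)
SCALE = 1 << FRAC_BITS

def quantize(x):
    """Float -> scaled int16: the offline conversion step."""
    return np.clip(np.round(x * SCALE), -32768, 32767).astype(np.int16)

def int_dense(x_q, w_q, b_q):
    """Integer-only dense layer: 32-bit accumulate, shift back to Q-format,
    ReLU via max. This is what a low-RAM node would execute at run time."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T   # integer MACs
    acc = (acc >> FRAC_BITS) + b_q                        # rescale the product
    return np.maximum(acc, 0).astype(np.int16)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)) * 0.5
b = rng.standard_normal(4) * 0.1
x = rng.standard_normal(8)

float_out = np.maximum(w @ x + b, 0)
int_out = int_dense(quantize(x), quantize(w), quantize(b))
print(float_out)
print(int_out.astype(float) / SCALE)   # close to float_out, small quantization error
```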
19 pages, 322 KiB  
Article
Multi-Objective Unsupervised Feature Selection and Cluster Based on Symbiotic Organism Search
by Abbas Fadhil Jasim AL-Gburi, Mohd Zakree Ahmad Nazri, Mohd Ridzwan Bin Yaakub and Zaid Abdi Alkareem Alyasseri
Algorithms 2024, 17(8), 355; https://doi.org/10.3390/a17080355 - 14 Aug 2024
Viewed by 406
Abstract
Unsupervised learning is a type of machine learning that learns from data without human supervision. Unsupervised feature selection (UFS) is crucial in data analytics, playing a vital role in enhancing the quality of results and reducing computational complexity in huge feature spaces. The UFS problem has been addressed in several research efforts. Recent studies have witnessed a surge in innovative techniques like nature-inspired algorithms for clustering and UFS problems. However, very few studies consider the UFS problem as a multi-objective problem to find the optimal trade-off between the number of selected features and model accuracy. This paper proposes a multi-objective symbiotic organism search algorithm for unsupervised feature selection (SOSUFS) and a symbiotic organism search-based clustering (SOSC) algorithm to generate the optimal feature subset for more accurate clustering. The efficiency and robustness of the proposed algorithm are investigated on benchmark datasets. The SOSUFS method, combined with SOSC, demonstrated the highest f-measure, whereas the KHCluster method resulted in the lowest f-measure; SOSUFS also effectively reduced the number of features by more than half. In summary, this empirical study indicates that the proposed algorithm significantly surpasses state-of-the-art algorithms in both efficiency and effectiveness. Full article
20 pages, 6532 KiB  
Article
Machine Learning Analysis Using the Black Oil Model and Parallel Algorithms in Oil Recovery Forecasting
by Bazargul Matkerim, Aksultan Mukhanbet, Nurislam Kassymbek, Beimbet Daribayev, Maksat Mustafin and Timur Imankulov
Algorithms 2024, 17(8), 354; https://doi.org/10.3390/a17080354 - 14 Aug 2024
Viewed by 429
Abstract
The accurate forecasting of oil recovery factors is crucial for the effective management and optimization of oil production processes. This study explores the application of machine learning methods, specifically focusing on parallel algorithms, to enhance traditional reservoir simulation frameworks using black oil models. This research involves four main steps: collecting a synthetic dataset, preprocessing it, modeling and predicting the oil recovery factors with various machine learning techniques, and evaluating the model’s performance. The analysis was carried out on a synthetic dataset containing parameters such as porosity, pressure, and the viscosity of oil and gas. By utilizing parallel computing, particularly GPUs, this study demonstrates significant improvements in processing efficiency and prediction accuracy. While maintaining the value of the R2 metric in the range of 0.97, using data parallelism sped up the learning process by, at best, 10.54 times. Neural network training was accelerated almost 8 times when running on a GPU. These findings underscore the potential of parallel machine learning algorithms to revolutionize the decision-making processes in reservoir management, offering faster and more precise predictive tools. This work not only contributes to computational sciences and reservoir engineering but also opens new avenues for the integration of advanced machine learning and parallel computing methods in optimizing oil recovery. Full article
19 pages, 1604 KiB  
Article
An Efficient AdaBoost Algorithm for Enhancing Skin Cancer Detection and Classification
by Seham Gamil, Feng Zeng, Moath Alrifaey, Muhammad Asim and Naveed Ahmad
Algorithms 2024, 17(8), 353; https://doi.org/10.3390/a17080353 - 12 Aug 2024
Viewed by 846
Abstract
Skin cancer is a prevalent and perilous form of cancer and presents significant diagnostic challenges due to its high costs, dependence on medical experts, and time-consuming procedures. The existing diagnostic process is inefficient and expensive, requiring extensive medical expertise and time. To tackle these issues, researchers have explored the application of artificial intelligence (AI) tools, particularly machine learning techniques such as shallow and deep learning, to enhance the diagnostic process for skin cancer. These tools employ computer algorithms and deep neural networks to identify and categorize skin cancer. However, accurately distinguishing between skin cancer and benign tumors remains challenging, necessitating the extraction of pertinent features from image data for classification. This study addresses these challenges by employing Principal Component Analysis (PCA), a dimensionality-reduction approach, to extract relevant features from skin images. Additionally, accurately classifying skin images into malignant and benign categories presents another obstacle. To improve accuracy, the AdaBoost algorithm is utilized, which amalgamates weak classification models into a robust classifier with high accuracy. This research introduces a novel approach to skin cancer diagnosis by integrating Principal Component Analysis (PCA), AdaBoost, and EfficientNet B0, leveraging artificial intelligence (AI) tools. The novelty lies in the combination of these techniques to develop a robust and accurate system for skin cancer classification. The advantage of this approach is its ability to significantly reduce costs, minimize reliance on medical experts, and expedite the diagnostic process. The developed model achieved an accuracy of 93.00% using the DermIS dataset and demonstrated excellent precision, recall, and F1-score values, confirming its ability to correctly classify skin lesions as malignant or benign. Additionally, the model achieved an accuracy of 91.00% using the ISIC dataset, which is widely recognized for its comprehensive collection of annotated dermoscopic images, providing a robust foundation for training and validation. These advancements have the potential to significantly enhance the efficiency and accuracy of skin cancer diagnosis and classification. Ultimately, the integration of AI tools and techniques in skin cancer diagnosis can lead to cost reduction and improved patient outcomes, benefiting both patients and healthcare providers. Full article
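The PCA-plus-AdaBoost stage maps directly onto scikit-learn. A hedged sketch with random placeholder features standing in for EfficientNet B0 embeddings (so the score is chance level by construction):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for EfficientNet B0 embeddings of skin images.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 256))
y = rng.integers(0, 2, 400)                  # 0 = benign, 1 = malignant (synthetic)

# Dimensionality reduction with PCA, then boosted weak learners.
clf = make_pipeline(PCA(n_components=32), AdaBoostClassifier(n_estimators=100))
print(cross_val_score(clf, X, y, cv=5).mean())   # ~0.5 on random labels, by design
```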
26 pages, 501 KiB  
Article
In-Depth Analysis of GAF-Net: Comparative Fusion Approaches in Video-Based Person Re-Identification
by Moncef Boujou, Rabah Iguernaissi, Lionel Nicod, Djamal Merad and Séverine Dubuisson
Algorithms 2024, 17(8), 352; https://doi.org/10.3390/a17080352 - 11 Aug 2024
Viewed by 656
Abstract
This study provides an in-depth analysis of GAF-Net, a novel model for video-based person re-identification (Re-ID) that matches individuals across different video sequences. GAF-Net combines appearance-based features with gait-based features derived from skeletal data, offering a new approach that diverges from traditional silhouette-based methods. We thoroughly examine each module of GAF-Net and explore various fusion methods at both the score and feature levels, extending beyond initial simple concatenation. Comprehensive evaluations on the iLIDS-VID and MARS datasets demonstrate GAF-Net’s effectiveness across scenarios. GAF-Net achieves state-of-the-art 93.2% rank-1 accuracy on iLIDS-VID’s long sequences, while MARS results (86.09% mAP, 89.78% rank-1) reveal challenges with shorter, variable sequences in complex real-world settings. We demonstrate that integrating skeleton-based gait features consistently improves Re-ID performance, particularly with long, more informative sequences. This research provides crucial insights into multi-modal feature integration in Re-ID tasks, laying a foundation for the advancement of multi-modal biometric systems for diverse computer vision applications. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
22 pages, 8170 KiB  
Article
Multi-Objective Resource-Constrained Scheduling in Large and Repetitive Construction Projects
by Vasiliki Lazari, Athanasios Chassiakos and Stylianos Karatzas
Algorithms 2024, 17(8), 351; https://doi.org/10.3390/a17080351 - 10 Aug 2024
Viewed by 725
Abstract
Effective resource management constitutes a cornerstone of construction project success. This is a challenging combinatorial optimization problem with multiple and contradictory objectives whose complexity rises disproportionally with the project size and special characteristics (e.g., repetitive projects). While relevant work exists, there is still a need for thorough modeling of the practical implications of non-optimal decisions. This study proposes a multi-objective model, which can realistically represent the actual loss from not meeting the resource utilization priorities and constraints of a given project, including parameters that assess the cost of exceeding the daily resource availability, the cost of moving resources in and out of the worksite, and the cost of delaying the project completion. Optimization is performed using Genetic Algorithms, with problem setups organized in a spreadsheet format for enhanced readability and the solving is conducted via commercial software. A case study consisting of 16 repetitive projects, totaling 160 activities, tested under different objective and constraint scenarios is used to evaluate the algorithm effectiveness in different project management priorities. The main study conclusions emphasize the importance of conducting multiple analyses for effective decision-making, the increasing necessity for formal optimization as a project’s size and complexity increase, and the significant support that formal optimization provides in customizing resource allocation decisions in construction projects. Full article
26 pages, 513 KiB  
Article
A Non-Smooth Numerical Optimization Approach to the Three-Point Dubins Problem (3PDP)
by Mattia Piazza, Enrico Bertolazzi and Marco Frego
Algorithms 2024, 17(8), 350; https://doi.org/10.3390/a17080350 - 10 Aug 2024
Viewed by 567
Abstract
This paper introduces a novel non-smooth numerical optimization approach for solving the Three-Point Dubins Problem (3PDP). The 3PDP requires determining the shortest path of bounded curvature that connects given initial and final positions and orientations while traversing a specified waypoint. The inherent discontinuity of this problem precludes the use of conventional optimization algorithms. We propose two innovative methods specifically designed to address this challenge. These methods not only effectively solve the 3PDP but also offer significant computational efficiency improvements over existing state-of-the-art techniques. Our contributions include the formulation of these new algorithms, a detailed analysis of their theoretical foundations, and their implementation. Additionally, we provide a thorough comparison with current leading approaches, demonstrating the superior performance of our methods in terms of accuracy and computational speed. This work advances the field of path planning in robotics, providing practical solutions for applications requiring efficient and precise motion planning. Full article
3 pages, 138 KiB  
Editorial
Editorial for the Special Issue on “Recent Advances in Nonsmooth Optimization and Analysis”
by Sorin-Mihai Grad
Algorithms 2024, 17(8), 349; https://doi.org/10.3390/a17080349 - 9 Aug 2024
Viewed by 504
Abstract
In recent years, nonsmooth optimization and analysis have seen remarkable advancements, significantly impacting various scientific and engineering disciplines [...] Full article
(This article belongs to the Special Issue Recent Advances in Nonsmooth Optimization and Analysis)
18 pages, 1001 KiB  
Article
The Parallel Machine Scheduling Problem with Different Speeds and Release Times in the Ore Hauling Operation
by Luis Tarazona-Torres, Ciro Amaya, Alvaro Paipilla, Camilo Gomez and David Alvarez-Martinez
Algorithms 2024, 17(8), 348; https://doi.org/10.3390/a17080348 - 8 Aug 2024
Viewed by 598
Abstract
Ore hauling operations are crucial within the mining industry as they supply essential minerals to production plants. Conducted with sophisticated and high-cost operational equipment, these operations demand meticulous planning to ensure that production targets are met while optimizing equipment utilization. In this study, we present an algorithm to determine the minimum amount of hauling equipment required to meet the ore transport target. To achieve this, a mathematical model has been developed, considering it as a parallel machine scheduling problem with different speeds and release times, focusing on minimizing both the completion time and the costs associated with equipment use. Additionally, another algorithm was developed to allow the tactical evaluation of these two variables. These procedures and the model contribute significantly to decision-makers by providing a systematic approach to resource allocation, ensuring that loading and hauling equipment are utilized to their fullest potentials while adhering to budgetary constraints and operational schedules. This approach optimizes resource usage and improves operational efficiency, facilitating continuous improvement in mining operations. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
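For intuition, a greedy earliest-completion heuristic for machines with different speeds and jobs with release times can be sketched in a few lines; this is a baseline illustration, not the paper's mathematical model or algorithms:

```python
def greedy_schedule(jobs, speeds):
    """Assign each job (release_time, work) to the machine that would finish
    it earliest; processing time = work / speed. A simple heuristic baseline
    for the hauling-equipment setting."""
    free = [0.0] * len(speeds)        # time each machine becomes available
    makespan = 0.0
    for release, work in sorted(jobs):            # process by release time
        # Pick the machine that completes this job earliest.
        finish, i = min((max(free[i], release) + work / s, i)
                        for i, s in enumerate(speeds))
        free[i] = finish
        makespan = max(makespan, finish)
    return makespan

jobs = [(0, 10), (2, 6), (2, 8), (5, 4), (7, 12)]   # (release, tonnes of work)
print(greedy_schedule(jobs, speeds=[1.0, 1.5]))     # trucks with different speeds
```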
19 pages, 4938 KiB  
Article
Classification and Regression of Pinhole Corrosions on Pipelines Based on Magnetic Flux Leakage Signals Using Convolutional Neural Networks
by Yufei Shen and Wenxing Zhou
Algorithms 2024, 17(8), 347; https://doi.org/10.3390/a17080347 - 8 Aug 2024
Viewed by 596
Abstract
Pinhole corrosions on oil and gas pipelines are difficult to detect and size and, therefore, pose a significant challenge to the pipeline integrity management practice. This study develops two convolutional neural network (CNN) models to identify pinholes and predict the sizes and location of the pinhole corrosions according to the magnetic flux leakage signals generated using the magneto-static finite element analysis. Extensive three-dimensional parametric finite element analysis cases are generated to train and validate the two CNN models. Additionally, comprehensive algorithm analysis evaluates the model performance, providing insights into the practical application of CNN models in pipeline integrity management. The proposed classification CNN model is shown to be highly accurate in classifying pinholes and pinhole-in-general corrosion defects. The proposed regression CNN model is shown to be highly accurate in predicting the location of the pinhole and obtain a reasonably high accuracy in estimating the depth and diameter of the pinhole, even in the presence of measurement noises. This study indicates the effectiveness of employing deep learning algorithms to enhance the integrity management practice of corroded pipelines. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))
24 pages, 8078 KiB  
Article
EEG Channel Selection for Stroke Patient Rehabilitation Using BAT Optimizer
by Mohammed Azmi Al-Betar, Zaid Abdi Alkareem Alyasseri, Noor Kamal Al-Qazzaz, Sharif Naser Makhadmeh, Nabeel Salih Ali and Christoph Guger
Algorithms 2024, 17(8), 346; https://doi.org/10.3390/a17080346 - 8 Aug 2024
Viewed by 665
Abstract
Stroke, a major cause of mortality worldwide, disrupts cerebral blood flow, leading to severe brain damage. Hemiplegia, a common consequence, results in the loss of motor function on one side of the body. Many stroke survivors face long-term motor impairments and require extensive rehabilitation. Electroencephalograms (EEGs) provide a non-invasive method to monitor brain activity and have been used in brain–computer interfaces (BCIs) to help in rehabilitation. Motor imagery (MI) tasks, detected through EEG, are pivotal for developing BCIs that assist patients in regaining motor function. However, interpreting EEG signals for MI tasks remains challenging due to their complexity and low signal-to-noise ratio. The main aim of this study is to optimize channel selection in EEG-based BCIs specifically for stroke rehabilitation. Determining the most informative EEG channels is crucial for capturing the neural signals related to motor impairments in stroke patients. In this paper, a binary bat algorithm (BA)-based optimization method is proposed to select the most relevant channels tailored to the unique neurophysiological changes in stroke patients. This approach is able to enhance BCI performance by improving classification accuracy and reducing data dimensionality. We use time–entropy–frequency (TEF) attributes, processed through automated independent component analysis with wavelet transform (AICA-WT) denoising, to enhance signal clarity. The selected channels and features are validated with a k-nearest neighbor (KNN) classifier on public BCI datasets, demonstrating improved classification of MI tasks and the potential for better rehabilitation outcomes. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms in Healthcare)
34 pages, 433 KiB  
Article
Precedence Table Construction Algorithm for CFGs Regardless of Being OPGs
by Leonardo Lizcano, Eduardo Angulo and José Márquez
Algorithms 2024, 17(8), 345; https://doi.org/10.3390/a17080345 - 7 Aug 2024
Viewed by 550
Abstract
Operator precedence grammars (OPG) are context-free grammars (CFG) that are characterized by the absence of two adjacent non-terminal symbols in the body of each production (right-hand side). Operator precedence languages (OPL) are deterministic and context-free. Three possible precedence relations between pairs of terminal symbols are established for these languages. Many CFGs are not OPGs because the operator precedence cannot be applied to them as they do not comply with the basic rule. To solve this problem, we have conducted a thorough redefinition of the Left and Right sets of terminals that are the basis for calculating the precedence relations, and we have defined a new Leftmost set. The algorithms for calculating them are also described in detail. Our work’s most significant contribution is that we establish precedence relationships between terminals by overcoming the basic rule of not having two consecutive non-terminals using an algorithm that allows building the operator precedence table for a CFG regardless of whether it is an OPG. The paper shows the complexities of the proposed algorithms and possible exceptions to the proposed rules. We present examples by using an OPG and two non-OPGs to illustrate the operation of the proposed algorithms. With these, the operator precedence table is built, and bottom-up parsing is carried out correctly. Full article
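The classical Left/Right terminal sets that precedence relations build on can be computed by fixed-point iteration. The sketch below uses a textbook expression grammar and assumes the classical OPG case (no two adjacent non-terminals); the paper's redefined sets and new Leftmost set go beyond this:

```python
# Fixed-point computation of the classical LEFT/RIGHT terminal sets used to
# derive operator-precedence relations. Grammar: E -> E + T | T, T -> T * F | F,
# F -> ( E ) | id.
grammar = {
    "E": [["E", "+", "T"], ["T"]],
    "T": [["T", "*", "F"], ["F"]],
    "F": [["(", "E", ")"], ["id"]],
}
nonterminals = set(grammar)

def terminal_sets(pick):
    """pick(body) returns the production body scanned from one end:
    left-to-right for the LEFT sets, right-to-left for the RIGHT sets."""
    sets = {a: set() for a in grammar}
    changed = True
    while changed:
        changed = False
        for a, bodies in grammar.items():
            for body in bodies:
                symbols = pick(body)
                first = symbols[0]
                add = set()
                if first in nonterminals:
                    add |= sets[first]            # inherit through the nonterminal
                    if len(symbols) > 1 and symbols[1] not in nonterminals:
                        add.add(symbols[1])       # adjacent terminal (OPG case)
                else:
                    add.add(first)
                if not add <= sets[a]:
                    sets[a] |= add
                    changed = True
    return sets

print(terminal_sets(lambda b: b))          # LEFT sets, e.g. E: {+, *, (, id}
print(terminal_sets(lambda b: b[::-1]))    # RIGHT sets, e.g. E: {+, *, ), id}
```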