Journal Description
Algorithms
Algorithms is a peer-reviewed, open access journal which provides an advanced forum for studies related to algorithms and their applications. Algorithms is published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: CiteScore - Q2 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15 days after submission, and accepted papers are published 2.9 days after acceptance (median values for papers published in this journal in the second half of 2023).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.3 (2022)
5-Year Impact Factor: 2.2 (2022)
Latest Articles
The Knapsack Problem with Conflict Pair Constraints on Bipartite Graphs and Extensions
Algorithms 2024, 17(5), 219; https://doi.org/10.3390/a17050219 - 18 May 2024
Abstract
In this paper, we study the knapsack problem with conflict pair constraints (KPCC). After a thorough literature survey on the topic, our study focuses on the special case of bipartite conflict graphs. For complete bipartite (multipartite) conflict graphs, the problem is shown to be NP-hard but solvable in pseudo-polynomial time, and it admits an FPTAS. Extensions of these results to more general classes of graphs are also presented. Further, a class of integer programming models for the general KPCC is presented, which generalizes and unifies the existing formulations. The strength of the LP relaxations of these formulations is analyzed, and we discuss different ways to tighten them. Experimental comparisons of these models are also presented to assess their relative strengths. This analysis disclosed various strong and weak points of different formulations of the problem and their relationships to different types of problem data. This information can be used in designing special-purpose algorithms for KPCC involving a learning component.
Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
Open Access Article
Boundary SPH for Robust Particle–Mesh Interaction in Three Dimensions
by
Ryan Kim and Paul M. Torrens
Algorithms 2024, 17(5), 218; https://doi.org/10.3390/a17050218 - 16 May 2024
Abstract
This paper introduces an algorithm to tackle the boundary condition (BC) problem, which has long persisted in the numerical and computational treatment of smoothed particle hydrodynamics (SPH). Central to the BC problem is a need for an effective method to reconcile a numerical representation of particles with 2D or 3D geometry. We describe and evaluate an algorithmic solution—boundary SPH (BSPH)—drawn from a novel twist on the mesh-based boundary method, allowing SPH particles to interact (directly and implicitly) with either convex or concave 3D meshes. The method draws inspiration from existing works in graphics, particularly discrete signed distance fields, to determine whether particles intersect with or are submerged beneath mesh triangles. We evaluate the efficacy of BSPH through application to several simulation environments of varying mesh complexity, showing practical real-time implementation in Unity3D and its high-level shader language (HLSL), which we use to parallelize particle operations. To examine robustness, we portray slip and no-slip conditions in simulation, and we separately evaluate convex and concave meshes. To demonstrate empirical utility, we show pressure gradients as measured in simulated still-water tank implementations of hydrodynamics. Our results show that BSPH, despite producing irregular pressure values among particles close to the boundary manifolds of the meshes, successfully prevents particles from intersecting or submerging into the boundary manifold. Average FPS calculations for each simulation scenario show that the mesh boundary method can still be used effectively in simple simulation scenarios. We additionally point the reader to future work that could investigate the effect of simulation parameters and scene complexity on simulation performance, resolve abnormal pressure values along the mesh boundary, and test the method’s robustness on a wider variety of simulation environments.
Full article
(This article belongs to the Special Issue Geometric Algorithms and Applications)
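The boundary-handling idea here, using a signed distance query to detect and resolve particle penetration, can be illustrated far more simply than the paper's mesh-based BSPH. The sketch below (hypothetical names; a flat plane stands in for a triangle mesh) projects penetrating particles back to the surface and applies either a slip or no-slip velocity correction:

```python
import numpy as np

def enforce_plane_boundary(pos, vel, normal, offset, no_slip=False):
    """Push particles back to the surface of the half-space n.x >= offset
    and adjust velocities (simplified signed-distance boundary, not the
    BSPH mesh method from the paper)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    sdf = pos @ n - offset          # signed distance of each particle
    inside = sdf < 0.0              # penetrating the boundary
    # project penetrating particles back onto the surface
    pos = pos - np.outer(np.where(inside, sdf, 0.0), n)
    v_n = vel @ n                   # normal velocity component
    if no_slip:
        vel = np.where(inside[:, None], 0.0, vel)   # kill all motion
    else:  # free-slip: remove only the inward normal component
        correction = np.where(inside & (v_n < 0), v_n, 0.0)
        vel = vel - np.outer(correction, n)
    return pos, vel

pos = np.array([[0.0, -0.5], [0.0, 1.0]])
vel = np.array([[1.0, -2.0], [1.0, -2.0]])
new_pos, new_vel = enforce_plane_boundary(pos, vel, normal=[0.0, 1.0], offset=0.0)
print(new_pos)   # first particle lifted back to y = 0
print(new_vel)   # its inward normal velocity removed (slip condition)
```

A real mesh boundary replaces the plane's signed distance with a per-triangle (or discretized field) query, which is where the paper's contribution lies.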
Open Access Article
Fault Location Method Based on Dynamic Operation and Maintenance Map and Common Alarm Points Analysis
by
Sheng Wu and Jihong Guan
Algorithms 2024, 17(5), 217; https://doi.org/10.3390/a17050217 - 16 May 2024
Abstract
In a distributed information system, the scale of the various operational components, such as applications, operating systems, databases, servers, and networks, is immense, with intricate access relationships. The silo effect across professional domains is prominent, and linkage mechanisms are insufficient, making it difficult to locate the infrastructure components that cause exceptions under a particular application. Existing research is effective only in local scenarios, and its accuracy and generalization are still very limited. This paper proposes a novel fault location method based on dynamic operation maps and common alarm point analysis. During the fault period, various alarm entities are associated with dynamic operation maps, and common alarm points are obtained with graph search methods, covering common deployment relationship points, common connection points (physical and logical), and common access flow points. Compared with knowledge graph approaches, this method eliminates the complex process of knowledge graph construction, making it more concise and efficient. Furthermore, in contrast to indicator correlation analysis methods, this approach supplements configuration correlation information, resulting in more precise localization. In practical validation, its fault hit rate exceeds 82%, which is significantly better than existing mainstream methods.
Full article
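At its core, the common-point idea is a set intersection over a dependency graph: components shared by every alarmed application are root-cause suspects. A minimal sketch (the map and component names are invented for illustration; the paper's method also covers connection and access-flow relations):

```python
# toy "operation map": application -> infrastructure components it depends on
deploy_map = {
    "app_a": {"db1", "server1", "switch1"},
    "app_b": {"db2", "server1", "switch1"},
    "app_c": {"db3", "server2", "switch1"},
}

def common_alarm_points(alarmed_apps, dependency_map):
    """Return components shared by every alarmed application:
    the prime suspects for a common root cause."""
    sets = [dependency_map[a] for a in alarmed_apps]
    return set.intersection(*sets) if sets else set()

print(sorted(common_alarm_points(["app_a", "app_b"], deploy_map)))
# ['server1', 'switch1']
print(sorted(common_alarm_points(["app_a", "app_b", "app_c"], deploy_map)))
# ['switch1']
```

More alarmed applications narrow the intersection, which is why correlating alarms across silos sharpens localization.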
Open Access Article
An Interface to Monitor Process Variability Using the Binomial ATTRIVAR SS Control Chart
by
João Pedro Costa Violante, Marcela A. G. Machado, Amanda dos Santos Mendes and Túlio S. Almeida
Algorithms 2024, 17(5), 216; https://doi.org/10.3390/a17050216 - 16 May 2024
Abstract
Control charts are tools of paramount importance in statistical process control. They are broadly applied in monitoring processes and improving quality, as they allow the detection of special causes of variation with a significant level of accuracy. Furthermore, several strategies can be employed in different contexts, each offering its own advantages. This study focuses on monitoring the variability in univariate processes through variance using the Binomial version of the ATTRIVAR Same Sample S2 (B-ATTRIVAR SS S2) control chart, which allows coupling attribute and variable inspections (ATTRIVAR means attribute + variable), i.e., taking advantage of the cost-effectiveness of the former and the wealth of information and greater performance of the latter. The Binomial version was used because inspections are made using two attributes, and the Same Sample version because the same sample is submitted to both the attribute and variable stages of inspection. A computational application was developed in the R language using the Shiny package to create an interface that facilitates its application and use in the quality control of production processes. The application enables users to input process parameters and generate the B-ATTRIVAR SS control chart for monitoring process variability with variance. By comparing the data obtained from this application with a simpler code, its performance was validated, as its results exhibited striking similarity.
Full article
(This article belongs to the Special Issue Data-Driven Intelligent Modeling and Optimization Algorithms for Industrial Processes)
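The variance-monitoring building block behind such charts can be sketched without the ATTRIVAR machinery. The toy below (hypothetical parameters; Monte Carlo quantiles stand in for the exact chi-square probability limits) computes control limits for an S² chart:

```python
import numpy as np

def s2_chart_limits(sigma, n, alpha=0.0027, sims=200_000, seed=0):
    """Monte Carlo probability limits for an S^2 control chart with
    subgroup size n when the in-control standard deviation is sigma.
    (Exact limits use chi-square quantiles; simulation avoids that here.)"""
    rng = np.random.default_rng(seed)
    samples = rng.normal(0.0, sigma, size=(sims, n))
    s2 = samples.var(axis=1, ddof=1)          # subgroup sample variances
    lcl, ucl = np.quantile(s2, [alpha / 2, 1 - alpha / 2])
    return lcl, ucl

lcl, ucl = s2_chart_limits(sigma=2.0, n=5)
print(bool(lcl < 4.0 < ucl))   # True: in-control variance sits inside the limits
```

A subgroup whose sample variance falls outside (lcl, ucl) signals a special cause of variation, which is the event these charts are built to detect.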
Open Access Article
Motion Correction for Brain MRI Using Deep Learning and a Novel Hybrid Loss Function
by
Lei Zhang, Xiaoke Wang, Michael Rawson, Radu Balan, Edward H. Herskovits, Elias R. Melhem, Linda Chang, Ze Wang and Thomas Ernst
Algorithms 2024, 17(5), 215; https://doi.org/10.3390/a17050215 - 15 May 2024
Abstract
Purpose: Motion-induced magnetic resonance imaging (MRI) artifacts can deteriorate image quality and reduce diagnostic accuracy, but motion by human subjects is inevitable and can even be caused by involuntary physiological movements. Deep-learning-based motion correction methods might provide a solution. However, most studies have been based on directly applying existing models, and the trained models are rarely accessible. Therefore, we aim to develop and evaluate a deep-learning-based method (Motion Correction-Net, or MC-Net) for suppressing motion artifacts in brain MRI scans. Methods: A total of 57 subjects, providing 20,889 slices in four datasets, were used. Furthermore, 3T 3D sagittal magnetization-prepared rapid gradient-echo (MP-RAGE) and 2D axial fluid-attenuated inversion-recovery (FLAIR) sequences were acquired. The MC-Net was derived from a UNet combined with a two-stage multi-loss function. T1-weighted axial brain images contaminated with synthetic motions were used to train the network to remove motion artifacts. Evaluation used simulated T1- and T2-weighted axial, coronal, and sagittal images unseen during training, as well as T1-weighted images with motion artifacts from real scans. The performance indices included the peak-signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and visual reading scores from three blinded clinical readers. A one-sided Wilcoxon signed-rank test was used to compare reader scores, with p < 0.05 considered significant. Intraclass correlation coefficients (ICCs) were calculated for inter-rater evaluations. Results: The MC-Net outperformed other methods in terms of PSNR and SSIM for the T1 axial test set. 
The MC-Net significantly improved the quality of all T1-weighted images for all directions (i.e., the mean SSIM of axial, sagittal, and coronal slices improved from 0.77, 0.64, and 0.71 to 0.92, 0.75, and 0.84; the mean PSNR improved from 26.35, 24.03, and 24.55 to 29.72, 24.40, and 25.37, respectively) and for simulated as well as real motion artifacts, both using quantitative measures and visual scores. However, MC-Net performed poorly for images with untrained T2-weighted contrast because the T2 contrast was unseen during training and is different from T1 contrast. Conclusion: The proposed two-stage multi-loss MC-Net can effectively suppress motion artifacts in brain MRI without compromising image quality. Given the efficiency of MC-Net (with a single-image processing time of ~40 ms), it can potentially be used in clinical settings.
Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Biomedical Image Analysis and Applications)
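One of the quantitative metrics reported above, PSNR, is straightforward to compute. A sketch with synthetic images (this is the standard definition, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
light = np.clip(img + rng.normal(0, 0.01, img.shape), 0, 1)  # mild artifact
heavy = np.clip(img + rng.normal(0, 0.10, img.shape), 0, 1)  # strong artifact
print(bool(psnr(img, light) > psnr(img, heavy)))   # True: less noise, higher PSNR
```

In motion-correction studies like this one, a successful network raises PSNR (and SSIM) of the corrected image relative to the motion-free reference.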
Open Access Article
Improving 2–5 Qubit Quantum Phase Estimation Circuits Using Machine Learning
by
Charles Woodrum, Torrey Wagner and David Weeks
Algorithms 2024, 17(5), 214; https://doi.org/10.3390/a17050214 - 15 May 2024
Abstract
Quantum computing has the potential to solve problems that are currently intractable for classical computers with algorithms like Quantum Phase Estimation (QPE); however, noise significantly hinders the performance of today’s quantum computers. Machine learning has the potential to improve the performance of QPE algorithms, especially in the presence of noise. In this work, QPE circuits were simulated with varying levels of depolarizing noise to generate datasets of QPE output. In each case, the phase being estimated was generated with a phase gate, and each circuit modeled was defined by a randomly selected phase. The model accuracy, prediction speed, overfitting level, and variation in accuracy with noise level were determined for five machine learning algorithms. These attributes were compared to the traditional method of post-processing, and a 6×–36× improvement in model performance was noted, depending on the dataset. No algorithm was a clear winner when considering these four criteria, as the lowest-error model (a neural network) was also the slowest predictor; the algorithm with the lowest overfitting and fastest prediction time (linear regression) had the highest error level and a high degree of variation of error with noise. The XGBoost ensemble algorithm was judged to be the best tradeoff between these criteria due to its error level, prediction time, and low variation of error with noise. For the first time, a machine learning model was validated using a 2-qubit datapoint obtained from an IBMQ quantum computer. The best 2-qubit model predicted within 2% of the actual phase, while the traditional method had a 25% error.
Full article
(This article belongs to the Special Issue Quantum and Classical Artificial Intelligence)
Open Access Article
EPSOM-Hyb: A General Purpose Estimator of Log-Marginal Likelihoods with Applications in Probabilistic Graphical Models
by
Eric Chuu, Yabo Niu, Anirban Bhattacharya and Debdeep Pati
Algorithms 2024, 17(5), 213; https://doi.org/10.3390/a17050213 - 15 May 2024
Abstract
We consider the estimation of the marginal likelihood in Bayesian statistics, with primary emphasis on Gaussian graphical models, where the intractability of the marginal likelihood in high dimensions is a frequently researched problem. We propose a general algorithm that can be widely applied to a variety of problem settings and excels particularly when dealing with near log-concave posteriors. Our method builds upon a previously posited algorithm that uses MCMC samples to partition the parameter space and forms piecewise constant approximations over these partition sets as a means of estimating the normalizing constant. In this paper, we refine the aforementioned local approximations by taking advantage of the shape of the target distribution and leveraging an expectation propagation algorithm to approximate Gaussian integrals over rectangular polytopes. Our numerical experiments show the versatility and accuracy of the proposed estimator, even as the parameter space increases in dimension and becomes more complicated.
Full article
(This article belongs to the Collection Feature Papers in Randomized, Online and Approximation Algorithms)
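The underlying idea, partitioning the sampled region and treating the unnormalized posterior as piecewise constant, can be shown in one dimension where the true normalizing constant is known. This toy (not the EPSOM-Hyb algorithm; no expectation propagation step, and simple equal-width pieces instead of MCMC-derived partitions) recovers Z ≈ sqrt(2π) for a standard Gaussian:

```python
import numpy as np

def piecewise_constant_logZ(log_f, samples, n_bins=50, pad=4.0):
    """Estimate Z = integral of exp(log_f(x)) dx by partitioning the
    sampled region and treating exp(log_f) as constant on each piece
    (a 1-D toy version of partition-based normalizing-constant
    estimation)."""
    lo, hi = samples.min() - pad, samples.max() + pad
    edges = np.linspace(lo, hi, n_bins + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])   # evaluate once per piece
    widths = np.diff(edges)
    return np.log(np.sum(widths * np.exp(log_f(mids))))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 5000)        # stand-in for MCMC draws
log_f = lambda x: -0.5 * x**2               # unnormalized N(0, 1) density
estimate = np.exp(piecewise_constant_logZ(log_f, samples))
print(round(float(estimate), 3))            # close to sqrt(2*pi) ~ 2.5066
```

The paper's refinement replaces the crude constant on each piece with a Gaussian approximation integrated over the piece, which matters as the dimension grows.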
Open Access Article
A General Statistical Physics Framework for Assignment Problems
by
Patrice Koehl and Henri Orland
Algorithms 2024, 17(5), 212; https://doi.org/10.3390/a17050212 - 14 May 2024
Abstract
Linear assignment problems hold a pivotal role in combinatorial optimization, offering a broad spectrum of applications within the field of data sciences. They consist of assigning “agents” to “tasks” in a way that leads to a minimum total cost associated with the assignment. The assignment is balanced when the number of agents equals the number of tasks, with a one-to-one correspondence between agents and tasks, and unbalanced otherwise. Additional options and constraints may be imposed, such as allowing agents to perform multiple tasks or allowing tasks to be performed by multiple agents. In this paper, we propose a novel framework that can solve all these assignment problems employing methodologies derived from the field of statistical physics. We describe this formalism in detail and validate all its assertions. A major part of this framework is the definition of a concave effective free energy function that encapsulates the constraints of the assignment problem within a finite temperature context. We demonstrate that this free energy monotonically decreases as a function of a parameter representing the inverse of the temperature. As this parameter increases, the free energy converges to the optimal assignment cost. Furthermore, we demonstrate that when this parameter is sufficiently large, the exact solution to the assignment problem can be derived by rounding off the elements of the computed assignment matrix to the nearest integer. We describe a computer implementation of our framework and illustrate its application to multi-task assignment problems for which the Hungarian algorithm is not applicable.
Full article
(This article belongs to the Collection Feature Papers in Combinatorial Optimization, Graph, and Network Algorithms)
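A statistical-physics-flavored relaxation of the assignment problem can be sketched generically: form exp(-beta*C) at inverse temperature beta, normalize it toward a doubly stochastic matrix, and round once beta is large. This is a standard Sinkhorn-style illustration of the finite-temperature idea, not the paper's free-energy formalism (the planted instance and all names are invented):

```python
import numpy as np
from itertools import permutations

def soft_assign(cost, beta, iters=300):
    """Finite-temperature relaxation: Sinkhorn-normalize exp(-beta*C)
    toward a doubly stochastic matrix, then round row-wise.  As beta
    (inverse temperature) grows, the soft assignment sharpens toward
    the optimal permutation."""
    p = np.exp(-beta * (cost - cost.min()))
    for _ in range(iters):
        p /= p.sum(axis=1, keepdims=True)   # normalize rows
        p /= p.sum(axis=0, keepdims=True)   # normalize columns
    return p.argmax(axis=1)                 # round to a hard assignment

def brute_force(cost):
    """Exact minimum-cost assignment by enumeration (small n only)."""
    n = cost.shape[0]
    return min(permutations(range(n)),
               key=lambda perm: sum(cost[i, perm[i]] for i in range(n)))

rng = np.random.default_rng(0)
C = 0.5 + 0.5 * rng.random((5, 5))              # expensive background costs
target = [2, 0, 3, 4, 1]
C[np.arange(5), target] = 0.1 * rng.random(5)   # plant a cheap assignment
assignment = soft_assign(C, beta=100.0)
print(tuple(assignment) == brute_force(C))      # True: rounding recovers it
```

At small beta the normalized matrix spreads mass over many assignments; the rounding guarantee only kicks in once beta is large, mirroring the abstract's claim about sufficiently large values of the inverse-temperature parameter.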
Open Access Article
Solving Least-Squares Problems via a Double-Optimal Algorithm and a Variant of the Karush–Kuhn–Tucker Equation for Over-Determined Systems
by
Chein-Shan Liu, Chung-Lun Kuo and Chih-Wen Chang
Algorithms 2024, 17(5), 211; https://doi.org/10.3390/a17050211 - 14 May 2024
Abstract
A double optimal solution (DOS) of a least-squares problem is derived in a varying affine Krylov subspace (VAKS); two minimization techniques exactly determine the expansion coefficients of the solution in the VAKS. The minimal-norm solution can be obtained automatically regardless of whether the linear system is consistent or inconsistent. A new double optimal algorithm (DOA) is created; it is sufficiently time saving, inverting only a low-dimensional positive definite matrix at each iteration step. The properties of the DOA are investigated and an estimation of the residual error is provided. The residual norms are proven to be strictly decreasing over the iterations; hence, the DOA is absolutely convergent. Numerical tests reveal the efficiency of the DOA for solving least-squares problems. The DOA is applicable to least-squares problems regardless of whether they are over- or under-determined. The Moore–Penrose inverse matrix is also addressed by adopting the DOA; the accuracy and efficiency of the proposed method are proven. The VAKS is different from the traditional m-dimensional affine Krylov subspace used in the conjugate gradient (CG)-type iterative algorithms CGNR (or CGLS) and CGNE (or Craig’s method) for solving least-squares problems. We propose a variant of the Karush–Kuhn–Tucker equation, and then we apply the partial pivoting Gaussian elimination method to solve the variant, which performs better than the original Karush–Kuhn–Tucker equation, the CGNR and the CGNE for solving over-determined linear systems. Our main contribution is developing a double-optimization-based iterative algorithm in a varying affine Krylov subspace for effectively and accurately solving least-squares problems, even for dense and ill-conditioned matrices.
Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
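The paper's own KKT variant is not spelled out in the abstract, but the classic augmented formulation it builds on is standard. A sketch (hypothetical sizes) solving min ||Ax - b|| through the block system and checking it against a library least-squares solver:

```python
import numpy as np

def lstsq_kkt(A, b):
    """Solve min ||Ax - b||_2 via the augmented (KKT-style) system
        [ I   A ] [r]   [b]
        [ A^T 0 ] [x] = [0],
    where r = b - Ax is the residual.  This is the classic augmented
    formulation, shown for illustration; the paper derives its own
    variant of the Karush-Kuhn-Tucker equation."""
    m, n = A.shape
    K = np.block([[np.eye(m), A], [A.T, np.zeros((n, n))]])
    rhs = np.concatenate([b, np.zeros(n)])
    sol = np.linalg.solve(K, rhs)
    return sol[m:]                      # the x component

rng = np.random.default_rng(0)
A = rng.random((8, 3))                  # over-determined system
b = rng.random(8)
x_kkt = lstsq_kkt(A, b)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(bool(np.allclose(x_kkt, x_ref)))  # True
```

The second block row enforces the normal equations A^T(b - Ax) = 0, so the solve yields the least-squares minimizer; the augmented form is often better conditioned than forming A^T A explicitly.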
Open Access Article
Particle Swarm Optimization-Based Model Abstraction and Explanation Generation for a Recurrent Neural Network
by
Yang Liu, Huadong Wang and Yan Ma
Algorithms 2024, 17(5), 210; https://doi.org/10.3390/a17050210 - 13 May 2024
Abstract
In text classifier models, the complexity of recurrent neural networks (RNNs) is very high because of the vast state space and the uncertainty of transitions, which makes the RNN classifier’s explainability insufficient. It is almost impossible to explain a large-scale RNN directly. A feasible method is to generalize the rules underlying it, that is, model abstraction. To deal with the low efficiency and excessive information loss in existing model abstraction for RNNs, this work proposes a PSO (Particle Swarm Optimization)-based model abstraction and explanation generation method for RNNs. Firstly, k-means clustering is applied to preliminarily partition the states of the RNN decision process. Secondly, a frequency prefix tree is constructed based on the traces, and a PSO algorithm is designed to implement state merging to address the problem of the vast state space. Then, a PFA (probabilistic finite automaton) is constructed to explain the RNN structure while preserving the original RNN’s information as much as possible. Finally, quantitative keywords are labeled as an explanation for classification results, which are automatically generated with the abstract PFA model. We demonstrate the feasibility and effectiveness of the proposed method in several cases.
Full article
(This article belongs to the Special Issue Deep Learning for Anomaly Detection)
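The PSO component can be illustrated in its generic continuous form (the paper applies PSO to discrete state merging over a prefix tree, which is not reproduced here; the objective and all parameters below are a toy example):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization of a continuous function:
    each particle is pulled toward its personal best and the swarm's
    global best, with inertia w."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_val = pso_minimize(lambda z: np.sum(z**2), dim=3)
print("best value:", best_val)   # converges near the sphere minimum at 0
```

For state merging, the continuous position would be replaced by an encoding of a candidate merge, with fitness trading automaton size against information loss.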
Open Access Article
Metaheuristic and Heuristic Algorithms-Based Identification Parameters of a Direct Current Motor
by
David M. Munciño, Emily A. Damian-Ramírez, Mayra Cruz-Fernández, Luis A. Montoya-Santiyanes and Juvenal Rodríguez-Reséndiz
Algorithms 2024, 17(5), 209; https://doi.org/10.3390/a17050209 - 11 May 2024
Abstract
Direct current motors are widely used in industry applications, and it has become necessary to carry out studies and experiments for their optimization. In this manuscript, a comparison between heuristic and metaheuristic algorithms is presented, specifically, the Steiglitz–McBride, Jaya, Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO) algorithms. They were used to estimate the parameters of a dynamic model that approximates the actual responses of current and angular velocity of a DC motor. The inverse of the Euclidean distance between the current and velocity errors was defined as the fitness function for the metaheuristic algorithms. For a more comprehensive comparison between algorithms, other indicators such as mean squared error (MSE), standard deviation, computation time, and key points of the current and velocity responses were used. Simulations were performed with MATLAB/Simulink 2010 using the estimated parameters and compared to the experiments. The results showed that Steiglitz–McBride and GWO are better parametric estimators, performing better than Jaya and GA in real signals and nominal parameters. Indicators say that GWO is more accurate for parametric estimation, with an average MSE of 0.43%, but it requires a high computational cost. On the contrary, Steiglitz–McBride performed with an average MSE of 3.32% but required a much lower computational cost. The GWO presented an error of 1% in the dynamic response using the corresponding indicators. If a more accurate parametric estimation is required, it is recommended to use GWO; however, the heuristic algorithm performed better overall. The performance of the algorithms presented in this paper may change if different error functions are used.
Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)
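The estimation setup, fitting model parameters by minimizing the error between simulated and measured responses, can be sketched with a deliberately simplified model. Below, a first-order step response stands in for the full electromechanical DC motor model, plain random search stands in for Jaya/GA/GWO, and all parameter values are invented:

```python
import numpy as np

def step_response(K, tau, t):
    """First-order approximation of motor speed under a unit step input
    (a simplification of the full electromechanical model)."""
    return K * (1.0 - np.exp(-t / tau))

def identify(t, measured, n_trials=20000, seed=0):
    """Random-search parameter estimation minimizing the MSE between
    measured and simulated responses (a stand-in for the metaheuristics
    compared in the paper)."""
    rng = np.random.default_rng(seed)
    best, best_mse = None, np.inf
    for K, tau in zip(rng.uniform(0.1, 5.0, n_trials),
                      rng.uniform(0.01, 1.0, n_trials)):
        mse = np.mean((step_response(K, tau, t) - measured) ** 2)
        if mse < best_mse:
            best, best_mse = (K, tau), mse
    return best, best_mse

t = np.linspace(0, 1, 200)
rng = np.random.default_rng(42)
measured = step_response(2.0, 0.15, t) + rng.normal(0, 0.01, t.size)
(K_hat, tau_hat), mse = identify(t, measured)
print(round(float(K_hat), 2), round(float(tau_hat), 2))  # close to (2.0, 0.15)
```

Metaheuristics like GWO replace the blind sampling with guided search, which is what buys their accuracy at a higher computational cost.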
Open Access Article
Comparative Analysis of Classification Methods and Suitable Datasets for Protocol Recognition in Operational Technologies
by
Eva Holasova, Radek Fujdiak and Jiri Misurec
Algorithms 2024, 17(5), 208; https://doi.org/10.3390/a17050208 - 11 May 2024
Abstract
The interconnection of Operational Technology (OT) and Information Technology (IT) has created new opportunities for remote management, data storage in the cloud, real-time data transfer over long distances, and integration between different OT and IT networks. OT networks require increased attention because the convergence of IT and OT brings an increased risk of cyber-attacks targeting these networks. This paper focuses on the analysis of different methods and data processing for protocol recognition and traffic classification in the context of OT specifics. It summarizes the methods used to classify network traffic, analyzes the methods used to recognize and identify the protocol used in an industrial network, and describes machine learning methods for recognizing industrial protocols. The output of this work is a comparative analysis of approaches specifically for protocol recognition and traffic classification in OT networks. In addition, publicly available datasets are compared in relation to their applicability for industrial protocol recognition. Research challenges are also identified, highlighting the lack of relevant datasets and defining directions for further research in the area of protocol recognition and classification in OT environments.
Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
Open Access Article
Advanced Integration of Machine Learning Techniques for Accurate Segmentation and Detection of Alzheimer’s Disease
by
Esraa H. Ali, Sawsan Sadek, Georges Zakka El Nashef and Zaid F. Makki
Algorithms 2024, 17(5), 207; https://doi.org/10.3390/a17050207 - 10 May 2024
Abstract
Alzheimer’s disease is a common type of neurodegenerative condition characterized by progressive neural deterioration. The anatomical changes associated with individuals affected by Alzheimer’s disease include the loss of tissue in various areas of the brain. Magnetic Resonance Imaging (MRI) is commonly used as a noninvasive tool to assess the neural structure of the brain for diagnosing Alzheimer’s disease. In this study, an integrated Improved Fuzzy C-means method with improved watershed segmentation was employed to segment the brain tissue components affected by this disease. These segmented features were fed into a hybrid technique for classification. Specifically, a hybrid Convolutional Neural Network–Long Short-Term Memory classifier with 14 layers was developed in this study. The evaluation results revealed that the proposed method achieved an accuracy of 98.13% in classifying segmented brain images according to different disease severities.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
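The segmentation stage builds on Fuzzy C-means. The sketch below implements the textbook FCM update equations on synthetic 2-D points (the paper uses an improved FCM combined with watershed segmentation, which is not reproduced here; the data are invented):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=150, seed=0):
    """Plain Fuzzy C-means: alternate weighted-centroid updates and
    membership updates u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        W = U ** m                             # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
    return U, centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),   # cluster near (0, 0)
               rng.normal(5.0, 0.1, (50, 2))])  # cluster near (5, 5)
U, centers = fuzzy_c_means(X, c=2)
print(bool(np.allclose(U.sum(axis=1), 1.0)))    # valid membership matrix
print([int(round(v)) for v in sorted(centers[:, 0])])  # expected near [0, 5]
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is what makes FCM attractive for tissue boundaries in MRI.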
Open Access Article
Elite Multi-Criteria Decision Making—Pareto Front Optimization in Multi-Objective Optimization
by
Adarsh Kesireddy and F. Antonio Medrano
Algorithms 2024, 17(5), 206; https://doi.org/10.3390/a17050206 - 10 May 2024
Abstract
Optimization is a process of minimizing or maximizing a given objective function under specified constraints. In multi-objective optimization (MOO), multiple conflicting functions are optimized within defined criteria. Numerous MOO techniques have been developed utilizing various meta-heuristic methods such as Evolutionary Algorithms (EAs), Genetic Algorithms (GAs), and other biologically inspired processes. In a cooperative environment, a Pareto front is generated, and an MOO technique is applied to solve for the solution set. On the other hand, Multi-Criteria Decision Making (MCDM) is often used to select a single best solution from a set of provided solution candidates. The Multi-Criteria Decision Making–Pareto Front (M-PF) optimizer combines both of these techniques to find a quality set of heuristic solutions. This paper provides an improved version of the M-PF optimizer, called the elite Multi-Criteria Decision Making–Pareto Front (eMPF) optimizer. The eMPF method uses an evolutionary algorithm for the meta-heuristic process, generates a Pareto front, and applies MCDM to the Pareto front to rank the solutions in the set. The main objective of the new optimizer is to exploit the Pareto front while also exploring the solution area. The performance of the developed method was tested against M-PF, the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II), and the Non-Dominated Sorting Genetic Algorithm-III (NSGA-III). The test results demonstrate the superior performance of the new eMPF optimizer over M-PF, NSGA-II, and NSGA-III. eMPF was not only able to exploit the search domain but was also able to find better heuristic solutions for most of the test functions used.
Full article
(This article belongs to the Special Issue Recent Advances in Multi-Objective Algorithms and Optimization 2023–2024)
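The Pareto-dominance relation underlying the M-PF and eMPF optimizers can be illustrated with a minimal sketch (pure Python; the function names are ours, not the paper's):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimizing all objectives):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(points)
# (3, 4) is dominated by (2, 3); (5, 5) is dominated by every other point,
# so the front is [(1, 5), (2, 3), (4, 1)]
```

Non-dominated sorting, as used in NSGA-II and NSGA-III, repeatedly peels off such fronts from the remaining population.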
Open Access Article
Three-Way Alignment Improves Multiple Sequence Alignment of Highly Diverged Sequences
by
Mahbubeh Askari Rad, Alibek Kruglikov and Xuhua Xia
Algorithms 2024, 17(5), 205; https://doi.org/10.3390/a17050205 - 10 May 2024
Abstract
The standard approach for constructing a phylogenetic tree from a set of sequences consists of two key stages. First, a multiple sequence alignment (MSA) of the sequences is computed. The aligned data are then used to reconstruct the phylogenetic tree. The accuracy of the resulting tree relies heavily on the quality of the MSA. The quality of the widely used progressive sequence alignment depends on a guide tree, which determines the order in which sequences are aligned. Most MSA methods use pairwise comparisons to generate a distance matrix and reconstruct the guide tree. However, when dealing with highly diverged sequences, constructing a good guide tree is challenging. In this work, we propose an alternative approach using three-way dynamic programming alignment to generate the distance matrix and the guide tree. This three-way alignment incorporates information from additional sequences to compute evolutionary distances more accurately. Using simulated datasets on two trees, one symmetric and one asymmetric, we compared MAFFT with its default guide tree against MAFFT with a guide tree produced by the three-way alignment. We found that (1) the three-way alignment can reconstruct better guide trees than those from the most accurate options of MAFFT, and (2) the better guide tree, on average, leads to more accurate phylogenetic reconstruction. However, the improvement over the L-INS-i option of MAFFT is small, attesting to the excellence of MAFFT's alignment quality. Surprisingly, the two criteria for choosing the best MSA (phylogenetic accuracy and sum-of-pairs score) conflict with each other.
Full article
(This article belongs to the Special Issue Advanced Research on Machine Learning Algorithms in Bioinformatics)
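The distance matrices that drive guide-tree construction are typically derived from sequence comparisons. A minimal p-distance sketch (an illustrative stand-in, not the paper's three-way dynamic programming method) looks like this:

```python
def p_distance(s1, s2):
    """Fraction of mismatched positions between two aligned, equal-length sequences."""
    assert len(s1) == len(s2)
    mismatches = sum(a != b for a, b in zip(s1, s2))
    return mismatches / len(s1)

# Hypothetical pre-aligned sequences for illustration
seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "TCGTTCGA"}
names = list(seqs)
matrix = {(i, j): p_distance(seqs[i], seqs[j]) for i in names for j in names}
# matrix[("A", "B")] == 0.125  (1 mismatch out of 8 sites)
```

A clustering method such as neighbor joining would then turn this matrix into the guide tree; the paper's contribution is to compute the distances from three-way rather than pairwise alignments.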
Open Access Article
Three-Dimensional Finite Element Modeling of Ultrasonic Vibration-Assisted Milling of the Nomex Honeycomb Structure
by
Tarik Zarrouk, Mohammed Nouari, Jamal-Eddine Salhi, Mohammed Abbadi and Ahmed Abbadi
Algorithms 2024, 17(5), 204; https://doi.org/10.3390/a17050204 - 10 May 2024
Abstract
Machining of Nomex honeycomb composite (NHC) structures is critically important for manufacturing parts to the specifications required in the aerospace industry. However, the special characteristics of the Nomex honeycomb structure, including its composite nature and complex geometry, require a specific machining approach to avoid cutting defects and ensure optimal surface quality. To overcome this problem, this research proposes the adoption of RUM technology, in which ultrasonic vibrations are applied along the axis of revolution of the UCK cutting tool. To achieve this objective, a three-dimensional finite element model of Nomex honeycomb structure machining is developed with the Abaqus/Explicit software (2017 version). Based on this model, the research examines the impact of vibration amplitude on the machinability of this kind of structure, including the cutting force components, stress and strain distributions, surface quality, and chip size. In conclusion, the results highlight that the use of ultrasonic vibrations reduces the cutting force components by up to 42%, improves surface quality, and decreases chip size.
Full article
(This article belongs to the Special Issue Data-Driven Intelligent Modeling and Optimization Algorithms for Industrial Processes)
Open Access Article
Segmentation and Tracking Based on Equalized Memory Matching Network and Its Application in Electric Substation Inspection
by
Huanlong Zhang, Bin Zhou, Yangyang Tian and Zhe Li
Algorithms 2024, 17(5), 203; https://doi.org/10.3390/a17050203 - 10 May 2024
Abstract
With the wide application of deep learning, power inspection technology has made great progress. However, substation inspection videos often present challenges such as complex backgrounds, uneven lighting distribution, variations in the appearance of power equipment targets, and occlusions, which increase the difficulty of object segmentation and tracking, thereby adversely affecting the accuracy and reliability of power equipment condition monitoring. In this paper, a pixel-level equalized memory matching network (PEMMN) for intelligent power-inspection segmentation and tracking is proposed. First, an equalized memory matching network is designed to collect historical information about the target using a memory bank, in which a pixel-level equalized matching method ensures that the reference frame information is transferred to the current frame reliably, guiding the segmentation tracker to focus on the most informative region in the current frame. Then, to prevent memory explosion and the accumulation of segmentation template errors, a mask quality evaluation module is introduced to obtain the confidence level of the current segmentation result, so that only frames with high segmentation quality are stored and the reliability of the memory update is ensured. Finally, the synthetic feature map generated by the PEMMN and the mask quality assessment strategy are unified into the segmentation tracking framework to achieve accurate segmentation and robust tracking. Experimental results show that the method performs excellently on real substation inspection scenarios and three generalized datasets and has high practical value.
Full article
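Memory matching of the kind described can be sketched, in much simplified form, as a similarity lookup between a query pixel feature and a bank of stored features (cosine similarity and the toy 2-D features here are our illustrative choices, not the paper's equalized matching):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(query, memory_bank):
    """Index and score of the memory feature most similar to the query feature."""
    scores = [cosine(query, m) for m in memory_bank]
    idx = max(range(len(scores)), key=scores.__getitem__)
    return idx, scores[idx]

memory = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]  # hypothetical stored pixel features
idx, score = best_match((0.9, 0.1), memory)
# the query is closest to memory[0]
```

The paper's "equalized" matching additionally normalizes how much each reference pixel can contribute, which this naive argmax lookup does not capture.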
Open Access Article
Enforcing Traffic Safety: A Deep Learning Approach for Detecting Motorcyclists’ Helmet Violations Using YOLOv8 and Deep Convolutional Generative Adversarial Network-Generated Images
by
Maged Shoman, Tarek Ghoul, Gabriel Lanzaro, Tala Alsharif, Suliman Gargoum and Tarek Sayed
Algorithms 2024, 17(5), 202; https://doi.org/10.3390/a17050202 - 10 May 2024
Abstract
In this study, we introduce an innovative methodology for the detection of helmet usage violations among motorcyclists, integrating the YOLOv8 object detection algorithm with deep convolutional generative adversarial networks (DCGANs). The objective of this research is to enhance the precision of existing helmet violation detection techniques, which are typically reliant on manual inspection and susceptible to inaccuracies. The proposed methodology involves model training on an extensive dataset comprising both authentic and synthetic images, and demonstrates high accuracy in identifying helmet violations, including scenarios with multiple riders. Data augmentation, in conjunction with synthetic images produced by DCGANs, is utilized to expand the training data volume, particularly focusing on imbalanced classes, thereby facilitating superior model generalization to real-world circumstances. The stand-alone YOLOv8 model exhibited an F1 score of 0.91 for all classes at a confidence level of 0.617, whereas the DCGANs + YOLOv8 model demonstrated an F1 score of 0.96 for all classes at a reduced confidence level of 0.334. These findings highlight the potential of DCGANs in enhancing the accuracy of helmet rule violation detection, thus fostering safer motorcycling practices.
Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
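The F1 scores quoted above are the harmonic mean of precision and recall; a minimal sketch (the precision and recall values below are illustrative, not taken from the paper):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. a detector with precision 0.95 and recall 0.97
f1 = round(f1_score(0.95, 0.97), 2)  # 0.96, comparable to the DCGANs + YOLOv8 result
```

Because F1 varies with the confidence threshold, comparisons such as 0.91 at confidence 0.617 versus 0.96 at confidence 0.334 report each model at its own best operating point.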
Open Access Review
Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey
by
Christos Cholevas, Eftychia Angeli, Zacharoula Sereti, Emmanouil Mavrikos and George E. Tsekouras
Algorithms 2024, 17(5), 201; https://doi.org/10.3390/a17050201 - 9 May 2024
Abstract
In decentralized systems, ensuring the security and integrity of blockchain networks is a pressing concern. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, examining state-of-the-art algorithms that discern deviations from normal behavior patterns. It reviews the problem through a categorization of algorithms applied to a variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and of how this interplay can be used to counter malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, whose characteristics are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role; therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. This analysis is framed by a presentation of the typical anomalies that have occurred so far, along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper highlights challenges and directions that can serve as a comprehensive compendium for future research efforts.
Full article
(This article belongs to the Special Issue Deep Learning for Anomaly Detection)
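As a toy example of the statistical end of this spectrum, a z-score rule flags values far from the mean (a generic unsupervised baseline, not one of the specific algorithms surveyed; the transaction values are invented):

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Indices of values more than `threshold` standard deviations from the mean."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

tx_amounts = [10, 12, 11, 9, 10, 11, 500]  # hypothetical transaction values
flagged = zscore_anomalies(tx_amounts, threshold=2.0)
# index 6 (the 500 outlier) is flagged
```

Real blockchain anomaly detectors operate on richer structures (transaction graphs, address features, temporal sequences), which is why the survey's discussion of data structures matters.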
Open AccessArticle
A Sim-Learnheuristic for the Team Orienteering Problem: Applications to Unmanned Aerial Vehicles
by
Mohammad Peyman, Xabier A. Martin, Javier Panadero and Angel A. Juan
Algorithms 2024, 17(5), 200; https://doi.org/10.3390/a17050200 - 8 May 2024
Abstract
In this paper, we introduce a novel sim-learnheuristic method designed to address the team orienteering problem (TOP) with a particular focus on its application in the context of unmanned aerial vehicles (UAVs). Unlike most prior research, which primarily focuses on the deterministic and stochastic versions of the TOP, our approach considers a hybrid scenario, which combines deterministic, stochastic, and dynamic characteristics. The TOP involves visiting a set of customers using a team of vehicles to maximize the total collected reward. However, this hybrid version becomes notably complex due to the presence of uncertain travel times with dynamically changing factors. Some travel times are stochastic, while others are subject to dynamic factors such as weather conditions and traffic congestion. Our novel approach combines a savings-based heuristic algorithm, Monte Carlo simulations, and a multiple regression model. This integration incorporates the stochastic and dynamic nature of travel times, considering various dynamic conditions, and generates high-quality solutions in short computational times for the presented problem.
Full article
(This article belongs to the Special Issue Heuristic Optimization Algorithms for Logistics)
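Two of the ingredients named above can be sketched compactly: a Clarke-Wright-style savings computation and a Monte Carlo estimate of an uncertain travel time (the distance matrix and the uniform noise model are illustrative assumptions, not the paper's data):

```python
import random

def savings(d, nodes, depot=0):
    """Clarke-Wright-style savings s(i, j) = d(0, i) + d(0, j) - d(i, j),
    sorted in descending order; large savings suggest merging i and j into one route."""
    pairs = [(d[depot][i] + d[depot][j] - d[i][j], i, j)
             for i in nodes for j in nodes if i < j]
    return sorted(pairs, reverse=True)

def mc_travel_time(mean, spread, n=10_000, seed=42):
    """Monte Carlo estimate of an uncertain travel time, modelled here as uniform noise."""
    rng = random.Random(seed)
    return sum(mean + rng.uniform(-spread, spread) for _ in range(n)) / n

d = [[0, 4, 5, 6],
     [4, 0, 2, 7],
     [5, 2, 0, 3],
     [6, 7, 3, 0]]
top = savings(d, nodes=[1, 2, 3])[0]
# merging customers 2 and 3 yields the largest saving: 5 + 6 - 3 = 8
```

In the sim-learnheuristic, the simulated travel times would additionally be adjusted by a learned regression model reflecting dynamic conditions such as weather and congestion.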
Topics
Topic in
Algorithms, Diagnostics, Entropy, Information, J. Imaging
Application of Machine Learning in Molecular Imaging
Topic Editors: Allegra Conti, Nicola Toschi, Marianna Inglese, Andrea Duggento, Matthew Grech-Sollars, Serena Monti, Giancarlo Sportelli, Pietro Carra
Deadline: 31 May 2024
Topic in
Algorithms, Axioms, Fractal Fract, Mathematics, Symmetry
Fractal and Design of Multipoint Iterative Methods for Nonlinear Problems
Topic Editors: Xiaofeng Wang, Fazlollah Soleymani
Deadline: 30 June 2024
Topic in
Algorithms, Computation, Information, Mathematics
Complex Networks and Social Networks
Topic Editors: Jie Meng, Xiaowei Huang, Minghui Qian, Zhixuan Xu
Deadline: 31 July 2024
Topic in
Algorithms, Future Internet, Information, Mathematics, Symmetry
Research on Data Mining of Electronic Health Records Using Deep Learning Methods
Topic Editors: Dawei Yang, Yu Zhu, Hongyi Xin
Deadline: 31 August 2024
Special Issues
Special Issue in
Algorithms
Bio-Inspired Algorithms
Guest Editors: Sándor Szénási, Gábor Kertész
Deadline: 20 May 2024
Special Issue in
Algorithms
Algorithms for Smart Cities
Guest Editors: Gloria Cerasela Crisan, Elena Nechita
Deadline: 31 May 2024
Special Issue in
Algorithms
Algorithms for Games AI
Guest Editors: Wenxin Li, Haifeng Zhang
Deadline: 20 June 2024
Special Issue in
Algorithms
Recurrent Neural Networks: Algorithms Design and Applications for Safety Critical Systems
Guest Editor: Grazziela Patrocinio Figueredo
Deadline: 30 June 2024
Topical Collections
Topical Collection in
Algorithms
Feature Papers in Algorithms for Multidisciplinary Applications
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Feature Papers in Randomized, Online and Approximation Algorithms
Collection Editor: Frank Werner
Topical Collection in
Algorithms
Featured Reviews of Algorithms
Collection Editors: Arun Kumar Sangaiah, Xingjuan Cai