Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.7 days after submission; acceptance to publication takes 3.7 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.8 (2022); 5-Year Impact Factor: 2.6 (2022)
Latest Articles
Object Tracking Using Computer Vision: A Review
Computers 2024, 13(6), 136; https://doi.org/10.3390/computers13060136 - 28 May 2024
Abstract
Object tracking is one of the most important problems in computer vision applications such as robotics, autonomous driving, and pedestrian movement. There has been a significant development in camera hardware where researchers are experimenting with the fusion of different sensors and developing image processing algorithms to track objects. Image processing and deep learning methods have significantly progressed in the last few decades. Different data association methods accompanied by image processing and deep learning are becoming crucial in object tracking tasks. The data requirement for deep learning methods has led to different public datasets that allow researchers to benchmark their methods. While there has been an improvement in object tracking methods, technology, and the availability of annotated object tracking datasets, there is still scope for improvement. This review contributes by systemically identifying different sensor equipment, datasets, methods, and applications, providing a taxonomy about the literature and the strengths and limitations of different approaches, thereby providing guidelines for selecting equipment, methods, and applications. Research questions and future scope to address the unresolved issues in the object tracking field are also presented with research direction guidelines.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
Open Access Article
Two-Phase Fuzzy Real-Time Approach for Fuzzy Demand Electric Vehicle Routing Problem with Soft Time Windows
by Mohamed A. Wahby Shalaby and Sally S. Kassem
Computers 2024, 13(6), 135; https://doi.org/10.3390/computers13060135 - 27 May 2024
Abstract
Environmental concerns have called for several measures to be taken within the logistics and transportation fields. Among these measures is the adoption of electric vehicles instead of diesel-operated vehicles for personal and commercial-delivery use. The optimized routing of electric vehicles for the commercial delivery of products is the focus of this paper. We study the effect of several practical challenges that are faced when routing electric vehicles. Electric vehicle routing faces the additional challenge of a potential need for recharging while en route, leading to more travel time and hence cost. Therefore, in this work, we address the electric vehicle routing problem, allowing for partial recharging while en route. In addition, the practical mandate of time windows set by customers is also considered, where electric vehicle routing problems with soft time windows are studied. Real-life experience shows that the delivery of customers’ demands might be uncertain. In addition, real-time traffic conditions are usually uncertain due to congestion. Therefore, in this work, uncertainties in customers’ demands and traffic conditions are modeled and solved using fuzzy methods. The problems of fuzzy real-time, fuzzy demand, and electric vehicle routing problems with soft time windows are addressed. A mixed-integer programming mathematical model to represent the problem is developed. A novel two-phase solution approach is proposed to solve the problem. In phase I, the classical genetic algorithm (GA) is utilized to obtain an optimum/near-optimum solution for the fuzzy demand electric vehicle routing problem with soft time windows (FD-EVRPSTW). In phase II, a novel fuzzy real-time-adaptive optimizer (FRTAO) is developed to overcome the challenges of recharging and real-time traffic conditions facing FD-EVRPSTW. The proposed solution approach is tested on several modified benchmark instances, and the results show the significance of the recharging and congestion challenges for routing costs. In addition, the results show the efficiency of the proposed two-phase approach in overcoming these challenges and reducing total costs.
Full article
(This article belongs to the Special Issue Recent Advances in Autonomous Vehicle Solutions)
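The abstract models uncertain customer demand with fuzzy methods. The paper's exact fuzzy formulation is not given here, so as an illustrative sketch only, uncertain demand is often represented as a triangular fuzzy number (low, mode, high) with centroid defuzzification yielding a crisp estimate:

```python
def triangular_membership(x, low, mode, high):
    """Degree to which a crisp demand value x belongs to the fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

def defuzzify_centroid(low, mode, high):
    """Centroid of a triangular fuzzy number: a crisp demand estimate."""
    return (low + mode + high) / 3.0
```

For example, a fuzzy demand of "roughly 3 units, at least 2 and at most 7" defuzzifies to (2 + 3 + 7) / 3 = 4 units, which a router could use when checking vehicle capacity.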
Open Access Article
DCTE-LLIE: A Dual Color-and-Texture-Enhancement-Based Method for Low-Light Image Enhancement
by Hua Wang, Jianzhong Cao, Lei Yang and Jijiang Huang
Computers 2024, 13(6), 134; https://doi.org/10.3390/computers13060134 - 27 May 2024
Abstract
The enhancement of images captured under low-light conditions plays a vital role in image processing and can significantly affect the performance of subsequent operations. In recent years, deep learning techniques have been leveraged for low-light image enhancement, and deep-learning-based methods have become the mainstream for such tasks. However, because existing methods fail to effectively maintain the color distribution of the original input image and to handle feature descriptions at different scales, the final enhanced image exhibits color distortion and local blurring. In this paper, a novel dual color-and-texture-enhancement-based low-light image enhancement method is therefore proposed, which can effectively enhance low-light images. First, a novel color enhancement block helps maintain the color distribution during the enhancement process, further eliminating the color distortion effect; then, an attention-based multiscale texture enhancement block helps the network focus on multiscale local regions and automatically extract more reliable texture representations, and a fusion strategy fuses the multiscale feature representations to generate the enhanced reflection component. Experimental results on public datasets and real-world low-light images establish the effectiveness of the proposed method on low-light image enhancement tasks.
Full article
Open Access Article
Modeling and Analysis of Dekker-Based Mutual Exclusion Algorithms
by Libero Nigro, Franco Cicirelli and Francesco Pupo
Computers 2024, 13(6), 133; https://doi.org/10.3390/computers13060133 - 25 May 2024
Abstract
Mutual exclusion is a fundamental problem in concurrent/parallel/distributed systems. The first pure-software solution to this problem for two processes, which is not based on hardware instructions like test-and-set, was proposed in 1965 by Th.J. Dekker and communicated by E.W. Dijkstra. The correctness of this algorithm has generally been studied under the strong memory model, where the read and write operations on a memory cell are atomic or indivisible. In recent years, some variants of the algorithm have been proposed to make it RW-safe when using the weak memory model, which makes it possible, e.g., for multiple read operations to occur simultaneously with a write operation on the same variable, with the read operations returning (flickering) a non-deterministic value. This paper proposes a novel approach to formal modeling and reasoning on a mutual exclusion algorithm using Timed Automata and the Uppaal tool, and it applies this approach through exhaustive model checking to conduct a thorough analysis of Dekker’s algorithm and some of its variants proposed in the literature. This paper aims to demonstrate that model checking, although necessarily limited in the scalability of the number of processes due to the state explosion problem, is effective and powerful for reasoning about concurrency and process action interleaving, and it can provide significant results about the correctness and robustness of the basic version and variants of Dekker’s algorithm under both the strong and weak memory models. In addition, the properties of these algorithms are also carefully studied in the context of a tournament-based binary tree for processes.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
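The two-process algorithm the abstract analyzes can be sketched as follows. This is a minimal Python illustration of Dekker's control flow under the strong (sequentially consistent) memory model only; CPython's interpreter lock provides such interleavings, so the weak-memory flickering effects studied in the paper are not represented here.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # faster thread switching so busy-waits resolve quickly

wants_to_enter = [False, False]
turn = 0
counter = 0  # shared resource protected by the algorithm

def lock(i):
    other = 1 - i
    wants_to_enter[i] = True
    while wants_to_enter[other]:
        if turn == other:
            wants_to_enter[i] = False   # back off and yield priority
            while turn == other:
                pass                    # busy-wait until the peer hands over the turn
            wants_to_enter[i] = True
    # mutual exclusion holds from here until unlock(i)

def unlock(i):
    global turn
    turn = 1 - i                        # hand the turn to the peer
    wants_to_enter[i] = False

def worker(i, n):
    global counter
    for _ in range(n):
        lock(i)
        counter += 1                    # critical section
        unlock(i)

N = 300
threads = [threading.Thread(target=worker, args=(i, N)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 2 * N  # no increments were lost
```

The back-off step (clearing the flag when it is the peer's turn) is what distinguishes Dekker's algorithm from a naive flag-based attempt and what guarantees freedom from deadlock.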
Open Access Article
A Blockchain-Based Electronic Health Record (EHR) System for Edge Computing Enhancing Security and Cost Efficiency
by Valerio Mandarino, Giuseppe Pappalardo and Emiliano Tramontana
Computers 2024, 13(6), 132; https://doi.org/10.3390/computers13060132 - 24 May 2024
Abstract
Blockchain technology offers unique features, such as transparency, the immutability of data, and the capacity to establish trust without a central authority. Such characteristics can be leveraged to support collaboration among several different software systems operating within the healthcare ecosystem, while ensuring data integrity and making electronic health records (EHRs) more easily accessible. To provide a solution based on blockchain technology, this paper has evaluated the main issues that arise when large amounts of data are expected, i.e., mainly cost and performance. A balanced approach that maximizes the benefits and mitigates the constraints of the blockchain has been designed. The proposed decentralized application (dApp) architecture employs a hybrid storage strategy that involves storing medical records locally, on users’ devices, while utilizing blockchain to manage an index of these data. The dApp clients facilitate interactions among participants, leveraging a smart contract to enable patients to set authorization policies, thereby ensuring that only designated healthcare providers and authorized entities have access to specific medical records. The blockchain data-immutability property is used to validate data stored externally. This solution significantly reduces the costs related to the utilization of the blockchain, while retaining its advantages, and improves performance, since the majority of data are available off-chain.
Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
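The hybrid storage idea described in the abstract (records off-chain, an immutable index on-chain validating them) can be sketched with content digests. All names below (`chain_index`, `publish`, `verify`) are hypothetical illustrations, not the paper's API; a real deployment would keep the index in a smart contract rather than a dictionary.

```python
import hashlib
import json

def digest(record: dict) -> str:
    # Canonical JSON so the same record always hashes identically
    blob = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

chain_index = {}  # record_id -> immutable digest (stands in for the on-chain index)

def publish(record_id, record):
    """Store only the record's digest 'on-chain'; the record stays off-chain."""
    chain_index[record_id] = digest(record)

def verify(record_id, record) -> bool:
    """True iff the off-chain record matches the on-chain digest."""
    return chain_index.get(record_id) == digest(record)

ehr = {"patient": "p-001", "entry": "blood panel", "value": 7.2}
publish("rec-1", ehr)
assert verify("rec-1", ehr)                     # untouched record validates
assert not verify("rec-1", dict(ehr, value=9.9))  # tampering is detected
```

Because only a fixed-size digest goes on-chain, storage cost is independent of record size, which is the cost saving the abstract points to.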
Open Access Article
A Step-by-Step Methodology for Obtaining the Reliability of Building Microgrids Using Fault Tree Analysis
by Gustavo A. Patiño-Álvarez, Johan S. Arias-Pérez and Nicolás Muñoz-Galeano
Computers 2024, 13(6), 131; https://doi.org/10.3390/computers13060131 - 24 May 2024
Abstract
This paper introduces an improved methodology designed to address a practical deficit of existing methodologies by incorporating circuit-level analysis in the assessment of building microgrid reliability. The scientific problem at hand involves devising a systematic approach that integrates circuit modeling, Probability Density Function (PDF) selection, formulation of reliability functions, and Fault Tree Analysis (FTA) tailored specifically for the distinctive features of building microgrids. This method entails analyzing inter-component relationships to gain comprehensive insights into system behavior. By harnessing the circuit models and theoretical framework proposed herein, precise estimations of microgrid failure rates can be attained. To complement this approach, we propose a thorough investigation utilizing reliability curves and importance measures, providing valuable insights into individual device failure probabilities over time. Such time-based analysis plays a crucial role in proactively identifying potential failures and facilitating efficient maintenance planning for microgrid devices. We demonstrate the application of this methodology to the University of Antioquia (UdeA) Microgrid, a low-voltage system comprising critical components such as solar panels, microinverters, inverters/chargers, batteries, and charge controllers.
Full article
Open Access Article
Exploiting Anytime Algorithms for Collaborative Service Execution in Edge Computing
by Luís Nogueira, Jorge Coelho and David Pereira
Computers 2024, 13(6), 130; https://doi.org/10.3390/computers13060130 - 23 May 2024
Abstract
The diversity and scarcity of resources across devices in heterogeneous computing environments can impact their ability to meet users’ quality-of-service (QoS) requirements, especially in open real-time environments where computational loads are unpredictable. Despite this uncertainty, timely responses to events remain essential to ensure desired performance levels. To address this challenge, this paper introduces collaborative service execution, enabling resource-constrained IoT devices to collaboratively execute services with more powerful neighbors at the edge, thus meeting non-functional requirements that might be unattainable through individual execution. Nodes dynamically form clusters, allocating resources to each service and establishing initial configurations that maximize QoS satisfaction while minimizing global QoS impact. However, the complexity of open real-time environments may hinder the computation of optimal local and global resource allocations within reasonable timeframes. Thus, we reformulate the QoS optimization problem as a heuristic-based anytime optimization problem, capable of interrupting and quickly adapting to environmental changes. Extensive simulations demonstrate that our anytime algorithms rapidly yield satisfactory initial service solutions and effectively optimize the solution quality over iterations, with negligible overhead compared to the benefits gained.
Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
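The defining property of the anytime formulation above is that the optimizer can be interrupted at any point and still return its best solution so far, improving quality the longer it runs. A minimal generic sketch of that contract (not the paper's heuristic; `anytime_minimize` and its parameters are illustrative assumptions):

```python
import time

def anytime_minimize(f, x0, neighbors, budget_s=1.0):
    """Greedy anytime minimizer: explores improving neighbors until the
    time budget expires or no improvement remains, always keeping an
    incumbent that can be returned immediately on interruption."""
    best, best_val = x0, f(x0)
    deadline = time.monotonic() + budget_s
    frontier = [x0]
    while frontier and time.monotonic() < deadline:
        x = frontier.pop()
        for n in neighbors(x):
            v = f(n)
            if v < best_val:          # accept only strict improvements
                best, best_val = n, v
                frontier.append(n)
    return best, best_val             # incumbent: valid at any stopping point

# Toy usage: minimize x^2 over the integers, moving one step at a time.
best, val = anytime_minimize(lambda x: x * x, 5, lambda x: [x - 1, x + 1])
```

Because the incumbent is always valid, a scheduler can cut the budget short when the environment changes and still obtain a usable (if suboptimal) configuration, which is the adaptivity the abstract emphasizes.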
Open Access Article
Robust Algorithms for the Analysis of Fast-Field-Cycling Nuclear Magnetic Resonance Dispersion Curves
by Villiam Bortolotti, Pellegrino Conte, Germana Landi, Paolo Lo Meo, Anastasiia Nagmutdinova, Giovanni Vito Spinelli and Fabiana Zama
Computers 2024, 13(6), 129; https://doi.org/10.3390/computers13060129 - 23 May 2024
Abstract
Fast-Field-Cycling (FFC) Nuclear Magnetic Resonance (NMR) relaxometry is a powerful, non-destructive magnetic resonance technique that enables, among other things, the investigation of slow molecular dynamics at low magnetic field intensities. FFC-NMR relaxometry measurements provide insight into molecular motion across various timescales within a single experiment. This study focuses on a model-free approach, representing the NMRD profile as a linear combination of Lorentzian functions, thereby addressing the challenges of fitting data within an ill-conditioned linear least-squares framework. Tackling this problem, we present a comprehensive review and experimental validation of three regularization approaches to implement the model-free approach to analyzing NMRD profiles. These include (1) MF-UPen, utilizing locally adapted regularization; (2) MF-L1, based on L1 penalties; and (3) a hybrid approach combining locally adapted and global penalties. Each method’s regularization parameters are determined automatically according to the Balancing and Uniform Penalty principles. Our contributions include the implementation and experimental validation of the MF-UPen and MF-MUPen algorithms, and the development of a “dispersion analysis” technique to assess the existence range of the estimated parameters. The objective of this work is to delineate the variance in fit quality and correlation time distribution yielded by each algorithm, thus broadening the set of software tools for the analysis of sample structures in FFC-NMR studies. The findings underline the efficacy and applicability of these algorithms in the analysis of NMRD profiles from samples representing different potential scenarios.
Full article
Open Access Article
Machine Learning Decision System on the Empirical Analysis of the Actual Usage of Interactive Entertainment: A Perspective of Sustainable Innovative Technology
by Rex Revian A. Guste and Ardvin Kester S. Ong
Computers 2024, 13(6), 128; https://doi.org/10.3390/computers13060128 - 23 May 2024
Abstract
This study focused on the impact of Netflix’s interactive entertainment on Filipino consumers, combining consumer behavior perspectives with data analytics. This underlines the revolutionary aspect of interactive entertainment in the quickly expanding digital media ecosystem, particularly as Netflix pioneers fresh content distribution techniques. The main objective of this study was to find the factors impacting the real usage of Netflix’s interactive entertainment among Filipino viewers, filling a critical gap in the existing literature. The major goal of using advanced data analytics techniques in this study was to understand the subtle dynamics affecting customer behavior in this setting. Specifically, the random forest classifier with hard and soft classifiers was assessed. Random forest was also compared with LightGBM, alongside different artificial neural network algorithms. Purposive sampling was used to obtain responses from 258 people who had experienced Netflix’s interactive entertainment, resulting in a comprehensive dataset. The findings emphasized the importance of hedonic motivation, underlining the requirement for highly engaging and rewarding interactive material. Customer service and device compatibility, for example, have a significant impact on user uptake. Furthermore, behavioral intention and habit emerged as key drivers, revealing interactive entertainment’s long-term influence on user engagement. Practically, the research offers strategic platform recommendations that emphasize continuous innovation, user-friendly interfaces, and user-centric methods. This study fills a gap in the literature on interactive entertainment, which contributes to a better understanding of consumer consumption and lays the groundwork for future research in the dynamic field of digital media. Moreover, this study offers essential insights into the intricate interaction of consumer preferences, technology breakthroughs, and societal influences in the ever-expanding environment of digital entertainment. Lastly, the comparative approach to the use of machine learning algorithms provides insights for future works to adopt and employ in human factors and consumer behavior-related studies.
Full article
(This article belongs to the Special Issue Harnessing Artificial Intelligence for Social and Semantic Understanding)
Open Access Article
Machine Learning for Predicting Key Factors to Identify Misinformation in Football Transfer News
by Ife Runsewe, Majid Latifi, Mominul Ahsan and Julfikar Haider
Computers 2024, 13(6), 127; https://doi.org/10.3390/computers13060127 - 23 May 2024
Abstract
The spread of misinformation in football transfer news has become a growing concern. To address this challenge, this study introduces a novel approach by employing ensemble learning techniques to identify key factors for predicting such misinformation. The performance of three ensemble learning models, namely Random Forest, AdaBoost, and XGBoost, was analyzed on a dataset of transfer rumours. Natural language processing (NLP) techniques were employed to extract structured data from the text, and the veracity of each rumor was verified using factual transfer data. The study also investigated the relationships between specific features and rumor veracity. Key predictive features such as a player’s market value, age, and timing of the transfer window were identified. The Random Forest model outperformed the other two models, achieving a cross-validated accuracy of 95.54%. The top features identified by the model were a player’s market value, time to the start/end of the transfer window, and age. The study revealed weak negative relationships between a player’s age, time to the start/end of the transfer window, and rumor veracity, suggesting that for older players and times further from the transfer window, rumors are slightly less likely to be true. In contrast, a player’s market value did not have a statistically significant relationship with rumor veracity. This study contributes to the existing knowledge of misinformation detection and ensemble learning techniques. Despite some limitations, this study has significant implications for media agencies, football clubs, and fans. By discerning the credibility of transfer news, stakeholders can make informed decisions, reduce the spread of misinformation, and foster a more transparent transfer market.
Full article
Open Access Article
An Improved Ensemble-Based Cardiovascular Disease Detection System with Chi-Square Feature Selection
by Ayad E. Korial, Ivan Isho Gorial and Amjad J. Humaidi
Computers 2024, 13(6), 126; https://doi.org/10.3390/computers13060126 - 22 May 2024
Abstract
Cardiovascular disease (CVD) is a leading cause of death globally; therefore, early detection of CVD is crucial. Many intelligent technologies, including deep learning and machine learning (ML), are being integrated into healthcare systems for disease prediction. This paper uses a voting ensemble ML with chi-square feature selection to detect CVD early. Our approach involved applying multiple ML classifiers, including naïve Bayes, random forest, logistic regression (LR), and k-nearest neighbor. These classifiers were evaluated through metrics including accuracy, specificity, sensitivity, F1-score, confusion matrix, and area under the curve (AUC). We created an ensemble model by combining predictions from the different ML classifiers through a voting mechanism, whose performance was then measured against individual classifiers. Furthermore, we applied the chi-square feature selection method to the 303 records across 13 clinical features in the Cleveland cardiac disease dataset to identify the 5 most important features. This approach improved the overall accuracy of our ensemble model and reduced the computational load by more than 50%. Demonstrating superior effectiveness, our voting ensemble model achieved a remarkable accuracy of 92.11%, representing an average improvement of 2.95% over the single highest classifier (LR). These results indicate that the ensemble method is a viable and practical approach to improving the accuracy of CVD prediction.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
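The voting mechanism the abstract describes, in its hard-voting form, simply takes the majority class across the individual classifiers' predictions. A minimal stdlib sketch (illustrative; the paper's actual implementation and classifier set are not reproduced here):

```python
from collections import Counter

def hard_vote(predictions_per_model):
    """Majority vote across models.

    predictions_per_model: list of equal-length prediction lists,
    one per classifier (e.g. naive Bayes, random forest, LR, k-NN).
    Returns the ensemble's per-sample majority prediction.
    """
    n_samples = len(predictions_per_model[0])
    ensemble = []
    for i in range(n_samples):
        votes = Counter(preds[i] for preds in predictions_per_model)
        ensemble.append(votes.most_common(1)[0][0])  # most frequent label wins
    return ensemble
```

For instance, with three classifiers predicting `[1, 0, 1]`, `[1, 1, 0]`, and `[0, 1, 1]` on three patients, the ensemble outputs `[1, 1, 1]`: each sample's majority label. Soft voting would instead average predicted probabilities before taking the argmax.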
Open Access Article
A Wireless Noninvasive Blood Pressure Measurement System Using MAX30102 and Random Forest Regressor for Photoplethysmography Signals
by Michelle Annice Tjitra, Nagisa Eremia Anju, Dodi Sudiana and Mia Rizkinia
Computers 2024, 13(5), 125; https://doi.org/10.3390/computers13050125 - 17 May 2024
Abstract
Hypertension, often termed “the silent killer”, is associated with cardiovascular risk and requires regular blood pressure (BP) monitoring. However, existing methods are cumbersome and require medical expertise, which is worsened by the need for physical contact, particularly during situations such as the coronavirus pandemic that started in 2019 (COVID-19). This study aimed to develop a cuffless, continuous, and accurate BP measurement system using a photoplethysmography (PPG) sensor and a microcontroller. The system utilizes a MAX30102 sensor and ESP-WROOM-32 microcontroller to capture PPG signals that undergo noise reduction during preprocessing. Peak detection and feature extraction algorithms were introduced, and their output data were used to train a machine learning model for BP prediction. Tuning the model resulted in identifying the best-performing model when using a dataset from six subjects with a total of 114 records, thereby achieving a coefficient of determination of 0.37/0.46 and a mean absolute error value of 4.38/4.49 using the random forest algorithm. The model was integrated into a web-based graphical user interface for practical use. One probable limitation arises from the small sample size (six participants) of healthy young individuals under seated conditions, thereby potentially hindering the proposed model’s ability to learn and generalize patterns effectively. Increasing the number of participants with diverse ages and medical histories can enhance the accuracy of the proposed model. Nevertheless, this innovative device successfully addresses the need for convenient, remote BP monitoring, particularly during situations like the COVID-19 pandemic, thus making it a promising tool for cardiovascular health management.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
Open Access Article
Enhancing Brain Segmentation in MRI through Integration of Hidden Markov Random Field Model and Whale Optimization Algorithm
by Abdelaziz Daoudi and Saïd Mahmoudi
Computers 2024, 13(5), 124; https://doi.org/10.3390/computers13050124 - 17 May 2024
Abstract
The automatic delineation and segmentation of brain tissues from Magnetic Resonance Images (MRIs) is a great challenge in the medical context. The difficulty of this task arises out of the similar visual appearance of neighboring brain structures in MR images. In this study, we present an automatic approach for robust and accurate brain tissue boundary outlining in MR images. This algorithm is proposed for the tissue classification of MR brain images into White Matter (WM), Gray Matter (GM) and Cerebrospinal Fluid (CSF). The proposed segmentation process combines two algorithms, the Hidden Markov Random Field (HMRF) model and the Whale Optimization Algorithm (WOA), with the WOA used to optimize the performance of the segmentation method. The experimental results from a dataset of brain MR images show the superiority of our proposed method, referred to as HMRF-WOA, as compared to other reported approaches. The HMRF-WOA is evaluated on multiple MRI contrasts, including both simulated and real MR brain images. The well-known Dice coefficient (DC) and Jaccard coefficient (JC) were used as similarity metrics. The results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient and Jaccard coefficient above 0.9.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
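The Dice and Jaccard coefficients used for evaluation above have simple set-based definitions; a minimal sketch over segmentation masks represented as sets of voxel indices (illustrative, not the paper's implementation):

```python
def dice(a: set, b: set) -> float:
    """Dice coefficient: DC = 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient: JC = |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

For example, predicted mask `{1, 2, 3, 4}` against ground truth `{2, 3, 4, 5}` gives DC = 2·3/8 = 0.75 and JC = 3/5 = 0.6; both reach 1.0 only on perfect overlap, which is why values above 0.9 indicate near-perfect segmentation.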
Open Access Article
Assessing the Legibility of Arabic Road Signage Using Eye Gazing and Cognitive Loading Metrics
by Mohammad Lataifeh, Naveed Ahmed, Shaima Elbardawil and Somayeh Gordani
Computers 2024, 13(5), 123; https://doi.org/10.3390/computers13050123 - 15 May 2024
Abstract
This research study aimed to evaluate the legibility of Arabic road signage using an eye-tracking approach within a virtual reality (VR) environment. The study was conducted in a controlled setting involving 20 participants who watched two videos using the HP Omnicept Reverb G2. The VR device recorded eye-gaze details in addition to other physiological data of the participants, providing an overlay of heart rate, eye movement, and cognitive load, which in combination were used to determine the participants’ focus during the experiment. The data were processed through a schematic design, and the final files were saved in .txt format, which was later used for data extraction and analysis. Through the execution of this study, it became apparent that employing eye-tracking technology within a VR setting offers a promising method for assessing the legibility of road signs. The outcomes of the current research highlighted the vital role of legibility in ensuring road safety and facilitating effective communication with drivers. Clear and easily comprehensible road signs were found to be pivotal in delivering timely information, aiding navigation, and ultimately mitigating accidents or confusion on the road. As a result, this study advocates for the utilization of VR as a valuable platform for enhancing the design and functionality of road signage systems, recognizing its potential to contribute significantly to the improvement of road safety and navigation for drivers.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)
Open Access Article
Securing Critical Infrastructure with Blockchain Technology: An Approach to Cyber-Resilience
by
Jaime Govea, Walter Gaibor-Naranjo and William Villegas-Ch
Computers 2024, 13(5), 122; https://doi.org/10.3390/computers13050122 - 15 May 2024
Abstract
Currently, in the digital era, critical infrastructure is increasingly exposed to cyber threats to its operation and security. This study explores the use of blockchain technology to address these challenges, highlighting its immutability, decentralization, and transparency as keys to strengthening the resilience of these vital structures. Through a methodology encompassing literature review, use-case analysis, and the development and evaluation of prototypes, the effective implementation of the blockchain in the protection of critical infrastructure is investigated. The experimental results reveal the positive impact of the blockchain on security and resilience, presenting a solid defense against cyber-attacks due to its immutable and decentralized structure, with a 40% reduction in security incidents. Despite the observed benefits, blockchain integration faces significant challenges in scalability, interoperability, and regulation. This work demonstrates the potential of the blockchain to strengthen critical infrastructure. It marks progress towards the blockchain’s practical adoption, offering a clear direction for future research and development in this evolving field.
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries)
Open Access Article
Indoor Scene Classification through Dual-Stream Deep Learning: A Framework for Improved Scene Understanding in Robotics
by
Sultan Daud Khan and Kamal M. Othman
Computers 2024, 13(5), 121; https://doi.org/10.3390/computers13050121 - 14 May 2024
Abstract
Indoor scene classification plays a pivotal role in enabling social robots to seamlessly adapt to their environments, facilitating effective navigation and interaction within diverse indoor scenes. By accurately characterizing indoor scenes, robots can autonomously tailor their behaviors, making informed decisions to accomplish specific tasks. Traditional methods relying on manually crafted features encounter difficulties when characterizing complex indoor scenes. On the other hand, deep learning models address the shortcomings of traditional methods by autonomously learning hierarchical features from raw images. Despite the success of deep learning models, existing models still struggle to effectively characterize complex indoor scenes. This is because there is a high degree of intra-class variability and inter-class similarity within indoor environments. To address this problem, we propose a dual-stream framework that harnesses both global contextual information and local features for enhanced recognition. The global stream captures high-level features and relationships across the scene. The local stream employs a fully convolutional network to extract fine-grained local information. The proposed dual-stream architecture effectively distinguishes scenes that share similar global contexts but contain different localized objects. We evaluate the performance of the proposed framework on a publicly available benchmark indoor scene dataset. From the experimental results, we demonstrate the effectiveness of the proposed framework.
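The dual-stream idea described above can be sketched in miniature: one branch pools the whole image into a global descriptor, the other pools per-cell local descriptors, and the two are fused before classification. This is a hypothetical NumPy stand-in for illustration only, not the authors' architecture (their streams are deep CNNs):

```python
import numpy as np

rng = np.random.default_rng(1)

def global_stream(image):
    """Stand-in for a CNN that summarizes the whole scene into one vector."""
    return image.mean(axis=(0, 1))  # crude global average pooling over H, W

def local_stream(image, grid=4):
    """Stand-in for a fully convolutional branch: one descriptor per grid cell."""
    h, w, c = image.shape
    cells = image.reshape(grid, h // grid, grid, w // grid, c)
    return cells.max(axis=(1, 3)).reshape(-1)  # max-pooled local descriptors

image = rng.random((32, 32, 3))  # hypothetical 32x32 RGB input
fused = np.concatenate([global_stream(image), local_stream(image)])
print(fused.shape)  # 3 global + 4*4*3 local features -> (51,)
```

The fused vector would then feed a classifier, letting it separate scenes whose global statistics match but whose localized objects differ.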
Full article
(This article belongs to the Special Issue Recent Advances in Autonomous Vehicle Solutions)
Open Access Article
Enhancing Workplace Safety through Personalized Environmental Risk Assessment: An AI-Driven Approach in Industry 5.0
by
Janaína Lemos, Vanessa Borba de Souza, Frederico Soares Falcetta, Fernando Kude de Almeida, Tânia M. Lima and Pedro Dinis Gaspar
Computers 2024, 13(5), 120; https://doi.org/10.3390/computers13050120 - 13 May 2024
Abstract
This paper describes an integrated monitoring system designed for individualized environmental risk assessment and management in the workplace. The system incorporates monitoring devices that measure dust, noise, ultraviolet radiation, illuminance, temperature, humidity, and flammable gases. Comprising monitoring devices, a server-based web application for employers, and a mobile application for workers, the system integrates the registration of workers’ health histories, such as common diseases and symptoms related to the monitored agents, and a web-based recommendation system. The recommendation system application uses classifiers to decide the risk/no risk per sensor and crosses this information with fixed rules to define recommendations. The system generates actionable alerts for companies to improve decision-making regarding professional activities and long-term safety planning by analyzing health information through fixed rules and exposure data through machine learning algorithms. As the system must handle sensitive data, data privacy is addressed in communication and data storage. The study provides test results that evaluate the performance of different machine learning models in building an effective recommendation system. Since it was not possible to find public datasets with all the sensor data needed to train artificial intelligence models, it was necessary to build a data generator for this work. By proposing an approach that focuses on individualized environmental risk assessment and management, considering workers’ health histories, this work is expected to contribute to enhancing occupational safety through computational technologies in the Industry 5.0 approach.
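The abstract's pipeline of per-sensor classifier decisions crossed with fixed rules can be illustrated with a small sketch. The rule keys, agent names, and recommendation texts below are hypothetical placeholders, not taken from the paper:

```python
# Hypothetical per-sensor risk flags, as produced by the classifiers.
risk_flags = {"dust": True, "noise": False, "uv": True}

# Fixed rules crossing a flagged agent with the worker's registered health
# history; a condition of None means the rule applies to every worker.
rules = {
    ("dust", "asthma"): "Provide FFP3 respirator and reassign to low-dust area.",
    ("uv", None): "Limit outdoor exposure; require UV-protective clothing.",
}

def recommend(risk_flags, health_history):
    """Return the actions triggered by flagged agents and matching conditions."""
    actions = []
    for agent, at_risk in risk_flags.items():
        if not at_risk:
            continue
        for (rule_agent, condition), action in rules.items():
            if rule_agent == agent and (condition is None or condition in health_history):
                actions.append(action)
    return actions

print(recommend(risk_flags, health_history={"asthma"}))
```

Here a worker with registered asthma and flagged dust and UV exposure receives both recommendations, while the unflagged noise agent triggers nothing.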
Full article
Open Access Article
Detection of Crabs and Lobsters Using a Benchmark Single-Stage Detector and Novel Fisheries Dataset
by
Muhammad Iftikhar, Marie Neal, Natalie Hold, Sebastian Gregory Dal Toé and Bernard Tiddeman
Computers 2024, 13(5), 119; https://doi.org/10.3390/computers13050119 - 11 May 2024
Abstract
Crabs and lobsters are valuable crustaceans that contribute enormously to the seafood needs of the growing human population. This paper presents a comprehensive analysis of single- and multi-stage object detectors for the detection of crabs and lobsters using images captured onboard fishing boats. We investigate the speed and accuracy of multiple object detection techniques using a novel dataset, multiple backbone networks, various input sizes, and fine-tuned parameters. We extend our work to train lightweight models to accommodate the fishing boats equipped with low-power hardware systems. Firstly, we train Faster R-CNN, SSD, and YOLO with different backbones and tuning parameters. The models trained with higher input sizes resulted in lower frames per second (FPS) and vice versa. The base models were highly accurate but were compromised in computational and run-time costs. The lightweight models were adaptable to low-power hardware compared to the base models. Secondly, we improved the performance of YOLO (v3, v4, and tiny versions) using custom anchors generated by the k-means clustering approach using our novel dataset. The YOLO (v4 and its tiny version) achieved mean average precision (mAP) of 99.2% and 95.2%, respectively. The YOLOv4-tiny trained on the custom anchor-based dataset is capable of precisely detecting crabs and lobsters onboard fishing boats at 64 frames per second (FPS) on an NVidia GeForce RTX 3070 GPU. The results obtained identified the strengths and weaknesses of each method towards a trade-off between speed and accuracy for detecting objects in input images.
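Generating custom anchors with k-means, as the abstract describes, amounts to clustering the ground-truth box dimensions. The sketch below uses plain Euclidean k-means with a simple deterministic initialization; YOLO-style pipelines often use a 1 − IoU distance instead, and the box data are hypothetical, not from the authors' dataset:

```python
import numpy as np

def kmeans_anchors(boxes, k, iters=100):
    """Cluster ground-truth (width, height) pairs into k anchor shapes."""
    centroids = boxes[:k].copy()  # simple deterministic initialization
    for _ in range(iters):
        # Assign each box to its nearest centroid.
        d = np.linalg.norm(boxes[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned boxes.
        new = np.array([boxes[labels == i].mean(axis=0) if (labels == i).any()
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]  # sorted by area

# Hypothetical box dimensions from an annotated fisheries dataset.
boxes = np.array([[30, 40], [32, 38], [120, 90], [118, 95], [60, 60], [58, 62]], float)
print(kmeans_anchors(boxes, k=3))  # three (width, height) anchors, small to large
```

The resulting centroids replace the default anchors so the detector's priors match the typical sizes of crabs and lobsters in the imagery.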
Full article
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
Open Access Article
An Efficient Attribute-Based Participant Selecting Scheme with Blockchain for Federated Learning in Smart Cities
by
Xiaojun Yin, Haochen Qiu, Xijun Wu and Xinming Zhang
Computers 2024, 13(5), 118; https://doi.org/10.3390/computers13050118 - 9 May 2024
Abstract
In smart cities, large amounts of multi-source data are generated all the time. A model established via machine learning can mine information from these data and enable many valuable applications. With concerns about data privacy, it is becoming increasingly difficult for the publishers of these applications to obtain users’ data, which hinders the previous paradigm of centralized training through collecting data on a large scale. Federated learning is expected to prevent the leakage of private data by allowing users to train models locally. The existing works generally ignore architectures designed in real scenarios. Thus, there still exist some challenges that have not yet been explored in federated learning applied in smart cities, such as avoiding sharing models with improper parties under privacy requirements and designing satisfactory incentive mechanisms. Therefore, we propose an efficient attribute-based participant selecting scheme to ensure that only someone who meets the requirements of the task publisher can participate in training under the premise of high privacy requirements, so as to improve efficiency and avoid attacks. We further extend our scheme to encourage clients to take part in federated learning and provide an audit mechanism using a consortium blockchain. Finally, we present an in-depth discussion of the proposed scheme by comparing it to different methods. The results show that our scheme can improve the efficiency of federated learning by enabling reliable participant selection and promote the extensive use of federated learning in smart cities.
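At its core, attribute-based participant selection means a client may join a training round only if its attributes satisfy the task publisher's policy. The toy sketch below illustrates that gating logic with plain sets; the attribute names and clients are invented for illustration, and the paper's actual scheme enforces this cryptographically rather than by plaintext comparison:

```python
def eligible(client_attrs, policy):
    """A client qualifies only if it holds every attribute the policy requires."""
    return policy.issubset(client_attrs)

# Hypothetical publisher policy and client attribute sets.
policy = {"region:cityA", "device:edge-gateway", "reputation:high"}
clients = {
    "c1": {"region:cityA", "device:edge-gateway", "reputation:high", "os:linux"},
    "c2": {"region:cityB", "device:edge-gateway", "reputation:high"},
}

selected = [cid for cid, attrs in clients.items() if eligible(attrs, policy)]
print(selected)  # ['c1']  — c2 fails the region requirement
```

Only eligible clients receive the model for local training, which is how the scheme avoids sharing models with improper parties.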
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries)
Open Access Systematic Review
A Systematic Review of Using Deep Learning in Aphasia: Challenges and Future Directions
by
Yin Wang, Weibin Cheng, Fahim Sufi, Qiang Fang and Seedahmed S. Mahmoud
Computers 2024, 13(5), 117; https://doi.org/10.3390/computers13050117 - 9 May 2024
Abstract
In this systematic literature review, the intersection of deep learning applications within the aphasia domain is meticulously explored, acknowledging the condition’s complex nature and the nuanced challenges it presents for language comprehension and expression. By harnessing data from primary databases and employing advanced query methodologies, this study synthesizes findings from 28 relevant documents, unveiling a landscape marked by significant advancements and persistent challenges. Through a methodological lens grounded in the PRISMA framework (Version 2020) and Machine Learning-driven tools like VosViewer (Version 1.6.20) and Litmaps (Free Version), the research delineates the high variability in speech patterns, the intricacies of speech recognition, and the hurdles posed by limited and diverse datasets as core obstacles. Innovative solutions such as specialized deep learning models, data augmentation strategies, and the pivotal role of interdisciplinary collaboration in dataset annotation emerge as vital contributions to this field. The analysis culminates in identifying theoretical and practical pathways for surmounting these barriers, highlighting the potential of deep learning technologies to revolutionize aphasia assessment and treatment. This review not only consolidates current knowledge but also charts a course for future research, emphasizing the need for comprehensive datasets, model optimization, and integration into clinical workflows to enhance patient care. Ultimately, this work underscores the transformative power of deep learning in advancing aphasia diagnosis, treatment, and support, heralding a new era of innovation and interdisciplinary collaboration in addressing this challenging disorder.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
Topics
Topic in
Applied Sciences, Computers, Digital, Electronics, Smart Cities
Artificial Intelligence Models, Tools and Applications
Topic Editors: Phivos Mylonas, Katia Lida Kermanidis, Manolis Maragoudakis
Deadline: 31 August 2024
Topic in
Biomedicines, Computers, Information, IJERPH, JPM
eHealth and mHealth: Challenges and Prospects, 2nd Volume
Topic Editors: Antonis Billis, Manuel Dominguez-Morales, Anton Civit
Deadline: 30 September 2024
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies including Selected Papers from ICGHIT 2024
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 October 2024
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Conferences
Special Issues
Special Issue in
Computers
Machine and Deep Learning in the Health Domain 2024
Guest Editor: Hersh Sagreiya
Deadline: 20 June 2024
Special Issue in
Computers
Game-Based Learning, Gamification in Education and Serious Games 2023
Guest Editors: Carlos Vaz de Carvalho, Hariklia Tsalapatas, Ricardo Baptista
Deadline: 30 June 2024
Special Issue in
Computers
Xtended or Mixed Reality (AR+VR) for Education 2024
Guest Editors: Veronica Rossano, Michele Fiorentino
Deadline: 1 August 2024
Special Issue in
Computers
Best Practices, Challenges and Opportunities in Software Engineering
Guest Editor: Yan Liu
Deadline: 31 August 2024