Journal Description
Future Internet
Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, Inspec, and other databases.
- Journal Rank: CiteScore - Q1 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 11.8 days after submission; the time from acceptance to publication is 2.9 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.4 (2022); 5-Year Impact Factor: 3.4 (2022)
Latest Articles
Harnessing the Cloud: A Novel Approach to Smart Solar Plant Monitoring
Future Internet 2024, 16(6), 191; https://doi.org/10.3390/fi16060191 - 29 May 2024
Abstract
Renewable Energy Sources (RESs) such as hydro, wind, and solar are emerging as preferred alternatives to fossil fuels. Among these RESs, solar energy is the most promising option and is gaining extensive interest around the globe. However, due to solar energy’s intermittent nature and sensitivity to environmental parameters (e.g., irradiance, dust, temperature, aging and humidity), real-time solar plant monitoring is imperative. This paper’s contribution is to compare and analyze current IoT trends and propose future research directions. As a result, this will be instrumental in the development of low-cost, real-time, scalable, reliable, and power-optimized solar plant monitoring systems. In this work, a comparative analysis has been performed on proposed solutions using the existing literature. This comparative analysis has been conducted considering five aspects: compute boards, sensors, communication, servers, and architectural paradigms. The IoT architectural paradigms employed have been summarized and discussed with respect to communication, application layers, and storage capabilities. To facilitate enhanced IoT-based solar monitoring, an edge computing paradigm has been proposed. Suggestions are presented for the fabrication of edge devices and nodes using optimum compute boards, sensors, and communication modules. Different cloud platforms have been explored, and it was concluded that the public cloud platform Amazon Web Services is the ideal solution. Artificial intelligence-based techniques, methods, and outcomes are presented, which can help in the monitoring, analysis, and management of solar PV systems. As an outcome, this paper can be used to help researchers and academics develop low-cost, real-time, effective, scalable, and reliable solar monitoring systems.
Full article
(This article belongs to the Section Internet of Things)
Open Access Article
Tracing Student Activity Patterns in E-Learning Environments: Insights into Academic Performance
by Evgenia Paxinou, Georgios Feretzakis, Rozita Tsoni, Dimitrios Karapiperis, Dimitrios Kalles and Vassilios S. Verykios
Future Internet 2024, 16(6), 190; https://doi.org/10.3390/fi16060190 - 29 May 2024
Abstract
In distance learning educational environments like Moodle, students interact with their tutors, their peers, and the provided educational material through various means. Due to advancements in learning analytics, students’ transitions within Moodle generate digital trace data that outline learners’ self-directed learning paths and reveal information about their academic behavior within a course. These learning paths can be depicted as sequences of transitions between various states, such as completing quizzes, submitting assignments, downloading files, and participating in forum discussions, among others. Considering that a specific learning path summarizes the students’ trajectory in a course during an academic year, we analyzed data on students’ actions extracted from Moodle logs to investigate how the distribution of user actions within different Moodle resources can impact academic achievements. Our analysis was conducted using a Markov Chain Model, whereby transition matrices were constructed to identify steady states, and eigenvectors were calculated. Correlations were explored between specific states in users’ eigenvectors and their final grades, which were used as a proxy of academic performance. Our findings offer valuable insights into the relationship between student actions, link weight vectors, and academic performance, in an attempt to optimize students’ learning paths, tutors’ guidance, and course structures in the Moodle environment.
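The steady-state computation described in the abstract can be sketched briefly; the activity states and transition probabilities below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical 3-state Moodle activity chain: quiz, assignment, forum.
# Transition probabilities are illustrative only.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

def steady_state(P):
    """Stationary distribution: the left eigenvector of P for
    eigenvalue 1, normalised so its entries sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

pi = steady_state(P)
# pi[i] approximates the long-run share of a student's actions spent
# in state i; such shares can then be correlated with final grades.
```

The stationary vector satisfies pi @ P = pi, which is the "steady state" the abstract refers to.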
Full article
Open Access Article
Dynamic Spatial–Temporal Self-Attention Network for Traffic Flow Prediction
by Dong Wang, Hongji Yang and Hua Zhou
Future Internet 2024, 16(6), 189; https://doi.org/10.3390/fi16060189 - 25 May 2024
Abstract
Traffic flow prediction is considered to be one of the fundamental technologies in intelligent transportation systems (ITSs), with tremendous application prospects. Unlike traditional time series analysis tasks, the key challenge in traffic flow prediction lies in effectively modelling the highly complex and dynamic spatiotemporal dependencies within the traffic data. In recent years, researchers have proposed various methods to enhance the accuracy of traffic flow prediction, but certain issues still persist. For instance, some methods rely on specific static assumptions, failing to adequately simulate the dynamic changes in the data, thus limiting their modelling capacity. Others inadequately capture the spatiotemporal dependencies, omitting crucial information and leading to unsatisfactory prediction outcomes. To address these challenges, this paper proposes a model called the Dynamic Spatial–Temporal Self-Attention Network (DSTSAN). Firstly, this research enhances the interaction between different dimension features in the traffic data through a feature augmentation module, thereby improving the model’s representational capacity. Subsequently, the current investigation introduces two masking matrices based on the spatial self-attention module: one captures local spatial dependencies and the other captures global spatial dependencies. Finally, the methodology employs a temporal self-attention module to capture and integrate the dynamic temporal dependencies of traffic data. We designed experiments using historical data from the previous hour to predict traffic flow conditions in the hour ahead, and the DSTSAN model was extensively compared with 11 baseline methods on four real-world datasets. The results demonstrate the effectiveness and superiority of the proposed approach.
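The masking idea behind the spatial self-attention modules can be illustrated with a minimal numpy sketch; the real DSTSAN uses learned query/key/value projections, which this sketch omits, and the shapes and mask below are invented:

```python
import numpy as np

def masked_self_attention(X, M):
    """Scaled dot-product self-attention over node features X (n, d)
    with an additive mask M (n, n): 0 where attention is allowed and
    -inf where it is blocked (e.g. non-adjacent sensors for a local
    mask; an all-zeros M recovers global attention)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d) + M
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # row-wise softmax
    return A @ X                                 # attention-weighted mix

X = np.random.default_rng(0).normal(size=(4, 8))
local = np.full((4, 4), -np.inf)
np.fill_diagonal(local, 0.0)
local[0, 1] = local[1, 0] = 0.0  # only nodes 0 and 1 are neighbours
out = masked_self_attention(X, local)
```

With the local mask, nodes 2 and 3 can attend only to themselves, so their outputs equal their inputs; swapping in a zero mask would let every node attend to every other, which is the global case.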
Full article
Open Access Article
Studying the Quality of Source Code Generated by Different AI Generative Engines: An Empirical Evaluation
by Davide Tosi
Future Internet 2024, 16(6), 188; https://doi.org/10.3390/fi16060188 - 24 May 2024
Abstract
The advent of Generative Artificial Intelligence is opening essential questions about whether and when AI will replace human abilities in accomplishing everyday tasks. This issue is particularly true in the domain of software development, where generative AI seems to have strong skills in solving coding problems and generating software source code. In this paper, an empirical evaluation of AI-generated source code is performed: three complex coding problems (selected from the exams for the Java Programming course at the University of Insubria) are prompted to three different Large Language Model (LLM) engines, and the generated code is evaluated for its correctness and quality by means of human-implemented test suites and quality metrics. The experimentation shows that the three evaluated LLM engines are able to solve the three exams, but only under the constant supervision of software experts. Currently, LLM engines need human-expert support to produce running code that is of good quality.
Full article
Open Access Article
Enhanced Beacons Dynamic Transmission over TSCH
by Erik Ortiz Guerra, Mario Martínez Morfa, Carlos Manuel García Algora, Hector Cruz-Enriquez, Kris Steenhaut and Samuel Montejo-Sánchez
Future Internet 2024, 16(6), 187; https://doi.org/10.3390/fi16060187 - 24 May 2024
Abstract
Time slotted channel hopping (TSCH) has become the standard multichannel MAC protocol for low-power lossy networks. The procedure for associating nodes in a TSCH-based network is not included in the standard and has been defined in the minimal 6TiSCH configuration. Faster network formation ensures that data packet transmission can start sooner. This paper proposes a dynamic beacon transmission schedule over the TSCH mechanism that achieves a shorter network formation time than the default minimum 6TiSCH static schedule. A theoretical model is derived for the proposed mechanism to estimate the expected time for a node to get associated with the network. Simulation results obtained with different network topologies and channel conditions show that the proposed mechanism reduces the average association time and average power consumption during network formation compared to the default minimal 6TiSCH configuration.
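The paper derives its own theoretical model for the expected association time; as a much simpler, purely illustrative stand-in (not the paper's derivation), beacon reception per slotframe can be modelled as a Bernoulli trial, making the wait until association geometric:

```python
# Illustrative first-order association-time estimate: if a scanning
# node hears a usable beacon in any given slotframe with probability
# p, the number of slotframes until association is geometrically
# distributed, so the expected wait is 1/p slotframes.
slotframe_s = 1.01       # e.g. 101 slots x 10 ms, a 6TiSCH-style frame
p_hear_beacon = 0.25     # assumed per-slotframe reception probability
expected_association_s = slotframe_s / p_hear_beacon  # 1/p slotframes
```

A dynamic beacon schedule effectively raises p, which is how a shorter network formation time follows in this toy model.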
Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
Open Access Article
Data Collection in Areas without Infrastructure Using LoRa Technology and a Quadrotor
by Josué I. Rojo-García, Sergio A. Vera-Chavarría, Yair Lozano-Hernández, Victor G. Sánchez-Meza, Jaime González-Sierra and Luz N. Oliva-Moreno
Future Internet 2024, 16(6), 186; https://doi.org/10.3390/fi16060186 - 24 May 2024
Abstract
The use of sensor networks in monitoring applications has increased; they are useful in security, environmental, and health applications, among others. These networks usually transmit data through short-range stations, which makes them attractive for incorporation into applications and devices for use in places without access to satellite or mobile signals, for example, forests, seas, and jungles. To this end, unmanned aerial vehicles (UAVs) have attractive characteristics for data collection and transmission in remote areas without infrastructure. Integrating systems based on wireless sensors and UAVs seems to be an economical and easy-to-use solution. However, the main difficulty is the amount of data sent, which affects the communication time and even the flight status of the UAV. Additionally, factors such as the UAV model and the hardware used for these tasks must be considered. Based on these difficulties, this paper proposes a system based on long-range (LoRa) technology. We present a low-cost wireless sensor network that is flexible, easy to deploy, and capable of collecting/sending data via LoRa transceivers. The readings obtained are packaged and sent to a UAV. The UAV performs predefined flights at a constant height of 30 m and with a direct line-of-sight (LoS) to the stations, during which it collects information from two data stations. We conclude that correct data transmission is possible at a flight speed of 10 m/s and a transmission radius of 690 m for a group of three packages confirmed by 20 messages each. Thus, it is possible to collect data from routes of up to 8 km for each battery charge, considering the return of the UAV.
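A back-of-envelope check of the figures reported in the abstract, assuming the UAV crosses a station's coverage circle straight through its centre (the geometry is our assumption; only the radius, speed, and message counts come from the abstract):

```python
# Contact-time estimate for a UAV crossing a LoRa station's
# coverage circle straight through its centre (best case).
radius_m = 690.0     # transmission radius reported in the abstract
speed_m_s = 10.0     # flight speed reported in the abstract
contact_s = 2 * radius_m / speed_m_s       # seconds inside coverage

messages = 3 * 20                          # 3 packages x 20 messages
budget_per_message_s = contact_s / messages
```

Even in this best case the UAV has roughly two seconds per message, which illustrates why the amount of data sent constrains the flight speed and route design.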
Full article
Open Access Article
HP-LSTM: Hawkes Process–LSTM-Based Detection of DDoS Attack for In-Vehicle Network
by Xingyu Li, Ruifeng Li and Yanchen Liu
Future Internet 2024, 16(6), 185; https://doi.org/10.3390/fi16060185 - 23 May 2024
Abstract
Connected and autonomous vehicles (CAVs) are advancing rapidly alongside improvements in the automotive industry, which opens up new possibilities for different attacks. A Distributed Denial-of-Service (DDoS) attacker floods the in-vehicle network with fake messages, resulting in the failure of driving assistance systems and impairment of vehicle control functionalities, seriously disrupting the normal operation of the vehicle. In this paper, we propose a novel DDoS attack detection method for in-vehicle Ethernet Scalable service-Oriented Middleware over IP (SOME/IP), which integrates the Hawkes process with Long Short-Term Memory networks (LSTMs) to capture the dynamic behavioral features of the attacker. Specifically, we employ the Hawkes process to capture features of the DDoS attack, with its parameters reflecting the dynamism and self-exciting properties of the attack events. Subsequently, we propose a novel deep learning network structure, an HP-LSTM block, inspired by the Hawkes process, while employing a residual attention block to enhance the model’s detection efficiency and accuracy. Additionally, due to the scarcity of publicly available datasets for SOME/IP, we employed a mature SOME/IP generator to create a dataset for evaluating the validity of the proposed detection model. Finally, extensive experiments were conducted to demonstrate the effectiveness of the proposed DDoS attack detection method.
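The self-exciting behaviour the authors exploit can be illustrated with the standard univariate Hawkes conditional intensity; the parameter values here are invented, and the paper's HP-LSTM coupling is not reproduced:

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum over past events t_i < t of
    alpha * exp(-beta * (t - t_i)).
    Each past event excites the process, so a flood of closely
    spaced messages drives lambda(t) sharply upward."""
    past = np.asarray([ti for ti in events if ti < t], dtype=float)
    return mu + alpha * np.exp(-beta * (t - past)).sum()

calm = hawkes_intensity(10.0, [1.0, 4.0])              # sparse traffic
burst = hawkes_intensity(10.0, [9.5, 9.6, 9.7, 9.8, 9.9])  # DDoS-like
# burst far exceeds calm: the self-excitation spike a flood produces
```

It is this spike in intensity, parameterised by alpha and beta, that makes Hawkes parameters informative features for a downstream detector.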
Full article
(This article belongs to the Special Issue Security for Vehicular Ad Hoc Networks)
Open Access Article
Exploiting Autoencoder-Based Anomaly Detection to Enhance Cybersecurity in Power Grids
by Fouzi Harrou, Benamar Bouyeddou, Abdelkader Dairi and Ying Sun
Future Internet 2024, 16(6), 184; https://doi.org/10.3390/fi16060184 - 22 May 2024
Abstract
The evolution of smart grids has led to technological advances and a demand for more efficient and sustainable energy systems. However, the deployment of communication systems in smart grids has increased the threat of cyberattacks, which can result in power outages and disruptions. This paper presents a semi-supervised hybrid deep learning model that combines a Gated Recurrent Unit (GRU)-based Stacked Autoencoder (AE-GRU) with anomaly detection algorithms, including Isolation Forest, Local Outlier Factor, One-Class SVM, and Elliptical Envelope. Using GRU units on both the encoder and decoder sides of the stacked autoencoder enables the effective capture of temporal patterns and dependencies, facilitating dimensionality reduction, feature extraction, and accurate reconstruction for enhanced anomaly detection in smart grids. The proposed approach utilizes unlabeled data to monitor network traffic and identify suspicious data flows. Specifically, the AE-GRU is applied for data reduction and the extraction of relevant features, and the anomaly detection algorithms are then applied to reveal potential cyberattacks. The proposed framework is evaluated using the widely adopted IEC 60870-5-104 traffic dataset. The experimental results demonstrate that the proposed approach outperforms standalone algorithms, with the AE-GRU-based LOF method achieving the highest detection rate. Thus, the proposed approach can potentially enhance cybersecurity in smart grids by accurately detecting and preventing cyberattacks.
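The two-stage idea, compressing traffic into features and then scoring them with a density-based detector, can be sketched as follows. A plain k-nearest-neighbour distance stands in for the LOF/Isolation Forest stage and random vectors stand in for AE-GRU features, so everything here is illustrative:

```python
import numpy as np

def knn_anomaly_scores(train, queries, k=5):
    """Distance to the k-th nearest training point: a simple stand-in
    for the density-based detectors (LOF, Isolation Forest, ...)
    applied to the autoencoder's reduced features."""
    d = np.linalg.norm(queries[:, None, :] - train[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(0)
features = rng.normal(0.0, 1.0, (200, 3))          # stand-in: AE-GRU output
probes = np.vstack([rng.normal(0.0, 1.0, (5, 3)),  # benign traffic
                    rng.normal(8.0, 1.0, (5, 3))]) # injected attack traffic
scores = knn_anomaly_scores(features, probes)
# the attack probes score far higher than the benign ones
```

The point of the autoencoder stage in the paper is to make the feature space low-dimensional and temporally informed so that this kind of distance-based scoring separates attack traffic cleanly.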
Full article
(This article belongs to the Special Issue Cybersecurity in the IoT)
Open Access Article
Cross-Layer Optimization for Enhanced IoT Connectivity: A Novel Routing Protocol for Opportunistic Networks
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(6), 183; https://doi.org/10.3390/fi16060183 - 22 May 2024
Abstract
Opportunistic networks, an evolution of mobile ad hoc networks (MANETs), offer decentralized communication without relying on preinstalled infrastructure, enabling nodes to route packets through different mobile nodes dynamically. However, due to the absence of complete paths and rapidly changing connectivity, routing in opportunistic networks presents unique challenges. This paper proposes a novel probabilistic routing model for opportunistic networks, leveraging nodes’ meeting probabilities to route packets towards their destinations. This model dynamically builds routes based on the likelihood of encountering the destination node, considering factors such as the last meeting time and acknowledgment tables to manage network overload. Additionally, an efficient message detection scheme is introduced to alleviate high overhead by selectively deleting messages from buffers during congestion. Furthermore, the proposed model incorporates cross-layer optimization techniques, integrating optimization strategies across multiple protocol layers to maximize energy efficiency, adaptability, and message delivery reliability. Through extensive simulations, the effectiveness of the proposed model is demonstrated, showing improved message delivery probability while maintaining reasonable overhead and latency. This research contributes to the advancement of opportunistic networks, particularly in enhancing connectivity and efficiency for Internet of Things (IoT) applications deployed in challenging environments.
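The meeting-probability bookkeeping described above resembles PRoPHET-style delivery predictabilities; the sketch below follows that well-known scheme with invented constants, and the paper's exact rules (which also weigh last-meeting times and acknowledgment tables) may differ:

```python
P_INIT = 0.75   # reinforcement on encounter (illustrative constant)
GAMMA = 0.98    # per-time-unit aging factor (illustrative constant)

def on_encounter(p):
    """Reinforce the meeting probability when two nodes meet."""
    return p + (1.0 - p) * P_INIT

def on_idle(p, k):
    """Age the probability after k time units without an encounter."""
    return p * GAMMA ** k

p = 0.0
p = on_encounter(p)   # first meeting raises p to 0.75
p = on_idle(p, 10)    # ten idle units decay p back toward 0
```

A forwarding decision then compares such probabilities between candidate relays: a packet is handed to a neighbour whose probability of meeting the destination exceeds the carrier's own.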
Full article
Open Access Systematic Review
Urban Green Spaces and Mental Well-Being: A Systematic Review of Studies Comparing Virtual Reality versus Real Nature
by Liyuan Liang, Like Gobeawan, Siu-Kit Lau, Ervine Shengwei Lin and Kai Keng Ang
Future Internet 2024, 16(6), 182; https://doi.org/10.3390/fi16060182 - 21 May 2024
Abstract
Increasingly, urban planners are adopting virtual reality (VR) in designing urban green spaces (UGS) to visualize landscape designs in immersive 3D. However, the psychological effect of green spaces experienced in VR may differ from the actual experience in the real world. In this paper, we systematically reviewed studies in the literature that conducted experiments investigating the psychological benefits of nature in both VR and the real world, anchoring nature in VR to nature in the real world. We separated these studies based on the type of VR setup used, specifically 360-degree video or 3D virtual environment, and established a framework of commonly used standard questionnaires for measuring perceived mental states. The most common questionnaires include the Positive and Negative Affect Schedule (PANAS), the Perceived Restorativeness Scale (PRS), and the Restoration Outcome Scale (ROS). Although the results from studies that used 360-degree video were less clear, results from studies that used 3D virtual environments provided evidence that virtual nature is comparable to real-world nature, showing promise that UGS designs in VR can transfer into real-world designs and yield similar physiological effects.
Full article
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)
Open Access Article
MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing
by Yuchen Wang, Zishan Huang, Zhongcheng Wei and Jijun Zhao
Future Internet 2024, 16(6), 181; https://doi.org/10.3390/fi16060181 - 21 May 2024
Abstract
With the increasing popularity of the Internet of Things (IoT), the proliferation of computation-intensive and timing-dependent applications has brought serious load pressure on terrestrial networks. To address the computing resource conflicts and long response delays caused by concurrent service requests from multiple users, this paper proposes an improved timing-dependent task-offloading scheme for edge computing based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), which shortens the offloading delay and improves resource utilization through resource prediction and collaboration among multiple agents. First, to coordinate the global computing resource, a gated recurrent unit is utilized, which predicts the next computing resource requirements of the timing-dependent tasks according to historical information. Second, the predicted information, the historical offloading decisions and the current state are used as inputs, and the training process of the reinforcement learning algorithm is improved to propose a task-offloading algorithm based on MADDPG. The simulation results show that the algorithm reduces the response latency by 6.7% and improves the resource utilization by 30.6% compared with the suboptimal benchmark algorithm, and it shortens the learning process by nearly 500 training rounds, which effectively improves the timeliness of the offloading strategy.
Full article
Open Access Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://doi.org/10.3390/fi16060180 - 21 May 2024
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of LLMs, such as ChatGPT-3.5, in software requirements engineering, a critical area of software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and offers strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process.
Full article
Open Access Article
Object and Event Detection Pipeline for Rink Hockey Games
by Jorge Miguel Lopes, Luis Paulo Mota, Samuel Marques Mota, José Manuel Torres, Rui Silva Moreira, Christophe Soares, Ivo Pereira, Feliz Ribeiro Gouveia and Pedro Sobral
Future Internet 2024, 16(6), 179; https://doi.org/10.3390/fi16060179 - 21 May 2024
Abstract
All types of sports are potential application scenarios for automatic and real-time visual object and event detection. In rink hockey, the popular roller-skate variant of team hockey, it is of great interest to automatically track player movements, positions, and sticks, and also to make other judgments, such as locating the ball. In this work, we present a real-time pipeline consisting of an object detection model specifically designed for rink hockey games, followed by a knowledge-based event detection module. Even in the presence of occlusions and fast movements, our deep learning object detection model effectively identifies and tracks important visual elements in real time, such as the ball, players, sticks, referees, crowd, goalkeeper, and goal. Using a curated dataset consisting of a collection of rink hockey videos containing 2525 annotated frames, we trained and evaluated the algorithm’s performance and compared it to state-of-the-art object detection techniques. Our object detection model, based on YOLOv7, achieves a global accuracy of 80% and, according to our results, good performance in terms of accuracy and speed, making it a good choice for rink hockey applications. In our initial tests, the event detection module successfully detected an important event type in rink hockey games, namely, the occurrence of penalties.
Full article
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia II)
Open Access Article
Validation of Value-Driven Token Economy: Focus on Blockchain Content Platform
by Young Sook Kim, Seng-Phil Hong and Marko Majer
Future Internet 2024, 16(5), 178; https://doi.org/10.3390/fi16050178 - 20 May 2024
Abstract
This study explores the architectural framework of a value-driven token economy on a blockchain content platform and critically evaluates the relationship between blockchain’s decentralization and sustainable economic practices. The existing literature often glorifies the rapid market expansion of cryptocurrencies but overlooks how the underlying blockchain technology can fundamentally enhance content platforms through more structured user engagement and an equitable reward system. This study proposes a new token economy architecture by adopting the triple-bottom-line (TBL) framework and validates its practicality and effectiveness through an analytic-hierarchy-process (AHP) survey of industry experts. The study shows that the most influential factor in a successful token economy is not profit maximization but fostering a user-centric community where engagement and empowerment are prioritized. This shift can be expected to combine blockchain technology with meaningful economic innovation by challenging traditional profit-driven business models and refocusing on sustainability and user value.
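The AHP step can be illustrated with a small sketch: priority weights are derived from a pairwise-comparison matrix, here via the common geometric-mean approximation of the principal eigenvector. The criteria and Saaty-scale judgments below are invented, not taken from the expert survey:

```python
import numpy as np

# Hypothetical pairwise comparisons of three criteria on the Saaty
# scale; entry A[i, j] says how much more important criterion i is
# than criterion j (illustrative values only).
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])

# Geometric-mean row aggregation approximates the principal
# eigenvector; normalising yields the criterion priority weights.
w = A.prod(axis=1) ** (1.0 / A.shape[0])
w /= w.sum()
```

In a full AHP study one would also compute a consistency ratio to check that the expert judgments are not self-contradictory before trusting the weights.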
Full article
Open Access Article
Teamwork Conflict Management Training and Conflict Resolution Practice via Large Language Models
by Sakhi Aggrawal and Alejandra J. Magana
Future Internet 2024, 16(5), 177; https://doi.org/10.3390/fi16050177 - 19 May 2024
Abstract
This study implements a conflict management training approach guided by principles of transformative learning and conflict management practice simulated via an LLM. Transformative learning is more effective when learners are engaged mentally and behaviorally in learning experiences. Correspondingly, the conflict management training approach involved a three-step procedure consisting of a learning phase, a practice phase enabled by an LLM, and a reflection phase. Fifty-six students enrolled in a systems development course were exposed to the transformative learning approach to conflict management so they would be better prepared to address any potential conflicts within their teams as they approached a semester-long software development project. The study investigated the following: (1) How did the training and practice affect students’ level of confidence in addressing conflict? (2) Which conflict management styles did students use in the simulated practice? (3) Which strategies did students employ when engaging with the simulated conflict? The findings indicate the following: (1) 65% of the students significantly increased in confidence in managing conflict by demonstrating collaborative, compromising, and accommodative approaches; (2) 26% of the students slightly increased in confidence by implementing collaborative and accommodative approaches; and (3) 9% of the students did not increase in confidence, as they were already confident in applying collaborative approaches. The three most frequently used strategies for managing conflict were identifying the root cause of the problem, actively listening, and being specific and objective in explaining their concerns.
Full article
Open Access Article
MetaSSI: A Framework for Personal Data Protection, Enhanced Cybersecurity and Privacy in Metaverse Virtual Reality Platforms
by
Faisal Fiaz, Syed Muhammad Sajjad, Zafar Iqbal, Muhammad Yousaf and Zia Muhammad
Future Internet 2024, 16(5), 176; https://doi.org/10.3390/fi16050176 - 18 May 2024
Abstract
The Metaverse brings together components of parallel processing computing platforms, the digital development of physical systems, cutting-edge machine learning, and virtual identity to uncover a fully digitalized environment with properties equal to those of the real world. With the advent of Metaverse technology come more rigorous requirements for connection, including safe access and data privacy. Traditional, centralized, network-centered solutions fail to provide a resilient identity management solution. Multifaceted security and privacy issues hinder the secure adoption of this game-changing technology in contemporary cyberspace. Moreover, there is a need to dedicate efforts towards a secure-by-design Metaverse that protects the confidentiality, integrity, and privacy of users' personally identifiable information (PII). In this research paper, we propose a logical substitute for established centralized identity management systems that accommodates the complexity of the Metaverse. This research proposes a sustainable Self-Sovereign Identity (SSI) framework, a fully decentralized identity management system, to mitigate PII leaks and the corresponding cyber threats on all Metaverse platforms. The principle of the proposed framework ensures that users are the sole custodians and proprietors of their own identities. In addition, this article provides a comprehensive approach to implementing the SSI principles to increase interoperability and trustworthiness in the Metaverse. Finally, the proposed framework is validated using mathematical modeling and proven to be stringent and resilient against modern-day cyber attacks targeting Metaverse platforms.
Full article
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction)
Open Access Article
Chatbots in Airport Customer Service—Exploring Use Cases and Technology Acceptance
by
Isabel Auer, Stephan Schlögl and Gundula Glowka
Future Internet 2024, 16(5), 175; https://doi.org/10.3390/fi16050175 - 17 May 2024
Abstract
Throughout the last decade, chatbots have gained widespread adoption across various industries, including healthcare, education, business, e-commerce, and entertainment. These types of artificial, usually cloud-based, agents have also been used in airport customer service, although there has been limited research concerning travelers’ perspectives on this rather techno-centric approach to handling inquiries. Consequently, the goal of the presented study was to tackle this research gap and explore potential use cases for chatbots at airports, as well as investigate travelers’ acceptance of said technology. We employed an extended version of the Technology Acceptance Model considering Perceived Usefulness, Perceived Ease of Use, Trust, and Perceived Enjoyment as predictors of Behavioral Intention, with Affinity for Technology as a potential moderator. A total of travelers completed our survey. The results show that Perceived Usefulness, Trust, Perceived Ease of Use, and Perceived Enjoyment positively correlate with the Behavioral Intention to use a chatbot for airport customer service inquiries, with Perceived Usefulness showing the highest impact. Travelers’ Affinity for Technology, on the other hand, does not seem to have any significant effect.
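The correlation-based reading of the TAM results above can be illustrated with a minimal sketch. The `pearson_r` helper and the six-respondent Likert scores below are hypothetical stand-ins, not the study's data or analysis code.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Illustrative (fabricated) 5-point Likert scores for six respondents:
perceived_usefulness = [5, 4, 4, 3, 5, 2]
behavioral_intention = [5, 4, 5, 3, 4, 2]

# A strong positive coefficient mirrors the reported finding that
# Perceived Usefulness correlates most strongly with Behavioral Intention.
r = pearson_r(perceived_usefulness, behavioral_intention)
print(round(r, 2))
```

In a real analysis one would also test significance and, as the study does, fit the full set of predictors rather than a single pairwise correlation.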
Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
Open Access Article
TQU-SLAM Benchmark Dataset for Comparative Study to Build Visual Odometry Based on Extracted Features from Feature Descriptors and Deep Learning
by
Thi-Hao Nguyen, Van-Hung Le, Huu-Son Do, Trung-Hieu Te and Van-Nam Phan
Future Internet 2024, 16(5), 174; https://doi.org/10.3390/fi16050174 - 17 May 2024
Abstract
The problem of data enrichment for training visual SLAM and VO construction models using deep learning (DL) is an urgent problem in computer vision today. DL requires a large amount of data to train a model, and more data covering many different contexts and conditions yields a more accurate visual SLAM and VO construction model. In this paper, we introduce the TQU-SLAM benchmark dataset, which includes 160,631 RGB-D frame pairs. It was collected from the corridors of three interconnected buildings with a total length of about 230 m. The ground-truth data of the TQU-SLAM benchmark dataset were prepared manually, including 6-DOF camera poses, 3D point cloud data, intrinsic parameters, and the transformation matrix between the camera coordinate system and the real world. We also tested the TQU-SLAM benchmark dataset using the PySLAM framework with traditional features such as SHI_TOMASI, SIFT, SURF, ORB, ORB2, AKAZE, KAZE, and BRISK, as well as features extracted with DL such as VGG, DPVO, and TartanVO. The camera pose estimation results are evaluated, and we show that the ORB2 features give the best results (an error of 5.74 mm), while the SHI_TOMASI feature achieves the best ratio of frames with detected keypoints. At the same time, we also present and analyze the challenges of the TQU-SLAM benchmark dataset for building visual SLAM and VO systems.
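Camera pose estimates like those evaluated above are commonly scored against ground-truth trajectories with an absolute trajectory error (ATE). The sketch below computes an RMSE over already-aligned position sequences in NumPy; the function name and toy trajectory are illustrative, not taken from the TQU-SLAM evaluation code.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root-mean-square absolute trajectory error between estimated and
    ground-truth camera positions (N x 3 arrays, already aligned)."""
    est, gt = np.asarray(est_xyz, float), np.asarray(gt_xyz, float)
    errors = np.linalg.norm(est - gt, axis=1)   # per-frame position error
    return float(np.sqrt((errors ** 2).mean()))

# Toy trajectory: the estimate is offset 5 mm along x from ground truth,
# so the RMSE is about 0.005 m.
gt  = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
est = gt + np.array([0.005, 0, 0])
print(ate_rmse(est, gt))
```

A full evaluation would first align the estimated trajectory to the ground truth (e.g., with a rigid or similarity transform) before computing the error.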
Full article
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision)
Open Access Review
Machine Learning Strategies for Reconfigurable Intelligent Surface-Assisted Communication Systems—A Review
by
Roilhi F. Ibarra-Hernández, Francisco R. Castillo-Soria, Carlos A. Gutiérrez, Abel García-Barrientos, Luis Alberto Vásquez-Toledo and J. Alberto Del-Puerto-Flores
Future Internet 2024, 16(5), 173; https://doi.org/10.3390/fi16050173 - 17 May 2024
Abstract
Machine learning (ML) algorithms have been widely used to improve the performance of telecommunications systems, including reconfigurable intelligent surface (RIS)-assisted wireless communication systems. The RIS can be considered a key part of the backbone of sixth-generation (6G) communication, mainly due to its electromagnetic properties for controlling the propagation of signals in the wireless channel. ML-optimized RIS-assisted wireless communication systems can be an effective alternative for mitigating the degradation suffered by the signal in the wireless channel, providing significant advantages in the system's performance. However, the variety of approaches, system configurations, and channel conditions makes it difficult to determine the best technique or group of techniques for effectively implementing an optimal solution. This paper presents a comprehensive review of the frameworks reported in the literature that apply ML and RISs to improve the overall performance of the wireless communication system. This paper compares the ML strategies that can be used to address the RIS-assisted system design. The systems are classified according to the ML method, the databases used, the implementation complexity, and the reported performance gains. Finally, we shed light on the challenges and opportunities in designing and implementing future RIS-assisted wireless communication systems based on ML strategies.
Full article
(This article belongs to the Special Issue 6G Wireless Communication Systems: Applications, Opportunities and Challenges, Volume III)
Open Access Article
Using Optimization Techniques in Grammatical Evolution
by
Ioannis G. Tsoulos, Alexandros Tzallas and Evangelos Karvounis
Future Internet 2024, 16(5), 172; https://doi.org/10.3390/fi16050172 - 16 May 2024
Abstract
The Grammatical Evolution technique has been successfully applied to a wide range of problems in various scientific fields. However, in many cases, techniques that make use of Grammatical Evolution become trapped in local minima of the objective problem and fail to reach the optimal solution. One simple way to tackle such situations is to use hybrid techniques, where local minimization algorithms are used in conjunction with the main algorithm. However, Grammatical Evolution is an integer optimization problem and, as a consequence, the local techniques applied to it must be formulated accordingly. In the current work, a modified version of the Simulated Annealing algorithm is used as a local optimization procedure in Grammatical Evolution. This approach was tested on constructed neural networks, and a remarkable improvement in the experimental results was shown, both on classification data and in data-fitting cases.
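The idea of a Simulated Annealing local search over the integer chromosomes used by Grammatical Evolution can be sketched as follows. This is a generic Metropolis-style annealer on an integer vector, not the authors' modified algorithm; the parameter values and the toy objective are made up for illustration.

```python
import math
import random

def simulated_annealing(objective, x0, steps=5000, t0=1.0, cooling=0.995,
                        low=0, high=255, seed=0):
    """Minimize `objective` over an integer vector by perturbing one gene
    at a time and accepting worse moves with a temperature-dependent
    probability (Metropolis criterion, geometric cooling)."""
    rng = random.Random(seed)
    x, fx, t = list(x0), objective(x0), t0
    best, fbest = list(x), fx
    for _ in range(steps):
        y = list(x)
        i = rng.randrange(len(y))
        y[i] = rng.randint(low, high)          # integer-valued move
        fy = objective(y)
        # Accept improvements always; accept worse moves with prob exp(-delta/t).
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                            # cool the temperature
    return best, fbest

# Toy objective: squared distance of the chromosome from a hidden target.
target = [42, 7, 199, 3]
objective = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
best, score = simulated_annealing(objective, [0, 0, 0, 0])
print(best, score)
```

In a real hybrid scheme, the objective would be the Grammatical Evolution fitness obtained by decoding the chromosome through the grammar, and the annealer would refine the best individuals of each generation.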
Full article
Topics
Topic in
Drones, Electronics, Future Internet, Information, Mathematics
Future Internet Architecture: Difficulties and Opportunities
Topic Editors: Peiying Zhang, Haotong Cao, Keping Yu
Deadline: 30 June 2024
Topic in
Algorithms, Future Internet, Information, Mathematics, Symmetry
Research on Data Mining of Electronic Health Records Using Deep Learning Methods
Topic Editors: Dawei Yang, Yu Zhu, Hongyi Xin
Deadline: 31 August 2024
Topic in
Algorithms, Axioms, Future Internet, Mathematics, Symmetry
Multimodal Sentiment Analysis Based on Deep Learning Methods Such as Convolutional Neural Networks
Topic Editors: Junaid Baber, Ali Shariq Imran, Sher Doudpota, Maheen Bakhtyar
Deadline: 31 October 2024
Topic in
Entropy, Future Internet, Healthcare, MAKE, Sensors
Communications Challenges in Health and Well-Being
Topic Editors: Dragana Bajic, Konstantinos Katzis, Gordana Gardasevic
Deadline: 20 November 2024
Special Issues
Special Issue in
Future Internet
Semantic and Social Internet of Things
Guest Editors: Konstantinos Kotis, Christos Goumopoulos
Deadline: 31 May 2024
Special Issue in
Future Internet
Smart Sensorics and Robotics for IoT- and AI-Empowered Monitoring and Communication
Guest Editors: Zhongliang Zhao, Dmitry Korzun
Deadline: 20 June 2024
Special Issue in
Future Internet
Machine Learning for Blockchain and IoT System in Smart Cities
Guest Editors: José A. Afonso, Joao Ferreira
Deadline: 30 June 2024
Special Issue in
Future Internet
Internet of Things and Cyber-Physical Systems II
Guest Editor: Iwona Grobelna
Deadline: 20 July 2024
Topical Collections
Topical Collection in
Future Internet
Featured Reviews of Future Internet Research
Collection Editor: Dino Giuli
Topical Collection in
Future Internet
5G/6G Networks for the Internet of Things: Communication Technologies and Challenges
Collection Editor: Sachin Sharma
Topical Collection in
Future Internet
Computer Vision, Deep Learning and Machine Learning with Applications
Collection Editors: Remus Brad, Arpad Gellert
Topical Collection in
Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela