Journal Description
Multimodal Technologies and Interaction is an international, peer-reviewed, open access journal on multimodal technologies and interaction, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: CiteScore - Q2 (Neuroscience (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14 days after submission; acceptance to publication is undertaken in 3.8 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.5 (2022)
Latest Articles
Metaverse & Human Digital Twin: Digital Identity, Biometrics, and Privacy in the Future Virtual Worlds
Multimodal Technol. Interact. 2024, 8(6), 48; https://doi.org/10.3390/mti8060048 - 5 Jun 2024
Abstract
Driven by technological advances in various fields (AI, 5G, VR, IoT, etc.) together with the emergence of digital twin technologies (HDT, HAL, BIM, etc.), the Metaverse has attracted growing attention from scientific and industrial communities. This interest is due to its potential impact on people’s lives in different sectors such as education or medicine. Specific solutions can also increase the inclusiveness of people with disabilities, which can otherwise be an impediment to a fulfilled life. However, security and privacy concerns remain the main obstacles to its development. In particular, the data involved in the Metaverse can be comprehensive, with enough granularity to build a highly detailed digital copy of the real world, including a Human Digital Twin of a person. Existing security countermeasures are largely ineffective and lack adaptability to the specific needs of Metaverse applications. Furthermore, the virtual worlds in a large-scale Metaverse can be highly varied in terms of hardware implementation, communication interfaces, and software, which poses huge interoperability difficulties. This paper aims to analyse the risks and opportunities associated with adopting digital replicas of humans (HDTs) within the Metaverse and the challenges related to managing digital identities in this context. By examining the current technological landscape, we identify several open technological challenges that currently limit the adoption of HDTs and the Metaverse. Additionally, this paper explores a range of promising technologies and methodologies to assess their suitability within the Metaverse context. Finally, two example scenarios are presented in the medical and education fields.
Full article
(This article belongs to the Special Issue Designing an Inclusive and Accessible Metaverse)
Open Access Article
Exploring Human Emotions: A Virtual Reality-Based Experimental Approach Integrating Physiological and Facial Analysis
by Leire Bastida, Sara Sillaurren, Erlantz Loizaga, Eneko Tomé and Ana Moya
Multimodal Technol. Interact. 2024, 8(6), 47; https://doi.org/10.3390/mti8060047 - 4 Jun 2024
Abstract
This paper investigates the classification of human emotions in a virtual reality (VR) context by analysing psychophysiological signals and facial expressions. Key objectives include exploring emotion categorisation models, identifying critical human signals for assessing emotions, and evaluating the accuracy of these signals in VR environments. A systematic literature review of peer-reviewed articles was performed, forming the basis for our methodologies. The integration of various emotion classifiers employs a ‘late fusion’ technique due to varying accuracies among classifiers. Notably, facial expression analysis faces challenges from VR equipment occluding crucial facial regions like the eyes, which significantly impacts emotion recognition accuracy. A weighted averaging system prioritises the psychophysiological classifier over the facial recognition classifiers due to its higher accuracy. Findings suggest that while combined techniques are promising, they struggle with mixed emotional states as well as with fear and trust emotions. The research underscores the potential and limitations of current technologies, recommending enhanced algorithms for effective interpretation of complex emotional expressions in VR. The study provides a groundwork for future advancements, aiming to refine emotion recognition systems through systematic data collection and algorithm optimisation.
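The ‘late fusion’ weighting described in this abstract can be sketched in a few lines. The weights, class set, and probability vectors below are illustrative assumptions, not the paper’s actual parameters:

```python
import numpy as np

def late_fusion(prob_physio, prob_face, w_physio=0.6, w_face=0.4):
    """Weighted-average ('late') fusion of two emotion classifiers.

    Each argument is a per-class probability vector; the psychophysiological
    classifier receives the larger weight, mirroring the prioritisation the
    abstract describes. Weights here are invented for illustration.
    """
    fused = w_physio * np.asarray(prob_physio) + w_face * np.asarray(prob_face)
    return fused / fused.sum()  # renormalise to a probability distribution

# Hypothetical per-class scores over (joy, fear, trust):
p = late_fusion([0.7, 0.2, 0.1], [0.3, 0.5, 0.2])
```

Because the fused vector is renormalised, the result remains a valid probability distribution regardless of the chosen weights.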
Full article
Open Access Article
Sound of the Police—Virtual Reality Training for Police Communication for High-Stress Operations
by Markus Murtinger, Jakob Carl Uhl, Lisa Maria Atzmüller, Georg Regal and Michael Roither
Multimodal Technol. Interact. 2024, 8(6), 46; https://doi.org/10.3390/mti8060046 - 4 Jun 2024
Abstract
Police communication is a field with unique challenges and specific requirements. Police officers depend on effective communication, particularly in high-stress operations, but current training methods are not focused on communication and provide only limited evaluation methods. This work explores the potential of virtual reality (VR) for enhancing police communication training. The rise of VR training offers clear benefits, especially in specific application areas like policing. We conducted a field study during police training to assess VR approaches for training communication. The results show that VR is suitable for communication training provided that factors such as realism, reflection, and repetition are supported by the VR system. Trainer feedback shows that assistive systems for evaluating and visualizing communication are urgently needed. We present ideas and approaches for evaluation in communication training, along with concepts for visualizing and exploring the data. This research contributes to improving VR police training and has implications for communication training in VR in challenging contexts.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Article
What the Mind Can Comprehend from a Single Touch
by Patrick Coe, Grigori Evreinov, Mounia Ziat and Roope Raisamo
Multimodal Technol. Interact. 2024, 8(6), 45; https://doi.org/10.3390/mti8060045 - 28 May 2024
Abstract
This paper investigates the versatility of force feedback (FF) technology in enhancing user interfaces across a spectrum of applications. We delve into the human finger pad’s sensitivity to FF stimuli, which is critical to the development of intuitive and responsive controls in sectors such as medicine, where precision is paramount, and entertainment, where immersive experiences are sought. The study presents a case study in the automotive domain, where FF technology was implemented to simulate mechanical button presses, reducing JND FF levels that were between 0.04 N and 0.054 N to JND levels of 0.254 and 0.298 when using a linear force feedback scale, and levels that were between 0.028 N and 0.033 N to JND levels of 0.074 and 0.164 when using a logarithmic force scale. The results demonstrate the technology’s efficacy and potential for widespread adoption in various industries, underscoring its significance in the evolution of haptic feedback systems.
Full article
Open Access Article
A Wearable Bidirectional Human–Machine Interface: Merging Motion Capture and Vibrotactile Feedback in a Wireless Bracelet
by Julian Kindel, Daniel Andreas, Zhongshi Hou, Anany Dwivedi and Philipp Beckerle
Multimodal Technol. Interact. 2024, 8(6), 44; https://doi.org/10.3390/mti8060044 - 23 May 2024
Abstract
Humans interact with the environment through a variety of senses. Touch in particular contributes to a sense of presence, enhancing perceptual experiences and establishing causal relations between events. Many human–machine interfaces only allow for one-way communication, which does not do justice to the complexity of the interaction. To address this, we developed a bidirectional human–machine interface featuring a bracelet equipped with linear resonant actuators, controlled via a Robot Operating System (ROS) program, to simulate haptic feedback. Further, the wireless interface includes a motion sensor and a sensor to quantify the tightness of the bracelet. Our functional experiments, which compared stimulation with three and five intensity levels, were performed by four healthy participants in their twenties and thirties. The participants achieved an average accuracy of 88% when estimating three vibration intensity levels. While the estimation accuracy for five intensity levels was only 67%, the results indicated good performance in perceiving relative vibration changes, with an accuracy of 82%. The proposed haptic feedback bracelet will facilitate research investigating the benefits of bidirectional human–machine interfaces and the perception of vibrotactile feedback in general by closing the gap for a versatile device that can provide high-density user feedback in combination with sensors for intent detection.
Full article
Open Access Article
Exploring the Role of User Experience and Interface Design Communication in Augmented Reality for Education
by Matina Kiourexidou, Andreas Kanavos, Maria Klouvidaki and Nikos Antonopoulos
Multimodal Technol. Interact. 2024, 8(6), 43; https://doi.org/10.3390/mti8060043 - 22 May 2024
Abstract
Augmented Reality (AR) enhances learning by integrating interactive and immersive elements that bring content to life, thus increasing motivation and improving retention. AR also supports personalized learning, allowing learners to interact with content at their own pace and according to their preferred learning styles. This adaptability not only promotes self-directed learning but also empowers learners to take charge of their educational journey. Effective interface design is crucial for these AR applications, requiring careful integration of user interactions and visual cues to blend AR elements seamlessly with reality. This paper explores the impact of AR on user experience within educational settings, examining engagement, motivation, and learning outcomes to determine how AR can enhance the educational experience. Additionally, it addresses design considerations and challenges in developing AR user interfaces, drawing on current research and best practices to propose effective and adaptable solutions for educational AR applications. As AR technology evolves, its potential to transform educational experiences continues to grow, promising significant advancements in how users interact with, personalize, and immerse themselves in learning content.
Full article
Open Access Article
Recall of Odorous Objects in Virtual Reality
by Jussi Rantala, Katri Salminen, Poika Isokoski, Ville Nieminen, Markus Karjalainen, Jari Väliaho, Philipp Müller, Anton Kontunen, Pasi Kallio and Veikko Surakka
Multimodal Technol. Interact. 2024, 8(6), 42; https://doi.org/10.3390/mti8060042 - 21 May 2024
Abstract
The aim of this study was to investigate how the congruence of odors and visual objects in virtual reality (VR) affects later memory recall of the objects. Participants (N = 30) interacted with 12 objects in VR. The interaction was varied by odor congruency (i.e., the odor matched the object’s visual appearance, the odor did not match the object’s visual appearance, or the object had no odor); odor quality (i.e., an authentic or a synthetic odor); and interaction type (i.e., participants could look at and manipulate the objects or could only look at them). After interacting with the 12 objects, incidental memory performance was measured with a free recall task. In addition, the participants rated the pleasantness and arousal of the interaction with each object. The results showed that the participants remembered significantly more objects with congruent odors than objects with incongruent odors or odorless objects. Furthermore, interaction with congruent objects was rated significantly more pleasant and relaxing than interaction with incongruent objects. Odor quality and interaction type did not have significant effects on recall or emotional ratings. These results can be utilized in the development of multisensory VR applications.
Full article
Open Access Article
User-Centered Evaluation Framework to Support the Interaction Design for Augmented Reality Applications
by Andrea Picardi and Giandomenico Caruso
Multimodal Technol. Interact. 2024, 8(5), 41; https://doi.org/10.3390/mti8050041 - 14 May 2024
Abstract
The advancement of Augmented Reality (AR) technology has been remarkable, enabling the augmentation of user perception with timely information. This progress holds great promise in the field of interaction design. However, the mere advancement of technology is not enough to ensure widespread adoption. The user dimension has been somewhat overlooked in AR research due to a lack of attention to user motivations, needs, usability, and perceived value. The critical aspects of AR technology tend to be overshadowed by the technology itself. To ensure appropriate future assessments, it is necessary to thoroughly examine and categorize all the methods used for AR technology validation. Identifying and classifying these evaluation methods will better equip researchers and practitioners to develop and validate new AR techniques and applications. Therefore, comprehensive and systematic evaluations are critical to the advancement and sustainability of AR technology. This paper presents a theoretical framework derived from a cluster analysis of the most efficient evaluation methods for AR, extracted from 399 papers. Evaluation methods were clustered according to the application domains and the human–computer interaction aspects to be investigated. This framework should facilitate rapid development cycles prioritizing user requirements, ultimately leading to groundbreaking interaction methods accessible to a broader audience beyond research and development centers.
Full article
Open Access Article
Immersive Virtual Colonography Viewer for Colon Growths Diagnosis: Design and Think-Aloud Study
by João Serras, Andrew Duchowski, Isabel Nobre, Catarina Moreira, Anderson Maciel and Joaquim Jorge
Multimodal Technol. Interact. 2024, 8(5), 40; https://doi.org/10.3390/mti8050040 - 13 May 2024
Abstract
Desktop-based virtual colonoscopy is a proven and accurate process for identifying colon abnormalities. However, it is time-consuming. Faster, immersive interfaces for virtual colonoscopy are still incipient and need to be better understood. This article introduces a novel design that leverages VR paradigm components to enhance the efficiency and effectiveness of immersive analysis. Our approach contributes a novel tool highlighting unseen areas within the colon via eye-tracking, a flexible navigation approach, and a distinct interface for displaying scans blended with the reconstructed colon surface. The path to evaluating and validating such a tool for clinical settings is arduous. This article contributes a formative evaluation using think-aloud sessions with radiology experts and students. Questions related to colon coverage, diagnostic accuracy, and time to completion are analyzed with different user profiles. Although not aimed at quantitatively measuring performance, the experiment provides lessons learned to guide other researchers in the field.
Full article
Open Access Article
Design and Validation of a Computational Thinking Test for Children in the First Grades of Elementary Education
by Jorge Hernán Aristizábal Zapata, Julián Esteban Gutiérrez Posada and Pascual D. Diago
Multimodal Technol. Interact. 2024, 8(5), 39; https://doi.org/10.3390/mti8050039 - 9 May 2024
Abstract
Computational thinking (CT) has garnered significant interest in both computer science and education sciences, as it delineates a set of skills that emerge during the problem-solving process. Consequently, numerous assessment instruments aimed at measuring CT have been developed in recent years. However, few of the existing CT measurement instruments have been dedicated to early school ages, and few have undergone rigorous validation or reliability testing. Therefore, this work introduces a new instrument for measuring CT in the early grades of elementary education: the Computational Thinking Test for Children (CTTC). To this end, we provide the design and validation of the CTTC, which is constructed around spatial, sequential, and logical thinking and encompasses abstraction, decomposition, pattern recognition, and coding items organized in five question blocks. The validation and standardization process employs the Kuder–Richardson statistic (KR-20) and expert judgment using Aiken’s V for consistency. Additionally, item difficulty indices were utilized to gauge the difficulty level of each question in the CTTC. The study concludes that the CTTC demonstrates consistency and suitability for children in the first cycle of primary education (encompassing the first to third grades).
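As a rough illustration of the KR-20 statistic named in this abstract (not the authors’ code), a minimal implementation for dichotomous items might look like this; the response matrix is invented:

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson formula 20 (KR-20) for dichotomous (0/1) items.

    `responses` is an examinees-by-items 0/1 matrix; returns the
    internal-consistency estimate. Population variances are used throughout,
    so perfectly consistent items yield a value of 1.
    """
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]                    # number of items
    p = X.mean(axis=0)                # proportion answering each item correctly
    q = 1.0 - p
    total_var = X.sum(axis=1).var()   # population variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Invented responses: four examinees, three perfectly consistent items
r = kr20([[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]])  # -> 1.0
```

With real test data the value falls below 1; values around 0.7 or higher are conventionally read as acceptable internal consistency.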
Full article
Open Access Review
A Narrative Review of the Sociotechnical Landscape and Potential of Computer-Assisted Dynamic Assessment for Children with Communication Support Needs
by Christopher S. Norrie, Stijn R. J. M. Deckers, Maartje Radstaake and Hans van Balkom
Multimodal Technol. Interact. 2024, 8(5), 38; https://doi.org/10.3390/mti8050038 - 7 May 2024
Abstract
This paper presents a narrative review of the current practices in assessing learners’ cognitive abilities and the limitations of traditional intelligence tests in capturing a comprehensive understanding of a child’s learning potential. Referencing prior research, it explores the concept of dynamic assessment (DA) as a promising yet underutilised alternative that focuses on a child’s responsiveness to learning opportunities. The paper highlights the potential of novel technologies, in particular tangible user interfaces (TUIs), in integrating computational science with DA to improve the access and accuracy of assessment results, especially for children with communication support needs (CSN), as a catalyst for fostering critical communicative competencies. However, existing research in this area has mainly focused on the automated mediation of DA, neglecting the human element that is crucial for effective solutions in special education. A framework is proposed to address these issues, combining pedagogical and sociocultural elements alongside adaptive information technology solutions in an assessment system informed by user-centred design principles to fully support teachers/facilitators and learners with CSN within the special education ecosystem.
Full article
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
Open Access Article
Multimodal Embodiment Research of Oral Music Traditions: Electromyography in Oud Performance and Education Research of Persian Art Music
by Stella Paschalidou
Multimodal Technol. Interact. 2024, 8(5), 37; https://doi.org/10.3390/mti8050037 - 7 May 2024
Abstract
With the recent advent of research focusing on the body’s significance in music, the integration of physiological sensors into empirical methodologies for music has also gained momentum. Given the recognition of covert muscular activity as a strong indicator of musical intentionality, and the previously ascertained link between physical effort and various musical aspects, electromyography (EMG), which records signals representing muscle activity, has also seen a noticeable surge in use. While EMG technologies appear to hold good promise for sensing, capturing, and interpreting the dynamic properties of movement in music, which are considered innately linked to artistic expressive power, they also come with certain challenges, misconceptions, and predispositions. The paper engages in a critical examination of the utilisation of muscle force values from EMG sensors as indicators of physical effort and musical activity, particularly focusing on (the intuitively expected link to) sound levels. For this, it draws upon empirical work, namely practical insights from a case study of music performance (Persian instrumental music) in the context of a music class. The findings indicate that muscle force can be explained by a small set of (six) statistically significant acoustic and movement features, the latter captured by a state-of-the-art (full-body inertial) motion capture system. However, no straightforward link to sound levels is evident.
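Muscle-force proxies of the kind this abstract discusses are commonly derived from raw EMG via a moving RMS envelope. The following is a generic sketch under that assumption, not the study’s actual pipeline; the signal and window length are invented:

```python
import numpy as np

def rms_envelope(emg, window):
    """Moving root-mean-square envelope of a raw EMG signal.

    The RMS over a sliding window is a standard proxy for muscle
    activation/effort; `window` is the window length in samples.
    """
    emg = np.asarray(emg, dtype=float)
    kernel = np.ones(window) / window          # moving-average kernel
    return np.sqrt(np.convolve(emg ** 2, kernel, mode="same"))

# Hypothetical recording: silence, a sinusoidal contraction burst, silence
sig = np.concatenate([np.zeros(100),
                      0.5 * np.sin(np.linspace(0, 50, 200)),
                      np.zeros(100)])
env = rms_envelope(sig, window=50)
```

The envelope stays near zero in the quiet segments and rises during the burst, which is the shape analyses of “covert muscular activity” typically operate on.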
Full article
(This article belongs to the Special Issue Multimodal Interaction in Education)
Open Access Article
Saliency-Guided Point Cloud Compression for 3D Live Reconstruction
by Pietro Ruiu, Lorenzo Mascia and Enrico Grosso
Multimodal Technol. Interact. 2024, 8(5), 36; https://doi.org/10.3390/mti8050036 - 3 May 2024
Cited by 2
Abstract
3D modeling and reconstruction are critical to creating immersive XR experiences, providing realistic virtual environments, objects, and interactions that increase user engagement and enable new forms of content manipulation. Today, 3D data can be easily captured using off-the-shelf, specialized headsets; very often, these tools provide real-time, albeit low-resolution, integration of continuously captured depth maps. This approach is generally suitable for basic AR and MR applications, where users can easily direct their attention to points of interest and benefit from a fully user-centric perspective. However, it proves to be less effective in more complex scenarios such as multi-user telepresence or telerobotics, where real-time transmission of local surroundings to remote users is essential. Two primary questions emerge: (i) what strategies are available for achieving real-time 3D reconstruction in such systems? and (ii) how can the effectiveness of real-time 3D reconstruction methods be assessed? This paper explores various approaches to the challenge of live 3D reconstruction from typical point cloud data. It first introduces some common data flow patterns that characterize virtual reality applications and shows that achieving high-speed data transmission and efficient data compression is critical to maintaining visual continuity and ensuring a satisfactory user experience. The paper thus introduces the concept of saliency-driven compression/reconstruction and compares it with alternative state-of-the-art approaches.
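One simple reading of saliency-driven compression is to subsample a point cloud with probability proportional to a per-point saliency score, keeping detail where it matters. This toy sketch illustrates the idea only; the saliency function, budget, and data are assumptions, not the paper’s method:

```python
import numpy as np

def saliency_subsample(points, saliency, budget, seed=None):
    """Keep `budget` points, sampled without replacement with probability
    proportional to per-point saliency -- a toy saliency-driven compressor.

    `points` is an (N, 3) array; `saliency` is (N,) non-negative scores.
    """
    rng = np.random.default_rng(seed)
    prob = np.asarray(saliency, dtype=float)
    prob = prob / prob.sum()                   # normalise to probabilities
    idx = rng.choice(len(points), size=budget, replace=False, p=prob)
    return np.asarray(points)[idx]

# 1000 random points with invented saliency concentrated near the origin
pts = np.random.default_rng(0).normal(size=(1000, 3))
sal = 1.0 / (1.0 + np.linalg.norm(pts, axis=1))
kept = saliency_subsample(pts, sal, budget=100, seed=0)
```

A real pipeline would pair such a sampler with geometry-aware saliency (curvature, visibility, gaze) and an encoder for transmission; the sketch only shows the budget-allocation step.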
Full article
Open Access Article
How New Developers Approach Augmented Reality Development Using Simplified Creation Tools: An Observational Study
by Narges Ashtari and Parmit K. Chilana
Multimodal Technol. Interact. 2024, 8(4), 35; https://doi.org/10.3390/mti8040035 - 22 Apr 2024
Abstract
Software developers new to creating Augmented Reality (AR) experiences often gravitate towards simplified development environments, such as 3D game engines. While popular game engines such as Unity and Unreal have evolved to offer extensive support and functionalities for AR creation, many developers still find it difficult to realize their immersive development projects. We ran an observational study with 12 software developers to assess how they approach the initial AR creation processes using a simplified development framework, the information resources they seek, and how their learning experience compares to the more mainstream 2D development. We observed that developers often started by looking for code examples rather than breaking down complex problems, leading to challenges in visualizing the AR experience. They encountered vocabulary issues and found trial-and-error methods ineffective due to a lack of familiarity with 3D environments, physics, and motion. These observations highlight the distinct needs of emerging AR developers and suggest that conventional code reuse strategies in mainstream development may be less effective in AR. We discuss the importance of developing more intuitive training and learning methods to foster diversity in developing interactive systems and support self-taught learners.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Article
EEG, Pupil Dilations, and Other Physiological Measures of Working Memory Load in the Sternberg Task
by Mohammad Ahmadi, Samantha W. Michalka, Marzieh Ahmadi Najafabadi, Burkhard C. Wünsche and Mark Billinghurst
Multimodal Technol. Interact. 2024, 8(4), 34; https://doi.org/10.3390/mti8040034 - 19 Apr 2024
Abstract
Recent evidence shows that physiological cues, such as pupil dilation (PD), heart rate (HR), skin conductivity (SC), and electroencephalography (EEG), can indicate cognitive load (CL) in users while performing tasks. This paper aims to investigate physiological (multimodal) measurement of CL in a Sternberg memory task as the difficulty level increases in both the maintenance and probe phases. For this purpose, we designed a Sternberg memory test with four levels of difficulty determined by the number of letters in the words that need to be remembered. Our behavioral performance results show that the CL of the task is related to the number of letters in non-semantic words, which confirms that this task serves as an appropriate metric of CL (the task difficulty increases as the number of letters in words increases). We were interested in investigating the suitability of multimodal physiological measures as correlates of four CL levels for both the maintenance and probe phases in the Sternberg memory task. Our motivation was to: (1) design and create four levels of task difficulty with a gradual increase in CL rather than just high and low CL, (2) use the Sternberg test as our test bed, (3) explore both the maintenance and probe phases for measurement of CL, and (4) explore the correlation of physiological cues (PD, HR, SC, EEG) with CL in both phases. Testing with the system, we found that for both the maintenance and probe phases, there was a significant positive linear relationship between average baseline corrected PD and CL. We also observed that the average baseline corrected SC showed significant increases as the number of letters in the words increased for both the maintenance and probe phases. However, the HR analysis did not show any correlation with an increase in CL in either the maintenance or probe phase.
An additional analysis was conducted to investigate the correlation of these physiological signals for high (seven-letter words) versus low (four-letter words) CL. Our EEG analysis for the maintenance phase found significant positive linear relationships between the power spectral density (PSD) and CL for the upper alpha band in the centrotemporal, frontal, and occipitoparietal regions of the brain, and significant positive linear relationships between the PSD and CL for the lower alpha band in the frontal and occipitoparietal regions. However, our EEG analysis of the probe phase did not show any linear relationship between the PSD and CL in any region. These results suggest that PD, SC, and EEG could be used as suitable metrics for the measurement of cognitive load in Sternberg memory tasks. We discuss these findings, the limitations of the study, and directions for future work.
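The “average baseline corrected PD” measure used above can be illustrated with a minimal sketch: subtract the mean pupil diameter of a pre-trial baseline window from each in-trial sample, then average. The sample values are hypothetical, and the study’s exact baseline window may differ:

```python
import numpy as np

def baseline_corrected_pd(trial_samples, baseline_samples):
    """Average baseline-corrected pupil diameter for one trial.

    Subtracts the mean pupil diameter of a pre-trial baseline window from
    each in-trial sample, then averages the differences.
    """
    baseline = np.mean(baseline_samples)
    return float(np.mean(np.asarray(trial_samples, dtype=float) - baseline))

# Hypothetical samples (mm): trial dilation above a 3.0 mm baseline
delta = baseline_corrected_pd([3.2, 3.4, 3.3], [3.0, 3.0, 3.0])
```

Baseline correction of this kind removes slow per-participant offsets so that trial-level differences can be attributed to the manipulation (here, word length) rather than resting pupil size.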
Open Access Article
The Effect of Culture and Social-Cognitive Characteristics on App Preference and Willingness to Use a Fitness App
by Kiemute Oyibo and Julita Vassileva
Multimodal Technol. Interact. 2024, 8(4), 33; https://doi.org/10.3390/mti8040033 - 17 Apr 2024
Abstract
Fitness apps are persuasive tools developed to motivate physical activity. Despite their popularity, there is little work on how social-cognitive characteristics such as culture, household size, physical activity level, perceived self-efficacy, and social support influence users’ willingness to use them and their preference (personal vs. social). Knowing these relationships can help developers tailor fitness apps to different socio-cultural groups. Hence, we conducted two studies to address this research gap. In the first study (n = 194), aimed at recruiting participants for the second study, we asked participants about their app preference (personal vs. social), physical activity level, and key demographic variables. In the second study (n = 49), we asked participants about their social-cognitive beliefs about exercise and their willingness to use a fitness app (presented as a screenshot). The results of the first study showed that, in the collectivist group (Nigerians), people in large households were more likely to be active and to use the social version of a fitness app than those in small households. However, in the individualist group (Canadians/Americans), neither the preference for the social or personal version of a fitness app nor the physical activity level depended on household size. Moreover, in the second study, in the individualist model, perceived self-efficacy and perceived self-regulation had a significant total effect on willingness to use a fitness app. However, in the collectivist model, perceived social support and outcome expectation had a significant total effect on the target construct. Finally, we found that females in individualist cultures had higher overall social-cognitive beliefs about exercise than males in individualist cultures and females in collectivist cultures. The implications of the findings are discussed.
Open Access Systematic Review
A Comparison of Parenting Strategies in a Digital Environment: A Systematic Literature Review
by Leonarda Banić and Tihomir Orehovački
Multimodal Technol. Interact. 2024, 8(4), 32; https://doi.org/10.3390/mti8040032 - 12 Apr 2024
Abstract
In the modern digital landscape, parental involvement in shaping children’s internet usage has gained unprecedented importance. This research delves into the evolving trends of parental mediation concerning children’s internet activities. As the digital realm increasingly influences young lives, the role of parents in guiding and safeguarding their children’s online experiences becomes crucial. The study addresses key research questions to explore the strategies parents adopt, the content they restrict, the rules they establish, the potential exposure to inappropriate content, and the impact of parents’ computer literacy on their children’s internet safety. Additionally, the research includes a thematic question that broadens the analysis by incorporating insights from studies not directly answering the primary questions but contributing valuable context and understanding to the digital parenting arena. Building on this, the findings from a systematic literature review, conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, highlight a shift towards more proactive parental involvement. Incorporating 49 studies from 11 databases, these findings reveal the current trends and methodologies in parental mediation. Active mediation strategies, which involve positive interactions and discussions about online content, are gaining recognition alongside the prevalent restrictive mediation approaches. Parents are proactively forbidding specific internet content, emphasizing safety and privacy concerns. Moreover, the emergence of parents’ computer literacy as a significant factor influencing their children’s online safety underlines the importance of digital proficiency. 
By shedding light on the contemporary landscape of parental mediation, this study contributes to a deeper understanding of how parents navigate their children’s internet experiences and the challenges they face in ensuring responsible and secure online engagement. The implications of these findings offer valuable insights for both practitioners and researchers, emphasizing the need for active parental involvement and the importance of enhancing parents’ digital proficiency. Despite limitations due to the language and methodological heterogeneity among the included studies, this research paves the way for future investigations into digital parenting practices.
Open Access Article
MirrorCampus: A Synchronous Hybrid Learning Environment That Supports Spatial Localization of Learners for Facilitating Discussion-Oriented Behaviors
by Shota Sawada, SunKyoung Kim, Masakazu Hirokawa and Kenji Suzuki
Multimodal Technol. Interact. 2024, 8(4), 31; https://doi.org/10.3390/mti8040031 - 11 Apr 2024
Abstract
A growing number of higher-education institutions are implementing synchronous hybrid delivery, which provides both online and on-campus learners with simultaneous instruction, especially for facilitating discussions in Active Learning (AL) contexts. However, learners face difficulties in picking up social cues and gaining free access to speaking rights due to the geometrical misalignment of individuals mediated through screens. We assume that discussion can be cultivated by ensuring a spatial localization of learners similar to that in a physical space. This study aims to design a synchronous hybrid learning environment, called Mirror Campus (MC), suitable for AL scenarios, which connects physical and cyberspaces by providing spatial localization of learners. We hypothesize that the MC promotes discussion-oriented behaviors and eventually enhances applied skills for group tasks related to discussion, creativity, decision-making, and interdependence. We conducted an experiment with five groups, in which the four participants in each group were asked to discuss a given topic for fifteen minutes, and found that the occurrences of facing behaviors, intervening, and simultaneous utterances in the MC increased significantly compared to conventional video conferencing. In conclusion, this study demonstrated the significance of the spatial localization of learners in facilitating discussion-oriented behaviors such as facing and speech.
(This article belongs to the Special Issue Designing EdTech and Virtual Learning Environments)
Open Access Article
iPlan: A Platform for Constructing Localized, Reduced-Form Models of Land-Use Impacts
by Andrew R. Ruis, Carol Barford, Jais Brohinsky, Yuanru Tan, Matthew Bougie, Zhiqiang Cai, Tyler J. Lark and David Williamson Shaffer
Multimodal Technol. Interact. 2024, 8(4), 30; https://doi.org/10.3390/mti8040030 - 10 Apr 2024
Abstract
To help young people understand socio-environmental systems and develop the confidence that meaningful action can be taken to address socio-environmental problems, they need interactive simulations that enable them to take consequential actions in a familiar context and see the results. This can be achieved through reduced-form models with appropriate user interfaces, but building a system capable of producing educational models of socio-environmental systems that are localizable and customizable yet accessible to educators and learners is a significant challenge. In this paper, we present iPlan, a free, online educational software application designed to enable educators and middle- and high-school-aged learners to create custom, localized land-use simulations that can be used to frame, explore, and address complex land-use problems. We describe the software application and its underlying computational models in detail, and we present robust evidence that the accuracy of iPlan simulations is appropriate for educational contexts, as well as preliminary evidence that educators are able to produce simulations suited to their pedagogical goals and learner populations.
Open Access Article
A Two-Level Highlighting Technique Based on Gaze Direction to Improve Target Pointing and Selection on a Big Touch Screen
by Valéry Marcial Monthe and Thierry Duval
Multimodal Technol. Interact. 2024, 8(4), 29; https://doi.org/10.3390/mti8040029 - 10 Apr 2024
Abstract
In this paper, we present an approach to improving pointing methods and target selection on tactile human–machine interfaces. The approach defines a two-level highlighting technique (TLH) based on gaze direction for target selection on a touch screen. The technique uses the orientation of the user’s head to approximate the direction of their gaze and uses this information to preselect potential targets. An experimental system with a multimodal interface was prototyped to assess the impact of the TLH on target selection on a touch screen and to compare its performance with that of traditional methods (mouse and touch). We conducted an experiment to assess the effectiveness of our proposal in terms of the selection error rate and task completion time. We also collected subjective ratings of ease of use, suitability for selection, the confidence brought by the TLH, and its contribution to improving target selection. Statistical results show that the proposed TLH significantly reduces the selection error rate and the time to complete tasks.
Topics
Topic in Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2024
Special Issues
Special Issue in MTI
Designing an Inclusive and Accessible Metaverse
Guest Editors: Callum Parker, Soojeong Yoo, Joel Fredericks, Youngho Lee, Youngjun Cho, Mark Billinghurst
Deadline: 20 June 2024
Special Issue in MTI
Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives
Guest Editors: Wei Liu, Jan Auernhammer, Takumi Ohashi, Di Zhu, Kuo-Hsiang Chen
Deadline: 30 June 2024
Special Issue in MTI
Multimodal Interaction in Education
Guest Editor: Wajeeh Daher
Deadline: 20 August 2024
Special Issue in MTI
Effectiveness of Serious Games in Risk Communication of Natural Disasters
Guest Editors: Rui Jesus, Pedro Albuquerque Santos, Maria Ana Viana-Baptista
Deadline: 20 September 2024