DrawTalking: Building Interactive Worlds by Sketching and Speaking

We introduce DrawTalking, a prototype system that lets users build interactive worlds by sketching and speaking. The approach emphasizes user control and flexibility, and provides programming-like capability without requiring code. An early open-ended study shows that the mechanics resonate with users and apply to many creative-exploratory use cases, with the potential to inspire and inform research on future natural interfaces for creative exploration and authoring.

Determining the Difficulties of Students With Dyslexia via Virtual Reality and Artificial Intelligence: An Exploratory Analysis

Learning disorders are neurological conditions that affect the brain's ability to interconnect its communication areas. Dyslexic students experience problems with reading, memorizing, and presenting concepts; however, the impact of these difficulties can be mitigated through therapy and the development of compensatory mechanisms. Several efforts have been made to mitigate these issues, leading to the creation of digital resources for students with specific learning disorders attending primary and secondary education; however, a standard approach is still lacking in higher education. The VRAIlexia project was created to tackle this issue by proposing two tools: a mobile application integrating virtual reality (VR) to collect data quickly and easily, and software based on artificial intelligence (AI) to analyze the collected data and customize the supporting methodology for each student. The first tool has been developed and is being distributed to dyslexic students in higher education institutions to administer specific psychological and psychometric tests. The second tool applies specific AI algorithms to the data gathered via the application and other surveys. These AI techniques have allowed us to identify the most relevant difficulties faced by the student cohort. Our models achieved around 90% mean accuracy in predicting the support tools and learning strategies.
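As a rough illustration of the kind of analysis the abstract describes, the sketch below trains a simple classifier to predict the most helpful support tool from tabular psychometric features and reports cross-validated accuracy. The file name, column names, and model choice are assumptions for illustration only, not the project's actual pipeline.

```python
# Minimal sketch: predict a preferred support tool from psychometric features.
# Data layout, file name, label column, and model are assumptions.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per student, psychometric scores as features,
# and the support tool reported as most helpful as the label.
data = pd.read_csv("vrailexia_survey.csv")        # assumed file name
X = data.drop(columns=["best_support_tool"])      # assumed label column
y = data["best_support_tool"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)       # 5-fold cross-validation
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```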

A Large-Scale Feasibility Study of Screen-based 3D Visualization and Augmented Reality Tools for Human Anatomy Education: Exploring Gender Perspectives in Learning Experience

Anatomy education is an indispensable part of medical training, but traditional methods face challenges such as limited resources for dissection in large classes and difficulty understanding 2D anatomy in textbooks. Advanced technologies, such as 3D visualization and augmented reality (AR), are transforming anatomy learning. This paper presents two in-house solutions that use handheld tablets or screen-based AR to visualize 3D anatomy models with informative labels and in-situ visualizations of muscle anatomy. To assess these tools, a user study of muscle anatomy education involved 236 premedical students in dyadic teams; the results show that the tablet-based 3D visualization and screen-based AR tools led to significantly higher learning experience scores than the traditional textbook. While knowledge retention did not differ significantly, ethnographic and gender analysis showed that male students generally reported more positive learning experiences than female students. This study discusses the implications for anatomy and medical education, highlighting the potential of these innovative learning tools while considering gender and team dynamics in body-painting anatomy learning interventions.

Saccade-Contingent Rendering

Battery-constrained power consumption, compute limitations, and high frame rate requirements in head-mounted displays present unique challenges in the drive to present increasingly immersive and comfortable imagery in virtual reality. However, humans are not equally sensitive to all regions of the visual field, and perceptually optimized rendering techniques are increasingly used to address these bottlenecks. Many of these techniques are gaze-contingent and often render reduced detail away from a user's fixation. Such techniques depend on spatio-temporally accurate gaze tracking and can produce obvious visual artifacts when eye tracking is inaccurate. In this work, we present a gaze-contingent rendering technique that requires only saccade detection, bypassing the need for highly accurate eye tracking. In our first experiment, we show that visual acuity is reduced for several hundred milliseconds after a saccade. In our second experiment, we use these results to reduce the rendered image resolution after saccades in a controlled psychophysical setup, and find that observers cannot discriminate between saccade-contingent reduced-resolution rendering and full-resolution rendering. Finally, in our third experiment, we introduce a 90-pixels-per-degree headset and validate our saccade-contingent rendering method under typical VR viewing conditions.
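As a rough illustration of the mechanism described above, the sketch below shows one way a render loop might briefly lower the render-target resolution after a detected saccade. The velocity-threshold detector, API names (eye_tracker.gaze_velocity, renderer.set_resolution_scale), thresholds, and the post-saccadic window are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of saccade-contingent rendering (hypothetical API names).
# Idea: acuity is reduced for several hundred milliseconds after a saccade,
# so the renderer can temporarily drop resolution without the observer noticing.

import time

SACCADE_VELOCITY_THRESHOLD = 100.0   # deg/s; assumed detection threshold
LOW_RES_WINDOW_S = 0.3               # assumed post-saccadic window (~300 ms)
FULL_SCALE = 1.0                     # full render-target resolution
REDUCED_SCALE = 0.5                  # assumed reduced-resolution scale


def is_saccade(gaze_velocity_deg_per_s: float) -> bool:
    """Simple velocity-threshold saccade detector (a common heuristic)."""
    return gaze_velocity_deg_per_s > SACCADE_VELOCITY_THRESHOLD


def render_loop(eye_tracker, renderer):
    """Per-frame loop: lower resolution briefly after each detected saccade."""
    low_res_until = 0.0
    while True:
        now = time.monotonic()
        if is_saccade(eye_tracker.gaze_velocity()):   # hypothetical tracker API
            low_res_until = now + LOW_RES_WINDOW_S
        scale = REDUCED_SCALE if now < low_res_until else FULL_SCALE
        renderer.set_resolution_scale(scale)          # hypothetical renderer API
        renderer.draw_frame()
```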