CelluloTactix: Towards Empowering Collaborative Online Learning through Tangible Haptic Interaction with Cellulo Robots

arXiv:2404.11876v1 Announce Type: new Abstract: Online learning has soared in popularity in the educational landscape of COVID-19 and carries the benefits of increased flexibility and access to far-away training resources. However, it also restricts communication between peers and teachers, limits physical interactions and confines learning to the computer screen and keyboard. In this project, we designed a novel way to engage students in collaborative online learning by using haptic-enabled tangible robots, Cellulo. We built a library which connects two robots remotely for a learning activity based on the structure of a biological cell. To discover how separate modes of haptic feedback might differentially affect collaboration, two modes of haptic force-feedback were implemented (haptic co-location and haptic consensus). With a case study, we found that the haptic co-location mode seemed to stimulate collectivist behaviour to a greater extent than the haptic consensus mode, which was associated with individualism and less interaction. While the haptic co-location mode seemed to encourage information pooling, participants using the haptic consensus mode tended to focus more on technical co-ordination. This work introduces a novel system that can provide interesting insights into how to integrate haptic feedback into collaborative remote learning activities in the future.
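
One way to picture a haptic co-location mode is a virtual spring-damper link that pulls each robot toward its remote partner's position. The sketch below is purely illustrative: the gains, the 2D force interface, and the function names are assumptions, not the paper's implementation or the Cellulo API.

```python
# Hypothetical sketch of a haptic co-location force: the local robot is
# pulled toward the remote partner's position by a virtual spring-damper
# coupling. Gains K_P / K_D are placeholder values, not the paper's.
K_P = 40.0   # N/m, virtual spring stiffness
K_D = 2.0    # N*s/m, damping on relative velocity

def colocation_force(local_pos, local_vel, remote_pos, remote_vel):
    """Return the 2D force (fx, fy) to render on the local robot."""
    fx = K_P * (remote_pos[0] - local_pos[0]) + K_D * (remote_vel[0] - local_vel[0])
    fy = K_P * (remote_pos[1] - local_pos[1]) + K_D * (remote_vel[1] - local_vel[1])
    return (fx, fy)
```

When both robots run this law against each other's streamed pose, they settle at a shared position, which is one plausible mechanism behind the "pulling together" feel of co-location feedback.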

Developing Situational Awareness for Joint Action with Autonomous Vehicles

arXiv:2404.11800v1 Announce Type: new Abstract: Unanswered questions about how human-AV interaction designers can support riders' informational needs hinder Autonomous Vehicle (AV) adoption. To achieve joint human-AV action goals - such as safe transportation, trust, or learning from an AV - sufficient situational awareness must be held by the human, AV, and human-AV system collectively. We present a systems-level framework that integrates cognitive theories of joint action and situational awareness as a means to tailor communications that meet the criteria necessary for goal success. This framework is based on four components of the shared situation: AV traits, action goals, subject-specific traits and states, and the situated driving context. AV communications should be tailored to these factors and adapt when they change. This framework can be useful for understanding individual, shared, and distributed human-AV situational awareness and for designing future AV communications that meet the informational needs and goals of diverse groups in diverse driving contexts.

Ephemeral Myographic Motion: Repurposing the Myo Armband to Control Disposable Pneumatic Sculptures

arXiv:2404.08065v1 Announce Type: new Abstract: This paper details the development of an interactive sculpture built from deprecated hardware technology and intentionally decomposable, transient materials. We detail a case study of "Strain" - an emotive prototype that reclaims two orphaned digital artifacts to power a kinetic sculpture made of common disposable objects. We use the Myo, an abandoned myoelectric armband, in concert with the Programmable Air, a soft-robotics prototyping project, to manipulate a pneumatic bladder array constructed from condoms, bamboo skewers, and a small library of 3D printed PLA plastic connectors designed to work with these generic parts. The resulting sculpture achieves surprisingly organic actuation. The goal of this project is to produce several reusable components: software to resuscitate the Myo Armband, homeostasis software for the Programmable Air or equivalent pneumatic projects, and a library of easily-printed parts that will work with generic bamboo disposables for sculptural prototyping. This project works to develop usable, repeatable engineering by applying it to a slightly whimsical object that promotes a strong emotional response in its audience. Through this, we transform the disposable into the sustainable. In this paper, we reflect on project-based insights into rescuing and revitalizing abandoned consumer electronics for future works.
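
The "homeostasis software" mentioned above can be imagined as a simple pressure regulator with a dead band, so the pump and vent valve do not chatter around the setpoint. This is a minimal sketch under assumed names and units; the actual Programmable Air firmware exposes its own interface.

```python
# Illustrative bang-bang controller with hysteresis for a pneumatic
# bladder: keep measured pressure near a setpoint without rapid
# pump/vent cycling. Units and the +/- band width are placeholders.
def homeostasis_step(pressure, setpoint, band=5.0):
    """Return 'inflate', 'vent', or 'hold' for one control tick."""
    if pressure < setpoint - band:
        return "inflate"   # below the dead band: run the pump
    if pressure > setpoint + band:
        return "vent"      # above the dead band: open the vent valve
    return "hold"          # inside the dead band: do nothing
```

Calling this in a loop against a pressure sensor reading yields the slow, breathing-like inflation cycles that give the bladder array its organic motion.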

Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap

arXiv:2404.00019v1 Announce Type: new Abstract: Given the uncertainty surrounding how existing explainability methods for autonomous vehicles (AVs) meet the diverse needs of stakeholders, a thorough investigation is imperative to determine the contexts requiring explanations and suitable interaction strategies. A comprehensive review becomes crucial to assess the alignment of current approaches with the varied interests and expectations within the AV ecosystem. This study presents a review to discuss the complexities associated with explanation generation and presentation to facilitate the development of more effective and inclusive explainable AV systems. Our investigation led to categorising existing literature into three primary topics: explanatory tasks, explanatory information, and explanatory information communication. Drawing upon our insights, we have proposed a comprehensive roadmap for future research centred on (i) knowing the interlocutor, (ii) generating timely explanations, (iii) communicating human-friendly explanations, and (iv) continuous learning. Our roadmap is underpinned by principles of responsible research and innovation, emphasising the significance of diverse explanation requirements. To effectively tackle the challenges associated with implementing explainable AV systems, we have delineated various research directions, including the development of privacy-preserving data integration, ethical frameworks, real-time analytics, human-centric interaction design, and enhanced cross-disciplinary collaborations. By exploring these research directions, the study aims to guide the development and deployment of explainable AVs, informed by a holistic understanding of user needs, technological advancements, regulatory compliance, and ethical considerations, thereby ensuring safer and more trustworthy autonomous driving experiences.

Bringing Robots Home: The Rise of AI Robots in Consumer Electronics

arXiv:2403.14449v1 Announce Type: cross Abstract: On March 18, 2024, NVIDIA unveiled Project GR00T, a general-purpose multimodal generative AI model designed specifically for training humanoid robots. Preceding this event, Tesla's unveiling of the Optimus Gen 2 humanoid robot on December 12, 2023, underscored the profound impact robotics is poised to have on reshaping various facets of our daily lives. While robots have long dominated industrial settings, their presence within our homes is a burgeoning phenomenon. This can be attributed, in part, to the complexities of domestic environments and the challenges of creating robots that can seamlessly integrate into our daily routines.

Customizable Perturbation Synthesis for Robust SLAM Benchmarking

Robustness is a crucial factor for the successful deployment of robots in unstructured environments, particularly in the domain of Simultaneous Localization and Mapping (SLAM). Simulation-based benchmarks have emerged as a highly scalable approach for robustness evaluation compared to real-world data collection. However, crafting a challenging and controllable noisy world with diverse perturbations remains relatively under-explored. To this end, we propose a novel, customizable pipeline for noisy data synthesis, aimed at assessing the resilience of multi-modal SLAM models against various perturbations. This pipeline incorporates customizable hardware setups, software components, and perturbed environments. In particular, we introduce a comprehensive perturbation taxonomy along with a perturbation composition toolbox, allowing the transformation of clean simulations into challenging noisy environments. Utilizing the pipeline, we instantiate the Robust-SLAM benchmark, which includes diverse perturbation types, to evaluate the risk tolerance of existing advanced multi-modal SLAM models. Our extensive analysis uncovers the susceptibilities of existing SLAM models to real-world disturbance, despite their demonstrated accuracy in standard benchmarks. Our perturbation synthesis toolbox, SLAM robustness evaluation pipeline, and Robust-SLAM benchmark will be made publicly available at https://github.com/Xiaohao-Xu/SLAM-under-Perturbation/.
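
The idea of a perturbation composition toolbox can be sketched as a set of frame-level corruptions chained into one transform. The noise models and names below are illustrative stand-ins, not the paper's taxonomy; a sensor frame is modelled here as a flat list of pixel intensities.

```python
import random

# Minimal sketch of perturbation composition: each perturbation is a
# function over a sensor frame; compose() chains them so a clean frame
# becomes a noisy one. Parameters are placeholder values.
def gaussian_noise(sigma):
    def apply(frame):
        return [x + random.gauss(0.0, sigma) for x in frame]
    return apply

def pixel_dropout(p):
    def apply(frame):
        # Zero out each pixel independently with probability p.
        return [0.0 if random.random() < p else x for x in frame]
    return apply

def compose(*perturbations):
    def apply(frame):
        for perturb in perturbations:
            frame = perturb(frame)
        return frame
    return apply

# Example: additive sensor noise followed by sparse dropout.
corrupt = compose(gaussian_noise(0.05), pixel_dropout(0.1))
```

Real toolboxes of this kind additionally perturb trajectories, calibration, and timing, but the composition pattern is the same: small, parameterised corruptions combined into reproducible noisy worlds.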

Extending 3D body pose estimation for robotic-assistive therapies of autistic children

Robotic-assistive therapy has demonstrated very encouraging results for children with autism. Accurate estimation of the child's pose is essential both for human-robot interaction and for therapy assessment purposes. Non-intrusive methods are the sole viable option since these children are sensitive to touch. While depth cameras have been used extensively, existing methods face two major limitations: (i) they are usually trained with adult-only data and do not correctly estimate a child's pose, and (ii) they fail in scenarios with a high number of occlusions. Therefore, our goal was to develop a 3D pose estimator for children, by adapting an existing state-of-the-art 3D body modelling method and incorporating a linear regression model to fine-tune one of its inputs, thereby correcting the pose of children's 3D meshes. In controlled settings, our method has an error below $0.3\,$m, which is considered acceptable for this kind of application and lower than that of current state-of-the-art methods. In real-world settings, the proposed model performs similarly to a Kinect depth camera and manages to successfully estimate the 3D body poses in a much higher number of frames.
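
The correction step described above amounts to learning a linear map from an adult-model input to a child-appropriate value. The scalar "parameter" below is a hypothetical stand-in (e.g. a shape coefficient); only the ordinary-least-squares fit itself is standard.

```python
# Sketch of a linear-regression correction: fit y = a*x + b on paired
# (adult-model parameter, child ground-truth parameter) samples, then
# apply it to correct new estimates. Pure closed-form OLS, 1D case.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def correct(param, a, b):
    """Map an adult-model input to its child-corrected value."""
    return a * param + b
```

The appeal of this design is its data efficiency: a linear correction can be fit from a small number of annotated child frames, whereas retraining the full body model would require a large child dataset.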

Does ChatGPT and Whisper Make Humanoid Robots More Relatable?

Humanoid robots are designed to be relatable to humans for applications such as customer support and helpdesk services. However, many such systems, including Softbank's Pepper, fall short because they fail to communicate effectively with humans. The advent of Large Language Models (LLMs) shows the potential to solve the communication barrier for humanoid robotics. This paper outlines the comparison of different Automatic Speech Recognition (ASR) APIs, the integration of Whisper ASR and ChatGPT with the Pepper robot, and the evaluation of the resulting system (Pepper-GPT) tested by 15 human users. The comparison result shows that, compared to the Google ASR and Google Cloud ASR, the Whisper ASR performed best, as its average Word Error Rate (1.716%) and processing time (2.639 s) are both the lowest. The participants' usability investigations show that 60% of the participants thought the performance of the Pepper-GPT was "excellent", while the rest rated this system as "good" in the subsequent experiments. The results show that, while some problems still need to be overcome, such as the robot's multilingual ability and facial tracking capacity, users generally responded positively to the system, feeling as if they were talking to an actual human.
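
Word Error Rate, the metric used to rank the ASR back ends above, is the word-level edit distance between a reference transcript and the hypothesis, divided by the reference length. The definition is standard; the implementation below is a plain dynamic-programming sketch.

```python
# Word Error Rate: (substitutions + insertions + deletions) / reference
# word count, computed via Levenshtein distance over word sequences.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)
```

A reported 1.716% average WER thus means fewer than two word errors per hundred reference words, averaged over the test utterances.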

Risk assessment and observation of driver with pedestrian using instantaneous heart rate and HRV

Currently, human drivers outperform self-driving vehicles in many conditions, such as collision avoidance. Understanding human driver behaviour in these conditions will therefore provide insight for future autonomous vehicles. Risk assessment is one approach to understanding driver behaviour, using both subjective and objective measurements. Subjective methods such as questionnaires may provide insight into driver risk assessment, but there is often significant variability between drivers. Physiological measurements such as heart rate (HR), electroencephalogram (EEG), and electromyogram (EMG) provide more objective measures of driver risk assessment. HR is often used to measure driver risk assessment based on observed correlations between HR and risk perception. Previous work has used HR to measure driver risk assessment in self-driving systems, but pedestrian dynamics were not considered. In this study, we observed driver behaviour in driving-simulator scenarios involving a pedestrian. The scenarios include safe and unsafe situations (e.g., a pedestrian crosses the road and the vehicle may hit the pedestrian), and HR analysis in the time and frequency domains is applied for risk assessment. The results show that frequency-domain HR analysis is a reasonable indicator of driver risk assessment when a pedestrian is present in traffic.
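
Frequency-domain HRV analysis typically compares power in the low-frequency (LF, 0.04-0.15 Hz) and high-frequency (HF, 0.15-0.40 Hz) bands of an evenly resampled heart-rate signal. The band edges follow common HRV convention; the naive DFT periodogram below is an illustrative sketch, not the study's processing pipeline.

```python
import math

# Band power of an evenly sampled instantaneous-HR signal via a naive
# DFT periodogram, and the LF/HF ratio commonly used as an arousal /
# sympathetic-activity index. fs is the resampling rate in Hz.
def band_power(signal, fs, f_lo, f_hi):
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def lf_hf_ratio(hr_signal, fs=4.0):
    lf = band_power(hr_signal, fs, 0.04, 0.15)
    hf = band_power(hr_signal, fs, 0.15, 0.40)
    return lf / hf if hf > 0 else float("inf")
```

In practice a windowed PSD estimate (e.g. Welch's method) is preferred over this raw periodogram, but the LF/HF comparison it feeds is the same.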

Towards Robotic Companions: Understanding Handler-Guide Dog Interactions for Informed Guide Dog Robot Design

Dog guides are favored by blind and low-vision (BLV) individuals for their ability to enhance independence and confidence by reducing safety concerns and increasing navigation efficiency compared to traditional mobility aids. However, only a relatively small proportion of BLV individuals work with dog guides due to their limited availability and associated maintenance responsibilities. There is considerable recent interest in addressing this challenge by developing legged guide dog robots. This study was designed to determine critical aspects of the handler-guide dog interaction and better understand handler needs to inform guide dog robot development. We conducted semi-structured interviews and observation sessions with 23 dog guide handlers and 5 trainers. Thematic analysis revealed critical limitations in guide dog work, desired personalization in handler-guide dog interaction, and important perspectives on future guide dog robots. Grounded in these findings, we discuss pivotal design insights for guide dog robots aimed at adoption within the BLV community.