How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps

arXiv:2404.16042v1 Announce Type: new Abstract: Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while saliency maps can enhance human performance, null effects or even costs are quite common. To investigate what modulates these effects, the empirical outcomes were organised along several factors related to the human tasks, AI performance, XAI methods, images to be classified, human participants and comparison conditions. In image-focused tasks, benefits were less common than in AI-focused tasks, but the effects depended on the specific cognitive requirements. Moreover, benefits were usually restricted to incorrect AI predictions in AI-focused tasks but to correct ones in image-focused tasks. XAI-related factors had surprisingly little impact. The evidence was limited for image- and human-related factors and the effects were highly dependent on the comparison conditions. These findings may support the design of future user studies.

Integrated Control of Robotic Arm through EMG and Speech: Decision-Driven Multimodal Data Fusion

arXiv:2404.15283v1 Announce Type: new Abstract: Interactions with electronic devices are changing in our daily lives. Day-to-day technological development invites curiosity about recent technology while also challenging its use. Gadgets are becoming cumbersome, and their usage frustrates a segment of society. In specific scenarios, users cannot use certain modalities because of the challenges they bring, e.g., the use of touch-screen devices by elderly people. The idea of multimodality provides easy access to devices of daily use through various modalities. In this paper, we suggest a solution that allows the operation of a microcontroller-based device using EMG and speech. The implemented model will learn from the user's behavior and make decisions based on prior knowledge.
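The decision-driven fusion the abstract describes can be illustrated at the decision level: each modality's classifier votes with a confidence, and the fused choice weights those votes by per-modality reliability learned from the user's past behaviour. This is a minimal sketch under those assumptions; the function names, commands, and reliability weights are invented for illustration and are not the paper's implementation.

```python
def fuse_decisions(emg, speech, priors):
    """Decision-level fusion of two modality classifiers.

    emg, speech: (command, confidence) tuples from each modality's classifier.
    priors: per-modality reliability weights, e.g. learned from past behaviour.
    Returns the command with the highest reliability-weighted score.
    """
    scores = {}
    for modality, (cmd, conf) in {"emg": emg, "speech": speech}.items():
        scores[cmd] = scores.get(cmd, 0.0) + conf * priors[modality]
    return max(scores, key=scores.get)

priors = {"emg": 0.6, "speech": 0.8}  # illustrative reliability weights
print(fuse_decisions(("grip", 0.7), ("grip", 0.9), priors))     # agreement -> grip
print(fuse_decisions(("grip", 0.4), ("release", 0.9), priors))  # speech outweighs EMG
```

When the modalities disagree, the more reliable and more confident modality wins, which is one simple way prior knowledge about the user can drive the decision.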

CelluloTactix: Towards Empowering Collaborative Online Learning through Tangible Haptic Interaction with Cellulo Robots

arXiv:2404.11876v1 Announce Type: new Abstract: Online learning has soared in popularity in the educational landscape of COVID-19 and carries the benefits of increased flexibility and access to far-away training resources. However, it also restricts communication between peers and teachers, limits physical interactions and confines learning to the computer screen and keyboard. In this project, we designed a novel way to engage students in collaborative online learning by using haptic-enabled tangible robots, Cellulo. We built a library which connects two robots remotely for a learning activity based around the structure of a biological cell. To discover how separate modes of haptic feedback might differentially affect collaboration, two modes of haptic force-feedback were implemented (haptic co-location and haptic consensus). In a case study, we found that the haptic co-location mode seemed to stimulate collectivist behaviour to a greater extent than the haptic consensus mode, which was associated with individualism and less interaction. While the haptic co-location mode seemed to encourage information pooling, participants using the haptic consensus mode tended to focus more on technical co-ordination. This work introduces a novel system that can provide interesting insights into how to integrate haptic feedback into collaborative remote learning activities in the future.

Developing Situational Awareness for Joint Action with Autonomous Vehicles

arXiv:2404.11800v1 Announce Type: new Abstract: Unanswered questions about how human-AV interaction designers can support riders' informational needs hinder Autonomous Vehicle (AV) adoption. To achieve joint human-AV action goals - such as safe transportation, trust, or learning from an AV - sufficient situational awareness must be held by the human, AV, and human-AV system collectively. We present a systems-level framework that integrates cognitive theories of joint action and situational awareness as a means to tailor communications that meet the criteria necessary for goal success. This framework is based on four components of the shared situation: AV traits, action goals, subject-specific traits and states, and the situated driving context. AV communications should be tailored to these factors and be sensitive to changes in them. This framework can be useful for understanding individual, shared, and distributed human-AV situational awareness and for designing future AV communications that meet the informational needs and goals of diverse groups in diverse driving contexts.

Evaluating Tenant-Landlord Tensions Using Generative AI on Online Tenant Forums

arXiv:2404.11681v1 Announce Type: new Abstract: Tenant-landlord relationships exhibit a power asymmetry in which landlords' ability to evict tenants at low cost gives them a dominant status in the relationship. Tenant concerns are thus often unspoken, unresolved, or ignored, and this can lead to open conflicts as suppressed tenant concerns accumulate. Modern machine learning methods and Large Language Models (LLMs) have demonstrated immense abilities to perform language tasks. In this study, we combine Latent Dirichlet Allocation (LDA) with GPT-4 to classify Reddit post data scraped from the subreddit r/Tenant, aiming to unveil trends in tenant concerns while exploring the adoption of LLMs and machine learning methods in social science research. We find that tenant concerns in topics like fee disputes and utility issues are consistently dominant in all four states analyzed, while each state also has common tenant concerns specific to it. Moreover, we discover temporal trends in tenant concerns that carry important implications regarding the impact of the pandemic and the Eviction Moratorium.
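The pipeline described above (topic modelling to surface concern categories, then classification of individual posts) can be sketched in miniature. This toy stand-in replaces real LDA (e.g. via gensim or scikit-learn) and the GPT-4 step with simple keyword overlap; the topic names and keyword sets are invented for illustration, not taken from the study.

```python
from collections import Counter

# Invented topic -> keyword clusters; in the real pipeline these would be
# the top words LDA assigns to each discovered topic.
topics = {
    "fee dispute": {"deposit", "fee", "charge", "refund"},
    "utility issues": {"water", "heat", "electricity", "utility"},
    "eviction": {"evict", "eviction", "notice", "moratorium"},
}

def classify(post: str) -> str:
    """Assign a post to the topic whose keywords it overlaps most."""
    words = set(post.lower().split())
    overlap = Counter({t: len(words & kws) for t, kws in topics.items()})
    return overlap.most_common(1)[0][0]

print(classify("My landlord kept the deposit and added a cleaning fee"))
```

In the study's actual setup, an LLM would do this assignment with far more nuance than keyword overlap, but the overall shape (discover topics, then label each post) is the same.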

Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions

arXiv:2404.11023v1 Announce Type: new Abstract: Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal that involves creating agents that can sense, perceive, reason about, learn from, and respond to the affect, behavior, and cognition of other agents (human or artificial). Progress towards Social-AI has accelerated in the past decade across several computing communities, including natural language processing, machine learning, robotics, human-machine interaction, computer vision, and speech. Natural language processing, in particular, has been prominent in Social-AI research, as language plays a key role in constructing the social world. In this position paper, we identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI. We anchor our discussion in the context of social intelligence concepts and prior progress in Social-AI research.

Tinker or Transfer? A Tale of Two Techniques in Teaching Visualization

arXiv:2404.10967v1 Announce Type: new Abstract: In education there exists a tension between two modes of learning: traditional lecture-based instruction and more tinkering-based creative learning. In this paper, we outline our efforts as two Ph.D. students (who are, importantly, skilled in visualization but not professionally trained visualization experts) to implement creative learning activities in an information visualization course in our home department. We describe our motivation for doing so, and how what began out of necessity turned into an endeavor whose utility we strongly believe in. In implementing these activities, we received largely positive reviews from students, along with constructive feedback that helped us iteratively improve the activities. Finally, we also detail our future plans for turning this work into a formal design inquiry with students, to build a new class centered entirely around creative learning.

Human-Algorithm Collaborative Bayesian Optimization for Engineering Systems

arXiv:2404.10949v1 Announce Type: new Abstract: Bayesian optimization has been successfully applied throughout Chemical Engineering for the optimization of functions that are expensive to evaluate, or where gradients are not easily obtainable. However, domain experts often possess valuable physical insights that are overlooked in fully automated decision-making approaches, necessitating the inclusion of human input. In this article we reintroduce the human into the data-driven decision-making loop by outlining an approach for collaborative Bayesian optimization. Our methodology exploits the hypothesis that humans are more efficient at making discrete choices than continuous ones, and it enables experts to influence critical early decisions. We apply high-throughput (batch) Bayesian optimization alongside discrete decision theory to enable domain experts to influence the selection of experiments. At every iteration we apply a multi-objective approach that results in a set of alternate solutions that have both high utility and are reasonably distinct. The expert then selects the desired solution for evaluation from this set, allowing for the inclusion of expert knowledge and improving accountability, whilst maintaining the advantages of Bayesian optimization. We demonstrate our approach across a number of applied and numerical case studies, including bioprocess optimization and reactor geometry design, showing that even in the case of an uninformed practitioner our algorithm recovers the regret of standard Bayesian optimization. Through the inclusion of ongoing expert input, our approach enables faster convergence and improved accountability for Bayesian optimization in engineering systems.
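The per-iteration selection step described above (a batch of candidates is filtered down to a few high-utility, mutually distinct alternatives, from which the expert makes a discrete choice) can be sketched as follows. This is a minimal illustration, not the article's algorithm: the greedy diversity filter, the distance threshold, and all names are assumptions, and the acquisition values are taken as given rather than computed by a real surrogate model.

```python
import math

def alternate_solutions(candidates, utilities, k=3, min_dist=0.5):
    """Greedily keep up to k candidates, best acquisition value first,
    skipping any candidate closer than min_dist to one already kept.
    The expert then picks one of the returned options to evaluate."""
    order = sorted(range(len(candidates)), key=lambda i: -utilities[i])
    chosen = []
    for i in order:
        if all(math.dist(candidates[i], candidates[j]) >= min_dist for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return [candidates[i] for i in chosen]

# Toy batch proposed by the acquisition function (2-D design points).
cands = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (2.0, 0.0)]
utils = [0.9, 0.8, 0.7, 0.6]
print(alternate_solutions(cands, utils))  # (0.1, 0.0) is dropped as near-duplicate
```

The near-duplicate second candidate is filtered out even though its utility is high, so the expert chooses among genuinely different experiments, which is the point of pairing high utility with distinctness.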

CFlow: Supporting Semantic Flow Analysis of Students’ Code in Programming Problems at Scale

arXiv:2404.10089v1 Announce Type: new Abstract: The high demand for computer science education has led to high enrollments, with thousands of students in many introductory courses. In such large courses, it can be overwhelmingly difficult for instructors to understand class-wide problem-solving patterns or issues, which is crucial for improving instruction and addressing important pedagogical challenges. In this paper, we propose a technique and system, CFlow, for creating understandable and navigable representations of code at scale. CFlow is able to represent thousands of code samples in a visualization that resembles a single code sample. CFlow creates scalable code representations by (1) clustering individual statements with similar semantic purposes, (2) presenting clustered statements in a way that maintains semantic relationships between statements, (3) representing the correctness of different variations as a histogram, and (4) allowing users to navigate through solutions interactively using semantic filters. With a multi-level view design, users can navigate high-level patterns and low-level implementations. This is in contrast to prior tools that either limit their focus to isolated statements (and thus discard the surrounding context of those statements) or cluster entire code samples (which can lead to large numbers of clusters -- for example, if there are n code features and m implementations of each, there can be m^n clusters). We evaluated the effectiveness of CFlow with a comparison study and found that participants using CFlow spent only half the time identifying mistakes and recalled twice as many desired patterns from over 6,000 submissions.
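Step (1) above, clustering statements by semantic purpose rather than clustering whole submissions, can be illustrated with a deliberately crude approximation: normalise identifiers and numeric literals so that statements doing the same thing collapse to one key. The keyword list, placeholders, and normalisation rules here are invented stand-ins; CFlow's actual clustering is more sophisticated than this sketch.

```python
import re
from collections import defaultdict

KEYWORDS = {"for", "in", "range", "if", "return", "while"}

def normalise(stmt: str) -> str:
    """Abstract identifiers to ID and numeric literals to N,
    keeping language keywords, so equivalent statements match."""
    stmt = re.sub(r"\b[a-zA-Z_]\w*\b",
                  lambda m: m.group() if m.group() in KEYWORDS else "ID", stmt)
    stmt = re.sub(r"\b\d+\b", "N", stmt)
    return stmt

def cluster(statements):
    """Group statements sharing the same normalised form."""
    groups = defaultdict(list)
    for s in statements:
        groups[normalise(s)].append(s)
    return groups

subs = ["total = total + x", "s = s + n", "for i in range(10):", "for j in range(5):"]
for key, members in cluster(subs).items():
    print(key, "->", len(members), "statements")
```

Two accumulator statements and two loop headers collapse into two clusters, hinting at how thousands of submissions can be rendered as something resembling a single code sample.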

VizGroup: An AI-Assisted Event-Driven System for Real-Time Collaborative Programming Learning Analytics

arXiv:2404.08743v1 Announce Type: new Abstract: Programming instructors often conduct collaborative learning activities, like Peer Instruction, to foster deeper understanding in students and enhance their engagement with learning. These activities, however, may not always yield productive outcomes due to the diversity of student mental models and ineffective collaboration. In this work, we introduce VizGroup, an AI-assisted system that enables programming instructors to easily oversee students' real-time collaborative learning behaviors during large programming courses. VizGroup leverages Large Language Models (LLMs) to recommend event specifications for instructors so that they can simultaneously track and receive alerts about key correlation patterns between various collaboration metrics and ongoing coding tasks. We evaluated VizGroup with 12 instructors using a dataset collected from a Peer Instruction activity conducted in a large programming lecture. The results showed that, compared to a version of VizGroup without the suggested event specifications, VizGroup with suggestions helped instructors create additional monitoring units on previously undetected patterns on their own, covered a more diverse range of metrics, and influenced participants' subsequent notification-creation strategies.
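The event specifications the abstract describes can be pictured as simple condition-on-metric rules that fire notifications against real-time snapshots of collaboration data. This hypothetical sketch is not VizGroup's implementation: the metric names, thresholds, and messages are invented, and the LLM recommendation step is omitted entirely.

```python
import operator

OPS = {">": operator.gt, "<": operator.lt}

def make_unit(metric, op, threshold, message):
    """A monitoring unit: notify with `message` when `metric op threshold` holds."""
    return {"metric": metric, "op": OPS[op], "threshold": threshold, "message": message}

def check(units, snapshot):
    """Return the messages of every unit whose condition fires on this snapshot."""
    return [u["message"] for u in units
            if u["metric"] in snapshot and u["op"](snapshot[u["metric"]], u["threshold"])]

# Invented example units an instructor (or an LLM suggestion) might create.
units = [
    make_unit("chat_messages_per_min", "<", 1.0, "Group has gone quiet"),
    make_unit("failed_test_runs", ">", 5, "Group may be stuck on the task"),
]
print(check(units, {"chat_messages_per_min": 0.4, "failed_test_runs": 7}))
```

An instructor monitoring many groups at once would see only the fired messages, which is the point of specifying events up front rather than watching raw metrics.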