Enhancing Human-Cobot Collaboration Through Transparent Learning

Collaborative robots, or cobots, are increasingly deployed alongside human operators, performing assembly and manufacturing tasks in shared workspaces. Traditional methods for teaching new tasks to cobots rely on reprogramming by skilled developers, a process that is costly and inflexible. Even minor procedural changes can cause failures if the original code is reused. Research has shown that enabling non-expert users to instruct cobots directly can reduce maintenance costs and increase adaptability, but this requires effective human-robot interaction.


To address this, a Transparent Graphical User Interface (T-GUI) was developed to facilitate bidirectional communication between cobots and human partners. The interface allows the cobot to explain its decision-making process and enables the human to provide missing instructions without programming knowledge. This approach draws on principles from Explainable Artificial Intelligence (XAI), focusing on transparency to improve trust and performance.

The T-GUI was tested using the Cranfield benchmark assembly task, a constrained sequence involving nine components such as base plate, pegs, shaft, pendulum, and separator. In prior work, Interactive Reinforcement Learning (IRL) enabled a cobot to learn the correct assembly order when all constraints were known. In the new study, several constraints were deliberately omitted to simulate incomplete knowledge, making it impossible for the cobot to finish the task unaided.
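A constrained assembly sequence like the Cranfield benchmark can be modeled as a set of precedence constraints: a component is a valid next choice only once all of its prerequisites are in place. The sketch below is purely illustrative; the component names and constraint set are assumptions, not the study's exact model.

```python
# Hypothetical precedence model for a Cranfield-style assembly.
# Each entry maps a component to the set of parts that must already
# be assembled before it can be placed. These constraints are
# illustrative, not taken from the study.
CONSTRAINTS = {
    "peg": {"base_plate"},
    "shaft": {"base_plate"},
    "pendulum": {"shaft", "peg"},
    "separator": {"peg"},
    "top_plate": {"separator", "pendulum"},
}

ALL_PARTS = {"base_plate"} | set(CONSTRAINTS)

def valid_next(assembled):
    """Return every component whose prerequisites are all assembled."""
    done = set(assembled)
    return {
        part for part in ALL_PARTS - done
        if CONSTRAINTS.get(part, set()) <= done  # subset test
    }
```

Deleting an entry from a constraint table like this reproduces the study's setup: with a prerequisite missing, the cobot can reach a state where no remaining component appears valid, so it cannot finish unaided.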

Two core functions were integrated into the T-GUI. First, the “Why did you select this object?” button lets the human request an explanation of the cobot’s choice. The cobot responds with a structured statement listing already assembled components and all valid next options according to learned constraints. This transparency helps the human understand the cobot’s strategy and adjust expectations. Second, the “Add a new instruction” button allows the human to teach missing constraints from a set of predefined options, enabling the cobot to restart and complete the assembly.
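The two functions above can be sketched as operations over a learned constraint table: the explanation lists the assembled parts and the currently valid options, and the new instruction inserts a missing prerequisite. All names here are hypothetical; the study's implementation is not reproduced.

```python
# Hypothetical sketch of the two T-GUI functions. The class name,
# method names, and example constraints are assumptions for
# illustration only.
class CobotModel:
    def __init__(self, constraints):
        # constraints: part -> set of prerequisite parts
        self.constraints = {p: set(s) for p, s in constraints.items()}
        self.parts = set(self.constraints) | {
            p for s in self.constraints.values() for p in s
        }

    def valid_next(self, assembled):
        done = set(assembled)
        return {p for p in self.parts - done
                if self.constraints.get(p, set()) <= done}

    def explain(self, assembled):
        # "Why did you select this object?" -> structured statement of
        # assembled components and all valid next options.
        options = sorted(self.valid_next(assembled))
        return (f"Already assembled: {', '.join(assembled) or 'nothing'}. "
                f"Valid next components: {', '.join(options)}.")

    def add_instruction(self, part, prerequisite):
        # "Add a new instruction" -> teach a missing constraint,
        # after which the cobot can restart the assembly.
        self.constraints.setdefault(part, set()).add(prerequisite)
```

In this sketch, teaching a constraint immediately narrows the set of valid next components, which mirrors how a participant's instruction lets the cobot restart and complete an otherwise impossible sequence.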

The methodology was evaluated through online experiments with participants recruited via Amazon Mechanical Turk. Due to COVID-19 restrictions, all cobot actions were pre-recorded, and videos were shown based on participants’ inputs. Two designs were used: a between-subjects design comparing T-GUI with a baseline GUI (B-GUI) that lacked transparency, and a within-subjects design showing both explanation styles side-by-side.

In the between-subjects experiment, 42 valid sessions were analyzed. Eighteen of 21 participants using the T-GUI provided all instructions correctly, compared to nine of 21 using the B-GUI, demonstrating that explanations improved the accuracy of the instructions participants gave. Subjective ratings of explanation satisfaction and trust did not differ significantly in this design, possibly because participants lacked a direct comparison between the two interfaces.

The within-subjects experiment involved 25 valid sessions, with participants viewing both explanation types for each action. Here, Wilcoxon tests revealed significant improvements in ratings for T-GUI over B-GUI in both satisfaction (z = −3.86, p = 0.00012) and trust (z = −3.65, p = 0.00026). Participants reported better understanding of how the cobot worked and how to use it, found explanations sufficiently detailed and complete, and felt they could predict the cobot’s next actions. They also expressed confidence in the cobot’s accuracy and reliability, noting that transparency made it possible to decide when to trust its behavior.

These findings align with earlier work in transparent learning models, such as Chao et al.’s gesture-based uncertainty communication and Roncone et al.’s hierarchical task planners, but extend them by enabling bidirectional interaction. Unlike prior unidirectional systems, the T-GUI allows humans to directly supply missing knowledge, making previously impossible tasks feasible for the cobot.

The study underscores that transparency in cobot behavior not only enhances user trust and satisfaction but also improves task performance. By combining explanation generation with an intuitive instruction interface, non-expert users can guide cobots effectively, reducing reliance on expert programmers. This approach has potential applications across industrial robotics, where adaptable, human-friendly programming tools are increasingly valuable.
