Human-robot collaboration (HRC) promises to merge the precision and endurance of industrial robots with the adaptability and decision-making of human workers. Yet, safety regulations and the cost of physical trials have slowed its adoption. To address these challenges, researchers developed a virtual reality (VR) sandbox capable of simulating shared work environments with autonomous robots, enabling controlled, repeatable experiments without physical risk.

The VR sandbox was designed to replicate industrial settings with high visual and auditory fidelity, incorporating authentic machinery models, realistic lighting, and ambient factory sounds. In the study, participants collaborated with a virtual KUKA LBR iiwa 7 R800 robot arm to assemble pin-back buttons using a modeled Badgematic Flexi Type 900 press. The task involved nine alternating steps between human and robot, requiring coordination and spatial awareness.
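The alternating structure of the task can be sketched as a simple turn-taking sequence. The step labels below are hypothetical illustrations; the study specifies nine alternating steps but the exact actions shown here are assumptions.

```python
# Illustrative turn-taking sketch of a nine-step human-robot assembly task.
# Step labels are hypothetical, not taken from the study's protocol.

STEPS = [
    ("human", "insert blank parts"),
    ("robot", "lower press die"),
    ("human", "turn the platform"),
    ("robot", "press front shell"),
    ("human", "turn the platform"),
    ("robot", "press back shell"),
    ("human", "turn the platform"),
    ("robot", "release finished button"),
    ("human", "remove finished button"),
]

def run_task(step_done):
    """Advance through the steps, handing control to each actor in turn."""
    for actor, action in STEPS:
        step_done(actor, action)  # in a real system, blocks until the actor finishes

completed = []
run_task(lambda actor, action: completed.append((actor, action)))
```

The key property is strict alternation: each actor must wait for the other to finish before its own step begins, which is what makes intent communication valuable.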
Central to the experiment was the introduction of augmented communication channels for the robot: a text panel with natural language guidance, multi-colored actuator lights signaling operational status or hazards, and three distinct gestures for initiating, terminating, or pausing actions. These channels aimed to convey intent, provide procedural feedback, and alert participants to potential dangers. The text panel, positioned near the robot, displayed statements such as “I’m waiting for you to turn the platform,” reinforcing the perception of the robot as an active partner.
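The three channels described above can be modeled as a small message structure that bundles panel text, a light state, and an optional gesture. This is a minimal sketch under assumed names and signal sets; the study does not publish its implementation, so the `Light` colors, `Gesture` vocabulary, and `render` helper are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical encoding of the robot's three augmented communication
# channels: text panel, actuator lights, and gestures.

class Light(Enum):
    GREEN = "operating normally"
    YELLOW = "waiting for human"
    RED = "hazard, keep clear"

class Gesture(Enum):
    START = "initiate action"
    STOP = "terminate action"
    PAUSE = "pause action"

@dataclass
class RobotMessage:
    text: str                        # natural-language guidance for the panel
    light: Light                     # actuator light color signaling status
    gesture: Optional[Gesture] = None  # optional accompanying gesture

def render(msg: RobotMessage) -> str:
    """Flatten a message into a single log line for debugging."""
    parts = [f"PANEL: {msg.text}", f"LIGHT: {msg.light.name}"]
    if msg.gesture is not None:
        parts.append(f"GESTURE: {msg.gesture.name}")
    return " | ".join(parts)

msg = RobotMessage("I'm waiting for you to turn the platform.", Light.YELLOW)
line = render(msg)
```

Bundling all channels into one message keeps them synchronized, so the panel text, light color, and gesture always convey a consistent state.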
The robot’s control system combined inverse kinematics with Unity’s ML-Agents framework, enabling adaptive, autonomous behavior. It could detect participant movements, adjust speed to match human pace within ISO/TS 15066 safety limits, and avoid collisions using both collision detection and raycasting. Machine learning agents were trained on recorded collaboration sessions, allowing the robot to execute tasks while dynamically responding to human actions.
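The speed-adaptation behavior can be sketched as distance-based speed scaling in the spirit of ISO/TS 15066's speed-and-separation monitoring: the robot slows as the human approaches and stops inside a protective separation distance. The thresholds and speed cap below are illustrative placeholders, not values from the standard or the study.

```python
# Hedged sketch of distance-based speed scaling. All constants are
# illustrative assumptions, not figures from ISO/TS 15066 or the paper.

STOP_DISTANCE = 0.3        # m: protective stop below this separation
FULL_SPEED_DISTANCE = 1.5  # m: unrestricted speed beyond this separation
MAX_SPEED = 0.5            # m/s: nominal tool-speed cap

def scaled_speed(separation_m: float) -> float:
    """Linearly scale the commanded speed with human-robot separation."""
    if separation_m <= STOP_DISTANCE:
        return 0.0
    if separation_m >= FULL_SPEED_DISTANCE:
        return MAX_SPEED
    frac = (separation_m - STOP_DISTANCE) / (FULL_SPEED_DISTANCE - STOP_DISTANCE)
    return MAX_SPEED * frac
```

In the sandbox, the separation input would come from the same collision-detection and raycasting machinery used for avoidance, evaluated each control tick.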
Eighty participants, evenly split between augmented and non-augmented conditions, completed the assembly task in VR. The augmented condition included all three communication channels; the control condition omitted them. Objective measures included production quantity and collision count, while subjective measures assessed perceived safety.
Results showed clear benefits from augmentation. Participants in the augmented condition produced more pin-back buttons on average (M = 8.2, SD = 1.40) than those without augmentation (M = 6.15, SD = 1.53), a statistically significant difference (F(1,75) = 12.63, p < 0.01). Collision rates were also lower with augmentation (M = 53.57, SD = 47.40) compared to non-augmented setups (M = 118.82, SD = 81.49), again statistically significant (F(1,75) = 5.93, p < 0.01). Perceived safety ratings were higher in the augmented condition (M = 3.33, SD = 0.59) than in the control (M = 3.17, SD = 0.58), supporting the hypothesis that communication channels enhance both objective and subjective safety.

Interestingly, the time participants spent looking at the text panel did not significantly correlate with productivity or collision rate. This may reflect limitations in gaze-tracking precision without dedicated eye-tracking hardware.

The study’s design leveraged VR’s strengths: rapid scenario iteration, safe exposure to potentially hazardous interactions, and precise logging of performance metrics. The modular architecture of the sandbox allows researchers to swap robot models, alter tasks, and adjust communication modalities without rebuilding from scratch. This flexibility is particularly valuable for exploring AI-enhanced robots expected to operate autonomously in future industrial contexts.

Limitations include the simplified nature of the pin-back button task, short exposure time, and a participant pool largely composed of engineering students rather than experienced industrial workers. VR’s lack of tactile feedback also means some real-world cues—such as the feel of contact with a robot—were absent. Nonetheless, participants still reported perceiving sudden robot movements as potentially threatening, suggesting VR can evoke realistic safety responses.
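For two-group comparisons like these, the one-way ANOVA F statistic can be computed directly from summary statistics (means, SDs, group sizes): with two groups it equals the squared pooled-variance t statistic. The sketch below uses illustrative numbers, not the study's raw data, and the function name is my own.

```python
import math

# Two-group one-way ANOVA F from summary statistics. With exactly two
# groups, F equals the squared independent-samples t statistic.
# Input values below are illustrative, not data from the study.

def f_two_groups(m1, s1, n1, m2, s2, n2):
    """Return the F statistic (df = 1, n1 + n2 - 2) for two groups."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return t**2

# Example with made-up summary statistics (two groups of 40):
F = f_two_groups(8.0, 1.5, 40, 7.0, 1.5, 40)
```

Note that the study reports denominator degrees of freedom of 75 rather than 78 for 80 participants, which suggests exclusions or additional model terms; this sketch covers only the plain two-group case.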
The findings align with theories of group cognition, where shared mental models and clear communication improve collaborative performance. In HRC, non-anthropomorphic robots often struggle to convey intent; augmentations like text guidance, visual signals, and gestures can bridge this gap, fostering trust and efficiency. As manufacturing shifts toward more dynamic, reconfigurable production lines, such communication tools could reduce training time, maintain throughput, and enhance workplace safety. By demonstrating measurable gains in both productivity and safety perception, this VR-based research provides a compelling case for integrating communication augmentation into autonomous industrial robots. The sandbox itself offers a scalable, low-risk pathway for refining HRC designs before deployment on the factory floor.
