Teleoperation for robot imitation learning is hampered by hardware availability. Can high-quality robot data be collected without a physical robot? We present a system that augments Apple Vision Pro with real-time virtual robot feedback. By giving users an intuitive understanding of how their actions translate into robot motions, we enable natural, bare-handed human data collection that remains compatible with the limitations of the robot's physical hardware. We conducted a user study with 15 participants demonstrating 3 different tasks, each under 3 different feedback conditions, and directly reproduced the collected trajectories on a physical robot. The results suggest that live robot feedback dramatically improves the quality of the collected data, opening a new avenue for scalable human data collection without access to robot hardware.