Mixed virtuality transducer: virtual camera relative location displayed as ambient light
Dr. Rasika Ranaweera, Senior Lecturer/Dean, Faculty of Computing, firstname.lastname@example.org
We have built haptic interfaces featuring mobile devices (smartphones, phablets, and tablets) that use compass-derived orientation sensing to animate virtual displays and ambient media. “Tworlds” is a mixed-reality, multimodal toy that uses twirled, juggling-style affordances crafted with mobile devices to modulate various displays, including 3D models and, now, environmental lighting. Previous releases of the system introduced self-conscious, ambidextrous avatars that, aware of the virtual camera position, switch the manipulating arm to accommodate a human viewer presumed to prefer visual alignment. That is, a player spinning a “padiddle”-style flat object or whirling a “poi”-style weight monitors its virtual projection in a graphical display from a displaced, second-person perspective, able to see the puppet, including the orientation of the twirled toy. As seen in the figure above, this correspondence is preserved even as the camera moves continuously around the avatar, between frontal and dorsal views, in a spin-around “inspection gesture” whose rotation and revolution are phase-locked.
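The camera-aware behaviors described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the Tworlds implementation; all function and variable names are invented for exposition): it computes the camera's azimuth around the avatar relative to the avatar's facing direction, uses that angle to choose the manipulating arm, and maps the same angle to a hue for ambient lighting, realizing the "virtual camera relative location displayed as ambient light" idea in the title.

```python
import colorsys
import math

def relative_azimuth(camera_xy, avatar_xy, avatar_heading_deg):
    """Angle of the camera around the avatar, in degrees [0, 360),
    measured relative to the avatar's facing direction
    (0 = frontal view, 180 = dorsal view). Illustrative only."""
    dx = camera_xy[0] - avatar_xy[0]
    dy = camera_xy[1] - avatar_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    return (bearing - avatar_heading_deg) % 360.0

def manipulating_arm(azimuth_deg):
    """Ambidextrous switch: hold the twirled toy on the side of the
    avatar nearer the camera, so the viewer's sightline stays clear."""
    return "right" if azimuth_deg < 180.0 else "left"

def azimuth_to_rgb(azimuth_deg):
    """Ambient-light transduction: map camera azimuth to a hue,
    so environmental lighting displays the camera's relative position."""
    r, g, b = colorsys.hsv_to_rgb(azimuth_deg / 360.0, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))

# Example: a camera directly in front of an avatar facing +x
az = relative_azimuth((1.0, 0.0), (0.0, 0.0), 0.0)   # frontal: 0 degrees
print(manipulating_arm(az), azimuth_to_rgb(az))
```

Because azimuth, arm choice, and light color are all derived from the same angle, the correspondence between camera position and display is preserved continuously during a spin-around inspection gesture.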