The recent VR & AR World event in London provided an overview of the current state of Virtual and Augmented Reality. Part conference, part trade show, the event offered some visions of the future (such as the market being worth $120 billion by 2020), but mostly provided an opportunity to see products and companies that had already launched.
Two presentations that stood out from an education and training point of view came from Boeing and the company Ubimax. Both provided case studies showing that augmented reality guidance and instruction could reduce the time taken to complete tasks while at the same time improving the accuracy of those tasks.
Ubimax have a number of augmented-reality-supported applications for manufacturing and maintenance, but the application demonstrated supported picking items in warehouses (you can see a demonstration video here). The most beneficial aspect seemed to be the timely presentation of context-specific information – particularly in highlighting errors.
The Boeing presentation described a study comparing different ways of presenting information. Participants were tasked with assembling part of an aeroplane wing using instructions presented either on a desktop screen, on a mobile tablet, or through augmented reality. The content was similar to this published recording, and is summarised in this online article.
Something that came through from the variety of things on show was how broad the definitions of virtual and augmented reality have become. Two products under the same label could have substantially different features, while something labelled as augmented reality could have very similar characteristics to something else labelled as virtual reality. I’ve produced a summary of the different attributes that either might have, and included it below.
The main characteristic distinguishing augmented from virtual reality is the perception of the display – enclosed for VR, transparent for AR. It is also the characteristic with by far the biggest impact on the user’s experience: are they separated from their physical environment and transported somewhere else, or is data brought in and added to their surroundings?
With current levels of technology, many of the AR projects demonstrated related to place (e.g. guidance on a task the user is currently engaged in), but the information itself was detached and free-floating in the display. In contrast, there were HTC Vive demos that included physical objects (e.g. a car seat) with a genuine connection to place – the physical seat perfectly aligned with the virtual car and environment.
The VR content that people are most likely to be exposed to is recorded real-world footage, where the viewer is mostly passive. VR games, in contrast, generally need to use computer graphics in order to allow interactivity.
It was widely acknowledged that virtual reality in particular had existed in various forms for over 30 years, but that the new wave of technology (arguably reignited by the Oculus Rift) is starting to allow actual experiences to meet user expectations. However, there were also warnings that poor experiences (with low-budget equipment and/or badly designed environments) could still leave people with bad impressions, and so there was a responsibility to make each individual’s first use of the technology as positive as possible.