When it comes to augmented reality (AR), each day seems to bring with it a new term to understand, from field-of-view and field-of-regard to the vestibular system and its relationship to cybersickness. There is a seemingly infinite amount of knowledge required to understand the world of AR. While it’s not a silver bullet, Unity3D has a small glossary covering some of these common, and not-so-common, terms. It’s definitely worth checking out to help get a sense of what all these words mean, and to get an idea of what to consider when building your next AR solution.
Much like the terminology, AR technology and its list of potential applications advance every day too. Here are a few of those advancements that I’ve recently taken a look at.
LiDAR and the new iPad
The latest iPad released by Apple ships with a Light Detection and Ranging (LiDAR) camera. LiDAR provides the ability to scan a physical object or environment and make it part of the AR experience. It achieves this by illuminating a target with laser light and measuring the reflection with a sensor. By calculating the time each laser pulse takes to return (and measuring differences in wavelength), a 3D digital representation of the target can be built up as a point cloud.
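To make the time-of-flight principle concrete, here’s a toy sketch of the maths involved. This is purely illustrative (the function names and the simple spherical-to-Cartesian conversion are mine, not Apple’s implementation):

```python
# A toy illustration of the time-of-flight principle behind LiDAR: each laser
# pulse's round-trip time gives a distance, and combining distances with the
# scanner's pan/tilt angles builds up a 3D point cloud.
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_return_time(round_trip_seconds: float) -> float:
    """The pulse travels to the target and back, so halve the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_from_sample(pan_rad: float, tilt_rad: float, round_trip_seconds: float):
    """Convert one scan sample (two angles plus a return time) to an (x, y, z) point."""
    r = distance_from_return_time(round_trip_seconds)
    x = r * math.cos(tilt_rad) * math.cos(pan_rad)
    y = r * math.cos(tilt_rad) * math.sin(pan_rad)
    z = r * math.sin(tilt_rad)
    return (x, y, z)

# A pulse that returns after ~13.3 nanoseconds hit something about 2 m away.
print(distance_from_return_time(13.3e-9))    # ~1.99 m
print(point_from_sample(0.0, 0.0, 13.3e-9))  # a point ~2 m straight ahead
```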
Environment and object scanning isn’t a new concept for AR; ARCore and ARKit have had this functionality in some form on Android and iOS devices for a couple of years now. However, LiDAR comes with improved accuracy and precision that is maintained in low light. Unfortunately, it still doesn’t solve scanning objects with high reflectivity, like a mirror or a window. And because LiDAR scans are point clouds, they could be matched against known geometry to work around GPS inaccuracy and help devices understand your position and orientation in the real world.
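That matching step is essentially point cloud registration. As an illustration of the principle (not how ARKit actually does it), here’s the classic Kabsch algorithm recovering a device’s rotation and translation from a scan, assuming we already know which scanned point corresponds to which reference point:

```python
# Rigid point cloud registration with the Kabsch algorithm: recover the
# rotation R and translation t that map a fresh scan onto a reference map.
# Knowing R and t is knowing the device's pose relative to that map.
import numpy as np

def kabsch(scan: np.ndarray, reference: np.ndarray):
    """Both arrays are (N, 3), with rows already in correspondence.
    Returns R, t such that scan @ R.T + t ~= reference."""
    scan_centroid = scan.mean(axis=0)
    ref_centroid = reference.mean(axis=0)
    # Cross-covariance of the centred clouds
    H = (scan - scan_centroid).T @ (reference - ref_centroid)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_centroid - R @ scan_centroid
    return R, t

# Fabricate a "scan": the reference cloud is the scan rotated 30 degrees
# about the z-axis and shifted by (1, 2, 0.5).
rng = np.random.default_rng(0)
scan = rng.normal(size=(100, 3))
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 0.5])
reference = scan @ R_true.T + t_true

R, t = kabsch(scan, reference)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Real systems also have to discover the correspondences themselves (e.g. via ICP) and cope with noisy sensors, but the core pose-recovery step looks like this.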
For more info on LiDAR, the short video in this article gives an overview of a few use cases.
Radiohead also used LiDAR to make a video clip for House of Cards without using a video camera.
Why aren’t AR glasses a thing already?
Well, it’s because they’re really hard to make. Dr Phil Greenhalgh, CTO of UK-based WaveOptics, explains: “[Designers] have to present a computer-generated image that’s pin-sharp and has accurate color with a wide field of view right in the wearer’s central vision without the light engine… obscuring the real world… with a minimal amount of power while competing with a massive dynamic range of brightness from the real world” – full article here.
On top of this, if not done right, users can experience vergence-accommodation (V-A) conflict: an eye-focusing problem that occurs when your brain receives mismatched cues between the distance of a virtual 3D object (vergence) and the focusing distance (accommodation) required for the eyes to focus on that object. The end result is a bit of a deal breaker: motion sickness.
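The mismatch is easy to quantify with a simplified model (the numbers below are mine, chosen for illustration). Optometry measures focus in dioptres, the reciprocal of distance in metres, and mismatches beyond roughly half a dioptre are commonly cited as uncomfortable:

```python
# A simplified model of vergence-accommodation conflict. The eyes converge
# on the rendered depth of a virtual object, but must focus (accommodate) on
# the headset's fixed optical focal plane.
import math

INTERPUPILLARY_DISTANCE_M = 0.063  # an average adult IPD of ~63 mm

def vergence_angle_deg(object_distance_m: float) -> float:
    """Angle between the two eyes' lines of sight when fixating the object."""
    return math.degrees(2 * math.atan((INTERPUPILLARY_DISTANCE_M / 2) / object_distance_m))

def va_conflict_dioptres(virtual_object_m: float, focal_plane_m: float) -> float:
    """Gap between where the eyes converge and where they must focus."""
    return abs(1.0 / virtual_object_m - 1.0 / focal_plane_m)

# A virtual object rendered 0.5 m away on a display whose focal plane sits at 2 m:
print(vergence_angle_deg(0.5))         # ~7.2 degrees of convergence
print(va_conflict_dioptres(0.5, 2.0))  # 1.5 dioptres of conflict, uncomfortable
```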
As well as software, user experience, and design issues standing in the way, there are some hardware limitations too. The images overlaid on the lens need to be projected, and currently the field of view doesn’t extend much further than the wearer’s central vision. However, WaveOptics are one of many companies aiming to solve some of these issues by offering customisable, mass-produced lenses with a larger viewing area and improved visual fidelity, which could provide the necessary screen space for overlaying information without obscuring your view.
In other news, Apple are forecast to release their AR glasses in 2023! More info here.
Virtual collaboration
Spatial is a US company aiming to help distributed teams collaborate more effectively with the help of AR holograms: avatars that gesture and lip-sync to mimic the real user on the other end.
They’re teaming up with China’s Nreal, who aim to provide affordable XR solutions. This article has a little more info and suggests that servers behind a 5G network could absorb some of the processing requirements for high-quality digital environments, thanks to 5G’s high bandwidth and low latency.
Hand and finger tracking
For virtual reality (VR) users, hand tracking is old news, and finger tracking is already available in the new-ish Valve Index and the low-cost Oculus Quest.
AR users currently have only limited hand tracking and gestures to work with. However, Google has been using AI to bring finger tracking to AR devices (more on this here), and Microsoft is looking to use AI with the HoloLens 2 to solve the same problem.
With the introduction of the Project Soli radar chip in the Pixel 4, I’m wondering whether it could also help with this endeavour.
Bringing finger tracking to AR allows more natural and complex interactions with the environment, could improve accessibility, and may remove the need for controllers altogether.
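Google’s hand-tracking research has been released in its open-source MediaPipe framework, so you can already experiment with it on a desktop. Here’s a minimal sketch, assuming the mediapipe and opencv-python packages are installed (and using MediaPipe’s older solutions API):

```python
# Track hand landmarks from a webcam with MediaPipe Hands, the open-source
# release of Google's on-device hand tracking research (21 landmarks per hand).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,       # video mode: track hands across frames
    max_num_hands=2,
    min_detection_confidence=0.5,
)

capture = cv2.VideoCapture(0)  # the default webcam
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[8]  # landmark 8 is the index fingertip
            print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")  # normalised coords
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
capture.release()
```

From landmark positions like these you can build gesture recognisers (pinch, point, grab) without any controller hardware.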
Digital Twins
The concept of digital twins isn’t a new one, but given recent advances in technology, the term is making its way back into the collective consciousness of many industries. A digital twin can be thought of as a digital replica of a physical object or system. They can be utilised in many different ways including, but not limited to:
- designing and testing machinery before actually building it;
- viewing the inner workings of a physical object without opening it up;
- aggregating data to run relatively cheap predictive simulations of complex systems to anticipate risk;
- ensuring all workers are as capable as the best worker when decisions need to be made fast.
In any case, they’re generally useful for increasing operational knowledge and cutting costs.
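To make the idea concrete, here’s what a digital twin might look like in code. Everything here is made up for illustration: the pump, its telemetry fields, and the wear model are all hypothetical:

```python
# An illustrative digital twin of a hypothetical pump: it mirrors live
# telemetry from the physical device and can run cheap "what-if"
# simulations without touching the real hardware.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    rpm: float = 0.0
    bearing_temp_c: float = 20.0
    hours_run: float = 0.0
    history: list = field(default_factory=list)

    def ingest_telemetry(self, rpm: float, bearing_temp_c: float, hours: float):
        """Mirror the latest sensor readings from the physical pump."""
        self.rpm, self.bearing_temp_c, self.hours_run = rpm, bearing_temp_c, hours
        self.history.append((hours, rpm, bearing_temp_c))

    def predicted_hours_to_service(self) -> float:
        """A made-up wear model: hot bearings and high RPM shorten the interval."""
        base_interval = 10_000.0
        stress = (self.rpm / 3000.0) * max(self.bearing_temp_c - 40.0, 1.0) / 30.0
        return max(base_interval / max(stress, 0.1) - self.hours_run, 0.0)

    def simulate(self, new_rpm: float) -> float:
        """What-if: predict the service interval at a different operating speed,
        without reconfiguring the physical pump."""
        trial = PumpTwin(new_rpm, self.bearing_temp_c, self.hours_run)
        return trial.predicted_hours_to_service()

twin = PumpTwin()
twin.ingest_telemetry(rpm=2800, bearing_temp_c=65, hours=4200)
print(twin.predicted_hours_to_service())  # remaining hours at current settings
print(twin.simulate(new_rpm=2200))        # ...and if we slowed the pump down
```

A production twin would of course drive its predictions from the aggregated telemetry history rather than a hard-coded formula, but the shape is the same: mirror the real device, then experiment on the copy.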
Digital twins are once again in the spotlight thanks to a few of those advances. We now have access to better sensors and telemetry, with the Internet of Things (IoT) making data collection more accessible and affordable. Cloud computing allows organisations to manage large data lakes and apply AI and machine learning techniques for advanced predictive modelling, and edge computing provides the benefits of the cloud while maintaining availability, which is important for field operators. Here’s a report on IoT, Digital Twins, and AI in Mining if you’re interested in learning more.
Gartner asserts that half of large industrial companies will use digital twins by 2021. If that’s true, those companies will need software capabilities to create the digital twin, they’ll need to update their data collection and curation capabilities, and they’ll need to develop ways to analyse that data.
So what does this mean for AR?
Without AR or VR, digital twins of 3D objects are represented on 2D screens. VR provides a way to immerse yourself in the environment of a digital twin and better understand its dynamics, which translates to a better understanding of the physical object.
AR is a bit more limited in that it generally requires close proximity to the physical object, so that the device can recognise it and overlay the digital twin on the device’s screen. This limitation doesn’t make AR any less useful, though. Its applications range from visualising the inner workings of a machine and its data flows for better and faster decision making, to visualising parts of a real-world system that aren’t otherwise visible: for example, underground cables or pipes on a construction site that pose a safety concern or make it difficult to determine where to dig.
Either AR or VR can also enhance predictive models by allowing operators to modify the inputs and device configuration of a digital twin (like the simulate method in the sketch above) and view the outcome in real time before applying the same steps to the physical device.
If digital twins are a new concept for you, this article is a good resource from an AR/VR perspective with a short and sharp video of some of its applications.
Holographic display
While it’s not AR, it’s pretty cool, and it’ll be interesting to see what this tech leads to in the future. Looking Glass offers a holographic display and a Unity SDK for creating 3D holographic content that is viewable without any headwear. The engineers have also come up with an interesting way to allow 2D mouse interactions with a 3D object. This Linus Tech Tips video covers it all pretty well.
AR accessibility
Accessibility is important for a number of reasons, and I think w3.org summed it up nicely with these key points from their accessibility business case page:
- Drive Innovation: Accessibility features in products and services often solve unanticipated problems.
- Enhance Your Brand: Diversity and inclusion efforts so important to business success are accelerated with a clear, well-integrated accessibility commitment.
- Extend Market Reach: The global market of people with disabilities is over 1 billion people with a spending power of more than $6 trillion. Accessibility often improves the online experience for all users.
- Minimise Legal Risk: Many countries have laws requiring digital accessibility, and the issue is of increased legal concern.
This video from Google I/O is a couple of years old, and while it’s not fully comprehensive, it does have some good insights into accessibility for AR.
Some highlights include:
- Test in suboptimal conditions (noisy, bright, dark, while holding a cup of coffee).
- Provide audio and text feedback for visually and hearing impaired users.
- Some users are not able to look in all directions (e.g. turning around to see an object behind them in AR), so allow them to rotate the world around them instead.
- Allow remapping of controls to ensure users can use a control setup that works best for them and their needs.
- Provide assistive aiming for menu clicks for those with shaky hands: if the user is close enough to clicking the menu item, give them a little help (see the sketch after this list).
- Allow resizing of AR objects so that they can fit into the physical space available.
- If a user is required to tap something, then make sure the ‘tap target’ is at least finger sized. Any smaller than this can make it much harder for users to select objects.
- Choose colours with good contrast, and consider colour-blind users when choosing colour palettes.
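Here’s a rough sketch of how that assistive aiming could work in screen space. The snap-radius approach and all the numbers are my own, purely for illustration:

```python
# Illustrative assistive aiming: if the user's cursor (or gaze point) lands
# near, but not on, a tappable target, snap the selection to the closest
# target within a forgiveness radius.
import math

def assisted_target(cursor, targets, snap_radius_px=24.0):
    """cursor is (x, y); targets is a list of (x, y, half_size_px).
    Returns the index of the selected target, or None."""
    best_index, best_distance = None, float("inf")
    for i, (tx, ty, half) in enumerate(targets):
        distance = math.hypot(cursor[0] - tx, cursor[1] - ty)
        # A direct hit always counts; a near miss within the snap radius
        # also counts. That extra margin is the "little help" for shaky hands.
        if distance <= half + snap_radius_px and distance < best_distance:
            best_index, best_distance = i, distance
    return best_index

# Two menu items, 44 px square (roughly finger-sized at typical phone DPI,
# which also satisfies the tap-target advice above).
menu = [(100, 100, 22), (200, 100, 22)]
print(assisted_target((130, 105), menu))  # 0: a near miss, snapped to item 0
print(assisted_target((300, 300), menu))  # None: too far from anything
```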
It’s also worth pointing out that these accessibility features aren’t just for those who are differently abled. Many people turn subtitles on even if they aren’t hard of hearing. And I often think back to a conference I attended where a member of The Able Gamers charity was speaking about their research. They gave credit to Call of Duty for providing such good visual feedback that a new father could play unencumbered, with the sound turned right down, while his baby slept nearby. Being accessible means opening yourself up to a bigger market share.
That’s all for this AR roundup
I hope this has given you some ideas to augment your current project or even sparked an idea for a new one. There are so many things happening in the AR/VR space, and this blog only scratches the surface. So, if you’ve found something interesting that wasn’t covered here, we’d love to hear about it! You can reach out to me at mszabo@dius.com.au.