
The Future: Augmented Reality (AR) or Virtual Reality (VR) Part 2

Joel Scharlat

Back on topic (sort of)


I was watching a 45-minute presentation given at AWE16 discussing the differences between AR and VR and which technology, as a result of those differences, better represented the future of the industry. As the session continued, it struck me that these technologies aren’t really competing in the sense of winner vs. loser for the future (revenue and top sales are a different story; I’ll touch on that later). I mentioned they are instead complementary. In choosing the word ‘complementary’, I was looking for a word that would help me make the point that they aren’t really competing against each other from a basic, technological, and “future of the world” perspective. Semantics being important, I’m no longer convinced ‘complementary’ is the correct word. I have instead settled on ‘adjacent technologies’ as a better way of describing the relationship between AR and VR. (For clarification, Merriam-Webster defines adjacent as “close or near: sharing a border, wall, or point”. A little more nuanced, vocabulary.com defines adjacent as “immediately adjoining without intervening space”, which better makes the point I was after.) With this in mind, let’s continue.


The Quadrant Returns


In my first post, I presented a quadrant diagram developed for the Metaverse Roadmap and used to define metaverse technologies. I mentioned that this one diagram has been the most helpful to me in understanding not only what defines a metaverse technology but also how those technologies are used. It also helps illustrate why I think ‘adjacent technologies’ better describes the relationship between AR and VR.



Recall that both the horizontal and vertical axes are continua. Technologies are categorized by where they fall on the horizontal axis according to their degree of focus on either the user or the user’s environment, and by where they fall on the vertical axis according to whether they augment or simulate the environment. Let’s dive into each quadrant a little more.
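
To make the taxonomy concrete, here is a toy sketch in Python (my own illustration; none of this code comes from the Metaverse Roadmap). It flattens the two continua into binary choices and maps each combination to the quadrant it names. The real axes are continuous, so treating them as enums is a deliberate simplification.

```python
from enum import Enum

class Focus(Enum):
    INTIMATE = "user"          # right side of the horizontal axis
    EXTERNAL = "environment"   # left side

class Mode(Enum):
    AUGMENTATION = "augment"   # top of the vertical axis
    SIMULATION = "simulate"    # bottom

# The four quadrants discussed below, keyed by where a technology falls.
QUADRANTS = {
    (Focus.INTIMATE, Mode.AUGMENTATION): "Lifelogging",
    (Focus.INTIMATE, Mode.SIMULATION):   "Virtual Worlds",
    (Focus.EXTERNAL, Mode.SIMULATION):   "Mirror Worlds",
    (Focus.EXTERNAL, Mode.AUGMENTATION): "Augmented Reality",
}

def classify(focus: Focus, mode: Mode) -> str:
    return QUADRANTS[(focus, mode)]

# e.g. a fitness tracker augments data about the user herself:
print(classify(Focus.INTIMATE, Mode.AUGMENTATION))  # -> Lifelogging
```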


Lifelogging Technologies


The top right quadrant is occupied by technologies that are focused on the user and are characterized by virtual information (data) that adds value to a user’s life or otherwise relates specifically to the user. Typically, technologies that take digital information about a person or their life and store it for future use make up this type of metaverse technology. The most recognizable of these today are personal sensors, including fitness trackers, GPS systems, and even GoPro cameras. These personal sensors tend to be (but aren’t necessarily) autonomous, in that they don’t require constant user interaction to do their job. Also included are more “manual” technologies requiring specific user input, such as social networking platforms: Twitter, LinkedIn, Facebook, Google+, Instagram, and Pinterest. At the end of the day, as the quadrant title suggests, these technologies serve to log a user’s life by augmenting their existence with digital data.
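
For a rough feel of what these technologies do under the hood, here is a minimal, hypothetical lifelog sketch: autonomous sensor readings appended as timestamped records for future use. The file name and record fields are my own invention, not any particular product’s format.

```python
import json
import time
from pathlib import Path

# Hypothetical lifelog file: each sensor reading becomes a timestamped
# record, slowly building a digital record of the user's life.
LOG = Path("lifelog.jsonl")

def log_reading(sensor: str, value: float) -> None:
    record = {"t": time.time(), "sensor": sensor, "value": value}
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_reading("heart_rate_bpm", 72.0)   # e.g. from a fitness tracker
log_reading("gps_lat", 38.8895)       # e.g. from a GPS system
log_reading("gps_lon", -77.0353)
```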


Virtual Worlds


Virtual world technologies occupy the bottom right quadrant. These are still intimate technologies (focused on the user, from the user’s perspective) and so fall on the right side of the horizontal axis, but we now move down the vertical axis from augmentation to simulation. The title of this quadrant is relatively self-explanatory: these worlds are computer generated, and users explore them from the first-person perspective. A classic example is Linden Lab’s Second Life. This particular virtual world launched in 2003 and enables users to create a world of their choosing. From their avatars (virtual representations of themselves) to the literal world around them, everything is built around creating an experience for the user in a persistent digital environment.


Mirror Worlds


Moving clockwise within the quadrant diagram, we next look at the bottom left quadrant, which contains technologies focused on a user’s external environment. Unlike virtual worlds, where the user is inside the environment looking out, mirror worlds are the opposite: the user interacts with the virtual environment from the outside. A geographic information system (GIS) such as Google Maps is one example. These technologies are virtual representations of the physical world, and they focus on being accurate representations (whereas virtual worlds aren’t necessarily exact replicas of the physical world).
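
To give a sense of how a mirror world ties its virtual representation back to physical coordinates, here is a small sketch of the standard Web Mercator “slippy map” tile math used by many web mapping services. I’m illustrating the general scheme, not Google Maps’ internal implementation.

```python
import math

def latlon_to_tile(lat: float, lon: float, zoom: int):
    """Map a physical-world coordinate to the (x, y) index of the map
    tile that mirrors it at a given zoom level (Web Mercator scheme)."""
    n = 2 ** zoom                                  # tiles per side at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# San Francisco at zoom 12 lands on tile (655, 1583):
print(latlon_to_tile(37.7749, -122.4194, 12))
```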


Augmented Reality


We arrive at the final quadrant in our trip around the diagram. Here, the axes combine to bin technologies that augment a user’s interaction with their external environment. If you are reading this, my guess is you have seen AR technologies and are up to date on what they are. Most of the products on offer are visual in nature: glasses and other heads-up-display technologies that layer visual information on top of the physical world. Google Glass is one example. But we should not limit our categorization to technologies that augment only our vision. Doppler Labs has developed a product called Here, an augmented reality technology for sound. Wireless earbuds connected to a smartphone app provide dynamic control over a user’s live listening experience, with access to volume control, a 5-band EQ, preset filter controls, and layered effects. Here serves as a reminder that our other senses can also be augmented with digital information layered on top of the physical world as users interact with it. And they are.
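
For a sense of what a 5-band EQ involves, here is a minimal sketch of a peaking-filter bank built from the well-known Audio EQ Cookbook formulas. To be clear, this is my own illustration, not Doppler Labs’ implementation, and the band center frequencies are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float = 1.0):
    """Peaking-EQ biquad coefficients (standard Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def apply_five_band_eq(samples, fs, gains_db):
    # Hypothetical band centers (Hz); the real product's bands may differ.
    centers = [60, 250, 1000, 4000, 12000]
    out = samples
    for f0, g in zip(centers, gains_db):
        b, a = peaking_biquad(fs, f0, g)
        out = lfilter(b, a, out)   # run the bands in series
    return out

fs = 44100
noise = np.random.randn(fs)                       # one second of test audio
shaped = apply_five_band_eq(noise, fs, [3, 0, -2, 0, 4])
```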


Moving right along


So far I’ve gone out of my way to provide a relatively in-depth definition of metaverse technologies, and I promised to bring it all back to the original argument. Recall that Tony Parisi (the VR pioneer) and Ralph Osterhout (the AR pioneer) were on opposing sides of a panel discussion at AWE16; the topic was essentially which technology would own the future. I argue that they are not in direct competition with each other. When I make that statement, I’m saying this isn’t a “VHS vs. Betamax” type of discussion. Look at the quadrant diagram: AR and VR are not even the same type of technology, nor are they trying to offer the same type of experience. This is why I went through the lengthy process of discussing the metaverse technology quadrant. Now you can see that AR technology sits in the top left quadrant, focused on adding computer-generated information on top of a user’s physical world, while VR technology sits in the diagonally opposite quadrant, putting the focus on a user interacting with a computer-generated world. Two completely different, adjacent technologies. Bringing this back to my earlier thoughts about adjacent technologies, you can see that the definition makes a little more sense: they share a common boundary, but not a common space.


One of the reasons I think people consider AR and VR competing technologies is that most people only think of AR as visual (one of the points I made earlier in defining the AR quadrant). While many of the early use cases from initial entrants into this market do focus there, this simply is not the case. And even if you wanted to reduce AR to just those technologies providing visual augmentation, the two technologies still don’t compete directly, for the reasons I’ve outlined above. These technologies are only competing for initial market share until people realize that they don’t have to choose between one or the other. They will provide different services, fill different niches, and resonate more with one group than another, but they will never provide the same service, at least not initially. Not until we get to a more ‘mixed reality’ environment where a single technology can deliver both.
