
TechStuff Classic: Augmenting Your Reality

Jul 07, 2023 · 1 hr

Episode description

What is AR and how does it work? Learn about augmented reality's history and future!

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for a TechStuff Classic episode. This episode is called Augmenting Your Reality and was originally published November twenty third, twenty sixteen. Obviously, augmented reality is still

a developing technology. I would argue that no one has really created the killer hardware for augmented reality as of yet. By the time you listen to this, it's entirely possible that Apple has already unveiled its mixed reality headset, and maybe Apple will be the company that really succeeds where others just haven't yet. I don't know. I'm recording this way back in May twenty twenty three, before Apple has held that event. So when it comes around, I guess

I'll find out. Anyway, that's going down a long tangent. Let's listen to this classic episode, Augmenting Your Reality, which published November twenty third, twenty sixteen. So I thought I would do a deeper dive, a bigger explanation about what augmented reality is, what it's all about, how it works, and sort of the applications we might put AR toward. You know, what is it good for? Tons

of stuff, as it turns out. So the first thing we should do is probably define some terms, because if you haven't really looked into augmented reality and you aren't familiar with AR, you might just be lost. I'm going to define it all for you right now, because that's the kind of stand up guy I am. Technically speaking, augmented reality is using digital information to enhance or augment

an experience in our physical, real world. So the way we usually see this implemented involves some sort of display that has an image of the real world on it and it overlays digital information on top of that image. So think of like a camera's viewfinder, like an LCD screen on a camera, and it actually labels the buildings

that are in view. When you're out on the street and you hold the camera up, or a smartphone or even a wearable device like a head mounted display that you can look through so you can see the real world.

You're not just staring at a screen, or if you are staring at a screen, you're staring at a video feed that is provided by an external camera mounted just on the other side of the screen, so it's like you're looking through a display in the first place, but then on top of that view you have this digital information. That's the most common implementation we talk about, but it's not the only one. Augmented reality does not have to

only be visual, or even involve visual information at all. You could have audio-only augmented reality, for example. But the whole idea is that it's something that is created digitally to enhance your experience in the real world. Now we

can contrast this with the concept of virtual reality. Virtual reality, of course, is where you create an experience completely through computer generated means: a computer is making all the things you see and hear, and maybe even beyond that if you have really sophisticated setups, so you might have some haptic feedback. Haptic refers to your sense of touch, so if you have haptic feedback, that means you're getting

information feedback through your sense of touch. A common example of this is a rumble pack inside a game controller, where you fire a gun in a first person shooter and your controller rumbles as a result, letting you know that you are, in fact, unleashing virtual destruction upon all you survey. Well, the same thing can be true with a virtual reality setup. So virtual reality is all about

constructing an artificial reality, a simulated reality. Augmented reality is all about enhancing the one that we are actually in. And then there's also mixed reality. Mixed reality is kind of, sort of, in between the two. You might have some physical objects within a room that are also mapped to a virtual environment, and then you use something like a head mounted display to enter the virtual environment. That's

what it looks like you're inside, anyway. But you have physical objects in the room around you that are also mapped to the virtual world, meaning you could pick up this physical object and you would see that reflected within the virtual world, where you might pick up a sword and shield or move a chair or something along those lines. So augmented reality, virtual reality, and mixed reality are all kind of interrelated, so much so that their histories also

are very much interrelated. And there are some people who try to collect these different technologies, these different approaches, and put them under a common umbrella, and they tend to use the phrase alternate reality, which is unfortunate, because that's also AR. But alternate reality is kind of the umbrella for virtual,

augmented and mixed reality. Now, that kind of gives you the definition of those basic terms, and it is important to understand them because they're becoming more and more important today. You are already probably aware of a lot of VR headsets that are out there on the market, as well as VR... well, they're kind of like cases that you slide your smartphone into, so your smartphone becomes the actual display on a VR headset. The headset itself is more or less just a head mounted case for your phone.

We've seen a lot of those come out over the last few years. We've also seen a lot of AR applications come out, typically for things like iPads and smartphones, but we've also seen some hardware come out for wearable devices that falls into the augmented reality category, stuff like Google Glass, which I'll talk about more a little

bit later in this episode. For augmented reality to work to get this enhanced experience of reality around you, there are a lot of technological components that have to come together so that you actually do get an experience that is meaningful. You have to have technology that quote unquote knows where you are and what you are looking at or what you are close to in order to get

that augmented experience. It wouldn't do me any good if I put on an augmented reality headset, for example, and stared at, let's say, a famous painting, and instead of getting information about the famous painting, I saw an exploded view of a car engine. That would make no sense.

So you have to build in technologies in order for the AR to understand what it is you're trying to do and to augment that experience, which meant that we had to wait a pretty good long time for the various technologies that we use to create this relationship to mature to a point where it was possible. Sometimes we had technologies that would allow us to do it, but it required tethering headsets to very large computers, which meant that you didn't have really any mobility and it really

limited the usefulness of the actual application. In other cases, you could say things like your head tracking technology was absolutely necessary for AR to develop the way it did. GPS technology as well. Remember, it wasn't that long ago that we ordinary mere mortals didn't have access to really accurate GPS information. For a very long time it was purposefully made less accurate; it was a matter of national defense. It wasn't until the nineties that you started to see

GPS become more accurate for the basic consumer. Way back in the day, you might get accuracy of only around one hundred meters, which is not great if you're looking for the next place to make your turn. If it's one hundred meters away, that's pretty far. But now it's within a few feet, so it's much better. That sort of stuff all had to come together in order

for augmented reality to become viable. I almost said "a reality," but that just starts to sound redundant. At any rate, let's talk about some of these technologies we really need, things like gyroscopes and accelerometers. These help devices understand their orientation, where they are in respect to something else. For a smartphone, it might be: is it in

landscape mode or portrait mode? But for a head mounted display, it would help give the unit the information it needs to know which way you're looking, like are you looking to the east or to the west, that kind of thing. Also compasses, obviously very important, GPS sensors, and image recognition software, which has become really important so that when you are looking at something, the system can actually identify what that is.
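
To make that sensor talk concrete, here is a toy sketch in Python (not any phone's actual sensor API) of the portrait-versus-landscape decision: gravity pulls hardest along whichever device axis points down, and the accelerometer is what reports that.

```python
# Toy sketch: portrait vs. landscape from an accelerometer reading.
# Axis names and the ~9.8 m/s^2 gravity values are illustrative only.
def orientation(accel_x, accel_y):
    """accel_x: acceleration along the screen's short axis,
    accel_y: along its long axis (gravity included, in m/s^2)."""
    return "portrait" if abs(accel_y) > abs(accel_x) else "landscape"

print(orientation(accel_x=0.4, accel_y=9.7))  # held upright -> portrait
print(orientation(accel_x=9.6, accel_y=0.3))  # turned sideways -> landscape
```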

In some cases you can get around this. You can design an AR system where, let's say, you make a movie poster, and the AR application has the movie poster animate in some way if you hold up a smartphone

that's running the appropriate app. So I'm just gonna take a movie from my past that does not have an AR movie poster associated with it, but one that I can talk about as if it were a good example, and that has to be Big Trouble in Little China, universally declared the best movie that has ever been made.

So you've got your Big Trouble in Little China poster up on the wall, and you hold up your smartphone and you activate your Big Trouble in Little China movie marketing app, and the camera on your phone detects

the poster. You know the poster's there. Well, the app and the poster together are able to construct the augmented experience, because there have been elements put into the poster that the app is looking for, and once the app identifies that, like, it sees maybe eight different points on the poster, and because of the orientation of those points, it knows what angle it's at, what height it's at in relation to the phone, and can give you on

your display the augmented reality experience. In this case, it's obviously Jack Burton and the Pork Chop Express eating a sandwich, because, as we know, the most riveting scene in the movie unfolds in this way. So that would be kind of an augmented reality experience where you didn't have to worry

about every possible application out in the real world. You made it for something very specific, which means in your software you can have the camera look, quote unquote, for these particular points of reference and thus create the augmented experience in that way.
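
As a rough sketch of that marker-based approach, here is what the detect-points-then-recover-pose loop might look like in Python, using OpenCV's ArUco markers as a stand-in for the reference points hidden in the poster. The camera numbers are made-up placeholders; a real app would calibrate the camera, and older OpenCV builds expose the detector as cv2.aruco.detectMarkers instead.

```python
# Hedged sketch of marker-based AR: find known reference points in the
# camera image, recover the marker's pose, draw an overlay anchored to it.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker side length in meters (assumed)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])   # placeholder intrinsics
dist_coeffs = np.zeros(5)                     # pretend an undistorted lens

# The marker's corners in its own coordinate frame.
h = MARKER_SIZE / 2
object_points = np.array([[-h, h, 0], [h, h, 0],
                          [h, -h, 0], [-h, -h, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)  # OpenCV 4.7+ API

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)  # step 1: find the points
    if ids is not None:
        for marker_corners in corners:
            # Step 2: from the points' orientation, recover the marker's
            # angle and distance relative to the camera.
            found, rvec, tvec = cv2.solvePnP(
                object_points,
                marker_corners.reshape(-1, 2).astype(np.float32),
                camera_matrix, dist_coeffs)
            if found:
                # Step 3: draw something anchored to the poster. A real app
                # would render its animation here instead of axes.
                cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                                  rvec, tvec, h)
    cv2.imshow("AR overlay", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```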

If you want to take that and move it to the real world, where you can see augmented information about just the world around you, it becomes way more complicated. You have to have very sophisticated image recognition software, so that the camera picks up the images, the software processes the information, identifies what those images are, and gives you the relevant information.

So, working with all the sensors, augmented reality can make this a possibility. So another example, let's say you're out on the street in Atlanta, you're here in my hometown of Atlanta, Georgia, and you're looking at a building and you wonder what it is, and you hold up your phone and you've got your little map app that allows you to look at a real world setting and tells you information about it, and it

tells you it's the Georgia Aquarium. Well, first of all, you would probably know that already, because the signage there is actually pretty good. But the point being that this would be something that would tap into the GPS coordinates on your phone, so it would know where your location was and help narrow that down. The compass would tell it what direction you are facing, and thus the camera angle. Also, you have some image recognition going on there. The accelerometer

tells it the orientation of the phone itself. All of this data together would give the software the information needed for it to display the label "Georgia Aquarium" on your phone. And it all happens in an instant. That's pretty amazing.
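
A hedged sketch of that fusion step, in Python: given a GPS fix and a compass heading, decide whether a known landmark sits inside the camera's field of view. The coordinates are approximate and the field of view is an assumption.

```python
# Toy sensor fusion: is the Georgia Aquarium in front of the camera?
import math

AQUARIUM = (33.7634, -84.3951)  # approximate latitude/longitude

def bearing_to(user, target):
    """Initial great-circle bearing from user to target, in degrees."""
    lat1, lon1 = map(math.radians, user)
    lat2, lon2 = map(math.radians, target)
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def label_in_view(user_fix, compass_heading, fov_degrees=60):
    """Return the label if the landmark falls inside the camera's view."""
    diff = (bearing_to(user_fix, AQUARIUM) - compass_heading + 180) % 360 - 180
    return "Georgia Aquarium" if abs(diff) < fov_degrees / 2 else None

# Standing a block or so south-southeast, facing roughly northwest:
print(label_in_view(user_fix=(33.7619, -84.3940), compass_heading=320))
```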

Typically, you also have to have some other method to communicate with a larger infrastructure, because we don't have the capability of building an enormously powerful computer that has all this real world information programmed into it and making it a handheld or wearable device. So usually you have to pair these devices with some other larger infrastructure. Sometimes it's a double handshake. For example, with Google Glass, you would use Bluetooth to connect Google Glass to a smartphone. Then the smartphone would have the connection to the larger internet through

your smartphone's cell service provider. So while you're experiencing the augmented reality through the Google Glass, it's actually communicating through your phone to the infrastructure to get the data it needs to show you the information. Very important elements. And all of these components, like I said, came together more or less around the same time. Most of them were being developed independently of each other, and

it's just that now we're seeing them all converge. That's an old favorite word here at TechStuff: converge. They converge to create the augmented reality experience and make it possible. So how did we get here? How did these different elements develop? Well, there are a whole bunch of technology pioneers who really created the foundation for augmented reality as well as virtual reality and mixed reality. But one that I think we really need to concentrate on first

is Ivan Sutherland. Now Sutherland was born in Hastings, Nebraska in nineteen thirty eight, and as a kid, he was fascinated with mathematics, particularly geometry, and also with engineering. He began to study and experiment with computers while he was in school, and this was at a time where personal computers weren't a thing. There were no personal computers at

this point. Computers were actually pretty rare, and they were huge, and in fact, they often would rely upon physical media formats like punch cards or paper tape to read a program. So you didn't even have a disc, and certainly nothing like a USB thumb drive or anything like that. You actually had to put physical media into the machine for it to read and then execute whatever program you

had designed for that device. He went to college at what is now Carnegie Mellon University on a full scholarship. He graduated with a Bachelor of Science degree. He would then go on to earn a master's degree at Caltech and a PhD in electrical engineering from MIT. And actually, his doctoral thesis supervisor was Claude Shannon. We talked about Claude Shannon back in the twenty fourteen episode

Who Is Claude Shannon? We recorded that not too long after Shannon's passing. So if you want to hear a really interesting story about a pioneer in computer science, you should go check out that twenty fourteen episode. Back to Sutherland.

For his thesis, he created something called Sketchpad, and that was really, by most accounts, the first computer graphical user interface, or GUI. A graphical user interface means that you interact with the computer through graphics representing various commands on the computer. Windows and the Mac operating system are both examples of graphical user interfaces, as is the interface on your smartphone. If you have a smartphone where you

choose applications on a screen, that's a graphical user interface. Well, Sutherland created what is largely considered to be the first one of those. After college, he entered military service and he was assigned to the National Security Agency. We have great friends there, I assume. I'm sure they're listening, because they're listening to everything. At any rate, he entered the NSA as an electrical engineer, and in nineteen sixty four he replaced J. C. R. Licklider as the head of DARPA's

Information Processing Techniques Office, or IPTO. Also, back then DARPA wasn't DARPA; it was just ARPA. This is the same group, by the way, that would end up doing a lot of work that would form the ARPANET a few years later, and the ARPANET was the predecessor to the Internet in some ways. At least, the ARPANET was what ended up being the building blocks for

the infrastructure that would become the Internet. Now, all of that work happened after Sutherland had already departed the organization. His work became a fundamental component of both virtual and augmented reality. As I mentioned earlier, in nineteen sixty five, he wrote a piece, an essay. It's very short, it's a very easy read, and you can find it online. The

title of the essay is The Ultimate Display. And if you ever do any research on virtual reality or augmented reality, this essay is going to pop up in your research, so go ahead and read it. It's like two pages long, so it goes very quickly. In that essay he talked about several ideas, including the idealized display, the ultimate display, something that would be the furthest you could go with

display technology. Now, keep in mind, in his time (he's still alive, by the way), this being the nineteen sixties, things were just restricted to monitors. You might have a light pen, but usually you just used a keyboard. It was pretty bare bones. But he said, let's push this as far as we can imagine it. As his example, he thought of a room that would be completely controlled by computers. Everything you would experience within that room would be generated by

a computer. Everything you see, hear, smell, taste, and touch, all of it generated by computers. The computer would even be able to form physical objects out of pure matter itself. Now, he wasn't suggesting that this would ever be a device that we would actually be able to build. He was just saying, what is the ultimate incarnation of display technology? And if you read it, you realize, oh, this is where the Star Trek: The Next Generation writers got their idea

for the Holodeck. But unlike Star Trek: The Next Generation, the Ultimate Display would not go on the fritz every other episode and try to kill the crew. It was better than that. The Ultimate Display was sort of foundational; philosophically, it was foundational for virtual reality and augmented reality. This idea of a very immersive experience where you, as a user, are surrounded somehow by this computer generated experience. And that's true both with augmented reality and virtual reality.

In augmented reality, the real world is still there, but you get this enhanced experience that is completely computer generated. So in nineteen sixty eight, Sutherland and a student named Danny Cohen would create a VR/AR head mounted display, or HMD, and they nicknamed it the Sword of Damocles. Why? Because you had to suspend it from the ceiling. It was too heavy to wear on your head. You

needed it to be nice and sturdy. It included transparent lenses, which meant you could overlay computer information on the lenses themselves, and thus you could look through the lenses at the real world and have these wireframe graphics on top of

what you were looking at. And it also had a magnetic tracking system, meaning that it had sensors that could detect magnetic fields, and as you turned your head or you changed the inclination of your head, it would change the magnetic field, and this would be relayed as a command to the visual center, the actual lenses themselves, so that the change would be reflected in what

you saw. So if you have a virtual environment and you turn your head to the left, you want the view within the virtual environment to go to the left too. But without head tracking technology, that's impossible. So this was a very early example of head tracking technology, and again, it used magnets, magnetic fields, in order to do that.
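
The geometry the tracker feeds is simple, whatever the sensing method. Here is a toy Python illustration (not Sutherland's actual system): a yaw angle from the head tracker rotates the direction the virtual camera looks.

```python
# Toy head tracking: the tracker supplies a yaw angle; the renderer turns
# the virtual view to match. Axis conventions here are arbitrary.
import math

def view_direction(yaw_degrees):
    """Unit gaze vector in the horizontal plane (x = east, y = north)."""
    yaw = math.radians(yaw_degrees)
    return (math.sin(yaw), math.cos(yaw))

print(view_direction(0))    # facing north: (0.0, 1.0)
print(view_direction(-90))  # head turned 90 degrees left: about (-1.0, 0.0)
```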

Obviously, it's also really important for augmented reality. Again, if the AR system doesn't detect that you are looking around, then you're not getting relevant information for the specific thing you are looking at. Anyway, as I said, the graphics were pretty primitive, they were wireframe drawings, but they still showed that this was a viable approach to technology, using an HMD

for augmented or virtual reality use. Oh, and one other note I should make. So a lot of people say the Sword of Damocles was the first head mounted display, and they say, you know, the first HMD was made in nineteen sixty eight. I take issue with that. I don't think of the Sword of Damocles as the first head mounted display. That, to me, should go to a different invention called the Headsight. Now, that was developed by Philco, and unlike the Sword of Damocles,

it didn't create a virtual world. Instead, the Headsight was sort of a remote viewfinder for a video camera. Imagine that you've got a camera mounted on a mechanical swiveling mount, so you can move it left and right, and you can change the orientation, the inclination, as well, and then you have that mapped to a head mounted display, so that if I put the display on and I look to the left, the camera pans to the left. If I look to the right, it pans to the right.

That sort of thing. It was meant to be a way for people to operate a camera in a remote location that might not be very friendly to a human being standing there. For example, the exterior of an aircraft. You could have a camera mounted on the outside of your aircraft that would allow an engineer on the inside to look around and maybe help a pilot land or navigate in a dangerous situation, or just get an idea

of the status of the aircraft itself. This was very much a technology that was being pushed by the military, an idea to create more military uses, using this technology to make the military more competent, more adept at very rapidly changing situations on the technology front. So the Headsight preceded the Sword of Damocles by about seven years. It came out around nineteen sixty one. But again, it wasn't a virtual reality headset or an augmented reality headset. It was

kind of, like I said, a remote viewfinder. But still, I consider that to be the earliest head mounted display, not the Sword of Damocles. However, Sutherland would end up going on to make lots of other contributions in computer graphics, as well as the overall concepts that would guide both virtual reality and augmented reality development over the next several decades. But now it'll be time for me to kind of move away from Sutherland and talk about some

other developments that were important in AR. And before I get to that, let's take a quick break to thank our sponsor. All right, we just left off with Ivan Sutherland. Now let's talk about a different father of augmented reality, Myron Krueger, or doctor Myron Krueger. In nineteen seventy four, doctor Krueger created an augmented reality lab called Videoplace. He was really into this idea of seeing the interaction

of technology and people in artistic ways. He really wanted to explore artistic expressions using technology and people working together. So he wanted to create an artificial reality environment that didn't require the user to wear special equipment. You wouldn't have to put on a head mounted display, or wear special gloves, or use any kind of device to control your actions, because that's a barrier between you and

the experience. Instead, his version consisted of a laboratory that had several rooms all networked together, and each room had a video camera in it and a projector and a screen. Now, the video camera would pick up the motions of the person inside the room, it would send information to the projector, which would then project the person's silhouette on the screen.

And the silhouette was typically a really bright color, and you could move around and your silhouette would move around, So you almost became like a puppet master controlling your own silhouette. But then he started to incorporate other things, like other elements that were virtually on the screen. The projector was projecting things that were on the screen but not in the actual real room itself. So imagine a ball,

and a ball is being projected on the screen. Well, you could move so that your silhouette would interact with the ball, and the ball would bounce away, that sort of thing. So you would be able to interact with virtual environments by moving around in a real physical space. And while those objects weren't really there in front of you, you could see the representation of them on the screen.

And this was really powerful stuff. And remember I said these rooms were all networked together, so you could actually have a system where a person in one room and a person in another room both have their silhouettes projected together in their respective rooms on the screen, and your silhouette would be one color, the other person's silhouette would be a different color, and you could interact with one another.

And according to reports from this art experiment, they noticed that whenever people would have their silhouettes cross one another, they would actually recoil in their physical rooms. Keep in mind, they're in different rooms, they're not in the same one together, they would recoil as if they had made physical contact or bumped into someone. So it showed that there was

a very powerful psychological element to this virtual presence. And again, that psychological element plays a hugely important role in VR and AR research and development, not just for creating products, but just to understand how we process information and incorporate it into our sense of reality. Not to get too deep for you guys. So, experimentation in the field continued over the years. In the early nineteen eighties, doctor Krueger would write and publish a book about artificial realities.

But while the principles for augmented reality were established, the technologies were still rather unwieldy. They were large, they weren't reliable, and it would require several years of work to improve those technologies, to create miniaturization strategies, to get the elements down to a size that was more practical for that sort of use and wouldn't require you to have a head mounted display suspended from the ceiling. All of that took time, but you could tell that the ideas

underlying augmented and virtual reality were already in place. In nineteen ninety, there was a Boeing researcher named Tom Caudell who coined the term augmented reality, and he was specifically using it to talk about this approach of overlaying digital information on top of our physical world to enhance it in some way. Now, doctor Caudell earned a PhD in physics and astronomy from the University of Arizona, and before contributing the term augmented reality to the public lexicon, he

did extensive work in artificial intelligence research and development. He also became a professor in the fields of electrical and computer engineering at the University of New Mexico. So when he was working with Boeing, he used this phrase to talk about a specific system he was working on, an augmented reality system, and the whole purpose of this was to help the people who were constructing airplanes lay cables properly.

The whole idea was to use this system so that an electrician could see exactly where the cable needed to go inside the partly constructed cabin of an aircraft, and that way you could follow the directions that you see through your display, lay the actual cable down where the guide tells you to go, and then you would have a properly wired airplane. And I'm sure, as we're all aware, properly wired airplanes are good airplanes. Improperly wired airplanes are

not so good. So it was a very important system to make this much more smooth and fast, and it meant that you didn't have to have as many experts to guide the process. You could actually have someone come in who had never done this before and just follow the directions through this augmented reality system, and they could wire the airplane properly. So, a really clever means of

using augmented reality. Also, we would end up seeing that same sort of philosophy used again and again in the future in more sophisticated types of technology, but it was the exact same approach, the exact same idea underlying it. In nineteen ninety two, Louis Rosenberg proposed a system that the Air Force could use to allow someone to control devices from a remote location, and that consisted of a video camera which would provide the visual data to the user

through a head mounted display. They would wear the display on their heads or they would look at a screen, but typically they'd wear a display, and then they would also wear an exoskeleton on their upper body that would allow them to control some sort of robotic device, typically

robotic arms. And usually the way this would work is that the display was designed in such a way with the video camera that the view the person had made it look like the robot arms were their actual arms, which required a little bit of trickery on the part of Rosenberg. They had to fudge the distances between the video camera and the robotic arms to give this sort of feeling that the robot arms represented

your actual arms. So you move your arms inside the exoskeleton and the robot arms would move as well at their remote location. So it's kind of like a really fancy remote control. Now imagine that the robot arms are holding various tools. The suit would also provide haptic feedback, that touch based feedback to let a user know more about what is going on when they're operating the arms.

So if you were to do something that would make a robot arm encounter resistance, then you would feel haptic feedback in the suit that would indicate, oh, you're going beyond the parameters of where this robot arm is capable of going. So you learn very quickly where you can operate within that suit and make sure that you are not pushing it beyond its limits. You could also end up using these tools to do various things in

this remote environment. Now, Rosenberg called the system virtual fixtures, which meant that the user would see these virtual overlays on top of a real environment that they were looking at. So I'm going to give a very basic example to illustrate this, because it's hard to imagine, it's hard to get across in words. But let's say you're looking through a head mounted display, and in front of you is a board, a wooden board, and it's just a

regular wooden board. There's nothing painted on it or anything in the real world, and it's in a room that's across the building from you. You cannot see this with your own eyes. You can only see it through the video camera. The virtual fixture overlay might be a series of circles, and the circles are things that you are meant to cut out of the board using the robot arms and a tool that's right there inside the physical environment,

across the building from you. So you follow the patterns that you see in this virtual overlay and you complete the task. That's a very simple example, and this system was meant to allow for that. That's what he would call the virtual fixtures, these overlays that you would see that would appear to be real, but actually were not

present in the physical environment itself. Now, also in nineteen ninety two, a group of researchers at Columbia University were proposing a system that they called Knowledge-based Augmented Reality for Maintenance Assistance, aka KARMA. Cute. Their approach was

pretty novel. They pointed out that while augmented reality had tremendous potential, it also had a really big barrier in that it takes an enormous amount of time to design or animate and implement these graphic overlays for AR applications. So let's say you're in a room and you're looking at different objects, and little labels are popping up for each object. If you're having to do all that by hand,

it takes a huge amount of time. What they wanted to do was create artificial intelligence systems, or at least techniques, to generate graphics automatically, on the fly. So this would be similar to using image recognition software, so that if you look at a specific box, let's say, the image recognition software might be able to map that box to a specific product and thus give you an overlay of information about the product that's inside that box. And

it would be able to do all this automatically. It would not require a human programmer to go through and look at every single product in every single type of box and program all that out. That would be ridiculous; it would take forever. So it was the work of this group with KARMA that really started the ball rolling with this AI approach to automatically fill in that information and make AR a more practical experience.
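
As a hedged, modern-flavored sketch of that idea in Python (the classifier and catalog are hypothetical stand-ins, not anything from the actual KARMA system): recognize the object, then assemble its overlay from data instead of hand-authoring every label.

```python
# Toy automatic overlay generation: recognize, look up, label on the fly.
PRODUCT_INFO = {
    "toner_cartridge": "Toner cartridge: shake gently before installing",
    "paper_tray": "Paper tray: holds 250 sheets, letter size",
}

def recognize(frame):
    """Stand-in for real image recognition (a trained model in practice)."""
    return "toner_cartridge"

def generate_overlay(frame):
    # No human authored this particular label; it is assembled on demand.
    return PRODUCT_INFO.get(recognize(frame), "Unrecognized object")

print(generate_overlay(frame=None))  # -> Toner cartridge: shake gently ...
```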

Around the same time, between ninety two and ninety three, Loral Western Development Labs, which was a defense contractor, began to work with the US military to create AR systems for military vehicles. And you can understand very quickly how AR would have enormous potential for military applications. And in fact, AR is very commonly used in lots of different things, like pilot helmets, where it helps pilots keep track of targets and potential threats,

that kind of thing. But in this case, they were really looking at creating an augmented reality system that would create virtual opponents for people working in simulated wartime conditions, so really a training program. Imagine that you're operating an actual military vehicle like a tank, and you have a view outside that is really an augmented reality system, so you're actually looking at the real world around you. You

aren't just sitting in a simulator inside a building. You are out there in the field, controlling a real vehicle moving around in real terrain, but you also see virtual representations of enemies in that real terrain, and you can practice maneuvers and firing on enemies, that sort of thing, probably not using live ammunition at that point, but having a more realistic simulation in a real environment, so that

you're not just trying to create a totally virtual scenario. Anyway, that work was done in ninety two and ninety three; the world at large wouldn't really learn about it until about ninety nine, because that's the way the military works. They're not so eager to talk about their stuff while they're still doing it. Meanwhile, at the same time, artists were continuing to explore the relationships between physical performers and

virtual elements. You remember I talked about doctor Krueger earlier. Well, in nineteen ninety four, a different artist, Julie Martin, would create a piece called Dancing in Cyberspace, and in that piece, dancers on a physical stage were able to manipulate virtual objects, so an audience would be able to see both the physical performance by the dancers and the virtual reactions, the things that happened within the virtual environment as a result of the dancers

moving around their physical space. Pretty neat. In nineteen ninety five, two researchers, Rekimoto and Nagao, created the first real handheld AR display. But it was a tethered display; it wasn't freeform. You couldn't just take it anywhere. It was called NaviCam, and you had to have a tether

cable essentially connecting the NaviCam to a workstation. But it had a forward facing camera, and you could use a video feed to go through this handheld device, through the cable, to the workstation, and it could detect color coded markers in the camera image and display information on a video see-through view. So you could get that augmented

reality experience. Obviously very limited, you know, you could not just carry this around with you everywhere you go, but it showed the ideas behind augmented reality could in fact be realized in a handheld format. Now, it was just a matter of getting those different components small enough to all fit in a self contained mobile form factor. Now, in the late nineties, we started seeing televised sporting events featuring augmented reality elements, or at least you did. I

don't watch sportsball. That's not entirely true, but I don't watch football or hockey, American football or hockey, and both of those were the sports that really got this stuff first. Okay, I'm going to backtrack. I used to watch hockey, but then Winnipeg stole the Atlanta Thrashers from me. Winnipeg. Okay, getting back to hockey. So hockey had the FoxTrax system, which Fox put into hockey games so that you could

easily follow the puck. Instead of trying to watch this little bitty black disc spinning around, you got to watch this very bright, highlighted, neon colored disc that everyone hated. And after about two seasons, Fox stopped doing it and people were happy, until the Thrashers moved away, and then it was just miserable. American football would follow suit in the late nineties and have the first down line introduced, where they could, on live video, overlay the first down line.

Usually it's a bright yellow line that indicates how far the offensive team needs to go. And by offensive, I mean they're on the offense. I don't mean they offend my sensibilities. I'm not that against American football. But it showed how far they would need to go in order to establish a first down, which I am told is something you want to do. That would start to get employed in nineteen ninety eight, and over time we would see that increase, where eventually Skycam was able to even

use this system. At first it wasn't; you could get a Skycam view, but you couldn't do the overlay of the first-and-ten line until later. Well, I've got a lot more to say about augmented reality, but before I do, let's take another quick break to thank our sponsor. Okay, we're back. Let's skip ahead to nineteen ninety nine. I guess it's not really skipping, I just talked about nineteen

ninety eight. Let's plod ahead to nineteen ninety nine. That's when NASA's X thirty eight spacecraft was using an AR system as part of its navigational tools, so people back on Earth could look at a view from the spacecraft, a camera mounted on the spacecraft, and on top of that view they could overlay map data to help with navigation. And all of that, of course, was controlled back here

on Earth. But it was sort of an experiment to see how augmented reality could be incorporated into space exploration missions in the future and make them more effective. Also in nineteen ninety nine, the Navy began work on the Battlefield Augmented Reality System, or BARS, which is a wearable AR system for soldiers. You've probably seen various implementations of this over the years. It's obviously evolved since nineteen ninety nine.

It's one of those pieces of technology that some soldiers took to, but a lot just felt that it created unnecessary distractions. Technology and warfare is very, very difficult, because there are times where we think, oh, more information is always better, but in some cases that doesn't seem to hold true, and for some people with these head mounted displays, or really heads up displays, HUDs, that can sometimes be the case. It depends on the implementation. In two thousand, Hirokazu

Kato created a software library called ARToolKit. A very important software library, it was also open source, so anyone could contribute to it, modify it, or put out a new version, that sort of stuff. It uses video tracking to overlay computer graphics on a video camera feed, and it's still a component for a lot of AR experiences today.

Later on in the two thousands, this would be adapted so that it could also be used in web experiences, not just experiences native to specific devices, and we continued to see AR built into new experiences, including smartphones and tablets. By two thousand and four, some researchers in Germany were creating AR apps that could take advantage of a smartphone's camera.

But two thousand and four is pretty early for smartphones. It really would be a few years before this would truly take off, because that's when Apple came out with the iPhone, in two thousand and seven. That was the real revolution in smartphone technology. There had been smartphones before the iPhone, don't get me wrong, and many of them were really good, but the iPhone was something that caught

the public's attention and made smartphones sexy. And because of that, there was a ton of money poured into the smartphone industry, not just to Apple, but also to other companies, like the companies that were offering Android smartphones. But I think we can really thank Apple for all of that happening in the first place, especially things like that accelerometer, where you could switch from portrait to

landscape mode. I remember everyone freaking out about that when Steve Jobs showed it off in two thousand and seven at Macworld and everyone thought, wow, this is amazing. Well, we take it for granted now, but it was a

big deal then. So once that smartphone revolution happened, it was a landslide victory for both augmented reality and virtual reality research and development, because it meant that so much money was being poured into creating newer, thinner, more capable smartphones that we saw an explosion in technological development that could also be used for virtual and augmented reality experiences. So, for example, think of those sensors I talked about earlier,

accelerometers and gyroscopes, that sort of thing. Well, we saw a lot of development in those spaces in order to make smartphones better, and people who were working on AR and VR experiences could take advantage of those same sensors, either by creating apps specifically for smartphones. Thus, you don't have to build any other hardware, you just use existing hardware, but that limits how you can use it, right? Because you don't typically wear your smartphone directly in front of

your face. Or they could take advantage of those new, smaller sensors and incorporate them directly into brand new hardware, like various types of wearables, Google Glass, for example. But that would be a few more years. In twenty eleven, Nintendo launched the Nintendo three DS, which

included a camera. It was a three D capable handheld device and actually included a pair of forward facing cameras, so you could take three D photos if you wanted to, and it also had some AR software included with it. You would get these special Nintendo cards, kind of like playing cards, and if you were to point the camera of the three DS at the card and look at the screen, you would see a little virtual three dimensional character pop up on the card. So Mario would be

an obvious example. You put the Mario card down on the table, you hold up the three DS, and you aim the camera at the card, and you look at the screen and there's Mario, and Mario appears to be jumping around on your physical table. Now, obviously, if you look off of the display, there's no Mario jumping around, but on the display there he is, and it was

pretty cute. I remember being really impressed with this very simple implementation of AR when we got our three DS. And then I took our three DS apart, and then I took pictures of it, and then I posted it on Twitter, and people got sad. It was a great day.

In twenty thirteen, Google introduced Google Glass. That was the wearable that included a small display positioned just above the right eye, so when you looked straight ahead, you could tell that there was something kind of above your natural eyeline, but it didn't get in the way too much. To look at the screen, you actually had to glance upward, and then you could see what was on the display. Google Glass had augmented reality features

like crazy. You could see video calls. You could actually use the glasses to not just take a video call, but show the other person what you were looking at, so they could see from your point of view. You could also overlay directions, so if you're walking down the street, you could glance up at the screen and it would tell you if you needed to keep going straight, or turn left, or turn right, that kind of thing. It

was really useful. I had a pair of these Google Glass, and I really liked the direction they were going in. I felt that it wasn't a fully realized product at the time, and eventually Google agreed, and after a couple of years they took Google Glass off the market entirely, and now you can't get them anymore. They were clever, but they were expensive, and they had some limitations. And like I was saying earlier, you know, it's hard to

build all the components you need into one headset. So Google Glass would communicate via Bluetooth to your smartphone, and your smartphone would act as the actual nexus point to the Internet. But it was a neat idea, and I enjoyed getting to use them while I did, so I keep hoping to see a return of that kind of technology, but perhaps in a more mature and less expensive format. Now we've also seen applications similar to the ones we mentioned earlier, the ones that are meant to guide people

into laying out or repairing a system. We've seen that in the car world. Not too long ago, there was the MARTA system introduced by Volkswagen. MARTA makes me chuckle, because that's also the name of Atlanta's public transportation system. But in this case, it stands for Mobile Augmented Reality Technical Assistance, and it's specifically designed for mechanics who are

working on the XL one vehicle. So if you hold up an iPad that has this app on it, and the camera is pointed at an XL one, and you look at the display, you'll see information overlaid on top of the car, including labels for all the different parts. So let's say you're a mechanic and you have to do a specific repair on this vehicle. You hold up the iPad, you look through the display, and you see exactly what you need to do. It gives you a set of instructions. It shows you what you need to do.

It tells you where you need to stand, based upon the angle of the view. So if you hold it up and it says, no, you need to move about a foot to the right, you can do that, then hold up the iPad again and it'll say, all right, you're in the right spot, make sure you loosen this particular bolt first, that kind of thing. And it's meant to be an interactive maintenance guide, in a way, a maintenance and repair guide. This is one of those applications of augmented reality that is a no brainer to me.

It's a killer app. The idea of having the ability to work with something you are not one hundred percent familiar with, but being able to leverage the expertise of people who either designed it, or built it, or just fully understand it, and get guidance based on their expertise in real time, so you're not having to go and consult an article about it or watch a YouTube video. You get step by step instructions overlaid on top of your view of that thing. To me, that's the most

compelling use of augmented reality from a practical standpoint. There are a lot of other uses that I'll talk about towards the end that I think are also really super cool, so don't get me wrong, it's not the only one. But let's move on to twenty fifteen. That was when Microsoft would unveil the HoloLens, something I still want to try out. I have not had a chance to

try a HoloLens yet. That is a headset capable of advanced AR applications, everything from what I was just talking about, giving you guidance, step by step instructions on how to do, like, a repair job on, say, an electrical outlet. You could even use a Skype system to call an expert who can then view your point of

view and interact with that point of view. So let's say I'm looking at the outlet. The expert electrician I'm talking to can see what I see, and he or she can also make notes on the display, which show up in my field of view. So he or she might circle a specific wire and say, you need to remove that one first, and I know I need to do that one first because I can

see which one they are talking about. Or they might circle another wire and say, no matter what you do, don't cut this wire, or the toilet upstairs will explode like in Lethal Weapon 2. And I won't do that, because, you know, that guy's like three days from retirement, so I have a heart. But no, this is a really neat idea, having this interactive ability to overlay the information from the digital world onto your physical world. And beyond that, the HoloLens has lots of

other functions. It's not just something to do, you know, home repairs around the house with. You can also use it for entertainment purposes. Like, you could create a screen that can show you video from various sources and you can assign it a place on a wall in your environment. Let's say that you're in your living room and you just create a screen so you can watch Netflix, and you slap it on a wall, and it will stay in that same position relative to your point of view.

So if you look to the left or right, the screen stays where you put it, as if it were physically there on your wall. But keep in mind it's just a virtual screen, and when you look back to that part of your wall, you'll see the virtual screen there playing whatever it was that you wanted to watch.
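
The trick there is that the screen lives in world coordinates, not screen coordinates. Here is a toy Python sketch of that idea; the axes and sign conventions are arbitrary, and a real HoloLens does far more.

```python
# World-anchored virtual screen: head motion changes where the screen
# lands in your view, never where it "is" in the room.
import math

SCREEN_WORLD = (0.0, 2.0)  # the wall spot you "slapped" it on: 2 m ahead

def world_to_head(point, head_yaw_degrees):
    """Rotate a world-space point into the viewer's head-relative frame."""
    yaw = math.radians(head_yaw_degrees)
    x, z = point
    return (x * math.cos(yaw) - z * math.sin(yaw),
            x * math.sin(yaw) + z * math.cos(yaw))

print(world_to_head(SCREEN_WORLD, 0))   # looking at the wall: dead ahead
print(world_to_head(SCREEN_WORLD, 45))  # head turned: screen off to the side
print(world_to_head(SCREEN_WORLD, 0))   # look back: right where you left it
```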

I think that's a super cool idea. And they've also shown off games, like a game of Minecraft that uses HoloLens, so you can actually view a Minecraft world sitting, appearing to sit at any rate, on top of a table, so you can walk around the table and view the Minecraft world from various angles and play that way. I think that's super neat. I don't know how compelling it is, because again, I haven't tried it myself, but I really like the idea. This year, twenty sixteen, AR got another

big boost from a little game called Pokemon Go, although I have to admit this was a really primitive, basic implementation of augmented reality. Really, it was nothing more than an animated overlay that would exist on top of the camera view of your device. So let's say I'm holding up my smartphone and I'm trying to catch a Jigglypuff, and the Jigglypuff is currently bouncing up and

down on the sidewalk in front of me. That's about as far as the actual augmented reality experience would go. So, very primitive. But because Pokemon Go became so popular so quickly, it really pushed the concept of AR to the front of the minds of people everywhere, including business owners who immediately said, we need an augmented reality app. Whether they actually needed one or not is beside the point. A lot of people got into AR because of Pokemon Go,

for both good and bad. I always think that you have to come up with the experience first. You have to understand why you need to use a specific strategy to create a specific experience, and then build it. Not, hey, we need augmented reality, make something that's AR. To me, that's the backwards way of going about it. But what do I know? I'm not a programmer. I'm sure the programmers feel a similar way to me, but that's

just a guess. Now, the future of AR depends heavily upon the applications we see, and which ones end up being successful and which ones aren't. Right now, I would say that the best bet is to see more AR features built into smartphones and tablets. Maybe not necessarily built into them, but having apps available that create AR experiences for very specific contexts. Like, let's say it's a museum app.

You might download a museum app on your phone, and when you go to the museum and you use your phone, you can get more information about the paintings and sculptures and other installations that you see in the museum. That's an easy one to understand. But that same app isn't going to be useful once you leave the museum; you no longer have the context that it is tied to. I think that smartphones are probably going to be where the greatest development is going to be in the near term,

because wearables are still really hard to do. We still don't have a consumer version of the HoloLens available for anyone to purchase, and it may never come out as a consumer product. Microsoft hasn't shown a whole lot of interest in making it a consumer product. Maybe that will change, but at the moment I wouldn't hold my breath. So I would argue smartphones and tablets are pretty

much where it's at. Maybe some implementation with some existing VR headsets, which have external cameras mounted on them as well, like forward facing cameras; you could build AR experiences there. Then it gets a little weird, because you're also, you know, looking at a monitor, so you're looking at a video feed of your surroundings, and on top of the

video feed you get the overlay. The same thing is true for your smartphones and tablets, by the way. But contrast that with the Google Glass implementation, where you're looking at the actual physical world, not a video representation of it, but the real world. And then, because the display itself that you are looking through is transparent, you're looking at a transparent overlay of digital information that gives you more

info about the world you are in. I hope you enjoyed that classic episode from twenty sixteen, Augmenting Your Reality. I've had a lot of experience using various augmented reality apps, and I've always found them really intriguing and interesting and potentially really useful, but I haven't actually made use of one that I felt was, you know, really necessary, or added a whole lot of value to whatever the experience was. I could see the potential, but it just didn't quite click.

And it may very well be that's because I'm using the wrong equipment and probably the wrong apps. But I definitely see the potential for AR. I just haven't experienced it being really transformational. I hope that actually changes, because I really do think this is a technology that could potentially do a lot of good in a lot of different applications. That's it for this classic episode. I hope you are all well, and I'll talk to you again

really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
