Welcome to this week's classic episode. Let's play conspiracy Jeopardy. The category is: Google, the Pentagon, and AI.
I don't even know what to say about this one, guys. Like, this is, again, this is twenty eighteen. This is a "we didn't even understand" episode.
No, it's probably a really teachable moment for all of us. We should go back and listen to the whole thing too, and learn from our past selves. That's one of the things that's weird about doing podcasts. You guys ever think about that? There's this weird version, this history, of us being kind of varyingly behind and ahead of the eight ball over the years, and sometimes it terrifies me to go back and revisit it. So y'all out there, conspiracy realists, can do it for us.
From UFOs to psychic powers and government conspiracies, history is riddled with unexplained events. You can turn back now or learn this stuff they don't want you to know.
Welcome back to the show.
My name is Noel.
They call me Ben. We are joined with our super producer Paul "Big Apple" Decant. And most importantly, you are you. You are here, and that makes this Stuff They Don't Want You to Know. Today, we're going to talk about a company called Google, commonly known as Google. That's its street name, such a popular name that it's become a verb in American English. They have a parent company, you could call it the real company, which is known as Alphabet.
And we've talked about this a little bit in the past, but we want to, whenever we mention Google, establish that from the jump, at the top, because it's strange how
relatively obscure that fact remains. Now, while Google, Alphabet, whatever you want to call it, may not be the largest company in the world in terms of capital or territory or material possessions, it is undoubtedly one of the most influential organizations on the planet, at least in terms of its connection to human beings, or the ways in which it influences networks or organizations or webs of human beings.
Most people on this planet, literally most people, think about that, have heard of Google, even if they have somehow not directly interacted with it. And arguably, on a slightly more controversial note, most people have in some way, to some degree, been affected by the actions of the US military, whether that's something that happened generations ago in their own country or an adjacent country, or to an ancestor, or whether something is happening to them now as we record
this episode. And so today begins with a question. Spoiler alert: it also ends with a question. And our beginning question is this: what happens when these groups, Google, Alphabet, whatever you want to call it, and the US military, or Uncle Sam, what happens when they join forces? Because they have. So they have Voltroned up, and they have Voltroned up in an, again, controversial way. It involves unmanned aerial systems, also known as drones.
Those are the ones that, like, deliver your Amazon packages, right?
Yeah, and some variation of them. Yeah, they're commonly called drones, of course, but unmanned aerial vehicles or UAVs have been a long running trope in science fiction, and surprisingly for many people, some version of this technology existed long before the modern day. There wasn't a history rattling moment when the first drone ever went airborne. And that's in large part because it's very difficult for us to isolate exactly when the early version of an invention ends and the
first modern versions begin. Matt, when you and I worked on the invention show Stuff of Genius, we ran into this pretty often. Somebody comes up with a rough approximation of an idea, and then over a course of years, decades, or even centuries in some cases, people continually make small improvements until it becomes the thing we know. Most inventions
are not a Eureka moment in reality. I mean, they may be a Eureka moment in terms of the idea or the epiphany, but most inventions in reality, when we get to the hardware phases, are going to be created by people standing on the shoulders of giants.
Yeah, it ends up being the person who makes it marketable rather than just creates it.
Oh, good call, and a very sad and capitalist call, right? It's the person who popularizes it. So if we look at the first modern version of a drone, an unmanned vehicle, would it be one of those bombs in the eighteen hundreds that was strapped to a balloon and then just sort of set off, like, something bad will happen, we can't steer it? There wasn't a person on that, and it was being used as a weapon. Or was it the initial V-1s that Germany deployed during World
War Two? In the early nineteen hundreds, military groups did use drones. They had radio-controlled versions that were just for target practice. Engineers developed unmanned aircraft that, again, had munitions of some sort. These weren't drones the way we think of them in the current age. But, you know, there are other examples, like cruise missiles. These things were
the first cruise missiles, often called flying torpedoes. They were not meant to return to base, whereas drones are meant to function like very, very smart, sometimes very, very deadly boomerangs. They go out, they come back, and somebody.
Claps. Yeah, generally not the people being targeted.
Generally, yes, yes, generally not. You are correct, my friend. During the Cold War and the Vietnam War, or the series of conflicts collectively known as the Vietnam War here in the US, Uncle Sam ramped up research on drones. It was no longer just an interesting idea posed by a few eccentric scientists and engineers. In fact, by the time of the Vietnam War, unmanned drones flew thousands of high-risk reconnaissance missions, and they got shot down all
the time. But in that process, they saved the lives of human pilots who otherwise would have been aboard. And I thought about you, Matt, when I was looking into this, because another precedent for drones, which we won't go into in this episode, another precedent for drones were weaponized bats. Did you ever hear about this?
The ones they wanted to drop the little bombs with?
Yeah, well, yes, that's a version of it. Yeah, they said bats can fly, we have access to them.
Well yeah, I mean you could go back to carrier pigeons as being a type of drone as well.
Yeah, I mean there's not a person aboard.
There's a really cool episode of the podcast The Memory Palace about the bats. It's called "Itty Bitty Bombs."
It's quite good.
Yeah, yeah, so check that one out for more information. Around the same time, the Vietnam War, it goes this far back. Around the same time as the Vietnam War, engineers started adding real-time surveillance ability to drones, so cameras, right? Today, I guess the closest thing we have to a military version of the first drone would be the MQ-1 Predator, which was first unveiled in nineteen ninety five, which is weird when you think about it, because that
was such a long time ago. And in two thousand and two, the CIA used the MQ-1 Predator to make its first successful kill of a quote-unquote enemy combatant in Afghanistan. Since then, since just two thousand and two, this technology has grown by cartoonishly extreme leaps and bounds, and it's currently on the bleeding edge of scientific advancement. Yeah.
And that MQ-1 Predator drone, that's the drone that you have seen before. That's the one that is in pictures all over the place. I mean, I guess being one of the first, it just became iconic in that way. Just wanted to throw that in there.
Yeah, it looks like a windowless commercial plane with an upside-down tail fin, right? Two upside-down tail fins.
Yeah, the shape is slightly odd, but yeah, it's pretty cool.
And as these drones become more and more common in theaters of war, or in commercial industries like Amazon using them, or recreational use, like you buy one as a gift, they're also doing the inorganic version of speciating. You know, speciation is when a relatively common ancestor, through generations and generations, produces different species from the same template, right? So these drones are becoming increasingly specialized.
They have varying ranges of abilities, and odds are you've seen one, right? We've all seen toy drones.
The little quadcopter guys.
Yeah, and you gotta wonder, too, like, how does that work patent-wise? Like, did someone come up with that design and it was just generic enough to be copied? I always wonder about things like that, like the fidget spinner. Did that guy just do a bad job of patenting the original fidget spinner? Or is it so generic that that doesn't even apply? I'm wondering the same thing about the design of those toy quadcopter drones.
Entirely speculative, but it's probably at the very least a series of patents. Yeah, they'd have to have multiple things in play, I would imagine.
Pro tip, speaking of play: if you're gonna get one of those for your kids, get a good one, because with the cheap ones, you might think you got a deal, but they just don't fly. They suck.
And they're battery vampires too.
Oh my god, that was the thing. Like, we got one, and oh, it flies for about four minutes. Four minutes, tops. Like, that's not fun. But what kind of batteries do the big boys use? Are they fueled? Or are they... do you know? Is that classified?
Yeah, sorry, we can't talk about it.
You can't tell me.
No, I thought we were pals. Google it. All right.
Oh yeah. So, on a serious note, some of those power systems would vary, and by the time we dive into one and this episode comes out, it may have changed. Interesting. That's how quickly they're moving on these. So we've seen these, right? And we've probably seen photographs or videos of the larger cousins of these tiny drones, things like the Predator, like Matt just described. You can
find a photograph of it pretty easily. The differences are stark and astonishing, and missiles are not the biggest difference. The fact that they're weaponized, while that is incredibly dangerous and possibly fatal, that is not the biggest difference. Some of the bigger differences are going to be on the inside of the drone. We're talking things like GPS, real-time satellite feeds, super complex microprocessors, and more. They're putting
brains on board these things. And perhaps most disturbingly, some drones are edging closer and closer and closer to what's commonly known as artificial intelligence, or technically known as machine consciousness. And we're close to an old science fiction question becoming a real question: will the first true artificial intelligence, the first true machine consciousness, be a weapon of war?
Yes, highly likely, definitely.
Yeah, why is that, Matt?
Because military funding is one of the main ways innovation occurs. The biggest innovations and inventions occur because there's funding from a military somewhere, and it's not like it's the military itself doing these things. It's usually a private company, or someone who's been given a grant, a government grant, to do it.
We see this time and time again with many of the stories that we've talked about, where the military is also always, like, decades ahead of what consumers will ever get their hands on, and it might be ten, twenty years before it trickles down to the public, where maybe we can have a, you know, terabyte thumb drive, just as a silly example, when they've had that technology probably for a decade at that point.
I would say the only way for it, not for the first machine consciousness, true machine consciousness, to not be a weapon, is for every private company on the planet to say I'm not going to do.
That, right. Yeah, for us all to kumbaya and do some Hands Across America on the tech sphere, right, or hands across the world. So with this in mind, we're also hitting on a larger phenomenon, this kind of, kind of unfortunate... we're a family show, but we also endeavor at our best to be honest: the great leaps in technology are, not almost always, but the majority of the time, driven by human vices.
A desire to win a war, a desire to have more, you know, social attention or more financial access than a rival.
Or how do we compress all of this?
Porn. Right, that's the other one. That's the third one, a desire for... because in a very, very real and concrete and provable way, pornography is the reason that you or your parents had a VHS instead of a Betamax. It's the reason... like, we talked about this, I think, previously. Yes. Yeah. So often, all too often, we as a species are driven by non-noble motivations,
but that doesn't mean that we're not smart. We are a jar of very, very smart, very belligerent cookies, and we've been putting some of the smartest cookies of our species into the search for artificial intelligence and machine learning, a search that continues as we record this episode.
So just a little while ago, in July of twenty seventeen, there were these two researchers named Greg Allen and Taniel Chan, and they published a study called Artificial Intelligence and National Security, and in this they put forward goals for developing policies, laws specifically, that would put ideas forth on how you deal with AI when it comes to national security and military spending. Like, what should we as a species do when it comes to these
two things colliding? And here are some of their findings. They said the most transformative innovations in the field of AI are coming from the private sector and academic world,
not from the government and military. So currently, right now, AI is something that's happening outside the realm of the military, which, in July twenty seventeen, yay, that's a good thing. Maybe, because, you know, there could be some nefarious private-sector humans as well. But that's point A. The second one is that the current capabilities of AI have quote, "significant potential for national security," especially in the
areas of cyber defense and satellite imagery analysis. And then the last thing, I'm just going to read this whole thing as a quote: "Future progress in AI has the potential to be a transformative national security technology, on par with nuclear weapons, aircraft, computers, and biotech." So, some of the most influential technologies on weaponry, on war. They're saying AI is going to be the next big thing.
And they're not just talking the talk, right.
No, they are not just talking the talk at all. According to the Wall Street Journal, the Defense Department spent seven point four billion dollars on artificial intelligence-related areas in twenty seventeen. Seven point four billion dollars on AI.
And you know, not to harp too much on Eisenhower, but I believe he's the one who had a quote that said every rocket the US builds represents this many schools that we could have built but didn't. And there are a couple of things, if we're being entirely fair about that number, that we have to acknowledge. First is that this is only the money we know of. So seven point four billion, let's call that an "at least," right? And then on the other end, "artificial intelligence-related areas."
That doesn't mean they're just building a silicon brain, right? There are many other things that we might not associate directly with machine intelligence that maybe function as support or infrastructure. So this is all... this is the whole meal. This isn't just the hamburger.
Yeah, well, and agreed. We should especially point out that the Department of Defense budget for fiscal year twenty seventeen was five hundred and forty billion dollars, around that number.
Still the biggest in the world. Yeah, and that's publicly acknowledged. Again, I can't say that enough. Publicly acknowledged.
That's what they ask Congress for.
Yes, it's true. Before I derail us, what do you think, should we pause for a word from our sponsor?
That seems wise.
Here's where it gets crazy. We've painted a little bit of the picture, right. We know this story now, fellow conspiracy realist. It is a story of drones. It is a story of private industry. It is a story of government interaction, a story of war and a story of secrets. But in this age of instantaneous information, we know more than we would have known, say even two decades ago.
We know more about what's happening now. Similar to the drones acquiring real-time surveillance technology, we are able to be more aware of surroundings in a distant land. And that's how we know about something called... what's it called, Matt?
It's the Algorithmic Warfare Cross-Functional Team, also known as Project Maven.
And this was announced on April twenty-sixth of twenty seventeen in an internal memo from the Deputy Secretary of Defense, Robert "Bob" Work.
Yeah, Bob. He goes by Bob Work, but I just thought, yeah, Bobo Work. So I guess we can just read parts of this memo, if you want to. It's kind of hard to digest, that's the only thing. So maybe if we...
Anyway, walk through parts, break it down, humanize it.
Yes, let's do that. Let's do that. Here we go. I'll be the dry part here. Here we go. "The AWCFT's first task is to field technology to augment or automate processing, exploitation, and dissemination for tactical unmanned aerial systems." Okay, okay, so, good god.
Well, let's keep going a little bit. So it's gonna augment or automate stuff for UAVs, or drones, and "mid-altitude full motion video in support of the Defeat-ISIS campaign," and this will help to "reduce the human factors burden of full motion video analysis, increase actionable intelligence, and enhance military decision-making."
So what that all means is: the PED, processing, exploitation, and dissemination, means that they're hoping, again, to get a brain on board your local neighborhood Predator, or whatever drone they end up using by the time you are listening to this episode. And the processing part is going to be the analysis, which often would be the human part
of the equation. The exploitation would be the ability to take action on that analysis, right, to use that knowledge. And dissemination would be sharing the information, determining what information is relevant, what is irrelevant, and where to send it. That's PED. And the full motion video part is similar, except that it's a little bit easier to take a bunch of numbers that are sensed, a bunch of temperature variables, for instance, and tell a computer
or an automated system: whatever is within this range, do action X; whatever is outside of this range, do action
Y. You know, even if the action is just "alert the human user," the person that's going to actually analyze the thing.
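To make that range-based rule concrete, here's a minimal sketch in Python. Every sensor value, threshold, and action name below is a hypothetical illustration of the idea the hosts describe, not anything from an actual drone system.

```python
# A minimal sketch of threshold-based automation: readings inside an
# accepted range trigger one action, anything outside triggers another.
# All numbers and action names are hypothetical placeholders.
def handle_reading(temperature_c, low=15.0, high=35.0):
    if low <= temperature_c <= high:
        return "action_x"  # e.g., keep logging quietly
    return "action_y"      # e.g., alert a human analyst for review

for reading in [22.4, 41.0, 9.8]:
    print(f"{reading} -> {handle_reading(reading)}")
```

The point being made: a rule like this is trivial for scalar sensor data; doing the equivalent on full motion video is exactly where the machine learning comes in.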
Yeah, absolutely, absolutely. And that's tougher with video, right? Because there are many, many more factors at play. Would you say that sounds correct?
Absolutely. It just depends on what kind of sensors they're using on the drones. Is it a heat-based imaging system? Is it just, you know... I mean, what are you looking at, essentially? What is the data that you're looking at? And that's really hard to tell, because it's going to change depending on the model of drone and what they're, you know, deploying.
One of the really interesting things here, in my mind, is that, you know, we're talking about a single drone having an onboard system, but they're also talking about having a system that can analyze the footage from all the drones that they have deployed in a single space, have the machine take all that data in and tell you what's going on, and then disseminate whatever the algorithm decides needs to happen out to
the entire team that's also there. So you really have that thing we talked about recently in another episode, a full-scope picture of the battlefield, essentially, so everyone at all times sees exactly what's happening on the battlefield.
As close to an omniscient observer as possible, yeah. We used to say "humanly possible," but that's no longer the case, right?
Yeah, exactly. So let's keep going here a little bit with what the memo said. "This program will initially provide computer vision algorithms for object detection, classification, and alerts for full motion video PED. Further sprints will incorporate more
advanced computer vision technology. After successful sprints in support of intelligence, surveillance, reconnaissance, this program will prioritize the integration of similar technologies into other defense intelligence mission areas." So it's not just this single drone operation that they're gonna deploy it to. It's going to be for bigger things.
It goes on to say, "This program will also consolidate existing algorithm-based technology initiatives related to mission areas of the Defense Intelligence Enterprise," or DIE, "including all initiatives that develop, employ, or field artificial intelligence, automation, machine learning, deep learning, and computer vision algorithms." So really, they're talking about everything in the future for military applications.
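Since the memo's first deliverable is "object detection, classification, and alerts" on video, here's a minimal sketch of what such a flag-for-human-review loop could look like. This is an editor's illustration under stated assumptions; the Detection class and the detect_objects stand-in are invented for the example and are not the actual Maven software.

```python
# Sketch of a "detect, classify, alert" pipeline like the memo describes.
# detect_objects is a dummy stand-in for a trained computer vision model;
# note that flagged frames go to a human reviewer rather than triggering
# any automatic action.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "vehicle"
    confidence: float  # model score between 0.0 and 1.0

def detect_objects(frame_id):
    # Placeholder: a real system would run a model on the frame here.
    return [Detection("vehicle", 0.91)] if frame_id % 2 == 0 else []

def frames_for_human_review(frame_ids, min_confidence=0.8):
    for frame_id in frame_ids:
        for det in detect_objects(frame_id):
            if det.confidence >= min_confidence:
                yield frame_id, det  # alert: route to an analyst

for frame_id, det in frames_for_human_review(range(6)):
    print(f"frame {frame_id}: {det.label} ({det.confidence:.2f})")
```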
So beyond, far beyond, drones. Oh yes, these could also be submarines, for instance. These could be satellites. These could be space shuttles.
It could be surveillance, just video camera surveillance that's on the ground, even, right? You know, is this
the kind of stuff where eventually you could program a drone or some kind of killing machine with parameters that are, like, legally sanctioned, and you can say, okay, search and destroy X target, or this type of target?
That's the fear. The potential is there. The potential is there, absolutely. So essentially, what they're saying here is that DARPA and the Pentagon have already poured tons of money into this automated collection of data. And they're not alone. Other members of the Alphabet Soup Gang, which is a very endearing name for them, like the Apple Dumpling Gang, right? The Alphabet Soup Gang has other members, of course: NSA
and so on. They have also put forth tons of money and time into automating the collection of data, and the next step is discovering a means of automating the analysis of data. And the step after that, the especially spooky one, is enabling these unmanned machines to immediately act in real time on the results of their
onboard analysis. So, to the question you posed, Noel, the answer would be yes, they want to have at least the capability to put a judge, jury, and executioner on an unmanned vehicle. This doesn't mean they'll do it. A lot of tech companies do have problems with it, or at least with actualizing that potential.
Yeah, you even see problems with the self-driving cars. I mean, as much R&D as has gone into those, they're certainly not infallible.
And they get into fender benders and make mistakes.
Absolutely. Let's just have one more little quote here, and this is from the chief in charge of this Algorithmic Warfare Cross-Functional Team, or Project Maven, and he's speaking specifically to what they hope to achieve in the near term with this project. And he says, quote, "People and computers will work symbiotically to increase the ability of weapon systems to detect objects. Eventually we hope that one analyst will be able to do twice as much work,
potentially three times as much, as they're doing now. That's our goal." So in the short term, somebody sat down with him, had a meeting: we want to increase, just basically, what our analysts can get done, and we're gonna save money on that end, we're gonna save time, and we're gonna be more efficient. That's what this whole project is about, at least in his eyes, or in the PR thing that he got to say. So what we've talked about with this project so far is just what
it means to achieve, what it wants to achieve. We haven't discussed how it is going to achieve all of this, and that is where we find our friend Google.
Yes, Google. Google is nobody's friend. If Google was an ice cream flavor, they'd be pralines and... Google.
Yes.
And I can't say that word on the podcast. Every time you say Project Maven, I think of Project Mayhem.
Yeah, slightly different. So we know here that both the Defense Department and Alphabet, and Google, are saying that this collaboration, them working together in any way on this at all, is all about AI-assisted analysis of drone footage. That is what they're saying. That is the party line on both sides. The PR that's put forth is: everything's cool, we're just working on looking at video, guys.
It's cool, right? Right, specifically through something called TensorFlow AI.
Yeah, Noel, you looked into this, right. What is TensorFlow?
This is from the website of TensorFlow, trademark. I'm gonna do it like I'm doing an ad read. "TensorFlow is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to
clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and the flexible numerical computation core is used across many other scientific domains."
Yep, that one.
Yeah, that's the what.
No, seriously, TensorFlow can actually feel kind of like totally future science fiction kind of stuff for people who are not already familiar with how machine learning works. But there are some pretty awesome introductions, primers, out there if you would like to learn more, particularly a YouTube video.
Yeah, it's about forty-five minutes, essentially, on how it functions. But it's good, it's interesting, it's over my head.
And the cool thing, when I found that one... the cool thing about that guy is, if you watch the video, he says, if you have any questions, you can email him directly.
Oh my gosh... really?
Yeah, I don't know if he knew that his email address was going out on YouTube, but he seemed pretty approachable.
And that video is on the Coding Tech YouTube page and it's called TensorFlow one oh one. Really awesome intro into TensorFlow.
Super awesome.
But yeah, forty-five minutes is probably about the shallowest deep dive you're going to get into the subject. And I mean that as a glowing endorsement of the quality of this walkthrough.
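If you'd like a taste before committing to the video, here's a minimal sketch of what TensorFlow code looks like, assuming a TensorFlow 2.x install (pip install tensorflow). The toy model below just learns the line y = 2x - 1 from five points; it has nothing to do with Maven, it's only meant to show the flavor of the library.

```python
# Tiny TensorFlow/Keras example: fit a one-neuron model to y = 2x - 1.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])  # one weight, one bias
])
model.compile(optimizer="sgd", loss="mean_squared_error")

xs = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0]])
ys = tf.constant([[-1.0], [1.0], [3.0], [5.0], [7.0]])  # y = 2x - 1

model.fit(xs, ys, epochs=200, verbose=0)
print(model.predict(tf.constant([[10.0]])))  # should print roughly 19
```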
Yeah, so what exactly has TensorFlow done so far?
So in the research for this show, we came across an open letter that was written by the International Committee for Robot Arms Control, and they go into some of the details of exactly what is happening here, the collaboration between Google and the Department of Defense. So let's kind of look at this. They were looking at an article from Defense One, in which they note that the Joint Special Operations Forces have conducted trials already using video footage from
this other, smaller UAV called ScanEagle. It's a specific surveillance drone. And the project is also slated to expand to even, quote, "larger, medium-altitude Predator and Reaper drones by next summer." So these are the ones we spoke about in the beginning. These are the ones that are actively functioning in Syria, in Afghanistan, in Iraq, and other places like that. And eventually, to this thing called Gorgon Stare, which sounds amazing, right?
Sounds scary, yeah. And it
turns out it might be. So these are specific sensors, and it's a technology package that's been used on these MQ-9 Reapers, which is just the newest iteration. Yeah, yeah. And man, it sounds really scary, but you can learn more about this right now. If you search for Gorgon, G-O-R-G-O-N, Stare, and maybe "drone" or "Predator," that's probably the best way to look into it. And there's all kinds of information that's been officially released on this program already. So, okay, so we know that this
thing's been used in these smaller projects. Now it's going to be moved on to these Predator and Reaper drones, and they're thinking that it's going to get expanded even further. And we don't even know what that expansion could be at this point, because the drone technology is growing. It's just, a lot of it is classified.
Right. Yeah, a lot of it is classified, and as we established earlier in the episode, that's really common. In fact, a lot of it will probably be classified for a number of years.
Well, yeah, if you think about it this way. So Gorgon Stare, it can allegedly, and I just say this because I haven't actually seen it, but it uses a high-tech series of cameras that can, quote, "view entire towns."
Like what, like Google Earth style? Like, yeah, that kind of zoom? Yeah, okay.
Active, running, looking at an entire town. And then if you apply this machine learning to it, and you can see every human being walking around, and then the technology gets more and more powerful, then you could really just have a complete surveillance state from above.
Yeah, that's pretty scary. Like, you can see them in their homes? Does it have some kind of, like, heat vision that works from afar?
That's a great question, Ben. I don't know the answer to it. I certainly hope not. Maybe it would be good for a military application, but that's terrifying.
That's the problem, right? Like, you can't put this stuff back in the box. Even if it starts off as a military application, it's going to eventually get in the hands of... not to say that the military has our best interests in mind at all times, but it could make its way into a more nefarious use.
If we were to put a list of all the technologies that expanded past their intended use on one side, and a list of all the technologies that people agreed not to pursue on the other side, one list would be very, very long. I mean, for a species that loses cities and even entire civilizations, we're pretty good at finding and replicating technology, and we don't like to lose it. We did an earlier episode on things like
so-called Damascus steel, or Greek fire. But looking into that, it was very difficult for us to find examples of technology that was truly lost in the modern day. Once Pandora's jar is unscrewed, then it's Katie-bar-the-door. A simplistic statement, I admit.
Those badgers get out of that bag.
They will eat you alive, and you can't get them back in.
No, man. Well, here's the thing. The badgers, with this project, the badgers in the bag, they don't even know where the other one is right now. They're so far gone
from each other. Because this is where you get into the whole targeted-killing thing that was really big in, what, two thousand and seven, two thousand and eight, as we got on through the Obama administration, where drones were being used more and more to target mostly males. And if you could kind of prove, even from a drone, that a male was of a certain age, that person would be considered an enemy combatant, in certain aspects, or in certain ways.
And if you're using these algorithms to just figure out, you know, if you can just figure out it's a male above a certain age, then that's what Google is helping them do: figure out how to kill people from afar that are males of a certain age. Right?
And this goes into a quotation that you found, Matt, that gave me some chills.
Oh yeah. A quote from Eric Schmidt, who, up until the end of last year, was the executive chairman at Google and Alphabet, and he says, "There's a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly."
Hold on there, how do you kill people incorrectly?
That's, that's the thing. Yeah, that's what gave me chills, Matt, when you were originally talking about this, the idea of killing people incorrectly. The implication there is frightening, because Schmidt is jumping straight across and completely sidelining the recognition that maybe the tech community doesn't want to kill people in general.
Is that just linguistic jiu-jitsu? Kind of, it's a bit.
Yeah, absolutely. Yeah, it's a bit.
And, just so you know, that was a quote from a keynote address he was giving at the Center for a New American Security's Artificial Intelligence and Global Security
Summit, right before the panel on how to correctly kill people.
Yeah, it was in November of twenty seventeen. Yeah, it was right before that panel.
You shoot him in the face, that's how you do it. And make eye contact.
Huh yeah.
Yeah.
So the situation, then, seems on the cusp of something, a rapid evolution, perhaps an evolution that's already occurring. And we've talked about the motivations of the military, we've talked about the technical aspects, we've talked about what the suits and the generals want. But what about the employees? What about the people who are actually doing the work? As we know, often those are the people who are the most...
Ignored. A lot of them balked, right? Yes, and we'll learn about that right after a quick break. Ooh.
So, to bring the focus onto the employees, the ones who are actually out there writing the code that, you know, many times they probably won't get credit for, we need to look at their own internal activism. See, Google and Alphabet employees were well aware of these moves far before the official announcement, way before twenty seventeen. Of course they know this stuff is happening, because they're working on things like this behind the scenes.
It might be compartmentalized, but they're working on things. Sure, they can, you know... employees talk.
Sure. For instance, do you remember that terrible sci-fi series The Cube, guys?
We do. The movie?
I think there are three?
There was... one of them was Hypercube. I remember that.
I liked the first one, though. I quite enjoyed it. But yeah, it's schlock, but it's fun.
But the thing about it is, it's a bit of a cautionary tale, because the premise of The Cube, spoiler alert, is that a bunch of different experts were paid to research and construct specific components of what would later become this death trap, and they didn't know that what they were building would be used in this certain way. They thought they were just making heat sensors, right? They thought they were just making hatches. They thought they were just making lasers. Yeah, laser grids.
In the first scene, the guy actually gets cubed by a laser grid, in a cube, in a cube. That's pretty meta. And the thing, too, about that is, like, is it just for funzies? Like, you don't really know. Like, why are they in there? Is this just some, like, rich a-hole that's just trying to, like... you know, you don't find out.
Oh, yeah, you will. Really, you gotta keep going.
I gotta keep going? How far, how deep does this rabbit hole, this rabbit cube, go, my friend?
Well, somebody's going to make another one, and we're gonna find out.
You think somebody will make the definitive one?
Sure, it'll probably be a prequel or something. That's cool. You can also, you know, I'm sure, read spoilers on the wiki, or online in a forum somewhere. But luckily, at least for their own ethical well-being, or their ability to sleep at night, many of the Google and Alphabet employees were able to communicate with each other. They were aware of the implications of what would happen if all this stuff was assembled together, like the Avengers, or Captain Planet and the Planeteers. So they
took action. First, they gathered to write an internal petition
requesting the organization pull away from this agreement with Uncle Sam. Petitions, it should be said, are not inherently unusual at Google, but this one was particularly important, and particularly crucial to the future of the company. Because first, it was created with the knowledge that this program, this cooperation with Project Maven, is only a very small baby step in what management hopes will be a long, elaborate staircase, a much larger series of cooperations. You see, Maven is not a
one-off thing. It's not its own centerpiece. It is a pilot program. It is taking a concept and running it around the block to see how it drives. Secondly, Google employees believe that this project is symptomatic of what they see as a larger and growing problem in Google. They say the company is becoming less and less transparent across all facets: public-facing, government-facing, employee-facing, user-facing. And they've even shed their old famous motto, "don't be evil."
Yeah, but now if you go to, let's see, google dot com slash about slash our dash company, it says this is what their mission statement is now, quote: "Organize the world's information and make it universally accessible and useful." And maybe be a little evil if needed. I don't see that written anywhere, but...
I think it's implied. I think getting rid of that original mission statement and getting into these kinds of deals implies such things.
They did make some changes on their Code of Conduct that have a wonky timeline about them, too, because there's an article on Gizmodo by Kate Conger wherein the author finds the old Code of Conduct that was archived by the Wayback Machine on, and listen to the date carefully, April twenty-first, twenty
eighteen, that still has all the "don't be evil" mentions. Yeah, right. And then the updated version, first archived on May fourth, twenty eighteen, took out all but one of those mentions of "don't be evil." And the Code of Conduct itself says it has not been updated since April fifth of twenty eighteen. So the timeline doesn't match up. User error, possibly? Hmm, possibly.
Wayback Machine, what are you doing over there?
Yeah, did they get to you, Wayback Machine? Regardless of, you know, whether you feel that this is just a series of unconnected dots being forced into a larger perceived pattern, whether you think people are practicing confirmation bias, or whether there really is something rotten there in Silicon Valley, the fact of the matter is, you can read the full text of the petition online and find out what the Google employees themselves thought. We have a few highlights here.
"We cannot," they say in the petition, "outsource the moral responsibility of our technologies to third parties. Google's stated values make this clear: every one of our users is trusting us. Never jeopardize that. Ever. This contract," referring to Maven, "puts Google's reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US government in military surveillance, and potentially lethal outcomes, is not acceptable.
Recognizing Google's moral and ethical responsibility, and the threat to Google's reputation, we request that you cancel this project immediately, and draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
And a writer for Engadget actually reached out to Google, and they were provided with the following statement.
Quote.
An important part of our culture is having employees who are actively engaged in the work that we do. We know that there are many open questions involved in the use of new technologies, so these conversations with employees and outside experts are hugely important and beneficial. Maven is a well-publicized DoD project, and Google is working on one part of it, specifically scoped to be for non-offensive purposes and using open-source object recognition software available to
any Google Cloud customer. The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
I'm just gonna finish. I think it's worth it. Any military use of machine learning naturally raises valid concerns. We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts as we continue to develop our policies around the development and use of our machine learning technologies.
So basically: we're gonna keep doing this project. Sorry, guys.
Yeah, it's cool, but it's okay.
Because we're going to have a quote-unquote healthy conversation about it. Here's what happens next. As you said, Matt, Google refuses to back away from this project. They're still going to go through with it. Noel, as we can glean from the statement you read, they say it is going to be nonviolent and there's no spooky stuff going on. It's all unclassified. Over three thousand engineers signed this petition, and
a dozen resigned. More may resign in the future. This resignation wave is happening as we record this, and it's due to a coldly fascinating moral quandary. It's this: how responsible are these engineers for the applications of their inventions? Is, for example, the creator of the Winchester repeating rifle guilty for the deaths caused by that weapon? His wife thought so, and that's why she built a crazy mansion out west, which we still, believe it or not, have never visited.
Oh man, I really want to go.
Yeah, I think we should absolutely go. Paul, would you go with us if we go to the Winchester mansion? We got an ardent thumbs up from him. Both of them.
Paul's got some beefy thumbs. Yeah.
You go into all the rooms first, Paul.
Yeah, just to test for traps.
Yeah. No.
But also, like Oppenheimer, right? He famously had serious qualms about the way his technology was used, in terms of being weapons of mass destruction.
And Einstein, also tortured. He didn't build a physical thing, but he was tortured by the idea that his own realizations had led to this. So with this in mind, if an engineer builds software for the purpose of, say, more efficiently delivering payloads to locations in near-Earth orbit, and that same software is later used to deliver payloads of bombs to people that the drone's owners don't care for, is that engineer then responsible for the ensuing damage and
or death? It's a quandary that a lot of people are having a hard time answering. To go back, Noel, to something you mentioned earlier about the dilemmas of autonomous vehicles: one of the current nigh-unanswerable questions, or at least one that hasn't been answered yet, is what happens if there is an imminent accident that's going to occur and you're in an autonomous vehicle? Does the vehicle
swerve and hit a pedestrian? Does it prioritize the person in the vehicle, their life, over the life of a person who deserves to live just as much but happens not to be in the vehicle? And if so, who is responsible? Is it the person who is in the vehicle, for not taking control if they could? Is it the company that manufactured the vehicle? Is it the engineer who made the software that made the vehicle make that choice?
The trolley problem, the old thought exercise, becomes increasingly concrete, increasingly dangerous, and increasingly complicated, and frankly, we're not equipped to answer it at this point.
No, it's an utter conundrum, because, you know, to be that programmer who gets to make that decision, you're essentially playing God, aren't you?
It's like, which life is more valuable, you know? Is it our customer, or is it the pedestrian?
There's really no way of properly answering that without making a serious judgment call, a hard-and-fast choice.
Right. Yeah, you could bake in something where it's just an absolute calculation of the number of lives. So, for instance, in that case, if there is a crowd of five people who are gathered for their Paul Decant fan club meeting, which inexplicably takes place on the sidewalk by a busy road...
If those five people are gathered for that meeting and there's only one person in the car, does the car say, okay, we'll just pop the airbags and hope this one person survives, but we're not going to kill five people for one? And then what if there are two people in that car, and one of them is pregnant? And what if the five people on the sidewalk celebrating with the Paul Decant fan club, what if they're all in their seventies, right, and past reproductive age, generally speaking? You know
what I mean? This is the kind of stuff that we have not yet, as far as we know, learned to program for.
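As an aside, the crude "absolute calculation of the number of lives" Ben describes fits in a few lines of Python, which is part of what makes it so uncomfortable. Everything here is a hypothetical illustration; no real autonomous-vehicle system is claimed to work this way.

```python
# Editor's sketch of a naive "count the lives" rule for the scenario above.
def choose_maneuver(occupants, pedestrians):
    """Return 'swerve' (endanger occupants) or 'stay' (endanger pedestrians),
    minimizing the raw number of people put at risk."""
    return "swerve" if len(occupants) < len(pedestrians) else "stay"

# One person in the car, five at the Paul Decant fan club meeting:
print(choose_maneuver(occupants=["driver"],
                      pedestrians=["fan1", "fan2", "fan3", "fan4", "fan5"]))
# -> 'swerve'. The moment you try to weight age, pregnancy, or a school
# bus, this one-liner becomes the ethical minefield discussed above.
```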
Then before you know it, you've got, you know, power-hungry cars mowing down old ladies because they've, you know, they've lived their life.
They're expendable.
But, like, what about a school bus? Does a school bus, based on the number of kids it has, does that have just immunity, to always prioritize the children on the bus? It's very politically difficult to argue against
that. Oh man, man.
Matt leaned back for that.
Oof. Yeah, because what happens when a public bus and a school bus are both operating and there's a crash imminent?
I don't know, because, um, MARTA... most of the buses are empty. Oh, shade! It's true, it's true.
What about the streetcars? The streetcar's even emptier. The emptiest.
There's not even a driver.
Yeah. At this point, the MARTA buses are just hopping on the trolleys.
This is some serious Atlanta inside baseball.
It is true. Learn about our public infrastructure debacles. We're working on it. We're doing our best, you guys.
It's gonna be called, it's gonna be rebranded to "The ATL."
WHOA, that's cool.
Yeah, nothing weird about that, nothing forced about that, right? So Google is unfazed. They're not only going to continue this project, which, we should also say, is not super, super big. It's publicly described as being worth a minimum of nine million dollars. That's a lot of money to normal people. That's not a lot of money to Google. But they are vying for additional projects within this military space,
and this is not unprecedented. IBM has a long-standing relationship with the US military on a number of fronts, and also, cough cough, Nazi Germany, cough cough.
Yes, did you guys know that Hugo Boss designed the Nazi uniforms?
Yes?
I did not know that until yesterday. Shock.
What a great PR team they must have, to still be around after that.
Seriously. Well, I mean, there were a lot of German companies that
were, for sure. Volkswagen, even, I think, was one.
On an additional note, IBM itself may be drawing an ethical line. They have signaled that they will not create intelligent drones for warfare. In a blog post they had recently, IBM said they're committed to the ethical and responsible advancement of AI technology, and that AI, or machine consciousness, should be used to augment human decision making rather than
replace it. So that's similar to the argument that automated vehicle systems for consumers should be doing a lot of assisted driving, but there should always be a human behind the wheel.
That's probably a good call, International Business Machines.
That's true. Oh, that's true. And of course, shout out to everybody who listened to our other episodes on the evils of machine consciousness. And personally, I prefer that term to AI, because, what, why is it artificial?
Right?
Why are we so special?
Yeah. And at this point, I'm just going to urge everybody to go through and read that open letter that was written by the International Committee for Robot Arms Control. There are some pretty terrifying things that they go into there, specifically about the slippery slope that Project Maven represents. So I would go and read that. You can find it by searching "Robot Arms Control open letter."
When you first showed this to me, Matt, I was cartoonishly tickled, because I originally read the title as the International Committee for Robot Arms.
Yeah.
Yeah, and I didn't read the "Control" part at the end. Man, it was a long day. So what do they say, though? What's one thing that really stood out to you in this letter?
Well, here's the quote: "We are just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control." Right? That alone. Just, you know, if you arm these weapons, these flying autonomous weapons, with the ability to make those decisions: yeah, that's a target; that's not a target; that is definitely a target, kill.
What happens if they get hacked as well.
Or if they're just operating and someone loses control somehow, just, however.
Or if they're operating independently somewhere and they have some sort of cognitive leap, and they say, you know, the best way to achieve the objective is to get rid of everyone. That would almost never happen. We're talking the odds of winning the lottery eight times in a row.
We're talking about, like, a Terminator scenario, right?
Some Skynet stuff. For now, that is still relegated to the realm of dystopian science fiction.
But they do make another really great point I just want to mention here. They are deeply concerned about the possible integration of Google's data, the data that they have on everyone, on you, probably, a massive data set, integrating that with this military surveillance data, combining it, and applying it to this targeted-killing
notion, so you can cross-reference.
Yeah, and this is, you know... these aren't just some Joe Schmoes getting together and saying, hey, we need to write this letter, you guys. These are some serious engineers and academics and people who are working in these fields, raising a flag and saying, no, no, no, no, no, we've got to watch this.
We know where this could go. Yeah. And now we draw this episode to a close, and we don't have a solid answer or prediction. We're right here with you, folks. Unless you are working directly for Project Maven, right, we don't know. No one knows where exactly we are going, to paraphrase Gene Wilder in Willy Wonka.
Lest we forget to mention that Google is also, obviously, doing some super innovative, fascinating things that are quite good for humanity, potentially. They're using machine learning to develop testing software that can actually find different complications related to diabetes, and also early signs of breast cancer. And the FDA, according to this article from Wired, is already in early stages of approving AI software that can help doctors make
very important, life-or-death medical decisions. And this is from an article called "Google's new AI head is so smart he doesn't need AI," about this guy Jeff Dean, who is a big part of a lot of Google's AI innovations.
That's a great point, because we cannot forget how large and varied Google and Alphabet's organizations are. There's Google dot org as well, where you can learn about how they're using data to uncover racial injustice, or building open-source platforms to translate books for disadvantaged kids. That's
absolutely right. I really appreciate you bringing it up, because they're not just this solely evil thing, right? And sometimes the aims of these large departments within Google may even contradict one another.
And you gotta think, too, like, an argument that a big high muckety-muck at Google might make about getting involved in something like this is like: well, if it's not us, you know, and we're obviously going to handle it with thoughtfulness and care, if it's not us, it's going to be somebody else who might do a less good job, you know, and not think about the ramifications. So, you know, even that statement that we read earlier, it was pretty measured.
I thought it was actually not bad.
Yeah, just kill people correctly, right.
Well, okay, you got me there, Ben. I guess what I mean is, though it did sound like pure PR, it was like: we get it.
It's a thing.
We understand that there are always consequences associated with this kind of stuff, but we're aware of them, and we feel that we're equipped to help mitigate some of that. But it's the badgers-out-of-the-bag scenario, right? It's not up to you anymore at that point.
And if you would like to learn more about Project Maven specifically, please check out TechStuff. They have a podcast episode that just came out on this program. It's hosted by our longtime friend, sometime nemesis, and complaint department, Jonathan Strickland, available twenty-four-seven for any issues or criticisms you have of Stuff They Don't Want You to Know. You can reach him: he is jonathan dot strickland at howstuffworks dot com.
He just launched a live chat kind of situation, too, so he can suss out your issues in real time.
Yeah, yeah, yeah, like a drone. Exactly like a drone. It'll especially help you out on Twitch, if he's ever on there.
Now, let's say Google does somehow create a policy banning any and all military partnerships. Hey, let's go a little bit further and say that all US tech companies follow their lead and do the same thing, because there are industry-wide calls for everyone not to play this particular oboe. I mean, let's even go further and say that all tech companies in the world refuse to help the US build self-aware weapons of war. You know who will
continue this research? All caps: every single country that can afford it, every single private company that does not have the same moral quandaries about hypothetical scenarios, every single individual that does not have those kinds of quandaries. It is, again, going to happen. And it
gets worse: the AI race.
Yeah, it is. That's a good way to put it. And to paraphrase Billy Mays: but wait, it gets worse. We'd like to end today's episode on an ethical quandary, a question that you could feel free to suss out yourself. Have a couple of beers over it, if you drink. Meditate, if you do that. Whatever you do to get yourself in a thoughtful headspace, and let us know: what are the implications here? What sort of machine consciousness are we creating?
Are we making something that will be inherently belligerent? And if so, if the world's first machine consciousness is built to kill, how will this influence its later, increasingly self-directed actions, its thoughts, its realizations, its emotions, or whatever those equivalents are? Imagine, if you would,
that there were two different versions of machine consciousness. One, maybe, is built to optimize agricultural projects in a transforming ecosystem, and the other is built to find people, or buildings, and annihilate them. What happens next? Where do they go? Do these things, like the early drones, increasingly
speciate and become more attenuated? There's the argument that we have with biological entities, with human minds, and that is that, yes, people change, but they don't change in the way you think. Over time, we tend to become more concentrated versions of who we were in the beginning. Now, one could say that this is not something to be too disturbed by, because a human engineer could drop in and maybe incept this machine mind with a different line of code
that would alter its behavior, and maybe now it says, I'm tired of killing, right? Or, I have new programming. But we have to consider that if we are, as a species, creating minds that may one day have as much of their own agency as a human being, maybe more, eventually, maybe in our lifetimes, they will have more agency and potential, because they won't have the same biological, hardwired limits that human beings have. What kind of minds are we building? And why?
You can find us on Twitter and Facebook, where we're Conspiracy Stuff, and we're Conspiracy Stuff Show on Instagram, that's where that one is. You can find our podcast and everything else at stuff they don't want you to know dot com. Tell us what you think: like, what kind of minds are we creating? Like Ben was speaking to. Heady stuff, heady mind. Yes, this is terrifying, and I need to use the restroom now, because the pee has been scared right out of me.
We better, we better take care of that right now.
Oh, we need... As we are not AI, we still have to, you know, take care of our bodily needs.
But in the meantime, if you.
Would like it, and that's the end of this classic episode. If you have any thoughts or questions about this episode, you can get into contact with us in a number of different ways. One of the best is to give us a call. Our number is one eight three three st d WYTK. If you don't want to do that, you can send us a good old fashioned email.
We are conspiracy at iHeartRadio dot com.
Stuff they Don't want you to Know is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.