Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? So, originally I had a different plan for today. I was getting work done on an episode that I will still bring to you. It'll just probably be a little while longer before I do it, because as I was working early on this episode, I developed an ocular migraine,
which has still affected me right now. For those who are not familiar with ocular migraines, you end up getting these weird effects on your vision. I don't have any pain, at least not yet, but I can't see very well. So it took me a while to even open up the recording software for me to be able to do this bit. But yeah, I am not able to do my work right now because I can't really see. But
these have happened before. They don't last that long usually. This one's a longer one, but they clear up pretty much on their own, and most of the time I don't get a full-blown migraine along with them, so here's hoping that's the case today. But I didn't want to leave you without an episode, so I thought I would bring you this one that we published a few years back
in September of twenty twenty. It is called Tech Stuff Looks Inside the Black Box, because I think this topic in particular is incredibly relevant today, particularly when we are talking about artificial intelligence. A lot of AI systems work inside what we would call a black box, and so I thought, why not bring that one back and have that one rerun today while I get my vision under control. I hope you're all well, and I'll chat with you again at the end of the episode. So this is Tech
Stuff Looks Inside the Black Box from September second, twenty twenty. Today, I thought it was a good time to take another opportunity to chat about one of the subjects I really hammer home in this series. And I don't make any apologies for this. It's about critical thinking. So yeah, this
is another critical Thinking in Technology episode. Now, in these episodes, I explain how taking time and really thinking through things is important so that we make the most informed decisions we can and so that we aren't either fooling ourselves or you know, allowing someone else to fool us when it comes to technology. Though, I'll tell you a secret.
Using these skills in all parts of your life is a great idea because it can be really easy for us to fall into patterns where we let ourselves believe things just because it's convenient, or you know, it reaffirms our biases, our prejudices, that kind of thing. So if you use critical thinking beyond the realm of technology, I ain't gonna be mad. Specifically, I wanted to talk about a general category of issues in tech that some refer
to as the black box problem. Now, this is not the same thing as the black box that's onboard your typical airplane. In fact, I'll explain what that is first because it's pretty simple, and then we can move on. First, the black box inside airplanes is typically orange. So right off the bat, we have a problem with nomenclature, right? I mean, you had one job, black box. Actually, that's not true. The black box has a very important job
and it requires a couple of things to work. But the black box, which is orange, is all about maintaining a record of an aircraft's activities in a housing capable of withstanding tremendous punishment. Another name, or a more appropriate name, really for this device for the black box is the flight data recorder. Sensors in various parts of the plane detect changes and then send data to the flight data recorder,
which you know, records them. If a pilot makes any adjustments to any controls, whether it's the flight stick or a knob, or a button or a switch or whatever, the control not only does whatever it was intended to do, assuming everything's in working order, but it also sends a signal that is recorded on the flight recorder. So the job of the flight recorder is to create as accurate a representation of what went on with that aircraft as
is possible. The Federal Aviation Administration or FAA in the United States, has a long list of parameters that the flight recorder is supposed to keep track of, more than eighty in fact, and these include not just the aircraft's systems, but what was going on in the environment. So if a pilot encounters a problem on a flight, or in a worst case scenario, in the event of a crash, the flight recorder represents an opportunity to find out what
actually went wrong. You know, was it a malfunction, was it pilot error, was it weather? The crash survivable memory units inside the heavy duty casing of the black box are really meant to act as a lasting record, So if the recorder is recoverable, it gives investigators a chance to find out what happened. But, as I said, that's not the black box I really wanted to talk about for today's episode, so I'm not going to go into
any more detail about that. Now, you could argue that the black box I want to talk about is sort of the opposite of what we find in airplanes, because in an airplane, the black box contains a record of everything that has gone on, and it can help explain why a certain outcome has happened. In technology in general, we use the term black box to refer to a system or technology where we know what goes into it and we can see what comes out of it, but we have no idea of what went on in the
middle of that process. We don't have a way to understand the process by which the device takes input and produces output. Now, in most cases, we're not talking about an instance where literally nobody understands what's going on with a device or a system. It's more like the creators of whatever system we're talking about have purposefully made it difficult or impossible for the average person to understand or in some cases even see, what a technology is doing.
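To put that definition in the simplest possible terms, here is a toy sketch of the idea, with everything in it invented purely for illustration: from the outside, all we can do is feed the box inputs and record the outputs, while the rule in the middle stays hidden.

```python
# A toy illustration of a "black box": we can observe inputs and outputs,
# but the process in the middle is treated as hidden from us.
def black_box(reading: float) -> str:
    # Pretend this body is invisible to the user; only the behavior shows.
    return "alert" if reading > 0.7 else "ok"

# An outside observer can only probe it and note input/output pairs.
for value in (0.2, 0.5, 0.9):
    print(value, "->", black_box(value))
```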
Sometimes it's intentional, sometimes it's not. So maybe purposefully is being a little strong there, but that's often how it unfolds. Let's take a general example that has created issues with a particular subsection of tech heads, and those would be the gear heads, you know, the people who love to work on vehicles like motorcycles and cars and trucks and stuff, in what you might think of as the good old days, at least from the perspective of DIY technology.
There's plenty of other things that were wrong back then, but in those terms, a car's systems were pretty darn accessible. A motorist would need to spend time and effort to learn how the car worked and what each component was meant to do. But that was actually an achievable goal. So with a bit of study and some hands on work, you could suss out how an engine works. You know, how spark plugs cause a little explosion by igniting a
mixture of fuel and air inside an engine's cylinders. How that explosion would drive a piston, which connects to a crankshaft, and how that reciprocating motion of the piston would translate into rotational motion of the crankshaft that could then be transmitted ultimately to the wheels through a transmission. You could learn what the carburetor does, how the various fans and belts work and what they do, you know, where the oil pan is and how to change out
oil and all that kind of stuff. What's more, you could make repairs yourself. If you had the tools, the replacement parts, the knowledge, and the time, you could swap out parts. You could customize your vehicle. You know, I've known a lot of people who have taken on cars as projects. They'll purchase an old junker and then they will lovingly restore it to its former glory or
turn it into something truly transformational. And all of that is possible because those old cars had really accessible systems. They were relatively simple electro mechanical systems. Once you understood how they worked, you could see how they worked or how they were supposed to work, and you could understand what was going on. Through that understanding, you could address
stuff when things weren't going well. And that's how cars were for decades, but that began to change in the late nineteen sixties, and it really accelerated, no pun intended, in the nineteen seventies. So what happened? Well, in nineteen sixty eight, leading into nineteen sixty nine, Volkswagen introduced a new standard feature for their Type three vehicle, which was sometimes called the Volkswagen fifteen hundred or sixteen hundred; there
were a couple of names for it. These were family cars, and Volkswagen's intent was to create a vehicle with a bit more luggage and passenger space than their Type one, which was also known as the Volkswagen Beetle. The feature for these cars that I wanted to talk about was an electronic fuel injection system that was controlled by a computer chip, and the marketing for this particular feature said that this quote electronic brain end quote was quote smarter
than a carburetor end quote. Now, the purpose of a carburetor is to mix fuel with air at a ratio that is suitable for combustion inside the engine cylinders. But an engine doesn't need exactly the same ratio of fuel to air from moment to moment. It actually varies as a vehicle runs longer or it travels faster, or it starts climbing a steep hill, or you know, lots of stuff. The ratio changes somewhat, and the carburetor manages this with a couple of valves, one called the choke and another
called the throttle, among other elements. But it's all mechanical, and while it works, it's not as precise as an electronic system could be. And that's where the Volkswagen system came in. Volkswagen was pushing this as a more efficient and useful component than a carburetor. It would prevent the engine from being flooded with fuel and not enough air for combustion. It would handle the transitions of fuel and
air mix ratios more quickly and precisely. It was the start of something big, but were it not for some other external factors, it might not have taken off the way it did, or at least not as quickly as it did. Those other factors, as I said, were external, and there were a pair of doozies. One was a growing concern that burning fossil fuels was having a negative impact on the environment, which turned out to be absolutely the case. Cities like Los Angeles, California, where getting around
pretty much requires having a car, were dealing with some really serious smog problems, and so organizations like the Environmental Protection Agency in the United States began to draft requirements to reduce car emissions that would mean that automotive companies would have to create more efficient engine systems. The other major factor that contributed to this was the oil crisis of the nineteen seventies, which I talked about not
long ago in a different Tech Stuff podcast. This was a geopolitical problem that threw much of the world into a scramble to curb fossil fuel consumption because the supply was limited. The double whammy of environmental concerns and the oil crisis forced a lot of car companies to rethink their previous strategy, which was pretty much more power, make it bigger, make it faster, go go go, guzzle guzzle. That's kind of how they were thinking back in the day. Turns
out that was an unsustainable option. And if you look back, especially at American cars during the fifties and sixties, you see that trend of the engines getting bigger and more powerful, and that was just the way things were going until we started to see these external changes come in, and so more car companies began to incorporate computer controlled fuel injection systems. But this also marked a move away from that accessible design, one that made it harder for the DIY
crowd to work on cars. Working on a damaged carburetor was one thing. Dealing with a malfunctioning computer chip was another. It didn't fall into the typical skill set of your amateur mechanic. And of course we didn't stop at computer controlled fuel injection systems. Over time, we saw a lot more automotive systems make the transition to computer control. Today, your average car has computer systems that control the engine, the transmission,
the doors, the entertainment system, the windows. And these systems are all individual and they have a name. They're called electronic control units, or ECUs. Collectively, they form the controller area network, or CAN. The connections themselves, the physical connections, are called the CAN bus, and that's really just a way of saying these are the physical connectors that allow data to pass from one ECU to another. And there isn't really, like, a central processing unit or anything.
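To make that a little more concrete, here's a minimal, hypothetical sketch of that kind of decentralized message passing between ECUs. The class names, the message format, and the door example are all made up for illustration; a real CAN bus exchanges binary frames with arbitration IDs rather than Python objects.

```python
# A toy sketch of ECUs sharing a bus: each ECU broadcasts messages, and only
# the ECUs that care about a given message react to it. All names are hypothetical.
class Bus:
    def __init__(self):
        self.listeners = []

    def subscribe(self, ecu):
        self.listeners.append(ecu)

    def broadcast(self, sender, topic, value):
        # Deliver the message to every ECU except the one that sent it.
        for ecu in self.listeners:
            if ecu is not sender:
                ecu.receive(topic, value)

class DoorECU:
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe(self)

    def door_opened(self):
        # The door ECU doesn't know or care who listens; it just reports.
        self.bus.broadcast(self, "door_open", True)

    def receive(self, topic, value):
        pass  # ignores messages it doesn't care about

class DashboardECU:
    def __init__(self, bus):
        bus.subscribe(self)

    def receive(self, topic, value):
        if topic == "door_open" and value:
            print("Dashboard: door ajar warning light ON")

bus = Bus()
door = DoorECU(bus)
dash = DashboardECU(bus)
door.door_opened()  # -> Dashboard: door ajar warning light ON
```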
There's no, like, central brain. It's more that ECUs that depend upon one another send relevant information to each other and not to anything else. So, you know, if the door sensor is showing a door as open, it can send an alert to other systems so that that information is appropriately dealt with. Now, at the same time that these individual systems were evolving, so too we saw the rise of what would become the onboard diagnostic
system, or OBD. And the OBD keeps an eye on what's going on with the various systems in the car, and it sends notifications to the driver via dashboard indicators when something is outside normal operating parameters. So let's say that this diagnostic computer picks up that there's something hinky happening with the fuel air mixture, and it activates that pesky check engine light on the dashboard that gives you
next to no useful information. The problem is that these days it can be challenging or sometimes impossible to figure out exactly what caused that check engine light to come on without access to some special equipment and expertise. The car systems have become so sophisticated that it can be a challenge to figure out what exactly has gone awry. Mechanics use devices called OBD scan tools, and these tools connect to the computer on board a car, and then
the car provides an error code to the scanner. This, by the way, took a long time to standardize because you've got a lot of different car companies out there, and obviously there was a need to move towards standardization so that you didn't have to have fifty different scan tools and fifty different code charts to deal with all the different car companies. But the code corresponds to the
specific issue the OBD has detected. So not only do you need a special piece of equipment to diagnose what has gone wrong with the car, you also need to know the codes, or else you haven't really learned anything. If I get an eight digit code and I don't know what that code refers to, then I'm not really any better off than just looking at a check engine light. On top of all of that, even if you know what is wrong, you might not be able to easily access the problem or fix it due to the level
of complexity, sophistication, and computerization of vehicles. Not all cars or motorcycles or whatever are equal. Obviously, some are a bit easier to work on than others. Some require a lot of specific care, though. For example, if you're driving a Tesla, chances are the amount of personal tinkering you're going to do on your car is going to be fairly limited. Now, I'm not saying it's impossible, just that
it's really challenging. So, in general, we've seen cars go from a mechanical system or electro mechanical system that the average person can understand and work on, to a group of interconnected specialized computer systems that are increasingly difficult to access. The cars have become a type of black box. This can be extra frustrating for gear heads who actually have an understanding of the underlying mechanical issues that could cause problems.
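To tie this back to the check engine light example from a moment ago, here's a tiny sketch of the sort of lookup a scan tool, or the mechanic reading it, effectively performs once a trouble code has been pulled. The codes in the table are standard OBD-II codes, but the "scan" here is simulated rather than read from a real vehicle.

```python
# A tiny sketch of turning OBD-II diagnostic trouble codes into something a
# human can act on. The codes below are standard OBD-II codes; the reading
# itself is faked here rather than pulled from an actual car.
DTC_DESCRIPTIONS = {
    "P0171": "System too lean (bank 1)",
    "P0300": "Random/multiple cylinder misfire detected",
    "P0420": "Catalyst system efficiency below threshold (bank 1)",
}

def describe_codes(codes):
    for code in codes:
        meaning = DTC_DESCRIPTIONS.get(code, "Unknown code: consult a reference chart")
        print(f"{code}: {meaning}")

# Pretend the scan tool handed us these codes.
describe_codes(["P0171", "P0420", "P1234"])
```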
They might even know how to solve an issue if they can just get to it, but they're finding themselves with fewer options to address those underlying issues. Now, cars are just one example of technologies that have moved toward a black box like system. There are lots of others. But apart from making it harder to tinker with your tech, what's the problem? When we come back, I'll talk about some of the pitfalls of turning tech into a black box. But first, let's take a quick break. We're back.
So the car transformed from a purely electro mechanical technology to one that increasingly relies on computer systems. But the computer itself can also be something of a black box for people. Now, in the very early days of the personal computer, it was hobbyists who were ordering kits through the mail and then building computers at home. Typically, these hobbyists had a working understanding of how the computer systems operated, you know, the actual way in which they would accept
inputs and process information and then produce outputs. Before high level programming languages, programmers also had to kind of think like a computer in order to program them to carry out functions. As computer languages became more high level, meaning there was a layer of abstraction between the programmer and the actual processes that were going on at the hardware level of the computer, that connection began to get
more tenuous. Now, I'm not saying that programmers today don't have a real understanding of how computers work, but rather that this understanding is less critical, because programming languages, computer engines, app developer kits, you know, software developer kits and so on, provide a framework that reduces the amount of low level work programmers need to do in order to build stuff
as software. For the average user, you know, someone who isn't learned in the ways of computer science, computers are pretty much black boxes. They work until they don't. You push buttons on a keyboard, or you click on a mouse, or you touch a screen, and, you know, the computer does the stuff. How it does the stuff, like how it detects a screen touch and then translates that into a command that is then executed to produce a specific result,
you know, that's not important to us. We don't care or need to know how that works in order to enjoy the benefits of it. So for us, it's just the way things are. You push that button and this thing happens. It just does. The black box of the computer system, which can be a desktop, a laptop, a tablet, a smartphone, a video game console, you know, whatever, just takes care of what we need it to do. That's not to
say a computer is an impenetrable black box. You can learn how they work and how programming languages work and so on. Computer science and programming classes are all built around that. So while a computer system is effectively a black box to the average user, it wasn't made that way by design, and it can be addressed on a case by case basis, depending on the time, the interest of the individual computer user, and their dedication to learning.
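As a small aside on those layers of abstraction: here's the same little task written twice, once at a high level where a single built-in call hides all the work, and once with the steps spelled out by hand, closer to how early programmers had to think. It's a toy comparison, not a claim about how any particular language actually implements sorting.

```python
# High level: one built-in call hides every comparison and swap.
data = [5, 2, 9, 1]
print(sorted(data))

# Lower level: the same result with the steps spelled out by hand
# (a simple selection sort), closer to thinking like the machine.
items = [5, 2, 9, 1]
for i in range(len(items)):
    smallest = i
    for j in range(i + 1, len(items)):
        if items[j] < items[smallest]:
            smallest = j
    items[i], items[smallest] = items[smallest], items[i]
print(items)
```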
But sometimes people will set out to make technologies with the intent of them being black boxes from the get go. These technologies are dependent, in part or in whole, on obfuscating how they work, in other words, on obscuring it.
Sometimes that's in an effort to protect an invention from copycats, the whole idea being that if you come up with something really clever, you don't want someone else to come along lift that idea and do the same thing you are doing, but selling it for less money or something. But other times you might be hiding how something works specifically with the intent to deceive. And now it's time to look much further back than the nineteen sixties. It
was seventeen seventy in Europe. As the story goes, the European world had seen a great deal of advancement in mechanical clockwork devices at that point. Clocks themselves, often powered by winding a spring and keeping time using gears on a reliable and consistent basis that was much better than earlier methods, even allowed people the ability to carry a timekeeping device with them. Phenomenal. Based on similar principles, various tinkerers had come up with toys and distractions that
also ran on clockwork, like gears and springs. Some of these were quite elaborate, such as figures that appeared to play musical instruments, and one of them was particularly impressive. It appeared to be an automaton that could play expert level chess. The figure, made out of wood, was dressed in Turkish costume, leading to it being called the Turk,
or sometimes the Mechanical Turk. If you were to sit down to play against the Turk, you, as an opponent, would move a piece, and then you would watch as this mechanical figure would shift and move a piece of its own in response. And the Turk was a pretty good chess player. It frequently beat the opponents it faced. Sometimes it would lose to particularly strong players, but it held its own pretty darn well. The man behind this invention was Wolfgang von Kempelen, who was in the service
of Maria Theresa, Empress of the Holy Roman Empire. He had been invited to view a magician's performance in the court, so the story goes. The Empress had invited him specifically and afterwards asked him what he thought, and allegedly he boasted he could create a much more compelling illusion than anything this magician did. Now, according to the story, the Empress essentially said, oh yeah, well, prove it, buster, and he was given six months to do just that.
The Turk was what he had to show for it in six months' time, and it reportedly went over like gangbusters. The wooden Turk stood behind a cabinet, on top of which was the chessboard, and Kempelen would reportedly open the cabinet doors and reveal some gears and mechanics to prove
that it was purely a mechanical system. In fact, the gears were masking a hidden compartment behind them, in which a human chess player sat hunched over, keeping track of the game using a smaller chessboard in front of him and using various levers to move the Turk's limbs in response. Now, a lot of folks suspected that something was up from the get go, but you know, part of the fun of a magic trick is just not knowing what's going on. Some folks try very hard
to figure out the process. I am not one of them. Others are just happy to be entertained by a very well performed trick. But in a way, the Turk was a kind of black box. In fact, you could argue that a lot of magic tricks pretty much fall into the black box category. The process is purposefully hidden from the viewer. If we could see what the magician was doing from beginning to end, all the way through and
without any misdirection, then it wouldn't be magic. We might admire the skill of the magician, how quickly they were able to do things, but we wouldn't really consider it magical. So the output is dependent upon people not knowing the process the inputs went through. Now, that's not to say that you can't appreciate a really good magic trick even if you know how it's done. One of the best
examples I know of is Penn and Teller. They did a phenomenal version of the cups and balls routine where they used clear plastic cups and balls of aluminum foil to demonstrate how cups and balls works, and you can watch the entire time, and even though you're able to see through the cups and see the moves as they're being made, Teller does them with such skill that it is truly phenomenal. It doesn't hurt that Penn is spouting off a lot of nonsense at the same time and misdirecting even as
you're watching what's going on. I highly recommend you check it out on YouTube. Look for Penn and Teller cups and balls. You won't be disappointed. Now, the Turk, as far as I can tell, was always intended to be an entertainment, not necessarily something that was specifically meant to perpetuate some sort of hoax. You wouldn't call a stage magician a huckster or a con man or anything like that.
Their occupation is dependent upon misdirection and making impossible acts seem like they really happened, but always or nearly always with the implication that it's all an illusion or a trick of some sort. But not everyone is quite so forthcoming about the fact that the thing they are doing is done through trickery. For the scam artist, the black
box creates an incredible opportunity. As technological complexity outpaces the average person's understanding, the scam artists can create fake gadgets and devices that they claim can do certain things and then count upon the ignorance of the average person to
get away with it. Typically, the go to scam is to convince people with money to pour investments into the hoax technology in an effort to fund whatever the next phase of development is supposed to be, whether that's to bring a prototype into a production model or to refine a design or whatever. But the end result is pretty
much the same across the board. The con artist tries to wheedle out as much money from their marks as they can before they pull up stakes and skip town, or they find some way to shift focus or punt any promises on delivering results further into the future, like
that's a future me problem kind of approach. Once in a blue moon, you might find someone who is just hoping to buy enough time to come up with a way to do their hoax for realsies, or at least to simulate it close enough so that people are satisfied. That typically doesn't work out so well. Theranos, but I'll get
back to that. So let's talk about some examples of outright scams that leaned heavily on the black box concept, whether by having their supposed and actual operating mechanisms hidden or by obscuring how they really worked with a lot of nonsensical claims and technobabble. One historical scam artist was a guy named Charles Redheffer who claimed to have built
a perpetual motion machine. If he had managed to do such a thing, it would have been a true feat, as it would break the laws of physics as we understand them. So let's go over why that is just pretty quickly. For perpetual motion to work, and thus for free energy in general to work, a machine would need to be able to operate with absolutely no energy loss, and for free energy, it would have to generate that
energy in some way. A perpetual motion machine, once set into motion, would never stop moving unless someone or something specifically intervened. But if it were left to its own devices, it would continue to do whatever it was doing until the last syllable of recorded time, to borrow a phrase
from the Bard. Now, if we look at our understanding of thermodynamics, we'll see that doing this in the real world is impossible, or at least it would go against our fundamental understanding of how the universe works. The first law of thermodynamics says that energy is neither created nor destroyed. Energy can, however, be converted from one
form into another. So if you hold a water balloon over the head of a close personal friend, let's say it's Ben Bowlin of Stuff They Don't Want You to Know, the water balloon has a certain amount of potential energy. If you let go of the balloon, that potential energy converts into kinetic energy, the energy of movement. You didn't
create or destroy energy here, it just changed forms. So if you have what you claim to be a perpetual motion machine and you set it in motion, the energy you gave that machine at that initial point should sustain it forever, and it would never have that initial energy change form into some other type of energy that could then escape the system and show a net energy loss for the system itself. Remember, the energy is not being destroyed,
but it can be lost in another form. This means that such a machine could not have any parts that had any contact with one another, which would make it a really strange machine. And that's because friction would be a constant means for energy to convert from one form to another form, in this case, kinetic energy the energy of movement into heat. Friction is the resistance surfaces have
to moving against each other. So if the machine has any moving parts at all, those parts will be encountering friction, which means some of that moving energy will be converted to heat and thus escape the system. So the overall system of the machine itself will have a net loss of energy. There will be less energy to keep it going, which means gradually it will slow down and ultimately just stop as a result. It might take a long time if the machine is particularly well designed,
but it will eventually happen. You would need some form of energy input to keep things going on occasion, kind of like a little push. Imagine that you've got a swing like a rope with a tire at the end of it. No one's in it right now. You would have to give that tire a little push every now and then to keep it swinging, otherwise it will eventually stop. But that means you wouldn't have a perpetual motion machine. There are other factors that similarly make perpetual motion impossible.
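Before moving on to those other factors, here's a rough back-of-the-envelope sketch of that winding-down idea: a machine that loses even a small fraction of its remaining energy to friction on every cycle always runs out eventually. The starting energy and loss rate here are arbitrary numbers chosen purely for illustration.

```python
# A toy model of why friction rules out perpetual motion: start with some
# kinetic energy and remove a small fraction (lost as heat) on every cycle.
energy = 1000.0        # joules given to the machine at the start (illustrative)
loss_per_cycle = 0.01  # 1% of remaining energy lost to friction each cycle

cycles = 0
while energy > 1.0:    # treat ~1 joule as "effectively stopped"
    energy *= (1.0 - loss_per_cycle)
    cycles += 1

print(f"Energy is essentially gone after {cycles} cycles.")
# No matter how small the loss per cycle, the machine always winds down;
# only an outside push (new energy input) could keep it going.
```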
If the machine makes any sort of sound, then some of the energy of operation is going into creating the vibrations that make sound. Sound itself is energy. It's kinetic energy, so that would mean the machine as a whole would be losing energy through that sound. A machine operating inside an atmosphere has to overcome the friction of
moving through air, and the list goes on. Moreover, if we could build a perpetual motion machine, we'd be able to harness it for energy, but only up to whatever the starting initial energy was to get it moving in the first place. Because again, energy cannot be created. We can build devices that can harness other forms of energy and convert that energy into say electricity, but these are
not perpetual motion or free energy machines. These machines are just collecting and converting energy that's already in the system or already present, so they're not making anything. Redheffer, however, claimed to have built a perpetual motion machine that could potentially serve as a free energy generator. Now, if true, this would have been an astonishing discovery. Not only would our understanding of the universe be proven to be wrong, but we would also have access to an inexhaustible supply
of energy. Redheffer showed off what he said was a working model of his design in Philadelphia, and he was asking for money to fund the construction of a larger, practical version of his design. A group of inspectors from the city came out to check out how this thing worked,
and they noticed something hinky was going on. Even though Redheffer was doing his best to run interference and prevent anyone from getting too close a look at the machine, the gears of the device, which was supposedly powering a second machine, were worn down in such a way that it was pretty clear that it was actually the second machine that was providing the energy to turn the quote unquote perpetual motion machine, not the other way around.
So if we were talking about cars, this would be like discovering that the wheels turning were causing the pistons of the engine to reciprocate in their cylinders. It's going the opposite way. So the investigators then hired a local engineer named Isaiah Lukens to build a similar device, using a secondary machine to provide power to what would be the perpetual motion type machine, and then they showed it to Redheffer, who saw that the jig was up, and he hoofed it out of town to New York City.
He tried to pull essentially the same scam there, this time using a machine that was secretly powered by a hand crank in a secret room on the other side of the wall. Technically, it was just a feller sitting there with a hand crank in one hand and a sandwich in the other, providing the work to turn this machine. Robert Fulton, an engineer of great renown, exposed the whole device as a fraud when he pulled apart some boards on the wall and revealed the man sitting there cranking
away, and Redheffer fled again. Records of what happened next are sketchy. It seems he might have tried to pull the same dang scheme in Philadelphia again a bit later, but he disappeared from the historical record after reportedly refusing to demonstrate his new device. When we come back, I'll compare this to what I mentioned before, Theranos, before we chat about other concerns regarding the black box problem.
But first, let's take another quick break. Okay, so Theranos. This is the biomedical technology company that was founded by Elizabeth Holmes, and she is currently awaiting trial on charges of federal fraud in the United States. The trial was supposed to begin in August twenty twenty, but has since been delayed until twenty twenty one due to COVID nineteen. Now,
the pitch for Theranos was really, really alluring. What if engineers could make a machine capable of testing a single droplet of blood for more than one hundred possible illnesses and conditions? So rather than going through multiple blood draws and tests to try and figure out what's wrong, you could get an answer based off one little pinprick within
a couple of hours. Maybe you would even be able to buy a Theranos machine for your home, kind of like a desktop printer, and that would allow you to do a quick blood test at a moment's notice. Maybe you would get a heads up about something you should talk to your doctor about, preventing tragedy in the process. You might learn that with some changes in your lifestyle, you could improve your overall health or stave off various illnesses.
It would democratize medicine, giving the average person more control and knowledge about their own health and giving them a better starting point for conversations with their doctors. And yeah,
that's a great goal. It's a fantastic sales pitch, and it did get Holmes and Theranos a lot of interested investors who really wanted to tap into this, because not only is it something that you would want for yourself, you could easily see that if this is possible, that business is going to be like the next Apple; it'll become a trillion dollar company. Something that powerful would undoubtedly
become a powerhouse. Now, I've done full episodes about Theranos and how it fell apart, because, spoiler alert, that's exactly what happened. The technology just didn't work. But I think a lot of what happened with Theranos was largely dependent
upon naivete, ignorance, and wishful thinking. Our technology can do some pretty astounding stuff, right? I mean, if you had told me in two thousand that by the end of the decade I would be carrying around a device capable of really harnessing the power of the Internet in my pocket and I would have access to it all the time, I would have thought you were bonkers. So if technology can do incredible things like that, why can't it do
something equally incredible with blood tests? The idea is that, well, we're already seeing this amazing stuff happen. Why isn't this other amazing thing possible? And that is dangerous thinking. It equates all technological advances and developments, and that's just not how reality works. Moore's law, the observation that generally speaking, computational power doubles every two years, has really helped fuel
a misunderstanding about technology in general. We extend that same crazy growth to all sorts of fields of technology when it doesn't actually apply, and it gives us the motivation to fool ourselves into thinking that the impossible is actually possible. That, I think, is what happened with Theranos. Now, I'm
not saying Holmes set out to deceive people. I don't know what she really believed was possible, but based on what I've read and seen and listened to, to me, it sounds like she figured there was at least a decent chance her vision would become possible. And so a lot of Theranos's activities, in my personal opinion, appear to have been meant to stall for time while engineers were working on very hard problems to make the blood testing device
work as intended. The further into the process it got, the more the company had to spin its wheels to make it seem like it was making more progress than it actually was. The company had raised an enormous amount of money from the investors, so they were beholden to them. They had also secured agreements with drug store chains to provide services to customers, so they needed to perform a service. It had to show progress, even if behind the scenes things
had actually stalled out. On top of that, you also have the reports of executives like Holmes herself living the high life and really enjoying incredible benefits of wealth because of the enormous investment into the company, so that plays a part too. Theranos's operations were effectively a black box
to the outside world. It was meant to misdirect and give the implication that things were working fine behind the scenes, while the people who were actually there were trying to keep up the illusion while simultaneously attempting to solve what appeared to be impossible problems. At some point, based on how things unfolded, I would say that executives at Theranos appeared to be perpetrating a scam, not just, you know, trying to maintain an illusion while getting things to work.
They were actively scamming people, in my opinion. Maybe they were still holding out hope that it would ultimately work out, but that doesn't change that it was a classic case of smoke and mirrors to hide what was really happening, such as using existing blood testing technology from other companies in order to run tests while claiming that the results were coming from actual Theranos devices. But again, this is all my own opinion based on what I've seen and
read about the subject. A court will have to determine whether or not Holmes and others actually committed fraud. A lot of the technology we rely upon in our day to day lives is complicated stuff, and there are limited hours in the day, and it's a bit much to ask anyone to become an expert on all things tech to figure out exactly how they work. Tech is also becoming more and more specialized, so you might become an expert in one area of technology and be completely ignorant of another.
That's not unusual because it takes a lot of time to become an expert at specific areas of tech. These days, they've become so specialized. But by overlooking the how, we can make ourselves vulnerable to bad actors out there when it comes to technology. Maybe they are actively trying to pull the wool over our eyes, or maybe they're just
simply misguided and they misunderstand how stuff works. But either way, our own ignorance of how tech does what it does, and the limitations that we all face based on the fundamental laws of the universe as we understand them, that all makes us potential marks or targets. That's where critical thinking comes in and plays a part. Knowing to ask questions and to critically examine the answers, and to ask follow up questions, and to not accept claims at face
value are all important traits. Now, we do have to be careful not to go so far as to embrace denialism. If we are confronted with compelling evidence that supports a claim, we need to be ready to accept that claim. I'm not advocating for you guys to just go out there and say that any and every claim is just bogus. That's not the point. I'll close this out by talking about something we're seeing unfold in real time around us,
and that involves machine learning and AI systems. Now, if you follow the circles that report on this kind of stuff, you will occasionally see calls for transparency. Those calls are to urge people who are designing these machine learning systems and AI systems to show their work as it were, and to have the systems themselves show their work. It's not enough to create a system that can perform a
task like image recognition and then give us results. We need to know how the system came to the conclusions that it produced. We need this in order to check for stuff like biases, which is a serious issue in artificial intelligence. Honestly, it's a really big problem for tech in general, but we're really seeing it play out rather spectacularly in AI. Now, I'll give you an example that I've already alluded to,
facial recognition technology. The US National Institute of Standards and Technology conducted an investigation in twenty nineteen into facial recognition technologies, and it found that algorithms were pretty darn good at identifying Caucasian faces, but if they were analyzing a black or an Asian face, they were far less accurate, sometimes one hundred times more likely to falsely identify somebody based
on an image. The worst error rates involved identifying Native Americans, so let's let that sink in, because when we talk about issues like systemic racism, we sometimes forget about how that can manifest in ways that aren't as intuitive or obvious as the really overt stuff. We live in a world that has cameras all over the place. Surveillance is
a real thing that's going on all the time. Police and other law enforcement agencies rely heavily on facial recognition algorithms to identify suspects and to search for people of interest. And if those algorithms have a low rate of reliability for different ethnicities, a disproportionate number of people who have no connection to any investigation are going to be singled out by mistake by these algorithms. Lives can be disrupted, careers can be ruined, relationships hurt all because a computer
program can't tell the difference between two different faces. That is a serious problem, and it points to a couple of things. One of the big ones is a lack of diversity on the design side of things. We've seen this with tech for a long time. There is a really critical diversity issue going on with technology. The people who are building algorithms and training machine learning systems are largely failing to do so in a way that can
be equally applicable across different ethnicities. Meanwhile, organizations like the American Civil Liberties Union are calling upon law enforcement agencies to stop relying on technology like this entirely, pointing out that the potential for harm to befall innocent people outweighs
the benefits of using the tech to catch criminals. A machine learning system trained to do something like identify people based on their faces needs to be transparent so that when a bias becomes evident, engineers can go back to the machine learning system and look and see where it went wrong, and then train it to eliminate the bias. Without transparency, it can be hard or impossible to figure out exactly where things are going wrong within the system. Meanwhile,
real people in the real world are suffering the consequences. Now, if we extend this outward and we look into a future where artificial intelligence is undoubtedly going to play a critical part in our day to day experiences, we see how we need to avoid these black box situations. We need to understand why a system will generate a particular output given specific inputs. We've got to be able to check the systems to be certain they are coming to
the right conclusions. Artificial intelligence has enormous potential to augment how we go about everything from running errands to performing our jobs, but we need to be certain that the guidance we receive is dependable, that it's the right course of action. And so I hope this episode has really driven home how important it is for us to hold technology up to
a critical view. It's not that technology is inherently good or bad, or that people are specifically acting in an ethical or unethical way, but rather that without using critical thinking, we can't be certain if what we're relying upon is actually reliable or not. I also urge, as always that
we pair compassion with critical thinking. I think there's a tendency for us to kind of assign blame and intent when things go wrong, and sometimes that is appropriate, but I would argue that we shouldn't jump to that conclusion right off the bat. Sometimes people just make bad choices, or they are misinterpreting things, but they don't have any
intent to mislead. So while I do advocate that we use critical thinking as much as possible, let's be decent, nice human beings whenever we do that. If it turns out someone is truly being unethical and trying to deceive others, that's obviously a different story. But before you know for sure, I say we employ that compassion, and hopefully we are able to solve these problems before they have these real world impacts, because the consequences of those are dramatic and
terrible and avoidable if we use critical thinking. I hope you enjoyed that rerun episode from just a few years ago. Boy, it seems like a totally different era, because that was obviously several months into the COVID nineteen pandemic, when lots of us were still on lockdown and stuff. Very different time from today. But yeah, as I said, black boxes, it's still very much an ongoing topic in tech, particularly in the field of artificial intelligence, but not just that.
We see it in the right to repair movement as well, as advocates argue that companies obfuscate the workings of their products so that it makes it impossible for you to do any kind of maintenance or repair on them yourself, or with an independent operation that can do it for maybe less than what it would cost you to take it to a quote unquote official repair site.
So yeah, the black box is still very much a thing, still very much an ongoing concern in the field of technology, and I'm sure we'll end up talking about it again and again as the various artificial intelligence stories continue to unfold. I hope you are all well, and I will talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.