Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Tuesday, August eighth, twenty twenty three. And despite my voice catching there, I feel a lot better than I did yesterday. Thanks for asking.
Let's get to the tech news.
And we've got a lot of AI related stories today. No big surprise there. It has been the topic of twenty twenty three, at least whenever Elon Musk isn't demanding all the headlines. And it's also sadly no surprise that one of the AI stories we're covering today has to do with faulty facial recognition technology and the mistake of relying on that tech for the purposes of law enforcement.
Porcha Woodruff, a Black woman in Detroit, found herself arrested and detained for eleven hours when police acted on an incorrect facial recognition match while seeking a suspect in a carjacking and robbery case. Not only that, Porcha was eight months pregnant, and the surveillance footage from the crime in question showed a woman who very much was not eight
months pregnant. So not only was this a case of facial recognition software giving a false positive, it's also a case of cops apparently working under the assumption that a woman can go from not visibly pregnant to eight months pregnant in a very short amount of time, which is wild. So here's kind of what unfolded. A man reported being
the victim of a robbery and carjacking. The police were able to secure surveillance footage, and they used a tool called DataWorks Plus to run matches against mugshots that were stored in a police database. Woodruff had been arrested back in twenty fifteen, so her mugshot was one of the images in that database. The tool pulled a match between her mugshot and the surveillance footage, and the guy who was robbed also mistakenly picked out a photo of Woodruff, her mugshot, as the same person as the perpetrator. So the police go and they arrest Woodruff, and of course she was not involved in the crime, could not have been, and just the fact that she was, you know, eight months pregnant should have been the immediate giveaway that
this is not the same person. The New York Times subsequently reported that Woodruff's case was the third in the city of Detroit alone that resulted in a wrongful arrest due to an incorrect facial recognition match, and that all three of those cases, as well as three other cases that were not in Detroit, involved Black people. I think it's safe to say that even the faulty facial recognition technology is able to see a pattern emerging here, and
it's one of racial bias in surveillance and identification tools. Now, as we have covered on this podcast, several cities and jurisdictions have banned the use of facial recognition for law enforcement purposes. Personally, I think that is merited. I can't help but imagine what being wrongfully arrested must be like. It's got to be incredibly traumatic and disruptive, and potentially cause lots of issues in your life, and you at no point were at fault for any of it. And if a technology is disproportionately leading to that kind of thing, we should not be using that technology for those purposes. I don't know how there's any argument against that. If the tool is leading to innocent people getting arrested and their lives getting upended in the process, you've got to stop using the tool.
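(A quick aside on why these false positives keep happening: a search against a mugshot database will always hand back somebody as the closest match, whether or not the actual perpetrator is in there at all. Here's a minimal, purely illustrative sketch of that idea in Python, with made-up embeddings and names; this is not DataWorks Plus's actual system.)

```python
import numpy as np

rng = np.random.default_rng(42)

# A fake "mugshot database" of face embeddings. Real systems store vectors derived
# from photos; these are just random numbers for illustration.
database = {f"mugshot_{i:04d}": rng.normal(size=128) for i in range(1000)}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_match(probe_embedding):
    """Return the stored mugshot whose embedding is most similar to the probe."""
    return max(database.items(), key=lambda item: cosine_similarity(probe_embedding, item[1]))

# The actual perpetrator is NOT in this database, but a nearest-match search
# still returns someone as the "top match."
perpetrator = rng.normal(size=128)
name, embedding = closest_match(perpetrator)
print(name, cosine_similarity(perpetrator, embedding))
```

The point being that the output is always "the most similar face we happen to have," and if that gets treated as an identification without strict thresholds and human verification, you end up with exactly the kind of wrongful arrest described above.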
British researchers showed how a deep learning algorithm, once trained, would be able to decode keystrokes just from the sound of typing. This isn't totally new. I've heard of these
kinds of attacks before. But imagine for a moment that you are set up in some public space and someone else happens to have their phone out, and, you know, you don't know this, but what they're actually doing is activating the phone's microphone, and they're picking up on the sound of you tippity-tappity typing away, and you're oblivious to any threats. So like maybe you're being careful with your screen or whatever, but you're not thinking
about the actual keystrokes. And meanwhile, a computer program on the other end of that microphone is effectively transcribing everything you've typed, potentially including your login information. This is an acoustic attack, and I bet that makes all those mechanical keyboard clacky-clacky types out there a little nervous. So is it possible that some hacker out there could get your login credentials just by having a computer listen to you type? Technically, yes, it is possible. It is
not necessarily straightforward or easy to do, but it is possible. So if you're out at a public location, you need to know that is a possibility. But a lot of locations have a lot of other noise, and it's hard to set up a microphone in such a way that you're going to get a very clear recording of that sound.
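(To make that a little more concrete, here's a minimal sketch of the general approach behind this kind of acoustic attack: record individual keystrokes, turn each clip into audio features, and train a classifier to label which key was pressed. This is illustrative only, with placeholder audio and a simple classifier; the British researchers' actual work used a deep learning model trained on real recordings.)

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 44_100  # assumed microphone sample rate

def keystroke_features(audio_clip):
    """Summarize one keystroke's audio clip as average energy per frequency bin."""
    _, _, spec = spectrogram(audio_clip, fs=SAMPLE_RATE, nperseg=256)
    return spec.mean(axis=1)

# Placeholder "recordings": in a real attack these would be labeled clips of actual
# key presses captured by a nearby microphone.
rng = np.random.default_rng(0)
clips = [rng.normal(size=4096) for _ in range(520)]
labels = [chr(ord("a") + (i % 26)) for i in range(520)]  # which key each clip was

features = np.array([keystroke_features(clip) for clip in clips])
model = RandomForestClassifier(n_estimators=100).fit(features, labels)

# Later, each new keystroke the microphone picks up gets classified into a key guess.
new_clip = rng.normal(size=4096)
print(model.predict([keystroke_features(new_clip)]))
```

With enough clean, labeled recordings of a specific keyboard, that same basic idea can get surprisingly accurate, which is what makes the research worth worrying about.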
Maybe if you're at a coffee shop, you should just start making clacky-clacky noises with your mouth at the same time as you type, you know, to throw off any potential baddies who are trying to listen in on you.
I don't know.
It's a wild world out there. Reuters reports that the Walt Disney Company has created a task force to research how artificial intelligence could be used within that company, which of course encompasses lots of different divisions. You've got the entertainment division, you've got theme parks, you've got merchandising, you've got advertising. There are tons of businesses underneath the umbrella of Walt Disney. Well, the company currently has nearly a dozen
job openings that mention artificial intelligence research and development. So this does look like it's a big push, and it could include things like Imagineering, but it also ranges to other stuff, you know, from theme parks to advertising to Disney television. And that last one is pretty notable because AI use is one of the contentious elements at the center of the ongoing strikes in Hollywood, which, if you weren't aware, involve both writers and actors. They're both on strike in Hollywood right now. Well, one of the things they're striking over is how studios should or should not use artificial intelligence moving forward. Reuters cites some unnamed folks connected to Disney, you know, people who didn't want to have this get back to them, but they said that the company really has little choice here, that if it doesn't incorporate AI into its strategy, it runs the risk of becoming obsolete.
Maybe that's true.
But my knee-jerk reaction kind of got triggered, because back in two thousand and four, Disney famously shut down its two-D animation studios, and it was unthinkable for a company that had built its reputation on traditional two-D animation to suddenly turn its back on it. It has subsequently changed course, but for a while, it looked like two-D animation and Disney were just things of the past, and it just seemed like there was this attitude among Disney executive leadership that computer animation was somehow not just different from traditional hand-drawn two-D animation, but innately better than two-D animated films. Like, audiences didn't want to see two-D animation, they just wanted computer animation, and for proof of that you would look
at Pixar. Here's the problem. Pixar was investing heavily in developing great stories to tell, and yes, the computer animation was getting more and more impressive with every single film, but they were really putting story first, whereas the animated side over at Disney had fallen a long way since the early days of the so called Disney Renaissance, which included movies like The Little Mermaid and Beauty and the
Beast and Aladdin. You know, if you look at nineteen ninety five, for example, I would say I don't think Toy Story, which came out in nineteen ninety five, was a better movie than Pocahontas, which also came out in ninety five, just because Toy Story was computer animated and Pocahontas was hand-drawn. I think Toy Story was a
better movie because the script was better. But you know, Pocahontas also had to follow up on the amazing work of Ashman and Menken, and Ashman had passed away in nineteen ninety one, so there are a lot of other mitigating factors there. Anyway, Disney hasn't really commented on how it plans to incorporate AI into its processes. It's planning on doing it, but it hasn't talked about what
that might look like. So you know, you could have AI incorporated into very mundane stuff, right, like automating things like scheduling and finding the most efficient means to do that, which isn't necessarily impacting the creative side of the business that much. Right, if you're using it to handle stuff that is otherwise tedious and takes up a lot of time but is easy enough to automate, that's not necessarily
a bad thing. That can end up making a company more efficient without also displacing employees in the process, or at least freeing those employees up to do more rewarding work instead of something that's really tedious. The concern is whether or not that use of AI could end up replacing very important and creative roles for people, whether it's actors or writers or imagineers or whomever. So yeah, big
concern about AI at Disney. Yesterday, Zoom made a change to its terms of service after receiving some pretty harsh criticism from customers over the weekend. So around Sunday, people who were paying attention to Zoom's terms of service started to post screenshots of those terms, and they included a passage saying the company has the right to collect, store, and use quote unquote service-generated data. Now, that alone seems a bit concerning, right? Let's say your company uses
Zoom for business meetings. You probably don't like the idea of Zoom potentially snooping in on your calls. On top of that, the terms said that Zoom could use that data essentially to train AI, and that really got people upset. The thought that their video calls could be used as
material to train an AI model seemed invasive. Now, a Zoom rep explained that the collection features are part of an opt-in system; users can choose to enable generative AI features such as transcription services, and the company does not use any customer content without first gaining consent from the customer. To that end, Zoom has now updated its terms of use to clearly say it will only collect audio and video data with consent from the user first.
In related news, Zoom leadership has also called on Zoom employees who live within fifty miles of a Zoom office to actually attend work in person at least two days a week.
So that's right.
The company that made the tool touted as one of the most important during lockdown, the one that enabled remote work, is now restricting remote work at the corporate level at Zoom. Which is, you know, how the tables have turned, I guess. Okay, we're going to take a quick break. When we come back, we've got some more tech news. We're back, and we actually have a couple of other AI stories to finish up with before we move on to other tech news,
so we're not done with AI just yet. AP News reports that the folks over at Dungeons and Dragons, you know, Wizards of the Coast, which in turn is part of Hasbro, are now telling their artists not to use artificial intelligence as part of the generative process to create fantasy art, so they're telling the artists, hey, don't use AI when
you're making art for us. This comes after several D and D fans raised questions about an illustration that included a giant that they said looked a little weird, like perhaps it had not been made by a human being, and they asked, hey, was this made by a robot?
So D and D reached out and contacted the artist, talked with them, found out that yes, there was some use of AI generative features to collaborate and make this art, and the company is now clarifying rules on what can and cannot be used to make fantasy art for the games. The particular piece that prompted folks to ask, is this AI generated, is actually going to be appearing soon in an expansion book called Bigby Presents: Glory of the Giants. I think it comes out next week, in fact, so maybe collectors will rush out and grab a copy in case future editions remove the AI-generated giant person. But you might ask, why is D and D, why is Wizards of the Coast, and perhaps by extension Hasbro, saying don't use AI to generate fantasy art? Why is that a big deal? Well, part of it is about copyright questions, like who owns the copyright to a machine-generated
piece of work. Obviously, Wizards of the Coast wants to be able to copyright its stuff and not have some other party claim ownership of something that's featured in a Wizards of the Coast work. But also there's the issue of, you know, copying another person's style. Right? If artist A produces a ton of fantasy art, and then artist B uses a generative tool that happens to be referencing artist A's work, and it's doing so extensively, and then creates a new piece in the style of artist A, well then it's almost like artist B is copying artist A. And you could argue, well, that means artist A could have landed this gig and gotten a paying gig out of it, but instead their work was, you know,
sort of repurposed and reimagined without their consent. This is an ongoing issue with generative AI in the visual arts realm. There's also a very similar effect, the same problem that's going on within the written word. Right. There are authors and poets who are arguing that AI models being trained on published works are effectively copying these folks without their consent. So, if you'll excuse me, I need to roll to see if my AI generated image of an elf will deceive
Wizards of the Coast and... oh, critical fail. Okay, well, I guess the fingers are all noodly and there's like fifteen of them, so I guess that's a dead giveaway.
Okay.
You know, a lot of folks have voiced concerns about AI and the possible dangers that it could bring, and we can now count the Pope as one of those voices. Pope Francis has called for a global reflection today that
is all about how AI could be really dangerous. Pope Francis has said in the past he is largely unfamiliar with modern technology, including stuff like computers and the Internet, but that he also sees that these tools can be incredibly helpful when they are put to appropriate use, which, you know, is a refreshing take from someone who is unfamiliar with technology: that they recognize tools aren't necessarily good or bad in and of themselves. It's really all in how we make use of those tools, and if we commit to using them in ways that aren't harmful, we can see great benefit. But some of these tools, like AI, for example, have the potential to be very dangerous if we are using them improperly or if we don't have a full understanding of the consequences before we use them, so we have to take extra care when we're working with them. It's not that AI is not worthwhile or
could never do anything positive. That's clearly not the case. We just have to be very very methodical in our approach to using AI, and right now you could argue that is not what we're seeing. Apple has reportedly struck a huge deal with chip manufacturer TSMC out of Taiwan that will see Apple purchase essentially all of TSMC's chips made with their next generation manufacturing process, which is called
the three nanometer manufacturing process. Just as a reminder, once upon a time, when we used things like nanometers to describe a chip manufacturing process, that actually referenced the size of individual components found on the chips; you would actually see features on the chips that measured at that scale. But these days it's really just a naming convention. It's really just to designate the generation of the chip manufacturing process; the individual elements on the chips are not three nanometers in size. That would end up being a big disaster because of the way quantum physics works. So yeah, just a reminder that the whole nanometer thing is just a naming convention now. It doesn't actually reference anything other than this is the newest one, and the number has to keep getting smaller. So it does raise questions of, do we go down to the atomic scale once we get past one nanometer? Anyway, according to The Information, Apple has essentially ordered every single TSMC three nanometer chip, at least in the short term, and by short term I mean Apple will have exclusive use of chips made by that manufacturing process from TSMC for about a year, and that definitely gives Apple a leg up on the competition that wants to use TSMC's chips. There are other fabricators out there. TSMC is not the only game in town. It's just
the biggest one. Meanwhile, there is a political battle surrounding TSMC's planned fabrication facility that would be here in the United States, in Arizona. So the Taiwan based company plans this fabrication plant in Arizona, but just last month announced that there was going to be a construction delay that
would last until twenty twenty five. The reason, according to the company, is that there is a lack of skilled workers here in the US who would be needed to prepare and open the facility, not to work there once it is open, but to actually get everything in place. They're saying the talent just isn't here, and instead TSMC wants to bring around five hundred employees from Taiwan to the United States to do that work instead. That has led to US politicians weighing in, and they have
argued that these jobs should go to US workers. Now, there's a lot going on here, and it gets very messy and it gets very political. But from a high level, the US decades ago ceded, as in got rid of, pretty much all major chip fabrication, because it's expensive. It is very expensive to build chip fabrication plants. You have to update them constantly, because, as we were just talking about, you're always evolving the technology to make more powerful chips,
which means you got to retool everything. Sometimes you have to build totally new facilities and that's a huge investment and a lot of US companies got out of that game ages ago, and instead that work went to places
like Taiwan and TSMC in particular. So because America, well, didn't totally get rid of chip fabrication, but largely pushed that out to other places in the world, you might say that TSMC could at least have a partly legit point to make that the US lacks the experts needed to open an advanced fabrication facility, simply because the US hasn't really been focused on that part of
chip manufacturing for a while. Yes, in the US you have a lot of people developing the next chips, designing the next generation of chips, but the actual fabrication is taking place elsewhere. So while the expertise is definitely in design, the argument is it's not in creating the fabrication facilities,
and that's where the problem is. On the flip side, there's a concern that the reason TSMC really wants to bring in Taiwanese workers to the US is not just because they have expertise in the area, but also because they're less likely to resist a push to work really long hours, including working weekends and stuff, whereas US workers have this pesky habit of arguing that they need to be, you know, fairly compensated and have work-life balance.
So in other words, there's a concern that TSMC is really looking to exploit a workforce in an effort to keep costs down and to speed up building out the facilities. As for what happens once the facility opens, TSMC says it's committed to providing around twelve thousand jobs and that US employees will fill those roles, so that, like, the actual jobs of working at the facility will go to US citizens; you know, it won't be Taiwanese workers brought over to do that particular work. So yeah, like
I said, it is political. There is a technical side to it too, but it's messy. And this is why you can't just leave politics out of discussions of technology, because politics affects us and it affects the tech sector a lot. In fact, you know, we can often see it most acutely in the tech sector. Not that it's not impacting other sectors as well, it's just when it hits tech, people take notice because it's high profile stuff. So we can't really avoid it here. I don't know
what the actual story is here. I mean, it may very well be that TSMC could not find the talent it needed in order to prepare the fabrication facility properly here in the US. Maybe that's true, or maybe it's not. I just don't know, but I do know that it is an ongoing issue right now. So we'll have to see how that plays out in the short term. Okay, I've got a few more stories to cover, but before I can get to that, let's take another quick break.
We're back.
Ars Technica has an article about how scientists at the Lawrence Livermore National Lab in California have, for the second time now, produced a fusion reaction that generated more energy than was needed to initiate the reaction. We're gonna put an asterisk on that because we're gonna come back to
it now. I have talked about fusion a lot on the show, but just as a quick reminder, fusion occurs when you take two lightweight atoms, like hydrogen atoms, for example, and then you blast those atoms with enough pressure and/or energy, like heat, in order to fuse them into a new atom, helium in this case. This is what happens in the sun. You know, hydrogen is forged into helium at a temperature of millions of degrees, as "Why Does the Sun Shine?" would tell us. Anyway, in this process, you also end up with a release of energy, right? That's why the sun actually does shine. It's releasing energy,
it's not just doing this process. So if you end up with more energy than you used to start the reaction, you've got a potentially viable alternative to other kinds of power facilities, you know, like coal power plants, or even things like solar and wind farms, or traditional nuclear power
plants, which rely on nuclear fission. That's the process of splitting heavy atoms into lighter atoms, and that also releases a huge amount of energy in the process, but it also creates things like nuclear waste, which you have to figure out how to process or deal with or store. It comes with a lot of, again, political issues that make that technology difficult to pursue.
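(Side note, since both fusion and fission release a huge amount of energy: in both cases that energy comes from a tiny bit of mass being converted to energy via E equals m c squared. A standard textbook example for the sun's fusion, not anything from the Ars Technica piece, looks like this.)

```latex
% Net result of the Sun's proton-proton chain (standard textbook values):
4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e}
% Roughly 0.7% of the starting mass disappears; that mass defect is the released energy:
E = \Delta m \, c^{2} \approx 0.0287\,\mathrm{u} \times c^{2} \approx 26.7\ \mathrm{MeV}
```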
Here in Georgia, we actually had a nuclear power plant come online for the first time in decades, and it was supposed to have been built like fifteen years ago, I think, at this point, somewhere around there. But there were so many different delays, and then the cost of building out the facility exploded as a result of that. So even though the technology is proven, there are a
lot of drawbacks to nuclear fission. So nuclear fusion could be a way to have an alternative that doesn't have the same issues that nuclear fission has, but it's hard
to do so. Researchers have managed to create a few fusion reactions over the years, including some uncontrolled ones in the testing of the fusion bomb, but typically when you were trying to make a fusion reactor for the purposes of power generation, the result was that you were getting an energy output that was less than the amount of energy you were using to initiate the reaction, meaning you're
operating at a net loss. Right, you're using more energy to start a reaction than you're getting out of the reaction. That is not a viable way to generate power if you lose energy in the process. This most recent experiment generated three point one five megajoules of energy, and the lasers that were used to initiate this reaction were blasting two point oh five megajoules of energy at the fuel. So two point oh five megajoules of energy is hitting the fuel, and the reaction generates three point one five megajoules of energy. That means you're getting about one and a half times more energy out than you're putting in with the lasers. Now, let's go back to that asterisk I mentioned about, you know, creating more energy, or not creating,
but releasing more energy than you're pouring into it. Now, the lasers did emit two point oh five megajoules of energy. However, the draw from the power grid to power those lasers was much, much, much larger. So when you step back and you say, all right, well, how much energy did it take for me to fire a laser that could emit two point oh five megajoules of energy, that's where you start to see that you're having to use a lot more power to get that three point one five megajoules out of the reaction. So that means ultimately it's a net loss when you look at it from a big picture standpoint. According to Ars Technica, scientists think we're going to have to hit energy generation that's around thirty to one hundred times more than what the lasers are blasting out in order for fusion to be a viable power source.
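(If you want to see those numbers side by side, here's the back-of-the-envelope math. The two point oh five and three point one five megajoule figures are the reported values; the wall-plug number is a made-up placeholder, just to show why the facility as a whole still runs at a loss.)

```python
# Rough numbers for the fusion shot described above.
# laser_energy_mj and fusion_yield_mj are the reported values; wall_plug_mj is a
# placeholder assumption for illustration, not a figure from the lab.
laser_energy_mj = 2.05       # megajoules the lasers delivered to the fuel
fusion_yield_mj = 3.15       # megajoules the reaction released
wall_plug_mj = 300.0         # hypothetical grid draw needed to fire those lasers

target_gain = fusion_yield_mj / laser_energy_mj
facility_gain = fusion_yield_mj / wall_plug_mj

print(f"Target gain: {target_gain:.2f}x")            # about 1.5x, the headline result
print(f"Facility-level gain: {facility_gain:.3f}x")  # far below 1x, i.e. a net loss overall

# Ars Technica's cited bar for a viable power source: 30x to 100x target gain.
print(f"Needed improvement: roughly {30 / target_gain:.0f}x to {100 / target_gain:.0f}x better")
```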
That is a huge leap from one and a half times, which is what we saw in this most recent experiment. You know, we have to get up to thirty to one hundred times in order to reach efficiencies where we're able to get more energy out than we put in. Plus, we have to make it something that can be repeated rapidly. Right now, you're talking about months between experiments at this laboratory. A power plant is going to need to do this many times a second in order to continually generate energy, or release energy, rather. I keep saying generate. You know, energy can be neither created nor destroyed. It's really just released or converted from one form to another.
Anyway.
You would have to make that sustainable in order to do things like create electricity for people. Otherwise you would just have these spikes, and they wouldn't be useful for anything. Yeah, you could say, like, wow, we generated a ton of energy there, we released a huge amount of energy, but unless you can make that something that can consistently provide electricity, it's not really that useful. However, if we are able to crack that code, we would have an incredible future
ahead of us. So let's hope for it, and let's also hope that it's not a perpetual twenty to thirty years situation. You know, that's where scientists say we're twenty to thirty years out from a technology maturing, but then we never get there, like ten years later, we're still twenty to thirty years out. Let's hope it's not one
of those cases. A while back, I talked about how Boeing has had to delay testing its Starliner crew vehicle with actual, you know, human astronauts after discovering some issues with the capsule's design. So the purpose of the Starliner capsule, which is a spacecraft that looks a lot like, you know, the old Apollo capsules, is to serve as a vehicle that will take astronauts to and from stuff like the International Space Station. Then you would have, like, the Orion capsule, which is larger. This is the one NASA plans to use for future Moon missions. However, last month, Boeing had to scrap test plans for the Starliner after a review showed that the capsule's quote unquote soft links in its parachute system failed to measure up to NASA's safety requirements. So they had to go back to the drawing board and fix that and replace those soft links, which
they now have said they've done. Plus, Boeing had used some tape in the Starliner's wiring harnesses that has been flagged as a potential fire hazard, in that under certain conditions it can become flammable. So Boeing has subsequently, you know, started to remove all that tape and replace it with other stuff. There are a few areas where Boeing says it's not feasible to actually remove the tape, because doing so would damage the vehicle in the process. So instead, they're coating the tape with material that will be fire resistant so that, you know, it won't end up causing a potentially disastrous fire inside the capsule. Now, all of this means that we're looking at twenty twenty four at the earliest for a test of the Starliner with a crew aboard. That is disappointing news for Boeing as well as for NASA, but NASA can continue to depend heavily on SpaceX's Dragon two
vehicle in the meantime. In more optimistic space-related news, NASA and the Department of Defense performed a recovery test for the Orion crew module. So the Orion, like the Apollo spacecraft of decades ago, is meant to splash down in the ocean, specifically the Pacific Ocean, upon returning to Earth. Once it splashes down, a retrieval ship will maneuver to within a few thousand yards of the spacecraft and then send retrieval teams to help the four astronauts exit the
vehicle safely. So this particular test is part of the Artemis two mission. Artemis two will send astronauts around the back side of the Moon for the first time in many, many decades. Artemis three is the one where astronauts will actually set boots on the Moon for the first time in ages. So the goal is to retrieve the crew safely in less than two hours after the capsule
has splashed down in the ocean. The process involves Navy divers who first go and check the capsule to make sure that it's safe to deploy the raft that's around the capsule and for the crew to emerge from the capsule. The raft is called the front porch, and it serves as a platform for the crew to step out on once they leave the capsule, and from that point a different retrieval crew will actually fly out to the splash site and airlift the Orion crew and then transport them
back to a recovery ship. Once the crew is safely aboard the recovery ship, engineering teams will connect the Orion capsule to the ship so it can be towed back to land. So the test was successful, which is a good step toward Artemis two. I've got a couple of article recommendations for you before I sign off. One is in The Verge. The article is titled "Why Thread is Matter's biggest problem right now," and Jennifer Pattison Tuohy wrote the piece. This deals with the technologies that serve as the foundation for home automation tech and explains how some high-level decisions are making things perhaps a little more complicated instead of simplifying them, which is what Matter was really supposed to do. I'll have to do a full episode about this in the future, but meanwhile, this is a great start if you're wondering why is
home automation so darn complicated? Why are there so many competing systems using proprietary approaches where you can't have interoperability between everything? This is a good way to get a ground-level understanding of that. The second article I want to recommend is by Gregory Barber of Wired, and it's titled "The Cloud Is a Prison. Can the Local-First Software Movement Set Us Free?" So this piece talks about how developers and consumers and corporations are grappling with issues related to cloud platforms, and a movement that could potentially bring about an alternative to cloud computing. And spoiler alert, it relies on technology that has actually been around for quite some time. But it's really interesting because, you know, you see this sort of seesaw movement between
centralized computing, decentralized computing, local computing versus cloud computing. You can start to see patterns in the way people are using computers and what they do when they encounter challenges in one model versus another. So I highly recommend both those articles. As always, I have no connection to either of those publications or the authors behind those pieces.
I do not know them.
I just thought they were interesting and that if you are into tech and you really want to learn more, those are two good articles to read. Okay, that's it. This was a long episode for a news episode. Probably means Thursday's episode will be short. Here's hoping. I hope all of you are well, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.