Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Tuesday, May twenty third, twenty twenty three. And first up, we've got a couple of stories about how social network platforms facilitated the rapid spread of misinformation and how that in turn created more chaos. Also, this first one has a dash of AI in it,
so that's a bonus. So first up, yesterday an AI generated image showing a plume of dark smoke, apparently the result of an explosion near the Pentagon here in the United States, went viral. But as I said, that image was AI generated. There was no such explosion anywhere close to the Pentagon. In fact, if you zoomed in a bit on that image, you would see that there was some hinkiness to it. There were details in the photo that looked a bit off, kind of like how, you know, if you do an AI generated image of a person, the AI just doesn't seem to get fingers right. Often the fingers in AI generated images are the stuff of nightmares. I guess AI just thinks that we have spaghetti at the end of our hands. Anyway, at the time of this recording, I haven't seen anything
about who might have generated the image. Reportedly, it initially appeared on Facebook before it really took off on Twitter. We do know that several Russian based news sites, or propaganda sites depending upon your point of view, ran with the story and published it as a breaking news item, and it even caused a small dip in the stock market. But that stumble corrected itself once word got out that the whole thing was just a hoax. So this is a case where misinformation really was more of an inconvenience than a real threat, because of the rapid response and the debunking of this image. But it does show that, again, social networks really facilitate incredibly rapid spread of misinformation. Now let's go to story number two, and this one takes place in the UK. This one doesn't involve AI, but it does involve social networks and a very real tragedy.
So yesterday, a couple of teenagers were riding on an off road bike or scooter in Cardiff, Wales, and they got involved in a traffic accident and both teenagers died from their injuries. Now that is undeniably terrible, a horrible loss. Police then arrived on the scene of the accident, but on social media there was this narrative that began to form that accused the police of actually causing the accident.
The narrative said that the police were in a pursuit and that in turn created the accident in which the two teenagers lost their lives. But that was just not true. The police weren't involved in a pursuit. There was no police presence until after the accident happened and police were
called to the scene. However, this didn't stop the story from spreading rapidly online, and people in the community began to assemble. What started off as kind of a demonstration of anger toward police escalated into a full blown riot, with the crowd throwing stuff at police officers. Some of those police officers suffered injuries, although from what I understand, none of them were really serious. And again, it turned out that the story that the police had contributed to this accident was just a lie. But by the time that message was getting out there, things were already out of hand. The crowd continued to roam the streets until the early hours of this morning, and then they dispersed. Now, to be clear, I do not think it's fair to blame social networks for the actual misinformation. Rather, social networks facilitated the spread of misinformation. They didn't make it. They
just made it way easier for it to spread around. Now, I do not know if any recommendation algorithms played a part in that. It's possible because the algorithm could promote stories that seem to be driving a lot of engagement among people of a specific region. Right, it might be, Oh, people around you are really interested in this particular story, and then you get served up that story and it
perpetuates itself. That's a possibility, but I don't know for a fact that that happened. At the very least, the story definitely did spread across social media. And misinformation was a thing long before social networks ever existed; rumors passed around well and truly without them. So it's not like getting rid of social networks would mean this would no
longer be a problem. It's just that it's extremely efficient to spread misinformation at this point, much more so than it was in the past. And now let's talk about end to end encryption. It continues to face challenges from political leaders around the world. I've talked about how many nations, including the United States, have looked for ways to work around end to end encryption or perhaps even ban it outright.
And it's usually in the desire to scan messages for signs of illegal content, so it could be an attempt to look for communication between would be terrorists, or to search for evidence of people trafficking in illegal materials like child pornography. Spain's government has joined the list of governments that are very much taking aim at end to end encryption. This is not a unique view, even in the European Union.
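By the way, if you want a feel for why a middleman can't read end to end encrypted traffic, here's a toy Python sketch. To be clear, this is a hypothetical illustration using a simple XOR one time pad, not how any real messaging app actually implements its encryption:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the matching key byte (toy cipher only)."""
    return bytes(b ^ k for b, k in zip(data, key))

# The two endpoints share a secret key; the relay in the middle does not.
message = b"meet at noon"
key = os.urandom(len(message))          # known only to sender and receiver

ciphertext = xor_bytes(message, key)    # this scrambled data is all an interceptor sees
decrypted = xor_bytes(ciphertext, key)  # an endpoint holding the key recovers the text

assert decrypted == message
```

The point is that without the key, the ciphertext is just noise, and in a true end to end system only the two endpoints ever hold that key. A mandate to scan messages in transit effectively requires that somebody in the middle hold a key too.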
There are lots of countries in the EU that have proposed creating rules that would allow a government to scan and monitor communications, which means that you would have to get rid of end to end encryption, because the very nature of end to end encryption is that only the people at either end can access the encrypted information. So anyone who tries to intercept the information somewhere in the middle is just going to be left with encrypted nonsense. They can't
read it. Now, this is a really complicated problem. On the one hand, you do have the legitimate concern that more needs to be done, for example, to protect children from becoming victims. I think it's hard to deny, right, that we need to be better at protecting children from child predators. On the other hand, this measure means an end to private communication, and there are some
situations where such communication is absolutely critical. You know, authoritarian governments could abuse and have abused this kind of process to crack down on perceived threats, and those threats might just be someone like a journalist or an activist, or you know, a political rival, and yeah, it's all done in the name of protecting the state, but it really comes down to an authoritarian display of power and denying other people the right to privacy. And I can't pretend
to have the answers here. I don't think getting rid of end to end encryption is a good idea. Now, one agency that would probably love to see end to end encryption go away is the FBI. I say that with some level of confidence because the US Foreign Intelligence Surveillance Court released an April twenty twenty two opinion detailing more than two hundred and seventy five thousand instances of the FBI conducting warrantless searches of citizen communications between twenty twenty and
early twenty twenty one. Now, I should add that the court opinion is also highly redacted, so there are a lot of blacked out spots in that report. But essentially it's saying the FBI searched through communications, presumably of American citizens, more than a quarter of a million times in one year without securing a warrant to do it. Now, the FBI was relying on the Foreign Intelligence Surveillance Act
or FISA, which allows for warrantless digital searches and monitoring of communication, but only for communications between foreign individuals outside of America. Like, it's not supposed to be used to spy on communications of American citizens. However, the law allows officials to play a sort of three degrees of separation style game. They can look at the target's communications. So they've identified somebody that they want to surveil for
whatever reason. This person's a foreign individual, they're not in the United States, and thus there's no warrant needed, So they look at the target's communications, but then they can also look at whom the target has been in contact with. Then they can even go one step further. They can look at the contacts that the contact had. So let's say you got an old fishing buddy, and your old fishing buddy happens to be friends with a shady person
who turns out to have been on the FBI's radar. Well, by extension, you could be on the FBI's radar too, because you're connected to your friend and your friend is connected to this other person. Now, FISA isn't supposed to let agents investigate American citizens, but because of this degrees of separation thing, it can happen, like a lot, like two hundred and seventy eight thousand times in the matter
of a year or so. And the court opinion shows that the FBI was using these techniques to run searches on people who most assuredly were not foreign agents communicating overseas, such as protesters during Black Lives Matter protests in the wake of George Floyd's death. So here we have what appears to be a pretty clear series of offenses against
American citizens perpetrated by the FBI. And this is just one reason why end to end encryption is important, because if the quote unquote good guys are breaking the rules, that's not good. All right, we're gonna take a quick break. When we come back, we've got some more tech news to cover. Okay, now we've got a double whammy section for Meta. So first up, the EU has leveled a one point three billion dollar fine against Meta, and that's a princely sum, saying that the company failed to keep EU
citizen data safe and private. So essentially, the violation here involves transmitting EU citizen data to US based servers, which is something that the EU is very much against, without there being further protections in place for that information. So you might remember in past episodes, I've talked about how the fear about TikTok largely centers around this belief that it's a company that could be sending personal data belonging
to American citizens to China. Well, that fear exists despite the fact that we don't actually have evidence of this having happened. It could have happened. I'm not saying it didn't. I'm just saying that we haven't seen evidence of it yet. However, here we have a case of an American company essentially doing the same thing, channeling EU citizen data from the European Union to US based computer servers. And you could say, wow, the turns they have tabled.
And now Meta has been ordered to pay more than a billion dollars in fines relating to this offense. Now, Meta, of course plans to appeal the ruling and the fine. I think that's obvious, But regulators say that Meta's offenses are systemic and continuous in violation of the rules of GDPR. I think Meta is hoping to wait this out so that the US and the EU come to an agreement on how and under what circumstances a US based company can transmit EU based data back to the United States.
And part of the hold up is this concern that the US government could potentially spy on European Union citizen data. They could use it as a surveillance tool, which you might say sounds far fetched, but we just got done talking about the FBI doing that to American citizens. So there you go. The other big punch to Meta's stomach this week comes in the form of Giphy, or if you prefer Jiffy, not the peanut butter, but the animated GIF database and search engine. Now, Meta purchased Giphy a few years ago for four hundred million dollars. But then regulators in the UK determined that Meta's possession of Giphy constituted anticompetitive business practices and ordered Meta to divest itself of the company, which now Meta has done. Meta sold Giphy off to another company called Shutterstock. You might be familiar with them. But Meta did not recapture the four hundred million dollars it had spent on Giphy
just a few years ago. Instead, Shutterstock purchased Giphy for the equivalent of fifty three million dollars. Now, that's still a healthy chunk of change, don't get me wrong, but it's a far cry from four hundred million. Okay, the US Surgeon General has issued an advisory stating that there's not enough evidence to say social media is safe for kids to use, which I suppose you could flip and say, is there evidence showing that the use of social media
is harmful to kids? I mean, I know that that's the belief, but what does the actual evidence say? And I think the problem is that there's not enough research to draw conclusions. However, there is a concern that social media could contribute to mental health problems among the youth, which at least seems to make sense. But we don't have all the data yet, right? So I don't know that we've yet been able to determine whether social media
use among kids is good, bad, or indifferent. I think one problem with a lot of studies is it comes down to a chicken or egg kind of problem. And what I mean by that is are people developing mental health problems because they spend too much time on social media? Or is it that people who have mental health challenges are more likely to spend more time on social media, So it could be like a correlation but not a
causation situation here. It falls into a similar challenge as determining if violent video games have a negative impact on mental health. Do violent video games make people violent, or do violent people tend to like violent video games, which could also be enjoyed by people who aren't violent at all? So this advisory is really meant to encourage families to think really seriously about social media use and to encourage healthy family behaviors, and I think that's a good message
no matter how the research ultimately shakes out. Rapper Ice Cube has a few things to say about AI, and they are not complimentary. He actually called AI demonic and referenced the recent songs featuring AI generated voices mimicking people like Drake. Ice Cube said, quote, somebody can't take your original voice and manipulate it without having to pay, end quote. That's not necessarily the case. As we've said on the show before, existing law does not really cover synthesized voices.
You can't copyright a voice. You can't trademark it either. But ice Cube's concerns are understandable. If someone is replicating a specific person's voice in order to make something new, that sort of proves that the original voice has value to it. Otherwise, why are you copying it? Why wouldn't you just make a new synthesized voice that doesn't sound
like anyone in particular? If you're using AI to copy the style and the sound of someone specific, that kind of confirms that the original has value, and that to me suggests that we do need to develop laws to protect those things. Some states do have laws that protect that, but it's not across the United States, and other parts of the world need to think about this too. It's a brave new world to have such AI people
in it. In space news, NASA has awarded the private space company Blue Origin a contract to land astronauts on the Moon. The lunar lander will be named Blue Moon, which makes me want to launch right into doo wop music. But I'll spare you, as well as my super producer Tari, from having to endure that. Dig de don ding, Blue Moon. Sorry, slipped out. This won't be the lunar lander used in the upcoming planned missions to the Moon that are part of the early phase of Project Artemis.
Those are actually going to use a lunar lander that's created by SpaceX, and that one is the Human Landing System or HLS. It's interesting that NASA is using both companies for this purpose, but eventually the plan is to establish a permanent facility on the Moon. As for when Blue Moon will see us standing alone without a dream in our heart, well, it's going to be like twenty
twenty nine or so. Finally, IBM is investing a huge amount of money, like one hundred million bucks, and will be partnering with the University of Chicago and the University of Tokyo to build a quantum supercomputer that aims to have one hundred thousand qubits, and I guess that would be a hundred kiloqubits. Currently, I think the largest quantum computer is IBM's Osprey. I could be wrong about that, but I think it's the Osprey, and as I recall, the Osprey has four hundred and thirty three qubits. IBM is also planning on launching the Condor computer sometime this year. I believe that one is going to have slightly more than one thousand qubits, but one hundred thousand is tremendous. It is so enormous. Now, just a reminder, a qubit is a quantum bit, and unlike a classical bit, which can be either a zero or a one, a qubit can be placed into superposition, meaning it can be both zero and one simultaneously, and technically every value in between as well,
and it all gets very very quantum. So I recommend looking through the tech stuff archives for episodes about quantum computing if you want to learn more. As for a deadline, IBM's looking a decade out with a goal of this quantum supercomputer doing science and stuff by twenty thirty three. And that's it for the Tech News today, May twenty third, twenty twenty three. I hope you are all well, and I'll talk to you again really soon. Tech Stuff is
an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.