Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news from March twenty third, twenty twenty three. Oh wow, three twenty three twenty three. Sorry, got distracted. Okay, let's jump start this episode with more AI news stories, the story of twenty twenty three.
So first up, Avram Piltch of Tom's Hardware wrote an article titled Google Bard plagiarized our article, then apologized when caught. So Google Bard is Google's version of, you know, the chatbot features that are built on top of a large language model. It's similar in many ways to the ChatGPT that's incorporated into stuff like Bing. Bard is currently in beta testing, so it's not openly rolled out
to everyone. Piltch's article explains that he was testing Bard, asking it to compare two different processors against one another and to recommend which of the two processors would be faster. And Bard generated a response, and Piltch saw that some facts looked awfully familiar, and it ended up being because the information that Bard was referencing originated in a benchmark test article that was also published on Tom's
Hardware just a couple of days earlier. So Piltch asked Bard for the source of the data: where did you get this information? That's when Bard explained it had pulled the info from the Tom's Hardware article. And at that point, Piltch essentially asked Bard if that perhaps constituted plagiarism,
and Bard kind of said yeah. And this is just one of the concerns folks have about chatbot AI tools like Bard and ChatGPT: that they could pull data from sources without giving credit, which both denies the original creator of that content any recognition or ability to monetize their work, and also makes it difficult to fact check the answers. You know, the information has to be coming from somewhere. These AI chatbots are
assembling answers based on available information. They're not just inventing it or lying, in other words. But you don't necessarily know where they're pulling that information from, and that means not only that it may or may not be trustworthy, but also that someone somewhere is getting the short end of the stick. They generated that content, and yet you
know they're not being compensated for that, right. So it reminds me of how content creators were really worried when Google started including short descriptions on a search result page that could potentially negate the need to click through to an actual page on that topic, thus denying
those pages views and ad revenue. Why would you write stuff for the web if that stuff ends up being appropriated by some AI chatbot and then regurgitated to users who never see your actual article, so the website never gets any visitors, and eventually the website stops employing you because it can't afford to? It just becomes this kind of self defeating cycle. But anyway,
go check out the full article on Tom's Hardware. Again, it's called Google Bard plagiarized our article, then apologized when caught, because I actually do believe in sending people to the proper sources. James Vincent of The Verge has a different warning relating to AI chatbots. I can't actually give you the full title of the article because it ends with some profanity, but I'll give you most of the
title of the article. It's called Google and Microsoft's chatbots are already citing one another in a misinformation, insert profanity here. It's a kind of storm, I'll say. In the article, Vincent mentions a peculiar series of responses: when someone asked Microsoft Bing if Google had shut down its Bard chatbot, Bing would say yes. Essentially, Bing would say yes, Bard has been shut down by Google.
Now, as evidence, Bing was citing a tweet in which someone said that they had asked Bard when Bard would get shut down, and Bard claimed it already had been. So where was Bard getting this information? Because clearly that's not true, right? Bard was answering the person, so it
could not have been shut down. Well, Bard was pulling its information from a joke that someone left in the comment section of a Hacker News piece, and then someone else had taken that joke and generated a ChatGPT article around it, like they actually got ChatGPT to write an article about Bard being shut down by Google. Again, this is all a joke at this point, but Bard cites this joke as if it's an actual news item.
This tweet talks about how Bard said that it was already shut down, and then Bing says yeah, Bard's been shut down, because it's citing the tweet. So again, this makes a very salient point: this goofy little tweet ended up being used as if it were reliable, hard news information, and AI chatbots aren't capable of telling truth from humor or lies or satire. You could end up asking these chatbots a question,
and it is entirely possible that they might reference a site like The Onion, for example. The Onion is a satire humor website; it's meant to publish articles that are not true for the purposes of humor, and the answer you'd get from the AI would probably be interesting, but it would not be reliable. Goodness knows, there are already tons of sites out there that claim to be satire. Usually this claim is hidden in a little about page somewhere, which makes it really hard to tell at first.
Like, I've seen so many, not as many these days as maybe five years ago, but man, I used to come across them all the time. And in reality, these websites only existed to publish fake news that would then go viral on various social platforms. If you dug down deep enough, you would find some disclaimer on the website somewhere saying this is meant for entertainment and satire. It wasn't satire, it was just lies, because it wasn't humorous at all. It wasn't presented as humor or
to give any insight. It was just meant to go viral. Well, AI chatbots don't necessarily know that that kind of content isn't reliable and could present it as such. So yeah, another example of how chatbots can give us some misleading information. Not all uses of AI are bad, of course. Ubisoft, the video game company, released a video showing off an AI tool called Ghostwriter. Write-er, that is, not Ghost Rider as in the biker with a flaming
skull for a head. So Ghostwriter aims to make the tedious task of generating background chatter for NPCs in games an automated task. If you've played any open world style games, you're probably familiar with hearing NPCs holding conversations around you, or maybe even commenting on your appearance as you move into view. Even if you haven't played a lot of games, chances are you've heard people reference the iconic line from Skyrim: I used to be an adventurer
like you, then I took an arrow to the knee. Well, someone has to write all these little lines of NPC dialogue like that. Someone's job is to flesh out a world by writing all these possible lines that people could say in the background, some that players may never even
consciously register. It's just chatter in the background. So it can get pretty dull, particularly if you're trying to work in enough variety to make the world feel like it's inhabited by actual people and you don't have everyone just
saying rhubarb, rhubarb in the background. So what Ghostwriter does is help generate variations of dialogue options. You can put in a line and it will use its tools to express the same thought but in different ways, and you can actually go through and edit the responses, so that way, if there are any grammatical
mistakes or anything like that, you can fix them. You can accept or reject suggestions, and over time, Ghostwriter gets better at learning what it is you're trying to do, and it starts to give you better suggestions the next time you use it to generate stuff. This gives writers the chance to flesh out their game much more quickly and dedicate more of their efforts and their brain power and creativity to writing the stuff that really matters and
helps drive the game's narrative forward. I think it's pretty cool, though. I kind of want to have a game now where your character passes into a world that's just populated by NPCs from all sorts of different games. You know, kind of like if Central Casting had been in charge of everything and they just grabbed anyone they could and shoved them in there,
and it's all random NPCs. So you've got fantasy NPCs and, you know, modern day crime NPCs, all sorts of stuff, all just intermingling and trying to have conversations, and you start to hear these iconic NPC lines from all the different famous games out there as you move through the area. Someone make that for me. Okay, I've got a lot more stories to cover, but before we get to those,
let's take a quick break. Okay, we're back. Let's shift to talk about TikTok. So Shou Chew, the CEO of TikTok, submitted testimony in advance of his appearance before Congress, which is happening as I record this episode. He's currently in front of Congress to answer questions about TikTok as the US government ramps up resistance to the app and the company. So in the testimony he submitted, Chew claims that the average US user of TikTok is quote an adult well
past college end quote. This was reported in Insider. The Insider piece also cites a sales presentation within TikTok that leaked in twenty twenty one and said that around seventeen percent of users were between the ages of thirteen and seventeen, and forty two percent were between eighteen and twenty four.
And you might think, well, why does this even matter? Well, some of the arguments that politicians have made against TikTok focus on how the app can promote harmful messages, particularly to younger users, and that this can range from misinformation to glorifying self harm to encouraging people to participate in dangerous viral challenges. Now, I suppose if TikTok were to say, but the people who use our app are actually older than that, it's not that many kids, it's mostly adults,
that, I suppose, removes a tiny bit of the oomph behind the argument that TikTok is bad for kids. But I mean, if it is true, then that is super bad news for Meta, because for the last couple of years, Meta has looked at platforms like TikTok and also others like Snapchat as dangerous competition. That's where the young people were going instead of to Meta. And meanwhile, Meta's user base is aging, but there are fewer young people coming in, which is bad for long term success
for the platform. But if it turns out that TikTok is not the place where young people are going, then who the heck is Meta gonna copy in order to try and get those users? Anyway, I'm certain Congress will have plenty of other concerns they want to address. In fact, I know they do, because I dipped in just briefly
to watch a little bit of the hearing. They really want to know more about things that, honestly, TikTok has tried to address multiple times in the past, namely the company's relationship to its Chinese parent company ByteDance, and then ByteDance's obligations to the Chinese government, and whether or not TikTok is actually keeping, you know, private information and that kind of stuff safe, or if it's just
acting as a data siphon for China. Again, TikTok reps repeatedly have said that they have taken steps to prevent that kind of stuff from happening. But those excuses, or reasons, however you want to look at it, continue to be met with skepticism in US government quarters. Yeah, it's a complicated thing, and I'm sure I'll talk more about this, probably next week when we have heard all the
outcomes of this hearing. ABC News reports that the US Securities and Exchange Commission, or SEC, is going after Justin Sun, a cryptocurrency company founder who the SEC claims was transferring large amounts of specific cryptocurrency tokens back and forth between two different wallets that he owned. So they were both his wallets, and he was just transferring large amounts back and forth again and again. So why would he do that?
Well, according to the SEC, it was an effort to inflate the trading volume of the tokens, but to do so artificially. So, in other words, someone from the outside looking in says, oh wow, these tokens are being traded back and forth a lot, this is actually being actively used as a currency. That helps stabilize the value of the tokens, because people have to have confidence in a token for it to hold onto its value, and it looks a lot more trustworthy if it's being actively used and not just hoarded
or sold off. And it also gave Sun the ability to try and offload stuff without it impacting the value of the tokens themselves. So that's what the SEC was saying, that he was manipulating the system in order to profit off of it. He was fixing the game, in other words. There's another charge as well, that he enlisted the help of celebrities to endorse these various cryptocurrency tokens, but there was no attempt to divulge the fact that they were being
paid to do this. So, in other words, the celebrities were coming across as if they had just personally researched this cryptocurrency, and they were engaged with it on their own, and they were promoting it because they thought it was really cool, as opposed to, hey, I've partnered with such and such, and they make this thing, and you should check it out, right? So there are very specific rules that are in place for endorsements. You have to divulge the relationship you have with a sponsor. If
you are an endorser and you're being paid to endorse something, you have to make it clear to people. Otherwise it's considered a type of false advertising, because it gives the appearance that you are independently excited about this product that you might not have even heard of had it not been for this relationship. So that's a big no no here in the US, not acknowledging that there was
payment exchanged for that endorsement. Some of the celebrities named in the complaint have already agreed to hand back the money that they had been paid to endorse the crypto tokens in the first place. There are a couple of holdouts who I expect will discover the government would very
much like to have a talk with them. I haven't covered stories about tech companies cracking down on remote work for a while, largely because a lot of the big companies have essentially put in tough restrictions or have just outright denied work from home approaches. But Platformer's Zoe Schiffer reports that Apple is taking steps to keep tabs on employees to make sure they come in at least three times per week by monitoring their employee badge activity, so
like a security badge when you tap in, or in some cases out, of a building. I don't know if Apple requires you to tap in and out. I remember back in the day Discovery did, which became a big deal when we would visit Discovery back when
I was part of How Stuff Works. How Stuff Works got acquired by Discovery for a while. How Stuff Works still had security, but a less thorough security approach, where, let's say I was arriving at the office with my arch nemesis Ben Bowlin, I might tap in and then both of us would just walk in. At Discovery, each person was required to tap in in sequence. It didn't matter if you all arrived at the building in a big group, you each had to tap in. I
assume Apple is the same way. So now, according to Schiffer, Apple is tracking that data and if someone is not tapping in and out three times per week, they get a warning, and if they do it again, they get an escalating warning, which presumably ultimately leads to some form of reprisal. So that's fun. Nothing like being monitored at work.
It's the best. Really helps drive up productivity. Now, I will say that, in the current work environment, you have so many big companies laying off thousands of employees. I mean, I think even Indeed, a company that's meant to help people find the right kind of staff, laid off a couple of thousand
people recently, like fifteen percent of their staff. When that's the kind of lay of the land, my guess is there are a lot of employees who don't feel comfortable advocating for remote work solutions, and so they will do their best to conform with these kinds of policies where you have to come in a certain number of times per week. But yeah, it's not a good look. But again, the work environment being what it is, I don't know that
people feel like they have a lot of alternatives. The United States Federal Trade Commission, or FTC, has really been stepping up recently. The reason I say that is that The Verge reports that the FTC is saying subscriptions should be just as easy to cancel as they are to initiate. Now, I'm sure a lot of you have encountered the experience
of needing to cancel a subscription service. Maybe it's your ISP, maybe it's a phone plan, maybe it's a streaming subscription, or, heaven help you, it's a gym membership, and you've probably encountered a situation where you had to go through a wild goose chase just to get out of
this stupid subscription. I'm actually reminded of when Ryan Block tried to cancel his service with Comcast and he was put through a ridiculous routine, which he recorded and later shared online back in twenty fourteen. And just as a personal anecdote, when that story broke, I read about it. They didn't initially identify it as Ryan Block, so I read it and thought, oh man, this poor guy. He was really just run up the wall
by the sales representative. I couldn't believe it. And then when I found out who it was, I laughed and laughed, because I don't know Ryan personally, but I've known his wife for like a decade. So when I found out it was happening to someone that I kind of know, it seemed particularly absurd to me. Anyway, the FTC wants that kind of stuff to be buried in the past and for companies to adopt a click to cancel policy that makes getting out of a subscription
way less of a hassle. It's supposed to be just as easy to end a subscription as it is to start one. And I don't think companies are going to want to make it more challenging to sign up for a service. You know, the harder it is for you to join a service, the less likely you are to do it. You might be convinced at first, oh yeah, that doesn't sound bad, I'm in. But if you start to see there's a hurdle, like a
barrier to entry, you might bounce. Companies don't want that. They want you to be as committed as you possibly can be, to the point where you are actually hooked in. So they're not going to make signing up more complicated, but it does mean that they have to make it
less complicated to cancel out of something. It would also mean that companies that use various incentives to try and keep customers on board would have to offer some sort of total opt out pathway for people who just don't have the time to listen to that kind of pitch. So again, if you've ever tried to cancel out of a phone plan, you've probably heard, well, you know, if you decide to re-sign with us, we'll give you blah blah blah blah blah. This new set of rules
says no, no, no. You can just say right off the top, I'm not interested in hearing any other offers, I just want out. However, this particular set of rules would not apply to non commercial services, so stuff like charitable donations or political donations would not necessarily get covered by these rules. The proposal received a three to one vote in the FTC. The one person who voted against it is the lone Republican member of the FTC board.
But it's still going to be open for public comment, so people can actually weigh in on what they think first, and that'll all happen before the FTC can adopt the rules, which it may end up changing before adopting them. Also, the FTC itself would not actually be taking action against
companies that fail to comply with these rules. Instead, the rules would give regulators the ability to enforce them. So essentially, it's saying regulators who are already in charge of enforcing other rules for companies would just have new rules that they could enforce. So pretty good news if you have ever suffered the experience of having to try and cancel out of something that was designed to make
it very hard to do that. Okay, got a few more stories to go, but before we get to those, let's take another quick break. Before the break, we were talking about the FTC, the Federal Trade Commission. Now let's talk about the FCC, or Federal Communications Commission. It is taking aim at spam text messages the same way the agency targeted robocalls a couple of years ago. So if you're in the US, you might remember that the FCC passed rules for telecom companies to shut down robocalls
whenever possible. It actually led to one network getting shut out of the American telecommunications infrastructure where they weren't able to interface with any other telephone network because they were failing to shut those down. So that was a fair success. I mean, I still get robocalls, so I don't think it was a total success, but it definitely has cut back on that activity. Now they want to do the
same thing but for spam text messages. So the new rule says that phone companies will have to block text messages originating from quote invalid, unallocated, or unused end quote phone numbers. So if it's a phone number that has been associated with spam, then the phone company should just
block those text messages as a rule of thumb. The vote passed unanimously within the FCC, and that makes sense, because there have been a lot of reports of fraud connected to spammy text messages, which have been on the rise in recent years, and so there's a real need to protect the public from scam artists and people who are, you know, trying to phish for data, that kind of thing. And you know, some people are really, really vulnerable to that, particularly the older generations, who tend to be
more susceptible to those kinds of attacks. So yeah, I'm glad to see this happening as well. I honestly remember a time when I would get a phone call that, you know, obviously I wouldn't answer. I would go online and try to do a reverse search on the number, and there were a lot of resources out there that would track whether or not something was a spam call. For whatever reason, these days I can't easily find those resources anymore. I don't know if they just stopped,
or if maybe they're just buried in search results. I haven't really dug deep into it, but it got to the point where I was getting stressed that I couldn't easily see if something that was coming in was
spam or not. So knowing that there are steps being taken to at least shut down the known perpetrators of spam, I find that refreshing, because goodness knows I just want my device to be useful, and if I'm discouraged from using it because of all the robocalls and spam, then I just become a hermit, which, you know, some
days it's an attractive thought. Okay. Scharon Harding at Ars Technica has a terrifying article titled Journalist plugs in unknown USB drive mailed to him, it exploded in his face, and yeah, the headline is scary, but it actually kind of gets worse. So five journalists from Ecuador received USB drives in the mail, sent from another part of Ecuador, so it was within the country that they received these. And one of these journalists, a guy named Lenin Artieda,
inserted the drive into a computer. You know, he plugged the USB drive into a laptop or computer, and then the USB drive exploded. There was a little capsule sized amount of explosive in there that, once it received enough voltage, detonated. Fortunately, the injuries that Artieda received were not serious,
but I'm sure it was a terrifying experience. In other cases, people received these drives, but they hooked them up through adapters that did not provide the voltage needed to detonate the device, and they discovered that, in fact, those were also explosive devices. As I said, it's been five people so far, at least according to the Ars Technica piece. And you might wonder, well, why the heck
are journalists in Ecuador receiving explosive devices? Details are really scarce on that. There's a lot of speculation about what could be the root reason for this. It seems reasonable to conclude that this is an attempt to intimidate and silence journalists. But honestly, outside of Ecuador, there's not a whole lot of information about who is responsible for this and what the purpose is. Like, what are they being silenced about? I am not sure, and neither is
Harding at Ars Technica. But Harding does remind us that we should never plug an unknown USB device into a computer. If you happen across a USB device, don't attach it to a computer system. You never know what's on it. Now, normally I would say don't do it because there could be malware on that USB drive and you might introduce that malware. You might inject it into
your computer and, beyond that, into a whole network system. Heck, that's how Stuxnet infected centrifuges in nuclear facilities in Iran, Stuxnet being some malware that presumably was developed by Israel and possibly the United States, probably some sort of combination there, that then got introduced to otherwise air gapped systems within Iran. Not easy to do unless you're able to hide it on, say, a USB drive and convince someone to connect that drive to those otherwise air gapped systems.
So yeah, that's one reason you would never want to plug a USB drive that came from some unknown source into your computer. But another is that it might just explode. Finally, last night at Cape Canaveral, an aerospace startup called Relativity Space became the first company to successfully launch a three D printed rocket. Now that's the good news. The bad news is that the rocket, designated Terran one, failed during its second stage separation. So it wasn't able
to achieve low Earth orbit. The three D printed components of this rocket made up about eighty five percent of the launch vehicle. So this isn't like someone just hit print on their laptop and then many, many hours later there was a fully built rocket standing there. Instead, lots of components, including metal components, were three D printed and then used to build this rocket. This approach could really bring down
launch costs. It ends up simplifying the design and manufacturing of rockets, which could really make it more cost effective to send stuff to space, which is pretty cool. And the fact that the rocket held together for the launch is by itself a great achievement. Sure, the second stage separation did not go off as planned, which is unfortunate, but as we have said many many times on this show,
rocket science is really hard, y'all. The company, meanwhile, has aspirations of developing rockets that in the future are as much as ninety five percent printed, and it's just really exciting. It's really cool. I think it has the possibility of taking on some of the duties of launching smaller payloads into space at a much reduced cost, which comes with
its own challenges. Obviously, you don't want to launch too much stuff because then you've got space junk just orbiting the planet and potentially creating obstacles that you have to plan around when you're doing future space missions. But also it might mean that we could really take advantage of some cool opportunities that otherwise would be too expensive for us to pursue, and that to me is really exciting. All Right, that's it for the news for Thursday, March
twenty third, twenty twenty three. Hope you're all well, and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.