
Tech News: Is ChatGPT Talking About You Behind Your Back?

Apr 04, 2023 | 38 min

Episode description

Germany looks to block ChatGPT out of privacy concerns. Samsung Semiconductor adjusts its own stance on ChatGPT as employees share a bit too much information with the chatbot. And a ChatGPT detector might flag a student's legitimate work as AI material. Plus lots more!

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Tuesday, April fourth, twenty twenty three, and we've got more AI news to get us started today. Big surprise. The Handelsblatt newspaper in Germany reports that the German government is debating blocking ChatGPT in the country out of a concern for data security. This would follow a similar ban that has been issued in Italy.

This doesn't have anything to do with how reliable the chatbot can be, or its ability to tell the difference between legitimate information and misinformation or satire or, you know, otherwise unreliable sources. Instead, this has to do with how ChatGPT handles privacy. Italian regulators said that OpenAI failed to follow EU rules regarding a measure that would prevent minors from being able to use the chatbot service; there should be some sort of age-gatekeeping methodology in there. That's on top of a concern about how OpenAI collects user data and then makes use of it. And of course, we know that last month a glitch inadvertently revealed users' chatbot conversation histories, where people were suddenly able to see how other folks had been using ChatGPT.

Whether this will become a trend in the EU at large remains to be seen, and I suspect Microsoft, which is heavily invested in OpenAI and ChatGPT, will start taking measures to create a version of the chatbot that is more in compliance with EU rules and regulations. It would shock me if that were not the case.

Now it's not just countries clamping down on ChatGPT; Samsung Semiconductor is tweaking its approach to the tool as well. Until recently, Samsung allowed engineers to have kind of unfettered use of ChatGPT in an effort to assist in the fabrication process. Engineers were using ChatGPT to do things like look for errors in source code, or to take notes from a meeting and convert them into graphics or even a presentation, that kind of stuff. But the use of ChatGPT led to three prominent data leaks in the span of twenty days, which is a big concern.

Now, I don't mean that ChatGPT got all loose-lipped about it and leaked the information to the outside world. Rather, this was an internal problem: engineers, in their eagerness to leverage the tool to make their jobs easy and efficient and error-free, began to share a bit more than what Samsung was comfortable with, including proprietary and top-secret code and information, stuff that is really important to Samsung Semiconductor and that they don't want to share with the outside world.

And you might think, well, sure, but you're just sharing it with a chatbot. What is the chatbot going to do with that information? But then you also have to remember, wait a second, this chatbot comes from another company, right? It comes from a company that also happens to be tightly tied to Microsoft, and you start to worry about who might be able to access that data. Like, could Microsoft see how people at Samsung Semiconductor were using ChatGPT? If so, then Microsoft could see some of this top-secret information, which would be a big no-no.

So now Samsung has cracked down a bit and limited the amount of information that employees are allowed to share when they use ChatGPT. They can still use it, but with strict limitations, and the company is simultaneously developing its own version of the tool that it can administer itself. That way, the creepy AI presence will be an in-house creepy AI presence, not one that has ties to potential competitors.

This is a fairly high-profile case, right? Samsung Semiconductor is a big company. But it makes me wonder if we're going to start seeing other companies have second thoughts about letting employees use AI-enabled tools out of similar concerns. It is not difficult for me to imagine executives deciding that these nifty AI tools that are incorporated into, say, Microsoft Office could pose a potential risk to data security. It's possible that Microsoft and other companies, by jumping into the AI game so enthusiastically right now, could be setting themselves up for a fall if companies at large start asking: what happens if I use Microsoft's tools to take notes from an important internal meeting and turn them into a presentation that we can show other departments, but still internally? What happens if Microsoft is able to actually access that information because of this connection with the AI-enabled tools in its productivity suite? As we've had more and more of our operations move to the cloud, this has become a non-trivial concern, and I imagine we're going to see companies start to really reconsider how they go about handling top-secret proprietary information.

I mean, I can just see companies saying, all right, for this, we can't use any of those tools, because even if the companies are saying no, no, no, we're not doing that, the possibility of that information being leaked might be too much.

It actually is making me think a lot about TikTok, right? Like, the big argument against TikTok, or at least one of the big arguments, is that it could serve as a data siphon, pulling important information out of the US and siphoning it off to China, and that potentially that could impact national security. Well, I could see companies looking at AI-enabled tools and saying similar things: how can we trust that the productivity software we're using isn't just sending our data to Microsoft, for example? And that's a tough question. So I'll be curious to see if this becomes a bigger thing throughout this year and into next.

Jeffrey Fowler of The Washington Post has an interesting piece about ChatGPT and education. The piece is titled "We tested a new ChatGPT detector for teachers. It flagged an innocent student," and yeah, the headline gives you a strong idea about what the story is all about. So the deal is, Fowler was testing software from a company called Turnitin. Turnitin, according to Fowler, provides more than two million teachers a tool that's meant to detect instances of plagiarism. So if it turns out that there's a passage in a student's essay that was lifted directly from some other source, it's supposed to be able to flag that. But it's also supposed to be able to detect if a student has made use of a tool like ChatGPT to generate all or part of a work. Unfortunately, just like ChatGPT itself, it seems as though this tool isn't totally reliable.

Now, I've warned folks repeatedly that ChatGPT's responses are really only as good as the source material it used to generate those responses, and often you have no way of knowing what that source material was. But I should also point out that the tools meant to detect ChatGPT aren't perfect either.

In Fowler's experiment, a student submitted an essay that she wrote without the help of ChatGPT or any other AI, and yet the software flagged her work as being AI-augmented. Now, this concerns me for lots of different reasons. First off, understandably, there are a lot of teachers who are concerned about ChatGPT. If students use ChatGPT without putting forth any real effort of their own, they won't really learn anything except how to use AI so that they can avoid thinking about stuff, right? They might get really good at gaming the system, but that's all they've learned. They don't learn to think critically. They don't learn to think analytically. They don't learn to think laterally. When you really break it down, education should be all about learning how to think, and to think effectively and critically. So if students are just using ChatGPT to do their work, you might as well not assign them anything at all, because the net effect is going to be much the same.

If the students are not using ChatGPT, but there's no way to verify whether they are or not, well, you have a situation where teachers are suspicious of their students, and that's not good either, right? For teachers to just constantly be wondering if their students are cheating because they have no way of verifying. It could be that the students are all legitimately submitting their own work, but you've got this suspicion there. That's not conducive to a learning environment either.

And then if you have a tool that's meant to detect ChatGPT, but it's not a very good tool, or it's not accurate enough to be fully reliable, that means two things. One, it's going to miss some instances where students were essentially cheating: they'll just submit their stuff, it won't pick up on the ChatGPT thing at all, and the students will coast through without really learning anything and ultimately become terrible citizens, or potentially become terrible citizens. I can't pretend like that's going to be the case every time, but you know, I get het up about this because both of my parents are teachers. Or two, it will be a poorly made tool that falsely accuses a student and says their legitimate work was AI-augmented when it wasn't. That's not good either, right? Like, this is just a bad kind of situation all around. And you know, the Pandora's box has been opened. There's no getting the evils of the world back in there.
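To get a feel for why "not accurate enough to be fully reliable" matters so much at Turnitin's scale, here's a rough back-of-the-envelope sketch. The numbers are made up for illustration; Turnitin hasn't published figures like these. The point is just that even a small false-positive rate, multiplied across millions of essays, produces a lot of falsely accused students.

```python
# Illustrative only: assumed numbers, not Turnitin's actual statistics.
honest_essays = 1_000_000    # essays written without any AI assistance
false_positive_rate = 0.01   # suppose the detector wrongly flags 1% of them

falsely_accused = honest_essays * false_positive_rate
print(f"Honest essays flagged as AI-generated: {falsely_accused:,.0f}")
# Honest essays flagged as AI-generated: 10,000
```

Flip the numbers around and the same logic applies to cheaters who slip through: a detector that misses even ten percent of AI-generated essays would let thousands of them pass at that scale.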

But figuring out the best way forward is going to be really important if we are to have good, supportive education systems in place that give people the resources they need to become good people, right? To learn how to think, and to encourage that. We often get stuck in the weeds anyway with education. We can't see the forest for the trees because we're looking at the specific instance of learning as opposed to the overall approach to learning how to learn and learning how to think, and we end up getting really hyper-focused on the specifics. Right? Like, I need to write this essay about how Shakespeare used the fool to speak truth to power, that kind of thing, and I'm really focused on that as opposed to, you know, this is about more than that. This is about learning how to take in information, analyze it, create your own response to it, and use critical thinking in the whole process. All of these things, with ChatGPT and the tools meant to detect ChatGPT, make me worry, because I feel like we're going to get more and more hyper-focused on particulars and lose sight of the overall goal of education. Maybe I should write a book about that. No one will read it, but at least I can get it out of my system and you won't have to hear it in TechStuff news episodes anymore.

Okay, one other AI story before we go to break. According to a cybersecurity firm called Darktrace, generative AI may be partly responsible for a massive surge in novel social engineering attacks this year. Darktrace calculates that in January and February of twenty twenty three, there was a one hundred thirty five percent jump in novel social engineering attacks.

Social engineering is when you attempt to penetrate a security system not by sitting down at a dark computer screen and typing in password guesses, but by tricking someone into handing over access to you. It's way easier than trying to break the tech side of a security system. You just have to convince someone that, you know, you're just there to install an update on their machine, or maybe that their computer has been flagged as compromised so you're there to clean it up, when in reality you're actually there to compromise the machine. Social engineering is the tool of the con artist, and it's right up there with the skills practiced by snake oil salespeople, mentalists, and stage magicians. And these tactics work. They've worked for all of human history. I've fallen for them in the past multiple times. There's no guarantee that I won't fall for them again in the future. All I can do is try to be as careful as I can, to use critical thinking, and to avoid acting on impulse or emotion.

Anyway, it sounds like scammers and bad actors in general are making use of generative AI to craft messages that are more likely to convince a target to follow through on some kind of action, such as opening an attachment, clicking on a link that's going to download malware to their machine, or filling out a form that will send important information to the attackers. ChatGPT and other tools can create all-new messages that are designed to get these kinds of reactions, stuff that attackers haven't necessarily thought of yet. That also means the targets will not have encountered those kinds of attacks in the past, which makes the attacks more likely to succeed. Plus, ChatGPT can avoid some of the telltale signs that we otherwise rely upon when we encounter these malicious emails, you know, stuff like grammar and spelling mistakes, which are typically a dead giveaway that it's an attack and not a legitimate email. Well, ChatGPT won't make those mistakes, so it'll get harder and harder to tell the scam emails from the legit ones.
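To make that concrete, here's a toy sketch of the kind of spelling-based heuristic I'm describing. The word list and threshold are invented for illustration; real email filters look at far more than spelling. But it shows why a message polished by a large language model sails right past this sort of check.

```python
# Toy heuristic: flag an email as suspicious if too many of its words are
# misspelled. The word list and threshold are made up for this illustration.
KNOWN_WORDS = {"dear", "customer", "your", "account", "has", "been",
               "suspended", "click", "here", "to", "verify", "immediately"}

def looks_suspicious(email_text: str, max_unknown_ratio: float = 0.2) -> bool:
    words = [w.strip(".,!?").lower() for w in email_text.split()]
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words) > max_unknown_ratio

# A classic sloppy phishing attempt trips the filter...
print(looks_suspicious("Dear custmer your acount has been suspnded"))      # True
# ...but the same scam, cleaned up by an LLM, passes untouched.
print(looks_suspicious("Dear customer, your account has been suspended"))  # False
```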

I expect we're going to see this continue unless somehow companies like OpenAI and Microsoft build into their AI tools some sort of method for detecting ill intent. But to me, that seems like a whole nasty bees' nest of its own that we don't want to get into right now.

Okay, we're gonna take a quick break. When we come back, we've got some more news stories.

All right, we're back, and Tari, cue the spy music. Okay, do you remember the NSO Group? That's the Israeli company behind tools like Pegasus. That was a type of malware that could compromise iPhones just by sending an iMessage to a targeted device. You just had to know the target's phone number and send an iMessage using Pegasus, and it would, without any interaction from the user at all, compromise the iPhone due to a vulnerability. Apple would subsequently patch that vulnerability, but it's what gave the attackers a foot in the door in the first place. This malware would target phones and turn them into surveillance systems. Essentially, the attacker would be able to do stuff like turn on the microphone and even the camera on the iPhone to surveil what was going on in the surroundings. They could access data that was on the device itself. Really nasty stuff.

Now, worse than that, NSO Group allegedly counted among its customers some of the most notorious authoritarian leaders, who used this tool to spy on all sorts of people. And while the NSO Group was marketing this tool as a means for governments to keep tabs on terrorist organizations, the truth is that these authoritarian leaders were using a very liberal definition of terrorists to include pretty much anybody they didn't like. That included political rivals, that included activists, and that included journalists. Anyway, it's this kind of thing that democratic societies tend to disapprove of, at least publicly. And you know, the ability to spy on people and abuse power and whatnot doesn't fly so well in democratic societies. Here in the United States, the Biden administration issued a ban on American companies doing business with the NSO Group. But now The New York Times reports that someone disobeyed.

Some federal agency, which one is not clear as I record this episode, apparently worked outside the system to get access to an NSO Group product called Landmark, which allows you to keep tabs on a person's physical location at all times. It's essentially a tracking bug that you just install on someone's device. Well, you know, the physical location of the infected device is what it really tracks. As for how the target's device gets infected, I am not sure what the mechanism is. I don't know how it works, right? I haven't heard enough about Landmark to understand. If it's as insidious as Pegasus was, it won't even require the target to take any action at all, which makes it really scary. And then you just monitor that device's location. Apparently, this unknown federal agency was using Landmark to keep tabs on targets in Mexico. Who? I don't know. And for what purpose? I don't know.

The agency went to some trouble to acquire this tool. It used a dummy corporation called Cleopatra Holdings, which turned out to be a fake company, supposedly headed up by a guy named Bill Malone, a fake CEO. But the real go-between for this agency and NSO was a company called Riva Networks, which is a defense contractor located in New Jersey. Riva Networks in turn did business with a company called Gideon Cyber Systems. This company is a holding company, meaning it doesn't actually do anything on its own; it's just there to hold assets. And Gideon Cyber Systems is in turn owned by another company called Novalpina Capital. Novalpina, it turns out, is a majority owner of the NSO Group. So this circuitous series of connections is what allowed this federal agency to get hold of this forbidden product.

The Biden administration says that it was unaware of this activity. If that's true, it means this federal agency was directly going against the administration's wishes and was trying to hide its tracks in the process, so it has essentially gone rogue. Of course, it could be that this was a secret but sanctioned acquisition that the administration is simply claiming it was unaware of. It's possible they were aware of it; we just don't know, and all this skulduggery was in place just to obfuscate what was going on and to avoid detection. Only they were detected. I'm sure I will follow up on this story as more information becomes available.

Jumping over to Twitter and tick removal.

By that, I mean the check marks on verified Twitter accounts. I'm not talking about blood-sucking arachnids, so to my friend Shay, I apologize. I'm talking about checks, not ticks. Twitter has reportedly started removing the blue ticks on verified accounts that have yet to subscribe to Twitter Blue. We've been talking about this for a while: one of Elon Musk's directives was that that little check mark verification notice was no longer going to be a sign of a verified account. It was instead going to be a sign of a subscribed account. So if you want that check mark and you're an individual, you have to pay eight bucks a month for the Twitter Blue subscription, and that includes the blue check mark next to your name, which just means you're a subscriber. Organizations have to pay a lot more. They have to cough up a grand per month for the privilege of having that check mark there. And a lot of folks, including myself, have already said that we're not going to pay, and that the whole darned thing misses the point of verification in the first place.

It's not verification at all anymore. Further, this really is nothing but an attempt by Elon Musk to generate revenue after having alienated a good chunk of advertisers, and Twitter had previously depended almost exclusively on advertising for its revenue. Verification means nothing if you're just paying for it; like, it's not verification anymore. The whole point of verification was so that users would know that the account they were following legitimately belonged to whomever it claimed to be. So if you're following the account of a celebrity and there's a check mark there, you're like, okay, this is really them. A lot of people are saying, well, now Twitter says it's not going to allow impersonation, but how is it really going to enforce that? How many people are going to go get that check mark and then change their username so that it reflects a notable public figure? It's a huge mess.

Anyway, one of the voices that has criticized this move the loudest belongs to The New York Times. The newspaper, The New York Times. So now Twitter has stripped The New York Times of its check mark. Now, to be clear, there will be a big group of heavy-hitting Twitter accounts, ones that have lots and lots of followers, like the top accounts on Twitter, that are going to be able to keep their check mark without having to pay that monthly fee. For those, I guess the check mark will still mean that it's a verified source. You see how this all gets confusing and muddled. But I would have imagined that The New York Times would have been on that list. It is a prominent news source. I mean, there are other news sources that are on that list, like CNN and The Washington Post, but Twitter has pulled the mark off of The New York Times. Musk has also been in a rather public snit over The New York Times in the past few weeks, so it's very hard to avoid thinking that this is really a personal issue, not a business one. It feels like this is personal.

And I am biased. I have a lot of feelings about Elon Musk that are negative, and so, like, I am inclined to think that this is a very petty personal move on Musk's part. But I have to say I don't know that for sure. It's how I feel, but I have to admit that's just an opinion, and I could be one hundred percent off the mark. It's just hard for me to hold back on that thought. Anyway.

Lots of other news outlets still have their marks, including news outlets that have already said that they are also not going to pay the monthly fee, so that raises questions. I mean, why was The New York Times singled out when other news outlets that also said they weren't going to pay still have that check mark? One reason could be that, from what I understand, the process to remove those check marks is a manual one, so it's going to happen very gradually across the checked accounts that are on Twitter. So if that's the case, if people manually have to go and review each of these and then remove the ones that are not from subscribers, then yeah, that kind of explains things. But it does make it seem like The New York Times was being held out as an example here. I just know that as of this morning, I still had my check mark. I got the verified check mark years ago, but I know it's on borrowed time because I am not paying for it, and I imagine one day I'll just notice that it's gone, and probably by that time it will have been gone for ages, because while I check TechStuff's Twitter account pretty regularly, I very rarely go into my own these days. So yeah, it'll happen, and then maybe a week or two later, I'll figure it out.

Pour one out for Virgin Orbit, which had the backing of a billionaire, but it turns out a billionaire's support is still not enough to send payloads into space. All right, so Virgin Orbit was a company backed by Sir Richard Branson. Virgin Orbit's business was to send payloads into orbit using rockets that would fly aboard a specially outfitted commercial jet, essentially a seven forty seven. Once the jet reached a high enough altitude, it would deploy the rocket, the rocket would ignite its engines, and this would, at least theoretically, send the payload into orbit. This approach accomplished a few things. For one, there was no need for a launch pad, right? You didn't have to have a launch pad at Cape Kennedy. You just needed an airport capable of handling the commercial jet that was used as the mothership. That would free up Virgin Orbit to launch from places that don't have their own rocket launch facilities, like, say, the United Kingdom. And it would reduce the need for rocket fuel; you wouldn't need as much to get a payload into space. Of course, you were using a lot of jet fuel as well.
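To give a sense of how much (or how little) rocket fuel an air launch actually saves, here's a rough sketch using the Tsiolkovsky rocket equation. The delta-v and exhaust-velocity figures are round numbers I'm assuming for illustration, not Virgin Orbit's published specs.

```python
import math

def propellant_fraction(delta_v: float, exhaust_velocity: float) -> float:
    """Fraction of a rocket's initial mass that must be propellant
    to deliver a given delta-v (Tsiolkovsky rocket equation)."""
    mass_ratio = math.exp(delta_v / exhaust_velocity)  # m_initial / m_final
    return 1 - 1 / mass_ratio

VE = 3000.0            # m/s, assumed effective exhaust velocity (Isp ~ 305 s)
PAD_DELTA_V = 9400.0   # m/s, rough total delta-v to low Earth orbit from a pad
AIR_DELTA_V = 9000.0   # m/s, assumed savings from the jet's altitude and speed

print(f"From a launch pad: {propellant_fraction(PAD_DELTA_V, VE):.1%} propellant")
print(f"Air-launched:      {propellant_fraction(AIR_DELTA_V, VE):.1%} propellant")
# From a launch pad: 95.6% propellant
# Air-launched:      95.0% propellant
```

In other words, under these assumptions the fuel savings are real but modest; the bigger win is the operational one mentioned above: no launch pad required, just a runway.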

Unfortunately, Virgin Orbit's business was already struggling when, earlier this year, a flight from the UK that was meant to launch a payload into orbit failed. To be clear, the launch of the rocket failed. The aircraft was fine, but the payload was unable to reach orbit, and investors who were already uncertain about this business began to abandon ship. I mentioned in an earlier news segment this year that Virgin Orbit would possibly have to declare bankruptcy, and that is in fact what has happened. The company has officially filed for bankruptcy, and it's reportedly looking for a buyer, so there is a chance that Virgin Orbit will still live on, possibly under some other name and definitely under some other corporate governance, if that happens. Or it may end up being essentially liquidated as much as possible. For the majority of the staff who worked there, it's already too late, because last week the company announced it was going to be laying off eighty five percent of its workforce. Youch.

Now, layoffs aren't the only way that companies are trying to reduce costs right now. CNBC reports that Google is cutting down on amenities like fitness classes for employees. It's also going to expect Googlers to stick with their computers for longer, so employees will not be able to get laptop replacements as easily or as frequently as they have in the past.

Also, someone should tell Milton that he's best off guarding that red Swingline stapler with his life, because Google won't be handing out staplers to employees either. Those things are going to be like gold when Google society collapses and everyone turns into scavengers. So yeah, guard that red Swingline, Milton. Also, we probably know when Google society will collapse. My guess is it's going to happen on a Monday, because, as Google's memo to employees pointed out, the company quote "baked too many muffins on a Monday" end quote. Now, I gotta be fair, I'm giving that quote totally out of context just for the purposes of poking fun. What the document was actually saying was that the company's expenses are too large. They're spending too much money, particularly in an office environment where employees are not coming in five days a week anymore. They said the policies we had in place were for when everyone was coming into the office every single day of the work week, and now you're not doing that. So since you're not doing that anymore, we have to cut back on these things. And that's why you're starting to see a lot of the employee programs and benefits go away.

The question I have is how many of those benefits are going to return once the economy improves. I have a guess. That guess is pretty darned close to the figure of zero. The amenities Google had are the kind of things that you typically associate with startups, but they are also things that tend to get phased out when startups become massive corporations.

Now, Google has been a massive corporation for many years at this point, so the fact that it was hanging on to these amenities for so long was partly a sign of how competitive companies have to be in Silicon Valley in order to attract talent. Right? If you wanted the best, you needed to have lots of bells and whistles on top of really good salaries in order to attract them to your company and make sure that the competition didn't get hold of them. But now we're in a world where there's a sudden excess of talent out there, because companies everywhere are laying off thousands of people. So it's way less critical to make sure that the folks who are still at your company are being catered to. Right now, they're terrified about losing their jobs. It doesn't really matter if you get rid of all the stuff that costs money but kept employees happy, because where are they going to go? Everyone's laying everybody off. It's pretty grim.

Okay, we're going to take another quick break. When we come back, I've got a couple more news stories to talk about.

All right. We have an update on a story that's been going on for many years now.

So several years ago, from twenty fifteen to twenty sixteen, a guy named Owen Diaz worked as a contract employee for Tesla at its Fremont factory outside of San Francisco, and he brought charges against the company, saying that he faced racist attacks multiple times as he worked there, including from his supervisor, that there were people using racial slurs, and that it was a very adversarial workplace. He brought a lawsuit against the company and he won. He was found to be in the right, and Tesla was found guilty of allowing this environment to establish itself within the Fremont facility. Initially, the jury awarded him a staggering one hundred thirty seven million dollars, mostly in punitive damages. However, the judge in that trial felt this figure was, you know, a tad much, and reduced it down to fifteen million dollars. That's a bit more than one tenth of the original award, and fifteen million dollars is a lot of money, but it's obviously nothing compared to one hundred thirty seven million dollars.

Diaz decided to challenge that fifteen million dollar amount, and the matter went to a new trial for a jury to decide whether or not those damages reflected what he had experienced. Now, in this more recent trial, the matter of Tesla's guilt was not at issue, because that had already been decided in the first trial. So we start from the position that Tesla was in fact guilty of the things Diaz accused the company of, and the only thing that was really at stake here was the amount of money Tesla was going to have to pay mister Diaz. The trial lasted five days, and then the jury came back with a new figure of three million dollars, which is one fifth of fifteen million. So I'm sure Diaz is really frustrated and disappointed in this result. I'm not going to pass any judgment here. I'm not going to speculate on what I would have done, because I'm fully aware I don't face racist oppression.

So it's impossible for me to say whether I would have accepted that initial fifteen million dollars that the judge had already knocked down, or if I would have gone out to seek more in damages. I don't know what I would have done. I don't know what mister Diaz has experienced or gone through. Mostly, I came away from this story really, really, really sincerely hoping that Tesla has taken measures to make certain that employees never face that kind of environment again, and that if any instances of racist activity arise in the future, the company takes swift action to address them. To me, that's really important, more so than how much money Tesla is going to have to pay. I do feel for mister Diaz. This is not the outcome that he wanted. I feel pretty comfortable saying that.

Finally, Reid Wiseman, Christina Hammock Koch, Victor Glover, and Jeremy Hansen. These four people will crew NASA's Artemis II mission. They will fly aboard an Orion spacecraft, and they will journey from Earth out to the Moon. They'll pass behind the Moon, and then they'll return home. This mission is the predecessor to NASA's Artemis III, which will actually see astronauts set foot on the Moon's surface for the first time since nineteen seventy two.

So while these particular astronauts will not touch down on lunar firma, they will journey further away from the Earth than any other human has in several decades. Of the four astronauts, three have previously been to space. Only Jeremy Hansen will be taking his first spaceflight with this mission.

Artemis I was an uncrewed mission, meaning there were no people aboard the Orion spacecraft. It was a test flight for NASA's Space Launch System, or SLS, also known as the Big Honkin' Rocket, and the Orion spacecraft. That was originally supposed to happen back in twenty sixteen. It finally happened on November sixteenth, twenty twenty two, so about six years later than the original plan for Artemis. That mission was a success. The Orion spacecraft, with no humans aboard, entered a distant orbit around the Moon and stayed there for several days. Orion returned to Earth on December eleventh, twenty twenty two. It landed in the Pacific Ocean, and recovery crews were able to retrieve the spacecraft. So Artemis I is in the books. When can we expect Artemis II to launch? Well, right now the plan is to aim for a November twenty twenty four launch, though, as we've already seen with lots of different space missions, there's no guarantee that NASA will be able to make that date.

Hopefully we will. That's the earliest NASA plans to be able to launch, so it may be significantly later than that, as we have seen. It's difficult, you know. Keep in mind also that we're talking about, you know, elections and stuff, so that can complicate things, because as much as we don't like to bring politics into the space program, politics definitely affects the space program. Still, in the not-too-distant future, we may once again have people staring at the far side of the Moon in person, which I have to admit is pretty darn cool.

Okay, that wraps up this news episode of TechStuff. Hope you are all well. If you'd like to reach out to me, you can do so on Twitter. The handle for the show is TechStuffHSW. There's no tick mark for that one, but trust me, it's for the show. And if you would prefer, you can download the iHeartRadio app. It's free to download, free to use. If you navigate over to TechStuff by typing TechStuff into the little search field, it'll pull up the result. You can pop into the podcast and you will see a little microphone icon. If you click on that, you can leave a voice message up to thirty seconds in length. Let me know what you would like to hear in the future, and I'll talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
