Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for Tuesday, May thirtieth, twenty twenty three. Nvidia CEO Jensen Huang spoke at the Computex forum in Taiwan yesterday and said something that I'm
sure has a lot of programmers anxious. Huang said that due to how quickly generative AI evolves, we're entering into an era in which essentially anyone can be a programmer. It won't require you to have studied computer languages or even computer science. All it will take is a sufficiently sophisticated AI model to take your prompts and then turn that into a well designed program. Huang unveiled a platform called the DGX GH two hundred, essentially a supercomputing platform.
It's designed to help build the next generation of generative AI models. This actually reminds me of the fictional supercomputer Deep Thought in The Hitchhiker's Guide to the Galaxy books, which explained that it was not powerful enough to provide the question to life, the universe and everything. It gave us the answer, but not the question. However, it was powerful enough to build a computer that could do that. That seems to be what this is indicating. It's a
supercomputer meant to build better AI. Anyway, Huang's keynote seemed to indicate that not too long from now, you won't need a background in programming in order to be a programmer, or at least that's how the articles covering his keynote seemed to frame it. I'd like to think that generative AI will help programmers be more productive and more efficient, and that AI will give them the tools to build code that contains fewer errors, or to
check for mistakes as they're coding, that kind of thing. So, in other words, my hope is that this isn't a step toward invalidating an entire career path. Or, and this is really the cynical part of me, I hope it's not an attempt to justify hiring relatively unskilled employees at a much lower salary than what it would cost to bring on a qualified programmer. Right, that's the fear:
that if you've dedicated your time and energy, you pursued an education in computer science and computer programming, and you've built the skill set that would normally guarantee you a chance of landing a lucrative career in the field you love, and then you find out, oh no, you're overqualified. We just need someone who can talk to this computer and make the thing that we want. It's not good. So my hope is that a lot of the news that covered this was doing so in a way that was
perhaps not reflective of what Huang was actually saying. I say that because I didn't get a chance to actually watch his keynote, so I'm not certain how he worded it. I can only react to the way it's been reported. Reuters reports that Deep Media, which is a company that works to identify and track the proliferation of deep fake videos and other deep fake content online, has indicated that there's been a pretty significant jump in the number of
instances of deep fakes this past year. So, according to Deep Media, there are three times as many deep fake videos circulating this year as there were in the same span of time back in twenty twenty two, and there are eight times as many deep fake voice recordings. I'm not sure how many examples we're talking about with that, because, keep in mind, this all depends on how many were circulating in twenty twenty two.
They just said it's eight times as many. So if only one deep fake voice recording had popped up in twenty twenty two, that would just mean that there were eight that popped up this year, So the details matter.
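Just to make that base-rate point concrete with a throwaway sketch, here's a quick bit of Python. The baseline numbers below are invented purely for illustration; Deep Media didn't publish them in this report.

```python
# The same "eight times as many" headline can describe wildly
# different absolute counts. Baselines below are made up purely
# to illustrate the base-rate point.
MULTIPLIER = 8

for baseline_2022 in (1, 1_000, 100_000):
    count_2023 = baseline_2022 * MULTIPLIER
    print(f"2022 baseline {baseline_2022:>7,} -> 2023 count {count_2023:>9,}")
```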
This is also true, by the way, when you hear about the growth of any business, but specifically within technology. When you hear the growth being expressed in percentages, you really need to say, all right, but what are the actual numbers? Because if the actual numbers are very low, a huge percentage in growth can still mean a pretty
low number, right? That's important. Anyway, Deep Media estimates that by the end of this year, there will be around half a million deep fake videos and voice recordings shared across social media, and as you probably suspect, a lot of those will likely center around politics and misinformation. As the deep fake generators become more sophisticated, it can be a challenge for a normal human type person to tell the difference between real videos and deep fake videos.
There are often indicators that you might be able to tell if you're looking on a large enough screen and a high enough resolution, but if you're like watching little videos on your phone, you might not notice. There are detection tools that are more effective for spotting really good deep fakes out there, but you can imagine the damage that can be done with this type of technology, particularly for people who are already predisposed to believe certain ideologies.
If there's a video that seems to reinforce that ideology, they may not take the time to question the authenticity of that video. Various companies in the generative AI space have been working on different approaches to mitigate this issue and prevent the misuse of the technology, but these
are nowhere close to being comprehensive or fully effective. Already, there have been a few cases in which folks in the political sphere, I shall not name names, but some prominent people in politics, have shared deep fake videos, sometimes with like a half hearted disclaimer along the lines of, I don't know if this is real or not. So, just a side note: if you don't know if it's real, just don't share it. But you know, no,
I get it. You know, the folks who share this kind of stuff typically don't really care if it's real or not. They're just looking for the effect. I mean, if it's real, it's better, but it doesn't really matter, because they're just looking to stir up a group of people. All I can say is that the use of critical thinking is more important than ever, but that employing critical thinking also takes work.
You have to be actively working and engaged in critical thinking. You can't just, you know, lean back and rely upon it to kick in. Because goodness knows, I've been guilty of being too lazy to use it in the past. It's happened to me. I talk about critical thinking all the time, but I'm also guilty of not employing it on occasion. I have to think about it, I have to actively do it, and we all need to try harder, because it's getting tricky to tell the difference between
fact and fiction. There are a lot of tools out there and a lot of bad actors out there that collectively can start to push false narratives and to trick us. And it's across the spectrum. It's not like it's just one group doing this. There are lots of different people with different motivations doing the same sort of stuff, and we have to be on the lookout for it. And now for a piece about the consequences of relying on
generative AI. So, a lawyer named Steven Schwartz apparently used ChatGPT while doing some legal research on a case that he was working. ChatGPT provided some background research that Schwartz apparently then used in filing this case. But there was a truly huge problem. ChatGPT cited other legal cases in order to provide support for Schwartz's legal argument, except those cases weren't real. They had never happened. ChatGPT created, or, if you want to use the parlance
of the AI times, it hallucinated these cases. And I've talked about AI hallucinations not too long ago on this very show, about how generative AI models are essentially using a complicated statistical model to generate responses. This model draws on a lot of archived information, but sometimes it just invents stuff, because essentially it is asking what word would most likely follow this word. Then you get a response that sounds perfectly cromulent, as the Simpsons would say, but is in fact hogwash, or gibber jabber, or balderdash, or just plain fake.
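If you want a feel for that "most likely next word" idea, here's a deliberately tiny sketch. Real models like ChatGPT use neural networks over subword tokens rather than a lookup table, and every word and probability below is invented for illustration.

```python
# Toy next-word model. Real LLMs use neural networks over subword
# tokens; this lookup table and its probabilities are invented
# purely to illustrate the "likely next word" idea.
import random

next_word_probs = {
    "the":   {"court": 0.5, "case": 0.3, "filing": 0.2},
    "court": {"ruled": 0.6, "found": 0.4},
    "case":  {"cited": 0.7, "settled": 0.3},
}

def generate(start: str, max_words: int = 6) -> str:
    """Repeatedly sample a plausible next word. Fluent output is not
    verified output: nothing here checks whether the 'cases' the
    text names actually exist."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the case cited" -- plausible-sounding, never fact-checked
```

That absence of any fact-checking step is the whole problem. And as you might imagine, presenting fake court cases as if they are a legitimate precedent to your own case is not looked upon kindly by the court. If you claim a precedent, then you should expect the court to look into the precedent to make sure that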
what you're saying is accurate. And when the court did do that, when they double checked Schwartz's filings, they found that the numerous cases Schwartz presented, as suggested by ChatGPT, didn't actually exist. And when pressed, Schwartz said he was not aware that ChatGPT could just invent stuff. He assumed that everything ChatGPT presented came from actual, real information that had been stored somewhere. Now, the judge has ordered a hearing a few weeks from now to
quote unquote discuss potential sanctions. Woof. So on the one hand, yeah, absolutely, it would be unthinkable to let people submit fake evidence to support their arguments and then receive no repercussions afterward. On the other hand, I mean, the hype around ChatGPT and other generative AI models is absolutely painting an inaccurate picture of what they can do. Though, honestly, it really doesn't take that much work to find out that
these AI systems are flawed. It's just that I could understand why someone would put too much stock in ChatGPT's performance because of the way it has been hyped. Again, it really doesn't take that much work to find where the problems are. So I can't give Schwartz a pass here. I can just say I can understand why he would think, oh, this is a valid tool for me to use, and there shouldn't be any problems. I can kind of understand that.
But if he had taken even just a little bit of effort, he would have seen that relying so heavily on ChatGPT without, you know, fact checking it would have been foolish. Okay, we've got a lot more news to cover, but before we get to that, let's take a quick break. So we're back, and we've got another open letter, actually just a short warning, from various AI experts about the potential dangers of AI. It is very short. It is to the point. I'll actually read the whole
thing because it isn't long. Quote: Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war. End quote. That is it. That's the warning, putting AI on the same level as things like pandemics and nuclear war as a potential threat to the human race. A lot of high profile AI experts have already signed their names
to the statement. That includes Sam Altman, the CEO of OpenAI, who's been sending some kind of mixed messages about AI regulation in the United States versus in the EU, and then the CEO of Google's DeepMind added their signature to it, among other experts. So, in other words, this warning has the backing of some very influential, educated and knowledgeable people who are in the field of AI.
Of course, that's assuming that the signatures are legit, because we did have a case not that long ago where there was an open letter warning about AI that contained signatures from people who subsequently said they never signed it or had even heard about it. So assuming that this is all legit, it sounds like we should really pay attention, right? I mean, these are people who are working in that field.
That being said, I actually worry that some folks are going to interpret this as meaning AI is on the verge of becoming super intelligent and self aware or something. As I mentioned earlier, generative AI leans on statistics and training to create stuff. It's not thinking in the same way that humans do, and I worry that if people interpret it otherwise, then folks will be focusing on trying to solve the wrong problem. AI definitely has the potential to
cause harm. Don't get me wrong. AI is potentially very harmful, but it doesn't need to be, you know, a brainiac in order to be harmful. We can just look at the instances of vehicles that have been operating in autonomous modes that subsequently got involved in fatal car accidents. That proves AI can be harmful and it doesn't need to be super intelligent to do so. So I guess what I'm
saying is I believe that this warning is warranted. I do think AI poses a threat and that we need to have rules and regulations and approaches to AI that are responsible and are least likely to cause harm. But, you know, trying to figure out how to do that by framing the problem correctly, that's going to be a big challenge, right? You need to make sure that you understand what the actual problem is and not conflate it with something like super intelligence, in order to
create the proper framework to actually rectify the issue. And in fact, the people behind the statement said that the whole goal of making something short and blunt was to avoid suggesting an approach that would then just devolve into an argument over the best way forward. But on the flip side, I would argue, well, yeah, but if we're not suggesting approaches, then what good is the warning? Obviously we're not going to stop
developing AI. I mean, even the people who signed this warning are actively promoting and developing AI right now. There's like an AI arms race going on in the world of computer science. So is it just so that you can fall back and say, well, I know we blew up the world, but we warned you back in twenty twenty three? I don't know. Maybe I'm just a little too cynical about how these experts are viewing the dangers of AI without offering any actual solutions to
address the problem. Moving on from AI, Japan's space agency JAXA is working with private companies in Japan to do something pretty ambitious. The plan is to launch energy-collecting satellites into space as early as twenty twenty five, and these satellites will collect solar energy, convert that energy into microwave beams, and then beam that energy down to receiving
stations here on Earth, which is entirely feasible. Questions remain regarding how much energy these satellites will be able to collect and transmit, but the point of this project is to explore whether or not we can make solar collection in space and microwave transmission a part of a larger renewable energy strategy. For a country like Japan that might have limited space for things like terrestrial solar arrays, this
could have a lot of appeal. I mean, it all depends on how large the receiving station has to be. I would imagine it has to be quite big, so that you're not having a super concentrated beam of microwave energy. But it's neat. Like, if you can get this to work and become a supplemental part of your energy strategy, then it could be really useful, especially since, with the positioning of satellites, if you have a whole network out there, you can be collecting solar energy twenty four to seven,
so weather will never be an issue. I mean, solar weather could potentially be an issue if you have, like, a coronal mass ejection or something like that, but terrestrial weather wouldn't be an issue. So a lot of the complaints around solar, the fact that, you know, you're only collecting when it's daytime and only when you have a clear view of the sun, that ends up being negated. But you still have questions of, all right, well, how efficient is this going to be? How much energy are you going to lose converting from solar to microwave? You know, how much is lost in the transmission? These are all questions that need to be answered as well. But this is a cool step toward answering those questions.
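Just to make that efficiency question concrete, here's a tiny back-of-the-envelope sketch. Every stage efficiency below is a placeholder guess, not a figure from the JAXA project; the point is simply that the losses multiply.

```python
# Back-of-the-envelope sketch: end-to-end efficiency is the product
# of every conversion step. All stage efficiencies here are
# placeholder guesses, not figures from the JAXA project.

stage_efficiencies = {
    "solar panels (sunlight -> electricity)": 0.30,
    "electricity -> microwave beam":          0.80,
    "beam transmission through atmosphere":   0.90,
    "rectenna (microwave -> electricity)":    0.80,
}

overall = 1.0
for stage, eff in stage_efficiencies.items():
    overall *= eff
    print(f"{stage:<42} {eff:.0%}  (cumulative: {overall:.1%})")

# With these made-up numbers, only about seventeen percent of the
# collected solar energy reaches the grid, so the around-the-clock
# collection has to outweigh the stacked conversion losses.
```

Will it pan out? I don't know. I hope it does, though obviously it also opens up other things that you have to consider, like the potential for more space junk. So there are always other things you have to bring into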
the equation. But I think it's a cool project. Now, over this past weekend, I actually went to the amusement park Six Flags Over Georgia for the first time in many, many years, and I have to say, the park has digitized a lot of operations since the last time I went there. But at a couple of other Six Flags parks, not the Georgia one, but a couple of other ones, one in California and one in New Jersey, that digitization is
ramping up a bit. And that's because those two parks have partnered with Amazon and Coca Cola to incorporate one of Amazon's grab and go cashierless shop concepts into the parks. So just as you would in one of Amazon's own stores, you would be able to walk into one of these places, pick up a product, and then stroll right out, and you would be billed automatically through the Amazon system on
the back end. So products will include stuff like Coca Cola products, just as you would imagine, plus, like, snacks and other stuff. The New Jersey location says it will include necessities like sunblock and rain ponchos, that kind of thing. And it's interesting to hear that the parks are actually doing this, because Amazon has actually shut down several of its own locations, I think like eight total,
in cities like New York and Seattle. However, it wouldn't surprise me if the plan all along was to attract partners like Six Flags, where Amazon can serve as the back end operations, but someone else is in charge of, you know, restocking and cleaning the place and that kind of thing. Over the weekend, Toyota entered a hydrogen powered
racing car into an endurance race in Japan. The company said it looked at the race as a sort of testing ground for the technology and an opportunity to uncover areas of improvement that wouldn't necessarily pop up in a laboratory setting, which is understandable, right? Like, in a lab you can test technology quite a bit and see where there may be areas that you need to focus on to fix things, but it's not until you really get something out in the real world and really put it to
the test that some problems will become evident. And let me tell you, an endurance race, a twenty four hour endurance race, that's a heck of a test for a technology. Toyota has actually been working on hydrogen fueled cars for a while, and in fact has even fielded hydrogen fueled cars in various races, but the big difference in this most recent vehicle is that the car was using liquid
hydrogen rather than gaseous hydrogen. Now, liquid hydrogen comes with a whole bunch of challenges, like you have to keep the hydrogen at a very low temperature to keep it liquid. But it also means that you end up with a higher energy density fuel, right? Because liquid hydrogen packs more energy per volume than gaseous hydrogen does, which is an important consideration for an endurance race. You know, you want
to have a fuel, or an energy rich fuel I should say. I couldn't find any information on how well it performed. The race happened just this past weekend, and the articles I did see just said, you know, that it did it. They didn't tell me how well it did. Critics have long argued that Toyota has been dragging its
feet on developing electric vehicles. Obviously, that's where the automotive industry is really shifting, and as a result, because Toyota did not jump onto that particular approach, it is now lagging behind competitors as it tries to make up
for lost ground. Toyota, however, has long argued that it's going to take longer to transition to pure electric vehicles than most people expect and as a result, in order to bring down carbon emissions while also transitioning to electric vehicles, Toyota has said, we need to invest in alternatives to just pure electric vehicles. This has been Toyota's message for years and years, with fuel cell vehicles, hydrogen powered vehicles,
various hybrids, that kind of thing. There are critics who say that Toyota has made the wrong call, that the company is just trying to justify its approach to a different branch of vehicle development, and that it's sort of a sunk cost fallacy: it's gone so far down that road that it can't come back. Although the new leadership at Toyota has been a bit more on the pro-EV side than previous leadership has. It's interesting. There are a lot of challenges associated with hydrogen based vehicles, including how to harvest pure hydrogen without using too much energy in the process. You may know hydrogen is the most plentiful element in our universe, but it binds with stuff, which means in order to get hydrogen, we frequently need to expend energy to break those molecular bonds. And if you are spending more energy to separate the hydrogen from that stuff than you're getting out of the hydrogen itself, then you're working at a net loss, and you may need to just reevaluate what you're doing. It may turn out that there's a different thing you can do where you just eliminate that step and waste less energy. I'll have to do another episode in the near future to kind of go down the list of pros and cons of things like a hydrogen based economy.
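To sketch that net-loss math in a few lines, here's a minimal example. Hydrogen's lower heating value really is around one hundred twenty megajoules per kilogram, but the production-cost figures below are illustrative assumptions, not measured values.

```python
# Minimal sketch of the net-energy question: if producing hydrogen
# costs more energy than the hydrogen gives back, you're operating
# at a loss. Production costs below are illustrative assumptions.

HYDROGEN_ENERGY_MJ_PER_KG = 120.0  # roughly hydrogen's lower heating value

def energy_return_ratio(energy_out_mj: float, energy_in_mj: float) -> float:
    """Energy delivered per unit of energy spent; below 1.0 is a net loss."""
    return energy_out_mj / energy_in_mj

# Hypothetical energy needed to produce 1 kg of hydrogen, in megajoules.
production_costs = {
    "efficient electrolysis": 170.0,
    "lossy electrolysis": 240.0,
}

for method, cost_mj in production_costs.items():
    ratio = energy_return_ratio(HYDROGEN_ENERGY_MJ_PER_KG, cost_mj)
    verdict = "net loss" if ratio < 1.0 else "net gain"
    print(f"{method}: {ratio:.2f} MJ out per MJ in -> {verdict}")
```

With numbers like these, hydrogen is an energy carrier rather than an energy source, which is exactly why the harvesting step matters so much.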
A few decades ago, that was a really big thing, at least in rhetoric in the United States, and we haven't really seen it mature and turn into a real technology here in the US, and I thought it might be a good idea to kind of do a follow up and talk about what are those pros and cons and is it really a viable approach and a viable alternative to your traditional internal combustion engines. Keeping in mind
that a hydrogen combustion car, like the one Toyota raced, still uses combustion. It's just that when hydrogen combusts, you're not getting the same byproducts as you would if you were burning gasoline. A fuel cell vehicle, by contrast, skips combustion entirely. Okay, I've got a couple more things I want to talk about. But before we get to those, let's take one more quick break. We're back. So the United States is not the only country planning to put people on the Moon again. NASA obviously has Project Artemis. Artemis one, which was a test flight of the
spacecraft and the launch vehicle. That's already happened, and Artemis one was a success. Artemis two will see a crew of astronauts launch off Earth and then circle around the backside of the Moon, not the dark side, because the dark side changes, but the backside, the far side of the Moon, and then return to Earth. They are not actually going to touch down. And then Artemis three is the mission where astronauts would land on the Moon again.
That one currently is projected for twenty twenty five. I'll be shocked if we make that goal. But meanwhile, China's Manned Space Agency announced that it plans to land Chinese astronauts on the Moon by twenty thirty. To accomplish that goal, the agency is developing a new launch vehicle and a new spacecraft. Again, that's a really aggressive goal. If the launch vehicle and the spacecraft are still in development, it often can take a very long time to get that
kind of stuff buttoned up. Also, it will be interesting to see if lunar real estate becomes the next big land grab. You know, there are obviously space treaties that are meant to prevent such things, but I would be shocked if we don't see some rather aggressive moves to claim lunar landscape for various purposes. So we'll keep an eye on that too. I can't wait for it to
become some sort of Heinlein novel. All right, now we're toward the end of the episode, and occasionally I like to end episodes with some suggested reading material for y'all. So rather than go through all of this as a news item, I thought I would talk about a few articles, three of them in particular, that I think are worth your time to read. Also, these articles tend to fall more into investigative journalism or interesting experiences and less on the news side. First up is an article in
Ars Technica by Dan Goodin. Ars Technica, phenomenal resource. If you are not visiting Ars Technica on the regular to read up on their journalism and tech, you need to
change that, because Ars Technica is a fantastic resource. The article I'm referencing is called Inner workings revealed for Predator, the Android malware that exploited five zero days. So, a zero day being an exploit that the company that's behind the software is unaware of, and that is in there from the very beginning, and if you find out about it, you can exploit it to your heart's
content until someone notices what's going on. So the article details a dramatic story about how companies specializing in a double whammy of discovering and exploiting vulnerabilities as well as turning mobile phones into remote surveillance devices are making an awful lot of money selling those tools to very dangerous customers who then employ the technology to target perceived threats. We've seen this story play out in other areas as well.
There was obviously the case of the Israeli tech company that was selling an exploit for iOS systems that took advantage of a vulnerability in iMessage. Very similar case, and this article goes over something like that, but one
that affected Android devices as well. Highly recommended. CNBC has an article titled Chinese apps remain hugely popular in the US despite efforts to ban TikTok, and this one touches on some stuff that I have mentioned on Tech Stuff in the past, namely that TikTok represents a very high profile example of a problem that actually goes well beyond TikTok, and perhaps it might be better to take a step back to consider whether or not TikTok is kind of
standing in as a scapegoat for a much bigger problem. And it's a problem that even expands beyond the possibility of a country like China harvesting all this information, because obviously we've got all these other huge companies that are in the United States that are also harvesting information, and that maybe the problem isn't just with who is getting it,
but the fact that it's being done full stop. The piece also points out something that a lot of others have been saying for a while, that a lot of the TikTok suspicion is being fueled by companies, primarily Meta, that would stand to benefit tremendously if TikTok were to go away. So knowing that Meta has a vested interest in TikTok dying helps kind of put all this into
context as well. That's another reason why TikTok is so prominent in this discussion because you've got companies that have a lot of money that are very eager to support the narrative that TikTok is a danger. That's not to say that TikTok's not a danger. I'm not saying that. I'm saying it's like, let's do a laser focus on this one instance and ignore the larger problem that remains
unaddressed as long as we're only focusing on TikTok. Finally, the third article I want to recommend is by Maxwell Strachan, and my apologies for the pronunciation of your name, Maxwell, I'm sure I butchered it. But Maxwell has a piece on Motherboard titled I Asked ChatGPT to Control My Life and It Immediately Fell Apart. Now, this is a pretty amusing story about Maxwell experimenting with ChatGPT to
create a daily schedule. Like, Maxwell just highlighted the things that he needed to do and wanted to do, and asked ChatGPT to tell him how to do it all. And it highlights a few interesting things, including how OpenAI is trying to build in guardrails to prevent bad stuff from happening, or at least to prevent
the optics from going bad. So, like, ChatGPT saying, hey, autonomy is really important and you shouldn't just hand it over to someone, which, you know, may or may not have been a legitimate and earnest statement, all the way to how ChatGPT has trouble reasonably meeting all of Maxwell's daily goals. If you've played The Sims, you know how frustrating it is. There just aren't enough hours in the day to do everything you need to do plus
everything you want to do. Turns out AI has that same sort of problem. So yes, I recommend those three articles. Check those out. All right, that's it for the news for Tuesday, May thirtieth, twenty twenty three. I hope you are all well and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.