Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for Thursday, June fifteenth, twenty twenty three. First up, Google leadership has reportedly warned employees against using chatbots to do stuff like, you know, organize information that could include sensitive or proprietary data,
and that also includes Google's own chatbot, Bard. And you might think it's a bit concerning that a company that has developed an AI chatbot, and is actively marketing that chatbot to business customers, has now warned its own staff against using such tools in the first place, for everything from organizing information to developing code. I also think that's concerning. Google's messaging has been that the company wishes to remain transparent, and that it acknowledges that these
tools are far from perfect. Not only can they generate responses that are unreliable thanks to AI hallucinations, they also have the potential to incorporate any information that you submit to them. So, in other words, if you're using an AI chatbot to help you create a presentation meant for an internal meeting, and that data includes stuff that's not meant for public consumption,
it's possible the chatbot could essentially absorb that data and, who knows, maybe populate some future response with information you provided to it, to someone else, perhaps a competitor. We've already seen how an unintentional error could compromise information you shared with a chatbot. ChatGPT famously had a glitch in which users were able to see past conversations that
other people had had with ChatGPT. Interestingly, Google does offer a version of Bard to commercial customers that, for a price, will keep conversations strictly on the DL; that is, Bard won't incorporate data entered into such interactions into the larger public database of information. I think the story really reinforces the fact that Google rushed into the AI space, felt pressured by the launch of ChatGPT, and had not intended to go to
market so soon with Bard. But that's already known. The fear is that tools like ChatGPT could potentially spell disaster for traditional web search, and so to Google, the emergence of generative AI represented an existential threat. With Google's business so heavily dependent upon search advertising, a hit to the search business would be potentially devastating, and so
the company barreled into launching its own AI chatbot. Whether this ultimately keeps Google's business safe remains to be seen, but based upon the company's warning to its own employees, I'm not confident that it's the best tech to push out worldwide. I guess Google's former motto, don't be evil, is now do as I say, not as I do.
Věra Jourová, the European Commission's Vice President for Values and Transparency, is calling for EU legislators to update the Code of Practice on Disinformation to include new rules about generative AI. She is calling for tech companies in the space to create guidelines and rules that will protect EU citizens against misinformation and disinformation created by AI, and that could include things like labeling when content was generated
by AI in the first place. According to pcgamer dot com, she was asked if the new rules would have a negative impact on freedom of expression. To that she replied, quote, I don't see any right of machines to freedom of expression, end quote. That's an interesting point, saying that, well, machines don't have that right. They're not people. The EU is also a region where, in the past, advocates have argued that legislators should consider whether robots have a right
to personhood. And I know robots and AI chatbots are different things, but they start to bleed together awfully quickly. It might sound a bit premature to hold conversations about whether or not robots should be treated as people; to be clear, we are a very long way away from machines that have a sense of self. But part of this call for such consideration was to help lay out clear rules as to who should be held responsible if a machine operating under artificial intelligence causes harm, either
to people or to property. Should the manufacturer be responsible in that event? Should the programmers? The end user? These are big questions, and they are relevant in a world where we have stuff like vehicles that are operating at some level of autonomy. If there's an accident, who should be held responsible?
But back to Jourová's point. She believes that the Code of Practice on Disinformation needs updates, specifically to address the threat of generative AI, and that any restrictions on that communication can't
be considered a violation of free speech. I'm not sure that argument would actually fly here in the United States, not because there's a strong legal basis to provide freedom of speech to machines (there's not), but because such speech could be seen as an extension of a corporation's communications, namely the company responsible for creating the AI in the first place. And as I'm sure most of you all know, here in the United States, corporations are legally considered to
be people, complete with the right to expression. So it will be interesting to see where this goes from here, whether it will progress in the EU, and how companies and the government in the United States might end up shifting as kind of a result of that. Now, here in the US, there is a bipartisan effort to alter a different law to make exceptions for generative AI. This
time we're talking about the infamous Section two thirty. Now, generally speaking, this law limits the liability of web-based platforms for the stuff that their users are posting to those platforms. So, in other words, because of Section two thirty, if someone were to post illegal material on Facebook, Meta would not be held responsible for hosting that information. Now,
this rule has its own limitations. Platforms are supposed to take reasonable steps to address illegal material, and there are further questions as to what role the platforms actually play in disseminating information, including misinformation and illegal material, because obviously these platforms have their own recommendation algorithms. So there are questions about, well, if a platform is promoting something, then doesn't that mean the platform is at least partly accountable
for it? But generally speaking, platforms enjoy a great deal of legal protection regarding the stuff that people post to them, which has vexed both sides of the political aisle here in the United States for very different philosophical reasons. Now, Democrat Richard Blumenthal and Republican Josh Hawley have proposed a bill that would essentially say content from generative AI exists
outside the protections of Section two thirty. So if this bill were to become law, there would be a legal foundation to bring lawsuits against platforms that allow harmful generative AI content on them. That includes stuff like deepfake
videos or AI impersonations of people's voices. So if this did become law, and someone uploaded a deepfake video that appears to show you committing illegal or awful acts, you would have the legal foundation to bring a lawsuit against not just the person responsible for creating the deepfake, but against, say, Meta for allowing it on the Facebook platform in the first place. I imagine the various platforms out there will have a lot of objections to this proposal.
Most of them have limited, if any, involvement with actual generative AI as it stands. And it also raises the question of what makes AI-generated harmful material different from harmful material created by a person. Right? If Meta can't be held responsible because Jimbo posted illegal material, but it can be held responsible if JimboBot posted the illegal material, what's the reasoning behind that? Why is one allowed and
one not? Or why is one protected and one not? Now, I suppose one major difference you could argue is that you could create a whole lot of harmful material using AI a whole lot faster than if you went, you know, the more bespoke malicious content route, where the content was handcrafted to be evil. Anyway, this is still in the bill phase. It will have to move through a lot of different steps if it ever is to become law. There's no guarantee that that will happen. But it is interesting that
it's a bipartisan proposal. Do you need a get-rich-quick scheme? Well, how about creating an AI-centric company? The Financial Times reported on a new startup in the EU called Mistral, which is like a month old. And you might wonder, what is Mistral? Right now, it's a company that has about a dozen staff and no product. It's also a company that received more than one hundred million dollars in seed funding over the last month. So you've got yourself a small staff, you have nothing to sell.
What could possibly make this company worth that much of an early investment? Well, Mistral's goal is to produce its own large language model. That means it would be in competition with the likes of companies like Google and OpenAI, among many others. And it shows how the AI gold rush is still going strong, that lots of people are betting big on AI having a huge impact moving forward.
For that to extend to investing in small startups early on, when you've already got established players like Google and OpenAI out there, is pretty remarkable. And the investors are folks who are heavy hitters, including former Google CEO Eric Schmidt. Now, I should add, it's not like Mistral just has a bunch of folks who don't know what they're doing, a bunch of people who said, we're going to make some AI, but who have no background. That's not true.
In fact, the CTO for the company is Tim Lacroix, who cut his teeth over at DeepMind, the AI-focused subsidiary of Alphabet, which, in case you've forgotten, is Google's parent company. So I suppose that knowing the expertise of the team in place goes a long way with raising expectations among investors and gives them the confidence to pour that kind of huge money into a small company that doesn't have
anything to show for it so far. All right, we've got some more news items we're going to cover in this episode, but first let's take a quick break. We're back. So TechCrunch reports that Twitter's offices in Boulder, Colorado are headed toward eviction. The Chicago-based company that owns the office space sought approval to evict Twitter after the
company failed to pay three months' worth of rent. In fact, according to TechCrunch, Twitter had a letter of credit worth nearly a million dollars with this landlord and simply was drawing upon that letter of credit to cover the rent month after month, until the credit ran out back in March of this year. Now a judge has ordered the sheriff's office to assist in the eviction before the end of July. We've heard time and again that part of Twitter's cost-saving strategy was, you know, to
just stop paying vendors and rent and stuff. I'm sure this move didn't come as a complete surprise to the company when it was told that its offices were going to get evicted. I'm also pretty sure that after all the rounds of layoffs, there probably aren't that many Twitter employees left in Boulder to begin with. But it's another ugly chapter in the post-Elon Musk takeover Twitter story. The United States Federal Communications Commission, or FCC, previously approved
a labeling rule that applies to broadband service providers. Now, this rule is meant to make the various elements of a customer's bill more transparent, so that consumers can actually see what they're really paying for, like how much of the bill is going to cover the base service versus government fees versus, you know, weird service fees that don't seem to cover anything other than padding the bill.
And Comcast, a ginormous broadband service provider that also owns lots of other stuff, including NBCUniversal, has filed a request to drop some of the labeling requirements, stating that quote, two aspects of the Commission's order impose significant administrative burdens and unnecessary complexity in complying with the
broadband label requirements, end quote. Now, I'm not sure that arguing that your fees are so complicated that explaining them is a hardship is the right way to go, but anyway, the purpose of the rules is to make it harder for broadband providers to obfuscate how much consumers will actually
pay when they sign up for service. So the argument goes that providers will often run promotions that will make it seem as if customers will pay relatively low monthly bills, but then when it actually comes time to pay the bill, the customer will see all sorts of different fees piled on top of their basic service, which inflates the amount significantly.
So the rules are meant to force providers to be more upfront about such things, and Comcast is essentially saying this is too hard. Now, to be somewhat fair to Comcast,
some of the rules do get pretty involved. For example, providers have to keep a record of when they provide labels to customers through quote unquote alternate sales channels, and that can include stuff like if you were to have an interaction with someone at a kiosk, like a Comcast kiosk in a mall, or if you were to call up a customer service rep and ask
to be told the label over the phone. All of those would count as an alternate sales channel, and Comcast is supposed to keep a record of every single one of those. And Comcast argues that the company has millions of interactions with customers and potential customers, and recording every single time that a Comcast employee communicates the label information to a customer would rapidly become very difficult to manage.
And you know what, I think that's probably true. But at the same time, the FCC wants to hold these companies accountable, and part of that is keeping track of when they're following the rules and when they fail to do so. So I'm not sure there's an easy solution here. I do think that more transparency is absolutely needed, because a lot of those fees, if I'm being honest, seem
a bit sus if you're asking me. San Francisco, California has become the first city in the United States where more than half of its car sales fall into the electric or hybrid vehicle categories. Now, technically the city hit that benchmark in March. April sales figures showed an even stronger tendency
for car shoppers to go electric or hybrid. And that's particularly good news for Tesla, because about half of all those vehicles actually came from Tesla. So about a quarter of all car sales in San Francisco are Tesla vehicles. That's incredible.
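As a quick back-of-the-envelope check, taking those reported "about half" figures at face value rather than as exact sales data, the arithmetic is simply:

\[ \underbrace{0.5}_{\text{EV/hybrid share of sales}} \times \underbrace{0.5}_{\text{Tesla's share of those}} = 0.25 \]

so roughly one in four new cars sold in the city.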
Now, this doesn't necessarily mark a trend for the larger United States. For one thing, you do have states like Wyoming that have obstinately introduced legislation that would actually phase out new electric vehicle sales, largely as a response to being told that they should phase out internal combustion engine vehicle sales. I should also add that those states have very small populations, so ultimately they don't have a whole lot of say in the matter, because the automotive industry is going to respond to the majority,
not the minority. You know, it doesn't make sense to produce internal combustion engine vehicles if it's only the state of Wyoming that's buying them. But anyway, another reason why this isn't necessarily a trend across the entire country is that electric vehicles are still more expensive than other vehicles, and the people who buy them typically are on the affluent side, and that describes a large part of the
population of San Francisco. Obviously, there are a lot of people living in San Francisco who are not affluent, but it has a larger affluent population than a lot of other cities do, and so we're not likely to see a huge shift toward electric vehicles just because San Francisco did it. But it still has become the first US city where more than half of the cars purchased in a month were electric or hybrid vehicles. Finally, a rear admiral for the country of Iran recently showed off
what was claimed to be a quantum processor. It was a circuit board with lots of chips installed on it, like a circular array of chips. It looked, you know, circuit-boardy. But if you know anything about quantum computers, you would probably think, well, that can't be right, and you would have been correct. The circuit board, while admittedly kind of nifty-looking, turned out to be nothing more than a ZedBoard Zynq-7000 development board.
So it's a system on a chip, or SoC, and it's meant for developers to use to, you know, develop applications and software and stuff. It is not a quantum processor. It's a classical computer processor, or system on a chip, and it's not even that powerful, either. It has five hundred twelve megabytes of DDR3 RAM. Now note I said megabytes, not gigabytes. And it has a dual-core ARM Cortex-A9 processor. So it's not the kind of equipment you would need to keep qubits in
superposition as you run incredibly complicated quantum algorithms through the system. Now, I'm not sure if the Iranian government or the Iranian military was behind this as kind of a way of posturing and making claims that they, you know, can't actually back up, or if perhaps the authorities had been hoodwinked by some snake oil salespeople who passed off this development board as a quantum processor and said, yeah, yeah, sure,
sure, it's quantum, now cough over the dough, Mac. There's a history of tech scam artists in the Middle East passing off substandard or outright fake technology as if it were the real thing, so it's hard to say. Now, before I go: on Monday, we will be publishing an episode of Smart Talks with IBM in the feed, so you'll see that as opposed to a normal TechStuff episode. And yeah, that's about it for that. We'll be having more Smart Talks episodes publishing about once a month moving forward,
and everything else is all Jonathan, all the time. So I hope you are all well, and I know I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.