
US Department of Justice Proposes Google Sell Off Chrome Browser

Nov 22, 2024 · 20 min

Episode description

The Department of Justice has a list of proposed actions Google must take to end anticompetitive practices, and Google is not happy about it. Apple probably isn't thrilled either. But the matter still has to head to court next year, so nothing is decided just yet. Plus, Microsoft makes some questionable choices and an AI expert leans a bit too hard on AI. 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? It's time for the tech news for the week ending Friday, November twenty second, twenty twenty four. This week, the US Department of Justice filed proposals regarding what to do about Google, and by what to do, I mean how best to address the issue of Google

absolutely dominating web search, among other things. This is part of an antitrust lawsuit proceeding, so, as expected, the suggestions the DOJ has are pretty extensive. One really big one is that the DOJ says Google should stop paying out gobs of cash to other platforms in order to make Google Search the default search tool on those platforms.

So Google literally spends billions of dollars every year on this effort and pays out companies like Apple to use Google Search as the default on their various platforms, whether that's iOS or whatever. And the DOJ argues that this has led to Google essentially squashing competition and entrenching itself as the dominant web search provider. But the proposal goes further and says Google should also sell off its Chrome

browser entirely. Google introduced Chrome back in September two thousand and eight, and since then it has become the most popular web browser on the market. StatCounter estimates that it holds around sixty-six percent of the market share

right now. The DOJ says Google should not be allowed to release some other browser during the term of judgment, so as not to create a workaround along the lines of, oh yeah, we'll sell off Chrome, we just happen to have released a different browser called Really Highly Polished Metal or something. That would be a no-no. Also, if Google doesn't do what the DOJ says, then the department indicated that

it might come after Android, which is Google's mobile operating system. Well, Google has, of course, protested these proposals in the strongest possible terms, even going so far as to suggest the changes that the DOJ is demanding would essentially break the Internet and harm America's standing in the global technology market.

Companies like Apple are probably feeling a little antsy too, because if the DOJ's proposals do move forward, well, Apple would be saying bye-bye to several billion dollars of revenue every year, so that would be rough. The DOJ argues that the payments to companies like Apple also end

up acting as incentives. Essentially, Google is incentivizing companies not to develop competing products because these companies can make more money just taking money from Google and using Google's tool than they would if they developed their own products individually. These proposals are just proposals right now. These are not rules that Google

is going to have to immediately abide by. In fact, this is something that will be brought before a judge in the United States in twenty twenty five, possibly in the second half of the year. So we're far from a point where Google is facing actual consequences just yet. And seeing where the US elections went this past year, I would be shocked if the government continues to take such a strong stance against corporations and

anticompetitive practices moving forward. Generally speaking, Donald Trump has shown himself to be very friendly toward corporations and less concerned about things like, you know, anticompetitive behaviors, so it would really surprise me if we see continued movement on those fronts.

But that being said, the Trump administration also has its own axes to grind against Google for perceived bias. I'm not saying there's actual bias, but they definitely perceive a bias, and they have identified Google as, like, an enemy of the people for presenting search results that conservatives sometimes

argue are purposefully biased against them. And so it could be possible that we'll still see government action brought against Google, but it will be driven more by retribution than by a desire to break up a trust, which is kind of a weird place to be. Bahima Abdel Rahman and Joe Tidy have an article on BBC World Service. It was in Arabic, so I had to rely upon a translate feature, so hopefully everything that I read was

accurately translated. That can always be tricky, but it revealed some pretty disturbing information about verified users on X, or at least a small selection of them, and of course X is the service formerly known as Twitter. The reporters say that a BBC investigation found verified users on X who were sharing links to sites that traffic in images and videos of child sexual abuse material, or CSAM. The reporters state that once they alerted X to the accounts in question,

the service did ban those accounts, but questions remain about the actual verification process, because one would hope that if you're providing a check mark, like a verified check mark, and if you're allowing verified accounts to have a much larger reach than standard accounts, you would also have a process that would include some sort of vetting to make sure that

those accounts in question aren't violating the law. The BBC reported that at least one verified account had been posting links to illegal material for at least six months without any intervention on the part of X. The investigators found the accounts by searching for a few keywords in quote unquote Arabic dialect. These words have been used as a sort of code among CSAM traffickers as a means of

finding one another without directly raising suspicion. According to the BBC, the verified accounts didn't have very many followers, but because those accounts were verified, their posts actually had a really broad reach and thousands of people saw these posts. So while there weren't a lot of followers for the individual accounts,

they still had a pretty big impact. And obviously the story is absolutely horrifying, and in my opinion, it also puts a spotlight on how X's approach to verification, where it becomes just a simple paid option, is far inferior to the way it worked in the old Twitter days. But I guess I should stop beating that drum and just accept that X is, in my opinion, totally inferior to what Twitter used to be. Smriti Mallapaty, and my

apologies to Ms. Mallapaty, as I suppose I'm mispronouncing this name terribly. Anyway, a reporter for Nature has an article that's titled A Place of Joy: Why Scientists Are Joining the Rush to Bluesky. Now, if you recall, Bluesky is an alternative to X slash Twitter or Meta's Threads. It's closer to Mastodon. It's like the federated version of these services, which means that it's housed on multiple servers that connect to one another, but it's not centralized the

way Twitter was or is. Bluesky itself started as a project within Twitter. Jack Dorsey was behind the initial creation, although he is no longer involved with the company. So why are scientists migrating to Bluesky? Well, according to

the article, it's for several reasons. One, there's more control over what you see on Bluesky. You're more likely to see messages posted by people and accounts you actively follow, rather than stuff that some algorithm is just shoving at you for whatever reason, for example, prioritizing posts from folks who paid to be verified over other stuff that you actually want to see. Moderation is also taken far more seriously at Bluesky. If you block someone on Bluesky,

it is a real block. It's not the way it is over on X, where people you've blocked can still see what you post, which is wild. And it sounds like these scientists' reasons for leaving X are pretty similar to my own reasons when I left X a couple of years ago. If you still have an account on X, I'm not throwing any shade at you. I mean, lots of people have accounts for lots of reasons. But it just became clear that X is not the right place for me, and I think for a lot

of other folks, they're coming to a similar conclusion. Sticking with Bluesky, Jonathan Vanian of CNBC has an article in which Bluesky's CEO Jay Graber explains that Bluesky is quote unquote billionaire proof, meaning that Bluesky couldn't be bought and repurposed the way Elon Musk

bought and transformed Twitter. Bluesky's design is based on an open source approach with federated servers, as I mentioned, so even if someone were buying up Bluesky servers, you could create a new one and port your account over to it and maintain all your previous connections with those whom you follow and those who follow you. And Bluesky's approach appears to be resonating with lots of people. The service has seen millions of people sign

up in the wake of the US elections. It now has more than twenty one million registered users, but that is still minuscule compared to a service like Twitter, which has reported having hundreds of millions of monthly users. So let's keep everything in perspective. Okay, we've got more news to get through, but before we get to that, let's take a quick break. We're back. The US Consumer Financial Protection Bureau, or CFPB, issued rules that will group large digital

payment providers under the Bureau's regulations. So this is only going to apply to digital payment apps that handle more than fifty million transactions a year. That covers heavy hitters

like Google Wallet and Apple Pay, that kind of thing. Initially, the CFPB was planning on a much more comprehensive rule that would include apps handling five million transactions or more per year, but it switched to this higher threshold, so these services are now going to be subject to regulations the same way that banks and credit unions are here in

the United States. The CFPB issued a statement saying that the rules mean consumers are going to receive more protection as a result, including against actions like illegal account closures. In other words, if these digital payment systems do things that are questionable or illegal, the CFPB has the authority to regulate that and to punish a company for doing those kinds of things, and the rule just establishes that they do have to follow these regulations.

There's some rough news for gamers as we head into the holidays. Nvidia announced that it is facing a potential gaming GPU shortage this quarter. This is according to an

article by Hassam Nasir of Tom's Hardware. Nasir reports that during the most recent Nvidia earnings call to shareholders, Chief Financial Officer Colette Kress revealed a possible squeeze in GPU supplies, one that would likely be addressed early next year. This could mean that finding graphics cards in the short term, meaning the holiday season, particularly graphics cards that are

not exorbitantly expensive, might be a little tricky. Nasir writes that one possible reason for this shortage could actually be Nvidia's plan to launch a new series of cards, a new generation of GPUs called Blackwell, in January of next year. So the reduction in supply could partly be due to Nvidia wanting to set the stage for a huge launch with a new generation

of cards next year. And I know it's a lot easier to sell a bunch of new cards if there aren't a bunch of previous generation cards on sale for a lower price on the market already. Tom Warren of The Verge has an article explaining how Microsoft appears to be urging Windows ten owners to upgrade to a new computer in order to migrate to Windows eleven and to

take advantage of Copilot features. Warren reports that some Windows ten users are encountering full-screen pop-ups that not only point out that Microsoft will be ending support for Windows ten in October next year, but also suggest that it might be time to get a new machine, because, as Warren points out, the messaging is suggesting, you know, well, you need a new computer to run Windows eleven, because Windows eleven has system requirements that a lot of older

computers just don't meet. But Warren also points out that the messaging could be considered a little misleading, because Microsoft will continue to provide limited ongoing support for Windows ten. However, you will have to pay thirty dollars a year to

get those extra updates. But yeah, I don't know how I would feel if I were working on my computer and I just got a message that completely took up the entire screen that essentially is saying, Hey, I know stuff's real expensive right now, and you probably have a lot of other things on your mind, but you should really get a new computer. That would really cheese me off. Obviously,

the actual messages don't say that. I am liberally paraphrasing and interpreting here. Now, switching over to Alfonso Maruccia of TechSpot, let's talk about another article that details a move by Microsoft that is rubbing people the wrong way. That article is called The official Bing Wallpaper app does some nasty malware-like things to Windows. Yikes. So here's the deal. The app is meant to let users swap out their desktop wallpaper on their computers in a really

easy and seamless way. Only it seems to do stuff that's not at all related to displaying wallpapers, you know, stuff like decrypting cookies, including cookies saved in browsers other than Microsoft Edge, like Chrome or Firefox. It also apparently incorporates some sort of geolocation feature and installs Bing visual search on the computer, as well as prompting users to make Edge their default browser and to install some sneaky browser

extensions in competing browsers like Chrome and Firefox. So it certainly sounds like the Bing Wallpaper app is drastically overstepping itself here. I'm reminded of Sony and DRM, where Sony inadvertently created malware with its digital rights management approach. That sounds kind of like what we're talking about here. I have no idea what the intent was, but it definitely doesn't sound like it was a good move,

in my opinion. Maxwell Zeff of TechCrunch reports that Apple is apparently developing an updated version of Siri that will lean heavily on the large language model approach to AI. This means that Siri would ideally become more conversational. Presumably this will make it possible to use Siri to interact with apps in a deeper, more complex way, but it is going to take some time. Zeff says the plan is for Apple to release this new version of

Siri in the spring of twenty twenty six. In the Do As I Say, Not As I Do category, our next story is about generative AI and why you should not rely on it, especially for important stuff like, say, filing expert testimony in a lawsuit that aims to take on generative AI. All right, so not really generative AI.

Deep fakes. It's related, but not the same thing. So in Minnesota, there is a state law that makes it illegal to knowingly disseminate deep fakes up to ninety days before an election if the material in the deep fake video was made with an intent to influence the election and if the subject of the deep fake video did not consent to being in it. Christopher Kohls has challenged this law, filing a lawsuit that argues it violates the First Amendment, the freedom of speech guarantee in the US Constitution.

The state of Minnesota has tapped the director of Stanford University's Social Media Lab, a guy named Jeff Hancock, to provide expert testimony regarding the dangers of deep fake technology. So Hancock did, but apparently the testimony he submitted contains hints that he himself relied on generative AI in order to write it, which is a big old whoopsie. Hancock's testimony cites a study that doesn't appear to actually exist, which

suggests it is an AI hallucination. Now, some of y'all might remember that several months ago I did an episode of TechStuff that was quote unquote written by generative AI, and one of the things I found really upsetting when I did this was that the AI invented experts in order to present certain information as having academic validity, like it was presenting a point of view and then inventing a person to have apparently given that point of view.

But those experts, as far as I can determine, were not real people at all. So the same sort of thing appears to have happened here in this case with the expert testimony, and at the very least that is embarrassing. Now, for the record, I do think deep fakes are incredibly

dangerous and that regulation is needed. I understand the First Amendment argument, but if I can create a video that appears to show you proclaiming beliefs that you absolutely do not hold, or that shows you admitting to a crime that you did not commit, or shows you calling for action that you would never actually agree to, all of that

is a problem. Right? Like, if I create something that makes it seem like it's coming from you and you are the one saying these things, that's not really, in my opinion, a First Amendment thing, because the First Amendment covers my freedom of expression. But if I'm using deep fakes, it appears that I am co-opting your freedom of expression to say whatever it is I want you to say. At least that's my opinion. I'm not

an expert. I am not a legal expert by any means, but yeah, I think the law actually has merit in this case. But I guess that's a matter for the courts to decide. And of course it kind of stinks. I mean, it really stinks that the expert witness apparently used generative AI to create their testimony, because it really undermines their credibility and I think hurts the state's case, and I don't want to see this go the other way.

One last story. Page Gaully of Vice dot Com has a piece titled AI Jesus is now taking confessions at a church in Switzerland. That headline, I think, is a tad bit misleading. The AI Jesus is not meant to take confession, at least, it's not meant to perform the sacrament of confession. Instead, this AI-powered generative tool is meant to communicate in a way that's aligned with, at least, depictions of Jesus. I don't know which depiction of Jesus, like I don't know which interpretation of the

Bible was used to create this particular AI chatbot. But church attendees can go into a confessional booth and have a conversation with an AI chatbot that's meant to emulate Jesus, and they receive answers to their questions that are meant to engage their spiritual worries and questions and things like that. And it sounds like lots of people find the experience actually pretty enlightening, but others have dismissed it

as just a gimmick. It actually reminds me a lot of early chatbots, because those were programmed to mimic specific kinds of social interactions, like talking to a psychoanalyst, for example. So this doesn't exactly surprise me. But again, this isn't to say that there's some sort of robo-powered, coin-operated confessional booth or something. We haven't gotten to that point. We're not quite at Futurama levels of absurdity just yet. But you know, give it a year, we'll

see where we end up. That's it for the tech news for this week, the week ending November twenty second, twenty twenty four. I hope all of you out there are doing well, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
