
Tech News: Augmenting (Or Outright Replacing) Reality

Aug 23, 2024 · 20 min

Episode description

Meta and Snap each plan to introduce new AR glasses, but you probably won't be able to get your hands on them. Plus, Cruise strikes up a partnership with Uber, Google introduces a photo editing feature that muddies the waters and an enterprising hacker attempts to fake his own death. Through hacking. Plus more!

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? It's time for the tech news for the week ending Friday, August twenty third, twenty twenty four. And first up, we have what I guess you could call non-news news. So recently online tech news outlets began to report that Google was essentially killing off the Fitbit brand.

Google acquired Fitbit back in twenty nineteen, and over time, the company has incorporated more of the same sort of tech found in Fitbit activity trackers into the Google-branded smartwatches. So the reporting seemed to suggest that Google was going to sunset the Fitbit brand, or at least reduce Fitbit to just a few models of its fitness trackers, while the Fitbit-branded smartwatches like the Sense and the Versa would essentially go away. However, Scharon Harding

of Ars Technica followed up on this. She contacted Google directly and asked if, in fact, the company was going to reduce the Fitbit line to activity trackers like the Inspire and the Charge models. The rep from Google said the earlier reporting was inaccurate, and that Google, in fact, had just released a Fitbit-branded smartwatch for kids. This is like a tongue twister, and I'm terrible at those anyways. Harding points out the concerns about Google killing

off product lines. That's understandable, because Google has a huge body count when it comes to products and services that it once launched that have long since pushed up the daisies. But it seems like Fitbit is not going to be one of those, at least according to company representatives. The Verge's Lauren Feiner has an article titled Google sales reps allegedly keep telling advertisers how to target teens. So this relates to a story I talked about recently on Tech Stuff.

Google follows an industry practice that, in theory at least, means the company does not engage in targeted or personalized advertising for any user under the age of eighteen. However, according to multiple sources, Google has made use of a bit of a loophole. The company can target unknown users.

So these are users who do not have age information built into their profiles, like they have not indicated what their age is to Google, so there's no confirmation one way or the other to say the user is above or below the age of eighteen. So you could say this is kind of a case of plausible deniability. But as I'm sure you're well aware, it really does not take much work to get a general feel for someone's

age based upon their web activity. So even a relatively simple data analysis pass can start putting folks into various buckets, including by age range. So while technically the unknown users aren't registered as teens, you can effectively target teens with advertising through this kind of roundabout approach. Feiner cites an article in Adweek and another in the Financial Times indicating that this is an issue that multiple outlets have

looked at recently. Google's response includes a statement from Josell Booth, a spokesperson for the company, who categorically emphasized that Google's policy is not to personalize advertising for anyone under the age of eighteen, regardless of whether it's through direct data or inferring age from supporting data, and that the company would stress this to sales reps so that, you know,

they stop suggesting to potential clients that Google can, you know, totally get advertisers linked up with impressionable teenagers, and to knock that stuff off. Alex Heath, also of The Verge, has a piece that lets us know reality is about to get a bit more augmented. That is, both Snap and Meta have plans to unveil AR glasses in the upcoming weeks. In Snap's case, this would be the latest version of the company's Spectacles product. This would be generation

number five for those of y'all keeping count. According to Heath, Snap will be showing this off during Snap's Partner Summit on September seventeenth. Then just a week later, Mark Zuckerberg plans to unveil Meta's own AR glasses, which are code-named Orion. However, Heath's sources tell him that in neither case will these products ever hit the consumer market. These are more like Microsoft's HoloLens in that regard. These pieces of hardware are really meant to give developers a platform

to build upon. The issue here is that while the potential for AR seems pretty darn limitless, the truth is very few apps have been built that leverage AR in a way that makes it the best tool for the job. As we have seen with Apple's Vision headset, having great hardware isn't necessarily enough. You need the applications to be there too. So while we should be seeing some impressive demonstrations of this technology, assuming everything goes as planned,

we won't actually be using this stuff anytime soon. Heath says that Snap will have around ten thousand units produced and Meta will have even fewer than that, and I'm sure that won't stop some tech enthusiasts from trying to get their hands on these things, despite the fact that there just isn't much you can do with them yet. Some folks just have an almost pathological need to have the latest technology. I should know. I used to be

one of them. Time to talk about AI for a good long while, because I mean, it's twenty twenty four, y'all.

So first up, multiple news outlets have reported that Meta has rolled out a couple of new web-crawling bots designed to gather data for the purposes of training AI models, and further, that these bots ignore attempts to block them. So web page builders, in case you've never done this before, have the option to include a little line of text, in a file called robots.txt, that essentially tells bots to buzz off. You might want to do that if you do not want your web page indexed for, like, search purposes, and you

might want to do that if you only wanted authorized individuals to even know about the page in the first place. But these bots reportedly ignore these kinds of lines of text, and they will crawl a site even if the web

administrators have said, please don't index the site. And I find that really interesting, because Meta very much takes the stance that crawling sites like Facebook and the like is expressly against their rules, that no one is supposed to treat Meta that way, and yet here they are producing bots that are apparently engaging in the very same behavior that

Meta prohibits on its owned and operated sites. How about that? Anyway, this is really all part of the AI arms race, where various AI companies are desperate to get ever larger pools of data in order to train their large language models and make the next AI tool guaranteed

to create massive ethical problems. Web administrators can end up in a pickle when companies like Meta and Google engage in this kind of activity, because often, if you are successful in blocking the AI crawlers, you're also having to block the index bots, which means your site is not going to pop up in, like, search results and such. So website operators are pressured to allow this AI crawling activity or else potentially miss

out on being discoverable on these massive platforms. And it doesn't do you much good if you built something and no one knows about it, right? So it becomes this double-edged sword. It could be like, well, yeah, I want to be in search because I want people to be able to find me, but I don't want my site to be crawled for the purposes of AI. Well, when it's the same companies doing both, that becomes a problem, and it's kind of like extortion if you think about it.
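Just to make that concrete, here is a minimal sketch, using Python's standard library, of how a well-behaved crawler is supposed to consult that robots.txt file before fetching anything. The bot names below are the ones reported in coverage of Meta's new crawlers, and the example.com URLs are purely for illustration; treat the exact strings as assumptions. The larger point is that robots.txt is a request, not an enforcement mechanism.

```python
from urllib.robotparser import RobotFileParser

# What a "please don't crawl me for AI training" robots.txt might look like.
# The user-agent names are the ones reported for Meta's AI crawlers; the
# URLs and site are hypothetical.
robots_txt = """\
User-agent: Meta-ExternalAgent
Disallow: /

User-agent: Meta-ExternalFetcher
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler checks this before requesting a page.
for agent in ("Googlebot", "Meta-ExternalAgent"):
    allowed = rp.can_fetch(agent, "https://example.com/some-page")
    print(agent, "may fetch:", allowed)
# Expected output: Googlebot may fetch: True, Meta-ExternalAgent may fetch: False.
# Nothing in the protocol stops a bot from ignoring the file, which is the issue here.
```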

It's almost as if some of these big companies act in ways that can be a bit anti-competitive. Folks over at the University of Texas have developed an earthquake detection tool that uses AI to look for signals that could indicate an upcoming earthquake, and the results have been promising. According to SciTechDaily, the researchers achieved a seventy percent accuracy rating in predicting earthquakes a week before the earthquakes

actually happened. The research was conducted in China. I think this is pretty darn cool, but we do need to remember this is by no means a perfect tool. It's got a long way to go. But according to the researchers, there were eight false positives, meaning the tool predicted an earthquake but nothing actually happened. That's an issue. Also, the predictions weren't exactly laser precise. According to the article, the fourteen earthquake predictions that the researchers counted as successes were

within two hundred miles of an actual earthquake. That does make one wonder if the tool was successful at predicting the earthquake, or maybe it was just a matter of coincidence. However, according to the article, the predicted strength of the earthquakes was very close to what actually happened. That makes me more inclined to think that this is not just coincidence. It's one thing to say an earthquake is going to happen on this day, generally around this time, and at

about this strength. If you were only getting, like, an earthquake happening within two hundred miles of where you predicted, but the strength wasn't anywhere close to what you predicted, that to me would feel like coincidence. Getting the strength just about right seems to whittle that down a bit, but it does mean there's a limitation as to how precise this tool can be when it comes to locating where an earthquake is going to happen. The research obviously

needs to continue. A lot more work needs to be done in order to turn this into a really useful technology. As it stands, sending out a warning a week ahead of time that someone might be within a two hundred mile radius of a future earthquake, that seems limited in its usefulness unless we're talking about like a real whopper of an earthquake, in which case it could potentially help

save countless lives, assuming that people actually heeded the warning. Okay, we've got a lot more to talk about in today's news, but first let's take a quick break to thank our sponsors. We're back, and we're headed back to The Verge. Last week it was all Ars Technica; this week it's The Verge. Anyway, there are a pair of articles covering the same general topic over on The Verge. One is by Allison Johnson, the other is by Sarah Jeong, but they're both about Google's

Reimagine function in the new Google Pixel nine smartphones. So this feature allows you to make some pretty massive changes to photos that you've already taken, and you can use text-based prompts to have AI alter those images in

various ways. So one example that the articles used was a photo of just a street, just a normal street, and then a subsequent text edit incorporated a massive pothole into that street, and sure enough, it was an edited photo that looked very convincing, like the pothole was actually there. Both Johnson and, to a greater extent, Jeong point out that the feature allows

for fakery and image manipulation on a grand scale. Once upon a time, you needed at least to be proficient with tools like Photoshop to manipulate images convincingly, and even then there were like telltale signs that some altering had happened. But now AI takes care of all of this for you,

and it can be pretty darn good at fooling folks. Now, the AI is supposed to have guardrails that are meant to prevent users from doing really awful stuff. Like, you wouldn't want someone to have a photo and then use AI to just litter the ground with, like, dead puppies or something; that would be horrifying. But the folks at The Verge found they were able to insert a great deal of troubling imagery into photographs just by applying some

creative thinking to get around the guardrails. While being direct and blunt in your text directions might result in a denial saying, no, that's against the policy or whatever, if you're a little more circumspect, you can often get the same results. And so The Verge showed off images that appear to portray such disturbing scenes as a collision between a car and a bicycle on a city street, or images where there appears to be a body lying

underneath a bloody sheet on the ground. It's not exactly the most positive showcase for an image manipulation tool using AI. Moreover, even if we remove the obvious cases of like unintended consequences, the end result is that this tool means seeing is absolutely not believing. When image manipulation is so easy that anyone with the right kind of smartphone can do it with no training needed, what does that mean for information?

How do we know what to trust? How could such a tool be used to deceive others, either just for kicks or for personal gain or whatever? Are the benefits of this technology such that they actually outweigh the risks? Then there's also the element of the liar's dividend. This is the defense that someone who is absolutely guilty of something could use. They could say, oh, sure, it looks like that was me robbing that convenience store, but that's

clearly an AI-altered image. I'm innocent. That's the liar's dividend. Making matters worse is that Reimagine, at least currently, doesn't apply a digital watermark to altered images. Purely AI-generated images often have a watermark, but not these. Johnson points out that the metadata for the image includes a record that it was edited through Reimagine, but she also points out that you can get around that just by taking a screenshot of the photo in question.

That just strips out all the metadata, because now you just have a picture of a picture, and there's no record there that Reimagine was used to alter it. Blah.
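To make that screenshot trick concrete, here's a minimal sketch using Python's Pillow library. It stamps a hypothetical editor name into an image's standard EXIF Software tag, then shows that re-rendering the same pixels into a fresh image, which is essentially what a screenshot does, carries none of that metadata along. The specific field Reimagine actually writes, and the editor name used here, are assumptions for illustration only.

```python
from PIL import Image

# Make a tiny JPEG and stamp an EXIF "Software" tag on it, standing in for
# whatever record a photo editor writes into a file's metadata. 0x0131 is the
# standard EXIF Software tag; the editor name is hypothetical.
original = Image.new("RGB", (64, 64), "gray")
exif = original.getexif()
exif[0x0131] = "SomePhotoEditor 1.0"
original.save("edited.jpg", exif=exif)

# Reading the file back, the edit record is still there.
print(Image.open("edited.jpg").getexif().get(0x0131))  # -> "SomePhotoEditor 1.0"

# A "screenshot" is effectively a brand-new image of the same pixels, so none
# of the original file's metadata comes along for the ride.
screenshot = Image.new("RGB", original.size)
screenshot.putdata(list(original.getdata()))
screenshot.save("screenshot.jpg")
print(Image.open("screenshot.jpg").getexif().get(0x0131))  # -> None
```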

You might remember that way back when the twenty twenty four Democratic primaries were going on, which seems like it happened in a different world at this point. But you might remember there were reports of an AI-generated voice that was impersonating US President Joe Biden, and it was going out to potential voters in New Hampshire, and the voice was urging voters to just stay home and not go and vote in the primaries. Well, now the Federal Communications Commission, or FCC, has ordered the telecom company Lingo Telecom to pay a one million dollar civil penalty for

allowing those calls to go out over its network. Now, to be clear, Lingo Telecom wasn't responsible for creating those calls. That honor falls to a political consultant named Steve Kramer, who in turn was working on behalf of a candidate named Dean Phillips, who was running in opposition to Joe Biden. But Lingo Telecom allowed the calls to go over its network, which the FCC deemed a violation of the Know

Your Customer and Know Your Provider sets of rules. Kramer is also facing a fine that could be up to around six million dollars. These penalties are meant to send a message to folks who are considering a similar scheme: if you do this kind of thing, it's going to cost you. As for Lingo, the company also agreed to make some changes to how it operates, to wit, weeding out spoofed phone numbers and only presenting a number when Lingo can verify that it's exactly where a call is

really coming from. Presumably you would otherwise see something like unknown caller or something like that on your caller ID. Lingo must also verify the identities of customers and work with upstream providers that have quote unquote robust robocall mitigation. So yeah, this is really sending a message that this kind of thing will not be tolerated. In twenty twenty three, GM's Cruise business effectively shut down after one of its autonomous robotaxis dragged a pedestrian for twenty feet

in San Francisco before it came to a stop. The pedestrian had already been struck by another car; that one was operated by a human. Pretty awful, like a really horrible sequence of events, and Cruise faced a massive investigation. The CEO of the division promptly jumped ship, as did several other leaders, and GM laid off a significant number of workers within the Cruise division. But now Cruise is back in the news, having struck a partnership deal with Uber.

New CEO Marc Whitten said, quote, we are excited to partner with Uber to bring the benefits of safe, reliable autonomous driving to even more people, unlocking a new era of urban mobility, end quote. Which is interesting to me. Uber itself really was aggressively pursuing robotaxi strategies several years ago, because, I mean, cutting human drivers out of the equation means more money for the home office. Am I right?

But snarky comments aside, Uber pretty much pulled the plug on its own efforts after a tragic incident in twenty eighteen involving another pedestrian accident. Uber then switched to partnering with companies in the autonomous vehicle space rather than pursuing its own program. So can these two companies, each with blemishes on their respective records, team up to create something

that's safe and reliable? Cruise is currently conducting autonomous vehicle testing with supervising safety drivers in cities like Houston and Phoenix. No word yet on when those driverless Uber rides will become a reality, or specifically which markets that might happen in. I know what you're thinking: you know, Jonathan, it's been a hot minute since I've learned about a new streaming video service launching. Well, rumor has it that we might be getting yet another one from a seemingly

unlikely source. That source is Chick-fil-A, the fast food restaurant known for chicken, among other things. Deadline's Peter White reports that Chick-fil-A is planning a service centered primarily around reality television and unscripted content and game show programming, all with, like, a family-friendly focus. Presumably the programming on this service would in some way advertise or promote the company, perhaps through the production of branded content.

How many folks are out there itching to sign up for yet another streaming service, let alone one spearheaded by a restaurant company? I have no clue. I'm not going to write it off just yet, as it could always surprise me. But my first impression is this is going to be a very tough sell, particularly during a time when people are already taking a harder look at their family budgets for stuff like entertainment. Jesse Kipf is in hot water for hacking into a government registry in Hawaii

for what purpose? Faking his own death. Kipf hacked into the Hawaii death registry system and marked himself down as previously alive, or no longer breathing, or dead as a doornail. To be honest, I don't know what the checkboxes actually say, but the point is Kipf was faking his own death. He was also using fake credentials in an effort to secure a credit card or debit card account. It's awfully hard to navigate the modern world unless you've got

access to that cheddar. So why was Kipf doing all this? Well, apparently it was in order to avoid having to make child support payments. He's already been tried and found guilty. He faces a prison term of sixty nine months. Nice. He'll have to serve eighty five percent of that sentence, after which he will be released, but will remain under

supervision for three years. Now for some recommended reading. So first, I recommend checking out Eric Berger's piece for Ars Technica titled Against All Odds, an Asteroid Mining Company Appears to Be Making Headway, which is a cool story. We've been hearing about the potential for asteroid mining for several years now,

so it's neat getting an update. Next up, Patrick George has a piece in The Atlantic titled The Hardest Sell in American Car Culture, and it's about how the Ford Motor Company wants to encourage American car shoppers to think about smaller vehicles rather than the trucks and SUVs the industry has kind of migrated to in the US over the previous years. And that's because smaller cars are lighter.

Lighter cars are easier to move, and that means the battery requirements for EVs that are smaller are more manageable than for those big old chonkers that are currently favored by the US. So if the US is to move to more EVs, part of the picture may also mean driving smaller vehicles. That's it for this week. I hope you are all well, and I'll talk to you again

really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
