Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio and a love of all things tech. And this is the tech news for Thursday, August five, twenty twenty one. We've got a lot of it, including some really crazy quantum physics news that I am going to do my
best to try and impart unto you, dear listener. But let's get started with some Facebook stories, because I just can't seem to go an episode without talking about Facebook. So one of these stories is likely not going to be a surprise to any of you listeners at all. And that's the fact that many law enforcement agencies are crawling social networking sites to collect images of people and use those images for the purposes of facial recognition systems.
The MLive site, which covers news in Michigan here in the United States, published an article this morning about this, quoting Eric Goldman of the High Tech Law Institute at Santa Clara University, and Goldman said that if you have a public social networking account, such as a Facebook account that posts publicly, it stands to reason that a lot of different people and organizations have grabbed your photos at
some point because they are all publicly available. Publicly available information is, you know, public, so anyone can see it. And we've seen numerous groups use software to crawl through enormous platforms to pull down mountains of data for the purposes of identification and analysis, and those are acts that platforms frequently resist for a variety of reasons. Like, there's an AI company that has done this and gotten in trouble for it in the past because of that.
I should also add that while I say if you post publicly, that's not even the full picture, obviously, right? Like, let's say you don't even have a Facebook profile. You could be in photos that your friends take, right, and those photos might be on their profiles, and they might be public, which means that those photos can be captured and used for facial recognition purposes. So you
don't even have to be involved in this. You have no say at that point if someone is posting your photo publicly, and it's fair game for a lot of these organizations, at least as far as they're concerned. Now, we've also seen that many facial recognition systems have an element of bias in them, with a tendency to be
less accurate for people of color. And that might not sound, you know, terrible, like, oh, it's just less accurate. But then you figure that a less accurate system means that sometimes a person could be inaccurately identified as, say, a person of interest in a legal case, and then you can really see how it does become a problem. In a best case scenario, such a misidentification leads to an inconvenience. That's the best case scenario.
Worst case scenario ends up being incredibly harmful and disruptive. I mean, that could even include wrongful arrests, which have happened as a result of facial recognition software mistakenly identifying someone as someone else. And the fact that this disproportionately harms specific communities, like communities of color, that's a real issue. Now, as the piece on MLive points out, there have been documented cases of police action based off incorrect facial recognition analysis in Michigan already. There's growing resistance to the use of this technology. There's been a lot more pressure on local and federal governments to outlaw its use. There's also been pressure on companies to not make that technology available to law enforcement agencies, and this ties in with several other stories that are unfolding around the world, many of which tap into this issue of wanting to lean on technology in order to solve a
particularly difficult social problem. But the issue is that in some cases the technology isn't necessarily reliable enough to depend upon, so I may have to do a full episode dedicated to that concept, because there are a lot of different factors at play here. Right, there's a genuine desire to right a social wrong or to prevent a particular social
problem from happening. There's a perception that the technology could be the solution to this, and then there's the reality of technology that may not be reliable enough for us to really leverage it that way, or else we have these exceptions pop up that then affect innocent people in harmful ways. Moving on, The Guardian published an article about how the fossil fuel industry was able to leverage Facebook
to push climate misinformation. The article cites a London-based group called InfluenceMap, which uncovered this issue. InfluenceMap said that there was an increase in advertising by fossil fuel companies like ExxonMobil during the election season, with the campaign specifically creating a misinformation narrative to shape the conversation around climate change and fossil
fuel policies that were meant to address that problem. Further, InfluenceMap alleges that this campaign violated Facebook's own policies with regard to false advertising, and yet Facebook did not remove or label these posts as being misleading, which allowed those views to really take hold within certain communities on Facebook.
The organization also found something that a lot of people have observed over the years: that the messaging around conservation and, you know, having an impact on climate change was really shifting the focus to the individual rather than to companies and industries. In other words, putting the burden of addressing climate change solely on individual people rather than both
on people and on corporations. So this is kind of the turn off the lights after you leave a room, or only you can prevent forest fires, kind of approach, rather than placing regulations on or otherwise taxing companies that are consuming vast amounts of electricity, right? Like, that would be the other message. We frequently see this kind of personalized approach to messaging, and yeah,
we all do have a part to play here. But when that messaging completely ignores how industry contributes, to a large extent, to climate change issues, then it becomes
misinformation because it's purposefully omitting important details. Right? If you don't say, hey, the transportation industry ends up being a massive contributor to carbon emissions, and you're instead saying turn off the lights and don't run stuff when you're not using it, then you allow the major contributing factor to go on unhindered, and then the problem worsens as a result. So Facebook has received some pretty critical opposition to this trend already and will likely face even more
scrutiny in the future. And it should. I mean, it's cashing those checks, right? The advertising checks are coming in and Facebook is cashing them, so it does definitely
play a part. Oh, and the CNN Business story about InfluenceMap's look into Facebook reveals that these fossil fuel ads showed up more than four hundred thirty one million times for US users, which is not a bad return on investment for the nine point six million dollars that was the given amount for those collective ad campaigns. Now think about this: a commercial during the Super Bowl costs around five million dollars for a thirty second spot, and the last Super Bowl had around ninety six point four million people watching it. So for a little less than twice the cash, you can get four times the number of views by going through Facebook,
which is not a bad deal, right? Meanwhile, the petroleum industry companies have spoken out against the InfluenceMap study, claiming that these companies are spending a great amount of money on limiting carbon emissions and using carbon sequestration strategies and such, which, you know, could be totally true, but doesn't change the fact that these companies actively try to undermine climate action. Heck, an ExxonMobil lobbyist essentially spilled the beans on how they do that on video, which then prompted the CEO of ExxonMobil to apologize for it. I'm sure they were sorry that they were caught. And we are not yet done with Facebook. We've got one more Facebook story here. Bloomberg reported yesterday that Facebook has disabled the personal Facebook accounts of some researchers from New York University who have been studying political ads on that platform.
Facebook says this group has been using automated tools to scrape data from Facebook, which goes against Facebook's policy. This is kind of alluding to what I was mentioning earlier about facial recognition technologies. You know, Facebook isn't supposed to allow that to happen. Now, the research
team saw their project pretty much hamstrung by Facebook. The company deactivated various apps and pages that were associated with this research project, in addition to deactivating those personal pages of the actual team involved. Now, on the face of it, my dander starts to get up on behalf of the research team. However, I have to admit this story is
actually more complicated than that. See, back in two thousand nineteen, the Federal Trade Commission, or FTC, here in the United States hit Facebook with a fine of five billion dollars, that's billion with a B, which is a princely sum indeed. And the reason for that fine was that the FTC found that Facebook had not done enough to protect users from developers who were relying on Facebook to collect personal information on a large scale. Think of things like Cambridge Analytica.
So Facebook has kind of an obligation to push back against groups that are using automated means to collect data off the platform, even if that group has a seemingly good reason for doing so. Facebook requires groups to seek out permission from the company before using those kinds of tools, and this group, you could argue, failed to secure permission. And in fact, Facebook had reached out and given the
group a couple of warnings in the past. Now that being said, Facebook is also under a great deal of scrutiny for how it serves up political ads and how it targets people and how it chooses who gets to see what. So, as I said, this is a really complicated issue. The study could call more attention to problems with Facebook's methods of accepting and serving ads to users, which the company would likely want to suppress. It doesn't want those problems to be brought into, you know, harsher light.
But then Facebook also has this obligation under the FTC, and that at least gives the company a fairly compelling reason to push back. Though one could imagine that this issue could be solved if Facebook and the research group could just get that permission sorted out, I really doubt that Facebook is eager to do that in this particular case. We have more stories to cover, but
before we do that, let's take a quick break. Reuters reports that a ransomware attack targeting more than one thousand organizations last month might be just the beginning of a trend of attacks on service providers. The attack leveraged vulnerabilities in a management software product from a company called Kaseya, which affected more than fifty managed service provider organizations, or
MSPs. Now, an MSP is a company that other companies rely upon to outsource services of some sort, like human resources services or IT services. And that means that the MSP represents a really juicy target for hackers, because you could attack a specific company. Let's say that you wanted to target, I don't know, Intel, and you want to compromise one of Intel's systems. Well, that's
a single target. But let's say you go after an MSP which could service dozens or hundreds of clients, and Intel happens to be one of them, but it also services all these others. Well, you can disrupt all of those companies by targeting the MSP specifically, right if they are providing a service to all these companies and you interrupt it, you've affected not just one organization, but potentially
hundreds of them. This puts immense pressure on the MSP companies to solve the problem quickly, because if all of its customers are depending upon those services, and you've got like a hundred important clients, you might be answering calls all day long because they can't get their widgets to work due to a problem with your product. You have a high motivation to fix the issue. That might include
paying off an exorbitant ransom. Or worse, the hackers might use the MSP as a launch point and find a way to use an MSP product to mount a supply chain attack and thus infect the MSP's customers as well. So the MSP becomes the entry point, and then all of its customers become targets. That multiplies the number of targets that the hackers are able to actually compromise, and potentially multiplies the number of ransoms that they could
receive as a result. Reuters states that hacker groups are feverishly researching ways to target MSPs for just that reason, which suggests we should be on the lookout for similar attacks in the future, and it should serve as a warning to all organizations MSP and otherwise to be extra
cautious with security. Moving on, the state of Massachusetts is seeing a group called the Massachusetts Coalition for Independent Work lobby to have a special ballot measure introduced that would create some special exemptions for companies like Uber and Lyft to be largely excluded from the restrictions and regulations of the state's labor laws. And you will be shocked, shocked, I tell you, to learn that Uber and Lyft run this special interest group, whose name makes it sound like
it's a worker advocacy organization. The companies have actually done this before. In fact, they did so successfully in California by introducing a measure called Proposition twenty two, which effectively excluded them and several other gig economy companies from having to abide by the laws that govern other businesses in the state of California. And it looks like the companies are going to spend a lot of money in Massachusetts
to fight the same sort of battle there. Whether that works or not remains to be seen, because generally speaking, there's a growing awareness regarding the companies' efforts to avoid things like acknowledging that drivers are in fact employees as opposed to, you know, contracted freelancers and so on. But we'll have to watch to see how this plays out, because we have to keep in mind these companies already have a track record for selling a political message effectively
to the public, and they might do it again. And now for the latest of if misogyny in the workplace is such a problem, why don't more women come forward? Department, Well, to you, I submit the story of Ashley Giovic, a program manager at Apple's engineering department. According to the verge, Yovic found herself put on administrative leave indefinitely. In fact,
she actually requested that, but we'll get to that. After she tweeted about how she found Apple to be a hostile work environment and one that tolerates sexism within the workplace. She had already gone through internal systems numerous times in order to address this, so tweeting this out was kind of a response to the fact that she wasn't seeing progress being made within the company. And when you see a company essentially sideline someone after they come forward with, you know, these issues, that's really one of the reasons why more women don't come forward, because they've been intimidated. They've seen time and time again that the
people who speak out end up getting sidelined. Gjøvik tweeted, quote, so following raising concerns to hashtag Apple about hashtag sexism, hashtag hostile work environment, and hashtag unsafe work conditions, I'm now on indefinite paid administrative leave per hashtag Apple employee relations while they investigate my concerns. This seems to
include me not using Apple's internal back end quote. Gjøvik also said that the company offered to provide her medical leave rather than address the underlying issues, which is kind of like saying, hey, I know Pete in accounting keeps stabbing you, but we've got these band aids so you can patch yourself up every time it happens, rather than,
you know, having Pete from accounting arrested. Apple is, of course, just one of dozens of companies currently being called to reckon with its culture, and once again, it's stories like these that feed into that feeling that human resources isn't there to protect the humans who work for the company, but rather to protect the company. Let us now segue over to a moment I'd like to call we live in the future, which can sometimes be
awe-inspiring and sometimes terrifying. We'll start with something terrifying. Apparently the Pentagon is making use of an AI application to analyze real time information in an effort to predict what the enemy will do next, with the Pentagon claiming that the AI can let them see quote days in
advance end quote. That makes me think of the precogs in Minority Report, except instead of being put to use in law enforcement to predict crime, they are being put to military use to predict what the enemy is going to do in any given situation. And it also makes me think of Grand Admiral Thrawn in the Star Wars Expanded Universe. He could look at a culture's works of art and figure out how that culture would react, you know, in any given situation, which
is pretty far fetched. But using AI to analyze what's going on within a specific region and then make a strategic decision seems like it's a little bit less fantastical, right? I mean, you could argue that this is just an extremely complicated version of a war game, like a simulated war game. The AI is part of the Global Information Dominance Experiment, or GIDE. And boy,
that name is kind of scary. Obviously, the goal is to predict what an enemy might do so that we could prepare against that very move, potentially protecting the lives of soldiers and civilians in the process and maximizing the effectiveness of an armed presence in any given region.
We might dissuade someone from doing something. For example, if the analysis says, hey, based upon historical data, such and such country is likely to launch a submarine from this port in the next two days, then by building up a show of force around that area, you might dissuade them from doing that. That kind of thing. Well, the Pentagon points out that this is work that people have actually been doing for years by analyzing data in
search of patterns and trying to make predictions. But obviously that took a long time in the past. We are now in an era where we can collect and, more importantly, analyze mountains of information rapidly, so we can find those patterns much more quickly and then act on that information. Of course, this requires a very careful design of the algorithm, and it's always possible for an algorithm
to mistake noise for a meaningful pattern. But that's why this initiative is in a testing phase. And it concerns some critics of artificial intelligence use in, you know, military applications, because they see it as a potential starting point for a slippery slope in which AI ends up not just pointing out possible enemy decisions as a result of what's going on in the world, but potentially going on to make life or death decisions of its own,
and then potentially even getting to the point where it enacts those decisions without human oversight, because obviously human oversight slows things down. So if you have really powerful AI making all the decisions, and you have, at that point, assumed that the AI is infallible, you rely on the AI. We're a ways off from that sort of dystopian future, but being wary of that eventuality is probably not a bad idea.
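To make that earlier point about algorithms mistaking noise for a meaningful pattern a little more concrete, here's a toy sketch in Python. Everything in it is made up for illustration (the readings, the window, the threshold — none of this reflects any actual military system): a naive detector scanning data that is pure random noise will still raise alerts, which is exactly the false-positive problem careful algorithm design has to deal with.

```python
import random

random.seed(42)

# Hypothetical "activity" readings that are pure noise -- by construction,
# there is no real buildup or pattern in this simulated data at all.
days = [random.gauss(100, 15) for _ in range(365)]

def naive_alert(readings, window=7, threshold=1.05):
    """Flag any 7-day window whose average exceeds the overall mean by 5%."""
    baseline = sum(readings) / len(readings)
    alerts = []
    for i in range(len(readings) - window):
        week_avg = sum(readings[i:i + window]) / window
        if week_avg > baseline * threshold:
            alerts.append(i)
    return alerts

false_alarms = naive_alert(days)
# Even though the data contains no real signal, random fluctuation alone
# trips the detector repeatedly.
print(f"{len(false_alarms)} alerts raised on pure noise")
```

The takeaway isn't that prediction is hopeless, just that a system like this has to be validated against the possibility that the "pattern" it found is an accident of the data.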
So while you could argue that it's premature to worry about that kind of stuff, the counter to that is, well, if we don't bring it up now, we could see this evolve slowly enough that it happens before we're even aware of it. I've got a couple more stories to tell, but before we get to that, let's take another break. Okay, it is time, my friends, and by time, I mean it's time to talk about time crystals, which, full disclosure, I did not even know
were a thing before this morning. Apparently, time crystals are a phase of matter, kind of like how gases, solids, liquids, and plasmas are phases of matter, and the most likely context in which we would observe such a thing as a time crystal is on the quantum level. So, you know, they're very small, like smaller
than Ant-Man small. Some Google scientists have written a paper that still has to go through the peer review process, so we can't just assume that they got everything right. But that paper claims that they have found a way to use quantum computers to study time crystals, which is pretty much the most science fiction sentence I think I could say today. But let's break this down a bit, because that's just word salad, right, unless we understand what
we're talking about. Quantum computers are very complicated machines that only somewhat resemble classic computers, and they are very delicate as well. They have this tendency toward decoherence, which means that your quantum computer kind of breaks down and becomes a very low powered classical computer as a result. So with a traditional computer, we talk about information in terms of the basic unit of a bit or a binary digit, which can either be a zero or a one. Quantum
computers are different. They have a qubit as the basic unit of information, and a qubit, or quantum bit, can kind of sort of be both a zero and a one at the same time, and every value in between. Technically, this is a thing called superposition. There's also the issue of entanglement, but we're not going to get into all that. That would require a full episode by itself.
But the important thing with quantum computers is that with the proper design of algorithms, that is, processes that are specifically engineered to take advantage of these quantum properties, computer
scientists could use these quantum computers to solve particularly difficult problems. Now, a quantum computer would not be much good for your classic computer processes, so in other words, you wouldn't want these as a gaming rig so you can play Overwatch on it or something, but quantum computers would be really good at tackling certain subsets of computational problems that would tie up a classical computer for thousands of years or more.
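If you want to see the superposition idea in slightly more concrete terms, here's a tiny toy simulation in ordinary Python. This is not a real quantum computer, just the textbook math that describes a single qubit: a pair of amplitudes whose squared magnitudes give the odds of measuring a zero or a one, plus the Hadamard gate, the standard operation for putting a definite bit into an even superposition.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a qubit state (alpha, beta).

    It turns the definite state 0 into an equal superposition of 0 and 1,
    and applying it a second time undoes the first -- quantum gates are
    reversible, unlike most classical logic gates."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    """|alpha|^2 is the chance of measuring 0, |beta|^2 of measuring 1."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

zero = (1.0, 0.0)            # the qubit starts out as a definite 0
superposed = hadamard(zero)  # now it's "both 0 and 1 at once"

p0, p1 = probabilities(superposed)
print(p0, p1)  # each is 0.5: a fifty-fifty chance of measuring 0 or 1

back = hadamard(superposed)  # back to a definite 0
```

Real quantum hardware juggles many entangled qubits at once, which is where the simulation approach above stops scaling and the actual machines earn their keep.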
So that's quantum computers from a very very high level. Alright, what about time crystals. Well, here we got to talk about entropy. So you can kind of think of entropy as the natural tendency for stuff in the universe to kind of fall apart or more accurately, to go from
more ordered to more disordered. So let's say you've got a big room and you've subdivided this big room with a removable panel that completely bisects the room, and the panel has like heavy thermal insulation. And you make one side of that room really hot, and you make the other side of the room, on the opposite side of the panel, really cold. So you've got effectively two rooms here, right? You've got one
that's really hot, one that's really cold. Then you remove the panel. Well, what happens? We would actually see these temperatures mix, right? The molecules from the cold side and the molecules from the hot side would intermingle, and eventually the room would reach an equilibrium temperature, so you would not have one side of the room just stay hot and the other side just
stay cold. In other words, those ordered systems would break down and you would have this entropy take effect, unless the room itself was a time crystal. In that case, a time crystal can get stuck in two different high energy configurations. So in our example, we would say, all right, well, that means hot and cold in this case, and then a time crystal would switch back and forth, never actually
moving towards equilibrium. So we would have a room that at times would be very hot and at other times would be very cold, or would have hot and cold on specific sides and then hot and cold on opposite sides. Now, I can't pretend to really understand this, because it flies in the face of the second law of thermodynamics. But apparently it's a thing, and Google says it can
use its quantum computing systems to study time crystals further. As it stands, there's no real application for time crystals that anyone has proposed, but it would mean learning more about how our universe works and whether there really is a big exception to an otherwise extremely well established rule, that being the second law of thermodynamics, which again blows my mind. Finally, something that doesn't blow my mind
nearly as much but still is really cool. Sony introduced a new VR technology for the PS five to developers this week. It's kind of a pitch to developers to say, we've got this hardware coming out, we want you to make stuff for it. The headsets will have OLED screens with two thousand by twenty forty resolution per eye, which is pretty darn incredible. They will also have a wider field of view, which is an improvement over the current PSVR system that's on the market. These systems
might also support HDR, that's high dynamic range. That is a technology that can enable more vibrant colors to be displayed through a greater range between those colors in brightness, or luminosity. The headset is said to also have some haptic feedback built in. That is, systems that provide a tactile or touch based feedback system. So, for example, you might feel a slight vibration through the headset as you use a
controller to move around a virtual environment. The idea being that you're kind of tricking your brain into thinking you're actually walking around this virtual space while you are, in reality, physically standing still inside your living room, for example. And hopefully this would cut down on the motion sickness that some folks like me experience when they are using
VR sets. There's also talk of technology that will detect where a user is looking while they are wearing the headset, which can be useful because developers could build beefier VR experiences and really lean on this. Anything you're looking at directly would need to be distinct and in high resolution, so that's where it would need to really look good.
But the stuff at the periphery of your vision could be a bit more hazy and you wouldn't notice, because that's not the way humans focus, right? And that could cut down on the work that the PlayStation five would have to do in order to provide whatever this experience is. That really just means that developers can have a bit more elbow room to really push the hardware to its limit. And Sony is reaching out to some big Triple A studios, essentially saying, hey, why don't you, you know, make something
really cool for this? So we've seen time and again that, you know, hardware can be really exciting, but unless there's a strong software library to go with that hardware, it doesn't tend to have very much staying power. It tends to fade into obscurity. And VR has always had an issue when it comes to building out a really diverse, compelling library of experiences. And of course there's a pretty good reason for that. VR tends to be pretty expensive, and my guess is this new headset is going to be
no different. I'm guessing it's going to be a pretty expensive piece of equipment. And when something is expensive, that means, by nature, fewer people can afford to buy the product, which means that developers are looking at a smaller pool of potential customers. And that means that whatever return they're going to get from selling something that runs on that platform is going to be relatively smaller than it would
be for, you know, a more widely adopted platform. So if you're a game studio, you might find yourself saying, so do we pour a hundred million bucks into developing this game that is going to target ten percent of the PS five owners? That's a number I'm just kind of pulling out of the air, by the way. Or do we instead pour that money into developing a more traditional style console
game and target all of the PS five owners. And you can see how this situation becomes a vicious cycle, right, Some folks will resist buying into VR because they see there's a lack of content, and then developers are reluctant to make content because there's a lack of VR owners out there. But maybe this will be the device to really break that open for Sony. Rumor is that the hardware will get a reveal sometime early next year, so we won't have to wait too long to learn more.
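By the way, that eye-tracking trick from a minute ago, rendering full detail only where the player is looking, is generally known as foveated rendering, and a quick back-of-the-envelope sketch shows why developers like it. The fractions below are made-up illustration numbers, not Sony's actual figures; only the per-eye resolution comes from the announcement.

```python
# Rough illustration of the savings from foveated rendering.
WIDTH, HEIGHT = 2000, 2040        # per-eye resolution mentioned above

def full_render_cost():
    """Shade every pixel at full quality."""
    return WIDTH * HEIGHT

def foveated_render_cost(fovea_fraction=0.2, periphery_scale=0.5):
    """Shade a small gaze-centered region at full quality, and the
    periphery at half the width and half the height (a quarter of the
    pixels). These fractions are hypothetical, just to show the idea."""
    total = WIDTH * HEIGHT
    fovea = int(total * fovea_fraction)                    # full detail
    periphery = int(total * (1 - fovea_fraction)
                    * periphery_scale * periphery_scale)   # reduced detail
    return fovea + periphery

full = full_render_cost()
fov = foveated_render_cost()
print(f"foveated shading work: {fov / full:.0%} of a full render")
# With these made-up numbers, that's 40% of the shading work per eye.
```

Less shading work per frame means the console can spend the savings on richer scenes or steadier frame rates, which is exactly the elbow room I mentioned.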
And that's it. That's all the news that I have for you guys today, Thursday, August five, twenty twenty one. I hope you are all well. I'm on the mend myself, so each day is going to get better. That's my hope, anyway. If you have suggestions for topics I should cover in future episodes of TechStuff, please reach out to me. The best way to do that is on Twitter. The handle for the show is TechStuff HSW, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.