Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio and I love all things tech, and it is time for the tech news for Thursday, March eleventh, twenty twenty-one. Let's get to it.

Last week I told you about how the social network site Gab, known for the right-wing political philosophy of most of its users, was the victim of a data breach, as a hacker accessed Gab's systems and stole around seventy gigabytes worth of data, including three million private posts. Well, earlier this week, Gab was hacked again and even went offline temporarily as the administrators of the site conducted an investigation into the security vulnerability that made this possible. Now, during the downtime, users who were trying to go to Gab got an error message, but the site is back up as of this recording. A hacker using the handle JaXpArO, a riff on Captain Jack Sparrow, claims responsibility for both hacks, though who knows if that handle represents just one person or is being used by a group working together. The hacker used authentication tokens that they gathered during the first hack, and they used those to carry out the second one. They were able to access the systems because those authentication tokens were still good, which showed that Gab failed to reset those security tokens after the first breach. The hacker also left a message claiming that they gave Gab CEO Andrew Torba an ultimatum: cough up eight bitcoins, worth about four hundred fifty thousand dollars, and the hacker would return at least some of that stolen data. Torba apparently refused, which, for the record, is what security experts say is the right call. Capitulating to hacker demands typically just rewards the hacker's behavior, and it also encourages other hackers to follow suit, because if you pay the ransom, people know that there's money to be made, and it just makes things worse. Plus, there's never actually a guarantee that you'll get anything back anyway, or that there won't be copies of the stuff you get back floating around.

Now, I have to say Gab definitely comes out looking like a terrible steward of user security and data, like absolutely awful. The company has completely failed to protect user assets. And just to be clear, while I am about as far away in political ideology as you can get from the typical Gab user, I also don't really care for hackers who compromise sites and then issue ransom notices. I don't think that's a valid approach either. I don't think anyone comes out of this situation looking like a good guy, and that's gonna be a kind of common message in today's news items.
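To make that token failure concrete: a session system needs a way to invalidate every outstanding token after a known breach, which is the reset Gab apparently skipped. Here's a minimal sketch in Python of what that looks like; all the names and the logical clock are illustrative assumptions, not Gab's actual implementation.

```python
# Minimal sketch of server-side session tokens with a "revoke everything"
# switch for breach response. All names here are illustrative, not Gab's code.
# A logical clock keeps the example deterministic.

_clock = 0

def _tick() -> int:
    global _clock
    _clock += 1
    return _clock

revoked_before = 0              # tokens issued at or before this tick are dead
sessions: dict[str, int] = {}   # token -> tick at which it was issued

def issue_token(token: str) -> None:
    sessions[token] = _tick()

def revoke_all_tokens() -> None:
    """Breach response: every previously issued token becomes invalid."""
    global revoked_before
    revoked_before = _tick()

def is_valid(token: str) -> bool:
    issued = sessions.get(token)
    return issued is not None and issued > revoked_before
```

The whole point is the last line: a token stolen before `revoke_all_tokens()` runs fails the check afterward, so the attacker's stash of stolen tokens becomes worthless.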
My guess is that the leaders at Gab are kind of in over their heads. They got the boot from various other platforms due to the actions of their users and different violations of terms of service, so they're kind of forced to take whatever they can get, which is something we've also seen with Parler.

Moving on to a different hacking story. I've covered the SolarWinds hack numerous times already, about how it looks as though Russian-backed hackers, and it's all but confirmed that they are Russian-backed hackers, compromised SolarWinds' software build system, and that let the hackers push out software updates as if they were legitimate updates. But those updates really carried malware to SolarWinds customers, and that gave the hackers the ability to infiltrate thousands of computer systems, and they could then follow up on that attack. This was a supply chain attack, where targeted systems accepted the malware because it was coming from a previously trusted source, that being SolarWinds. Well, now cybersecurity researchers have published a report showing that hackers backed by China were targeting SolarWinds at the same time in a separate operation, one that I have previously conflated with the Russian hackers. So it's time to set the record straight. Based on what these researchers found, while the Russian hack used the method I just described a moment ago, it appears that it was the Chinese hackers who were specifically targeting a software product called Orion, and these hackers, according to the researchers, were really going after a specific SolarWinds customer, and they installed a malware web shell called Supernova around that Orion software. So this sounds like it was a much more targeted attack. The Russian approach was different: it let hackers blast out malware to thousands of potential targets, and then the hackers could follow up with the specific hits that they wanted. They could look at who they were able to capture with that first blast and follow up for further infiltration. The Chinese approach, on the other hand, went right for a specific target, as opposed to doing that blast attack.

And we are not done with the hacking stories just yet. Verkada is a Silicon Valley startup that sells security cameras with cloud-based services, and they were hacked earlier this week. Now, I say hacked, but when I tell you how it actually happened, I think hack might be a bit of a generous term for this particular thing. Verkada practices terrible security, and for a company that's in the security business, that's not great.
So the hackers responsible belonged to a group called APT-69420 Arson Cats. APT, by the way, stands for Advanced Persistent Threat; it's a common term in the cybersecurity world. Anyway, these hackers discovered that Verkada had an Internet portal for the company's internal development system, a way to log in to make changes to code in various Verkada products. You know, it's the way that the developers can access stuff. Now, that makes sense, particularly in a world where presumably a lot of the people working for Verkada are doing so remotely. But what does not make sense is that the hackers say this was essentially a publicly accessible portal. There were no login credentials required to get to that portal system, so you could get there without having to first verify that you work for the company, and on that landing page, they found the login credentials to get what they called super admin level access to Verkada's systems.

So as a result, they were able to access more than one hundred thousand cameras, and the number I frequently see is one hundred fifty thousand camera feeds, belonging to various Verkada customers. They also got a list that was around twenty-four thousand entries long that names those customers, and they include businesses, churches, health care facilities, jails, and the PGA of all things. Tesla is one of their customers, though Tesla says that the hack really just gave the hackers a view into one of Tesla's supplier sites, but not the main manufacturing facility in Shanghai. A software developer named Tillie Kottmann gave details about the hack, presumably having played some part in carrying it out.

Now, should we call this a hack? I would argue that finding login credentials on a publicly accessible landing page is pretty much like walking up to someone's password-protected computer and seeing that they put the login and password on a sticky note and stuck it to the monitor. It's not exactly safe. I mean, there's no point in having a locked door if you're hanging a key off the doorknob. Kottmann says that part of the reason the hacker group published the information and shared video from various locations, you know, Verkada customers, is that they wanted to point out how widely distributed Verkada's systems actually are and how inherently unsafe they are. Now, it's hard for me to disagree with those points. I would argue this kind of puts the hackers into a sort of gray hat area when it comes to the spectrum of hackers. We typically describe hackers by hats. You've got white hat hackers: these are hackers who probe systems looking for vulnerabilities, but the intent is to tell the respective system administrators about those vulnerabilities, hopefully before someone else can exploit them. So the whole goal is to find weaknesses in systems and then say, hey, you need to fix this. But then you've got your black hat hackers. These are the people who are trying to profit from being able to access systems and exploit them in some way. Gray hats are somewhere in the middle. The hackers didn't just go to Verkada to alert the company of the mistake. They didn't go to say, hey, you've got this massive security vulnerability, you need to fix it right now. They went public with this revelation, which definitely makes the company look bad. And honestly, I think you can make a good argument that there's some merit in that, considering that the whole value proposition for a security company is that it's safe.

This is also a good reminder that security systems that include ways to access a camera feed remotely represent a potential security vulnerability. It's always a good idea to do your research before you choose a security solution. On a related note, Jason Koebler and Joseph Cox over at Vice dot com pointed out that another really big issue with this hack is that Verkada offers facial recognition solutions in its security camera technologies, and that means the hackers weren't just able to look at video feeds, they could potentially identify the people who showed up in those video feeds. And as they pointed out, quote, the breach shows the astonishing reach of facial recognition enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more, end quote.
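For a sense of what facial-recognition-enabled means mechanically: these systems generally reduce each face image to a numeric vector, a faceprint, and identify a new image by finding the closest stored vector. Here's a toy sketch in Python, with made-up three-dimensional vectors standing in for real learned embeddings that would have hundreds of dimensions; none of this is Verkada's actual code.

```python
import math

# Toy sketch of faceprint matching: a known face database maps names to
# embedding vectors, and a new face is identified by nearest neighbor.
# The vectors and threshold here are invented for illustration.

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

database = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

def identify(faceprint, threshold=0.5):
    """Return the closest known identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        d = distance(faceprint, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

The threshold is the whole game in practice: too loose and strangers get matched to someone in the database, too tight and real matches get missed, which is part of where the bias and accuracy problems come from.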
I think that's putting it lightly, because honestly, while this is all about security cameras and a company that has this kind of proprietary approach to facial recognition, we have to remember that just about everybody carries a camera with them, and depending upon the apps being used, a lot of these companies are able to take advantage of massive amounts of data and do facial recognition on their own. So yeah, this is a very acute case that we can point to, but it's by no means an outlier. It is easy to imagine a scenario in which malicious hackers would not only breach a system like Verkada's, but also keep it quiet, right? They might never come forward letting people know that they got access, and then in the meantime they could use these surveillance cameras for their own purposes and perhaps even spy on and potentially blackmail specific people.
The facial recognition genie is out of the bottle, and the fact that numerous big companies are making use of the technology in a widely distributed way means that whenever you're on camera, you are potentially identifiable in real time, and you're pretty much always just moments away from being on camera if you're out and about. So fun times.

So hey, let's stay on this topic for a little bit. The United States Army Research Laboratory has been working on image recognition AI applications that will be able to identify faces even if those images were taken in darkness. This research team took half a million pictures of three people, which is a pretty small sample size, believe it or not. Now, some of those photos were taken in normal lighting conditions using a standard camera. Others were taken in low-light conditions, and some with thermal cameras. I think it's pretty obvious to see where the benefits are from a military perspective of having technology that can identify a person even in low lighting conditions, but there is no denying the idea is more than a little creepy. That being said, half a million photos, like I said, is a very small sample size, and the team says they're making progress, but they are nowhere near a level where anything is sophisticated enough for deployment.
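One standard way researchers fight this kind of brittleness, models that have only seen one lighting condition or camera angle, is data augmentation: generating mirrored and re-lit copies of each training image so the model sees more variation than was actually photographed. Here's a minimal sketch using toy two-by-two grayscale "images"; this illustrates the general technique, not the Army lab's actual pipeline.

```python
# Data augmentation sketch: expand each training image into mirrored and
# darkened variants. Images here are tiny grids of 0-255 grayscale values;
# a real pipeline would do this on full-size photos and add rotations,
# crops, noise, and so on.

def flip_horizontal(img):
    """Mirror the image left-to-right (the coffee-mug-handle fix)."""
    return [list(reversed(row)) for row in img]

def adjust_brightness(img, factor):
    """Scale every pixel; factors below 1.0 simulate low light."""
    return [[min(255, int(px * factor)) for px in row] for row in img]

def augment(img):
    """Return the original plus mirrored and darker variants."""
    variants = [img, flip_horizontal(img)]
    for base in list(variants):
        variants.append(adjust_brightness(base, 0.5))  # simulate low light
    return variants

mug = [[10, 200],
       [30, 220]]
```

Calling `augment(mug)` turns one training image into four, which is exactly the kind of cheap variety that helps a model stop memorizing a single pose or lighting setup.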
The system is still struggling to identify images in low lighting conditions. Thermal cameras produce very different kinds of photos than normal cameras do, and the computer systems haven't quite figured out how to reliably map those thermal images to specific people. Also, they said that just a small change in the camera's viewing angle, like the angle between the person's face and the camera, can make it a lot harder for a computer to suss out who it is looking at. I've often talked about image recognition by using coffee mugs as an example. If you were to feed millions of images of red coffee mugs to a computer, but every single one of those images showed the red coffee mug with its handle pointing off to the right side in some way, then the computer might balk at seeing that same red coffee mug but with the handle pointed to the left side. That could be enough to throw the computer off. Computers are remarkable, but they can still be pretty dumb in some ways. Still, this area of research is a bit scary, particularly since we already live in a world where lots of entities, like law enforcement agencies, rely on facial recognition technologies. And that's without even getting into the existing problems we already have with facial recognition, like racial and gender bias that can lead to inaccurate results.

And gosh, I wish I had a happier story I could segue to. But I should also mention that the Los Angeles Times published an article titled "Clearview AI uses your online photos to instantly ID you. That's a problem, lawsuit says." And yeah, the headline pretty much tells the story. So you've got this company, Clearview AI, and they have an enormous image database, and they created it by scraping various websites and services, particularly social networking platforms like Facebook and Twitter, but also other types of services like Google and Venmo, and they started gathering photos that way. According to the LA Times, that means the company has a database of more than three billion photos and has software that creates a digital faceprint of each person based on those photos, which allows for faster facial recognition identification if the system encounters a new image of someone who is already in the database somewhere. So all those photos that people share on different sites without really thinking about it, and I include myself in this group of people, could be part of that massive database. That means this company could potentially be making it possible for Clearview customers, like, once again, law enforcement agencies, to use those photos in real time with its various solutions.

Now, this prompted some civil liberties activists to file a lawsuit against the company in California. They said that the company's practices violate privacy and create a chilling effect on protected activities such as the right to assembly. Right now, we're in a pandemic, so people shouldn't really be assembling in public in big numbers anyway, but they're specifically talking about things like the Black Lives Matter movement. So the people who have had their accounts scraped were never given any notice that that was happening, let alone ever given the chance to give consent for it. Now, the flip side of that argument is that you could say someone posting to Facebook or wherever is doing so in a semi-public way, right? If they haven't protected their account in any way, then anyone could potentially see those photos.
But then you have to ask, what about people who don't have a Facebook account? Maybe their friends have taken photos of them and put those photos up on their own accounts. Maybe those photos even have identifiable information in them. And then you've got someone who doesn't even have a Facebook account who had their picture scraped for this kind of thing; the semi-public argument doesn't hold up in that case. So the plaintiffs are focusing just on California and California's citizens for this particular lawsuit, and they are seeking an injunction that would force Clearview AI to stop collecting biometric information on people in California. But they also want the company to delete all the biometric data that it has already collected. Clearview AI faces a similar lawsuit in Illinois; that one is being brought against it by the American Civil Liberties Union. I'm sure I will be following up on this story later in the year as it plays out. It really is indicative of how people are becoming aware that the early choices we made when we were building out stuff like social networking sites were rather shortsighted and have had bigger consequences than we anticipated back then. I mean, heck, Facebook started as a website for students to rate how hot each other happened to be. So that's kind of creepy all on its own, but it doesn't have the same sort of weight as this is a site that some company is going to use later down the road to build out an identification engine that law enforcement agencies might be using, ethically or otherwise, in the future. And because of those consequences, we now have to revisit those early decisions we made and ask questions like, how can we address this? Are there any ways to fix it? Will it require us to create a ban on the private use of facial recognition technologies? Honestly, those are big questions that we don't have answers to yet, and I don't know what answers we will arrive at, but I will continue to cover the cases.

That's it for today's news. I hope you guys stay safe and healthy. If you have anything you want to share with me, the best place to do it is over on Twitter; the handle I use is TechStuff HSW, and I'll talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.