
What is Section 230?

Dec 07, 2020 · 54 min

Episode description

A US law that was instrumental in shaping the Internet has been a controversial topic recently. In this episode, we learn what Section 230 is and why without it we wouldn't have the Internet of today.


Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio and I love all things tech. And what if websites were held responsible for the content that other people post to those websites? What if, after a customer left a bad review for a product online, the company that makes that product could sue Amazon for hosting the review? What if that company that makes the thing, what if they

won that lawsuit? What if Facebook were liable for the posts made by its two billion users worldwide? If sites were held accountable for the content that users and third parties posted to them, we would not have the Internet we have today. In fact, companies and organizations like Wikipedia, Amazon, Facebook, Twitter, even Google wouldn't exist, or at least they wouldn't exist in the forms they do today, if that were the case. Now, if you live in the United States, you might have

heard a bit about Section 230. Even if you're outside the United States, you might have heard some references to it. Now, if you're only casually following the news or you just hear Section 230 in passing, it's probably pretty confusing. It clearly has something to do with technology and liability and communication. President Donald Trump has called upon Congress to revoke it several times now, even threatening to veto the National Defense Authorization Act

unless Congress repealed Section 230. But Trump is not the only politician to call out this legislation. Representatives from both the Republican and Democratic parties have proposed changes or even the outright elimination of Section 230 over the years. Heck, President-elect Joe Biden has also called

upon the need to revoke 230. And if you live in America, you might be surprised to hear that Trump and Biden have agreed on something, though I guess it's fair to point out they agree on the end result but for very different motivations. So in today's episode, I want to talk about what Section 230 is, where it came from, what its purpose was and is, and why there's so much discussion about the need to change or get rid of it from various viewpoints across the

political spectrum. And I'll do my best to avoid any political commentary, but I do want to say that the motivations behind these various calls for change vary a great deal. I think a lot of folks in politics agree that Section 230 needs some attention, but they don't all agree as to the reasons why or how it should be done. So we're gonna get into all of that. And before we jump in, I want to

recommend an amazing resource. It's a book by Jeff Kosseff titled The Twenty-Six Words That Created the Internet, and it's all about Section 230, from the genesis of the idea to the implementation of Section 230 in court cases. And it's also a really good read, which is a weird thing to say about a book centering on a subsection of a huge telecommunications bill. I should also add, as a trigger warning, that the book discusses some

cases that deal with some really heavy, dark stuff. Section 230 has been tested in some really emotionally charged cases that include truly awful things that have happened to people, so fair warning if you do want to check that book out and give it a read. Now, at the heart of all of this are concepts like free speech, which has very wide protection in America, and liability,

that is, being held accountable, legally accountable, for something. Because while there are broad protections for freedom of speech, it is not absolute. There are some forms of speech and expression that are not protected under the First Amendment. And that's because free speech sometimes bumps up against other important things like security or privacy, that kind of thing. So it's one of those things where it's not pure black and white. There's some shades of gray. Now, let's find

out what Section 230 is. It's called Section 230, so that suggests it's part of something bigger, right? It's a section of something. Well, it's a section of a larger piece of legislation, and that piece was the Communications Decency Act. So let's turn back the hands of the clock a bit. Heck, maybe I'll dust off the old TechStuff time machine for this one.

I don't think we've actually used it in years. Fortunately, I did bring it home with me when our office went into lockdown, so it's really just taking up space in the corner. Hang on, hang on a second. I'm just gonna get it out. I gotta move a couple of things. Be right back. Alright, alright, so I've got it. Now, let me set the dial back to, uh, let's see, 1990. All right, okay, here we go, everybody in, come on, let's all get into the time machine. All right. Ready?

Push the button, Frank, and here we are. It's 1990. The number one hit single of the year is "Hold On" by Wilson Phillips, a song I'm not ashamed to say I absolutely loved at the time. Shows like Cheers, A Different World, and Murphy Brown are on television. At the box office, the film Ghost comes out on top, with Home Alone not far behind. But we're not here to see the awkward transition of the nineteen eighties into

the nineteen nineties. Now, we're here to learn about how US law would view the role of online platforms. Back in 1990, where we are now, the Internet isn't really a thing as far as the mainstream public is concerned. It exists, but hardly anyone knows very much about it outside of research facilities and government offices. There's no such thing as the World Wide Web yet. However, there are a

few big online service providers, or OSPs. Now, these are sort of the predecessors to Internet service providers, or ISPs, and an OSP is kind of like its own micro internet, though really we would just kind of call it a network. So think of it as a self-contained collection of servers that hosts stuff like forums and newsletters and articles and files, and you're on the right track. And they don't necessarily talk to each other,

so they're kind of self-contained. Well, one of those big OSPs is CompuServe, and it's going to get taken to court. At the heart of the matter is an accusation of libel, that is, uh, misinformation with the intent to cause harm that's in print. The plaintiffs, Robert Blanchard and a company called Cubby Incorporated, have developed a news and rumors service called Skuttlebut, which focuses

on the radio and TV industries. Now, according to the plaintiffs, a newsletter called Rumorville USA, which also covers rumors in the TV and radio spaces, has published untrue and harmful things about Skuttlebut, and Rumorville is available on CompuServe. So the plaintiffs targeted not

just Rumorville, but CompuServe in their lawsuits. They say CompuServe is responsible because it allows the distribution of Rumorville, which in turn has published libelous content about Skuttlebut. So the lawyers representing CompuServe argue that the service has no connection to Rumorville other than serving as a way for people to get the newsletter. In other words, CompuServe is saying, hey, we don't write that. We just have it on our service, but we don't

write it. There's no employment here to generate that newsletter. CompuServe isn't involved editorially in the newsletter at all. It just comes from another company. That company is Don Fitzpatrick Associates of San Francisco, and it's referred to in the court documents as DFA. CompuServe did not employ this company or pay for this newsletter. And moreover, according to the agreement between CompuServe and DFA, DFA accepts full

responsibility for the contents of its newsletter. So CompuServe's lawyers go and make a motion for summary judgment, which in this case was to dismiss the charges, for the court to make a decision on behalf of one party against another party without the need to go to a full trial. And the judge grants this to CompuServe. The judge agrees that CompuServe did not bear responsibility for the contents of this newsletter.

The judge says that CompuServe is kind of like a bookstore, and you wouldn't hold a bookstore responsible for the contents of a book that was published by a third party just because it happens to be in that bookstore. The bookstore is just where customers can buy books. The store did not put the actual content into the books. And this becomes a precedent that would serve as a foundational building block for Section 230 later. All right, everyone, um,

we're done here. Let's all jump back in the time machine. Come on, no stragglers. I don't want to have to come back to 1990; we lived through it once, we're done. We've got to hop forward a couple of years. Okay, ready? Push the button. All right, now we're in 1994. So now the number one song in America would be Ace of Base's "The Sign," which, I don't know about you, but it opened up my eyes. I'm happy now. Seinfeld is dominating TV ratings, and the big movies at the box

office are The Lion King and Forrest Gump. So what are we doing here and now? Well, this time we've got another lawsuit, but this one is against a different OSP called Prodigy. Now, like CompuServe, Prodigy hosts stuff like forums and articles on its service. On one forum, an anonymous user alleged that a securities firm named Stratton Oakmont was committing fraud in a stock offering, and

Stratton Oakmont would sue Prodigy for libel. Now, Prodigy's lawyers said that Prodigy shouldn't be held responsible for content that's posted by a third party, by a Prodigy user, and they cited the CompuServe ruling that came back a few years earlier. But the judge in this case disagrees with Prodigy's lawyers. They rule against the service, and the judge says that Prodigy exercised editorial control over the forums. The service could and did remove material that was objectionable.

Unlike CompuServe, which had taken a largely hands-off approach to the stuff that was published on CompuServe, Prodigy got more involved and would remove things that were in violation of, you know, community standards. And therefore that made Prodigy not like a bookstore and made them more like a publication, like a newspaper, and the editorial control means that Prodigy would have to assume responsibility for stuff that appeared

on the service. After all, if Prodigy intervenes in some cases, it means it could and should have intervened in other cases, like with Stratton Oakmont. So we have that 1991 decision that says a platform is not responsible for third-party content published on that platform. But then we have a 1995 decision that seems to contradict that: if the platform exercised any sort of content management, that means that

they can be held liable. And I know we traveled to 1990 and 1994, respectively, but court cases can take a really long time. So the decisions were actually handed down a year after the initial lawsuits started. I'm sorry that we

all had to wait around so long for that. Well, this created a precarious situation for online companies, because the message seemed to be that if you provided a space for users and third parties to post stuff, it would best suit you if you just never, ever interfered with that, regardless of what gets posted to the platform, because intervening would be a slippery slope. If you start removing content, even stuff that clearly should not be there, like videos

of violence, or pornography, or death threats, or personal information of other users, whatever it might be, if you remove it, no matter how obviously awful it is, you create a precedent in which you are acting in an editorial capacity. And if you can do that, then are you really free of liability when someone posts something that's libelous or otherwise illegal? And remember, this is the mid-nineties.

The online world hadn't even really started to take off yet, so there was a real concern that we would see a big negative impact, a chilling effect on the founding and evolution of online businesses. Considering that people had generally come to the belief that the internet was going to be the future of business, or at least an important component of business, this was a bad thing. Now, if you guys pay attention out there, you probably know that

politicians typically lag behind big issues and technology. Technology changes much faster than policies do. And a lot of politicians in this country, er, in the United States, um, how do I put this delicately? They're old. Like, a lot of them are really old. The average age of a congressperson is twenty years older than the average American. The average age of a representative

is fifty-seven. For senators, it's sixty-one. And generally speaking, older generations are a little slower to pick up on technological advances than younger ones. Now, there are exceptions to this, don't get me wrong, I'm not trying to be ageist here, but as a general rule, older people are less up to speed on emerging technologies. I think that rule applies double when it comes to politicians, from my own anecdotal experience, which I get isn't really evidence.

So while this was a growing concern within the tech world, only a couple of politicians really picked up on how these court decisions could create an issue and impede the growth of online services in those early days. Now, that pair included a Democrat named Ron Wyden and a Republican named Chris Cox, who wanted to create legislation full stop.

They both wanted to make their mark, they wanted to pass some laws, but at this particular time in American history it was really hard to do that, because partisan politics were pretty vicious at the time. I think they were gentle as a kitten compared to today's politics, but at the time it was considered pretty brutal, and that meant there was very little chance to get agreement across the aisle. Republicans at the time controlled both the

House and the Senate. Congress in the United States, our Congress, is divided up into two chambers, the House of Representatives and the Senate. And a Democrat was president, so a Democratic president and a Republican Congress. And these two people figured that their best chance at making an impact was to find a topic that was so new, so cutting edge, that neither party had actually formed an opinion about it yet. The Internet was a perfect target.

And this, my friends, drives me bonkers, because it points out that the writing of Section 230 didn't begin with politicians identifying an issue and then finding a way to solve it. Instead, it was a case of a couple of politicians trying to figure out what sort of problem, any problem, they could find that they could potentially tackle

it and get their names on some legislation. That's probably being a little unfair, but it is sort of the reality of politics, and from a practical standpoint, I get it. But it's also kind of disillusioning to me. The pair determined that they would have a decent chance at proposing legislation that would protect online services from being sued for the content that other people were posting to those services, plus give the services the freedom to moderate

content without fear of being sued for that, either. The Internet was so new and the potential was so huge that they felt this was a pretty good bet, so they drafted a proposed piece of legislation that they called the Internet Freedom and Family Empowerment Act. It would ultimately have as its core principle the following, quote: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information

provided by another information content provider." End quote. Now, in addition, the piece has what has been called the good faith section, which states that platforms will not be held liable for, quote, "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

End quote. Now, when we come back, we'll learn more about what this actually means in a practical sense. But first, let's take a quick break. So on the face of it, Section 230 is fairly simple. You cannot hold someone, or some thing in the case of a platform, responsible for what someone else or something else says on that platform. And that applies to the

services and the users. If I'm on a forum and I'm making an argument for a certain point of view and someone else joins in with libelous accusations about a third person, it would be unreasonable to hold me accountable for that other person's words, right? I didn't say the libelous thing; why would I be responsible for it? I might have started the discussion thread, I might have initiated the conversation, but if I didn't actually say anything libelous,

then I shouldn't be held responsible. Right. Well, that same protection, according to this legislation, would apply to online platforms. In addition, the platforms would be able to make their own moderation policies and not be held liable if the platform removed

something that would otherwise be constitutionally protected. So, in other words, if Facebook removed a post because it violated a Facebook standard, even if that post would otherwise be protected constitutionally, Facebook would not be held liable as long as that removal

was done in good faith. Now, at the time that Cox and Wyden were putting together their proposal in the House of Representatives, there was another piece of legislation under consideration aimed at the issue of pornography online, but that was taking place over in the US Senate. And this piece tried to treat online content in a way similar to how the US government regulates content on

TV and on radio broadcasts. There was a growing concern about the availability, the accessibility, of pornographic and obscene material. Obscenity in this case would fall under a pretty conservative definition. Like, you know, kind of like beauty, obscenity is in the eye of the beholder. It's one of those things where you know it when you see it; that's kind of the famous quote in US history. But the proposed legislation would criminalize the act of exposing those under eighteen

to obscene or pornographic materials online. Now, I should say it covered instances in which the age of the recipient, or at least their under-eighteen status, would have to be known to the person or entity sending the material for this to be relevant. The proposed legislation went a bit further than that too, with sections related to speech

that is indecent but not obscene. Again, really weird gray area territory here, but the language raised some concerns among civil liberties organizations, so people made some pretty strong arguments

against this Senate version of the idea. For example, in the medical industry, there's terminology that some people might define as indecent that's absolutely critical for legitimate medical communication in clear terms. So sites that provide useful information on sensitive topics, like educating people about sexually transmitted diseases or resources for people who are in the LGBTQ+ communities, all that could be at risk if you

allowed this kind of legislation to go forward. So Ron Wyden and Chris Cox were positioning their proposal in the House of Representatives as kind of an alternative to the approach that was being talked about in the Senate. They suggested that online platforms have the ability to moderate content on their sites without the risk of being held liable for the stuff that people and other parties were posting to those sites. And according to Kosseff, their idea met

with no resistance. In fact, barely anyone even noticed. In the United States, the way Congress creates new laws requires both the House of Representatives and the Senate to vote on the legislation and approve it before sending that piece on to the President to be signed into law or

potentially vetoed. In a case where both the House and the Senate are working on something similar but distinct as far as legislation goes, those two versions have to be hashed out in committee to create a more unified approach. Both the Communications Decency Act out of the Senate and what would become Section 230 in the House of Representatives would ultimately be lumped in with the overall discussions

of what would become the Telecommunications Act of 1996. Most of Congress was really focused on the other elements of the Telecommunications Act, the stuff that related to telephone companies and telephone infrastructure and cable companies. The Internet portions were more of an afterthought. It was so new that a lot of people weren't really thinking about that. They were thinking, no, the future's in long-distance phone calls,

gosh darn it. And so both the CDA, which had support due to it being positioned as a piece of legislation advocating for family values, which, boy, was that a big, big point of discussion in the nineties, and Section 230, which was positioned as protecting a new and vulnerable industry, both of them made it through.

So in the end, the language that would become Section 230 and the alternative proposal, which would become the online portions of the Communications Decency Act, would be bound together to form kind of a Voltron-like construction of online policy. Now, the Telecommunications Act is enormous. It's a beast of a law, covering stuff like telephone lines, cable television, and more. Section 230 is just a tiny part

of that beast. Interestingly, the anti-obscenity measures in the Communications Decency Act would not stand the test of time. Judges would strike down large portions of it, citing issues such as vague terminology like "indecent" and "offensive" without firm definitions. That legislation was open to far too much subjective interpretation

to be useful. But while a lot of the CDA would go bye-bye, Section 230 remained intact. So all that obscenity and decency stuff was gone, but that Section 230 idea from Wyden and Cox was still there. It wasn't brought into question in the courts the way the parts of the CDA that specifically dealt with indecency online were. Section 230 went untouched, and it was a doozy. With Section 230, online services received immunity from liability regarding

what their users and third parties published. Section 230 means that these entities, things like Facebook, Twitter, Amazon, Google, cannot be held legally responsible for user-generated content, with only the most minute exceptions, and most of those would come much, much later. It's sort of a get-out-of-jail-free card, especially in the early days. When drafting the language, Wyden and Cox were careful

to avoid being too broad. The idea was that outright criminal activity and stuff like copyright infringement wouldn't receive full protection under 230, but pretty much anything else would. At least, that was the potential for it. Now, here's the thing about laws: they get tested in the courts. Courts are left to interpret and enforce laws. So Congress

writes the laws, the President approves the laws, but the laws are interpreted and enforced in the court system, in the judicial system. So the fact that the courts could interpret this meant that there was still a question as to whether the courts would say that Section 230 meant online services would have a conditional shield, a conditional immunity to liability, as long as they did operate in good faith, you know, kind of like if

they were told to remove something because it was illegal or harmful, they then went and did so, and then they'd be fine. Or the courts could interpret it a different way. They could say that the protections are more broad and give online services complete freedom from liability unless an exception were to be applied. And as it turned out, the court system leaned toward option number two, that broad application approach, and it wasn't done lightly or easily.

I would love to go into this more, but it would take up way too much time, and trust me, this episode is going to be a long one. So if you want to learn about the legal decisions that would codify the extent to which Section 230 protects online services, read Jeff Kosseff's book that I mentioned earlier.

Now, when Wyden and Cox put this plan together, their concept was that these service providers, protected by immunity to legal repercussions as far as user-generated content is concerned, would be freed up to moderate that user-generated content as

much as they needed to. The goal was to create a safety structure so that Prodigy or America Online or, much later, Amazon or Facebook or Google would have the freedom to excise objectionable material off the site without risking being seen as a publisher that has liability, like Prodigy was a few years earlier. And I've seen this described as the sword and shield approach. The shield is that immunity, and the sword is that editorial capacity to intervene without

fear of retribution. But there was a problem. Platforms began to adopt a tendency to just rely on the shield part. Frequently they did very little to editorialize or to moderate. Now this might sound familiar to you. Over the past few years, Twitter, for example, has come under fire for being reticent when it comes to applying the company's own

terms of service as far as content goes. Time and again, people have cited tweets from various users and asked Twitter's management why such examples are allowed to stay on Twitter when, by at least what is arguably a reasonable interpretation, those tweets violate the code that Twitter says it has in place.

We've seen this a ton with certain politicians in particular, and it was only recently that Twitter would even flag posts from prominent politicians like the President as containing misleading information or untruths, and Twitter in no way rushed to address this problem. I'll get to why they ultimately did later, but spoiler alert, it doesn't really have anything to do

with Section 230. Now, I don't mean to just single out Twitter and ignore everyone else, because, as I said, it was far more common across the board for platforms to take a very hands-off approach when it came to moderating content. Typically, they only stepped in during particularly extreme or blatant abuses of the platform's policies, and not always even then. All of this despite that immunity granted by Section 230. There were a couple of reasons

for this. One is that, as I mentioned in the Facebook algorithm episode that published last week, companies like Facebook benefit financially from people being active on their platforms, and controversial posts generate a lot of action. So you could argue, hey, that bad stuff that we would rather not have on these platforms, that's making the platforms a lot of money, so there's not a lot of financial incentive for them

to act against it. Second, over the past few years, several platforms have emerged that seek to encourage trolling and

malicious behaviors. Like, they're not just hosting it; their whole purpose is to be a hotspot for that, and they rely on Section 230 to shield them from repercussions, because that immunity, at least at first, extended so far that even if a platform were to distribute, or perhaps even encourage the distribution of, malicious content, it was still protected as long as it was not generating the content.

If it was someone else using their platform to spread terrible stuff, they were still in the clear, because

Section 230 gave them that protection. There were numerous court cases that challenged this in different ways, including ones that would see plaintiffs sue platforms for negligence for failing to take down a harmful post after being told about it. But early on, courts decided that if they found in favor of plaintiffs in these cases, it would just represent a workaround for Section 230, and it would essentially invalidate the protections entirely, because all

it would do is send the message of, don't sue Twitter for libel, for example; sue them for negligence instead. So that would mean that you would just have a different pathway to go after these platforms, and Section 230 would be meaningless. So the courts decided that was unacceptable, and they ruled early on that immunity from liability extended

beyond stuff like libel. In fact, some of the rulings appeared to give online companies even more protection than Chris Cox, one of the two original drafters of the law, had in mind. He said it may be that this was applied more broadly than we had intended. For malicious websites and forums, this could mean that if you have a particular agenda and a means to launch an online platform, you can push your agenda even if it causes harm

to other people, by giving sympathizers, you know, people who share your philosophy, a place to promote their ideology, your ideology, as long as you're careful not to do it yourself, because you're really just providing a place for third parties to do it. If you start to publish your own words, you no longer have protection, because then you're not just acting as a distributor for a third party's content. So

we saw it happen a lot. We still do. And then third, over the years, the government has steadily chipped away bit by bit at that protective shield, with various court cases leading judges to interpret exemptions to that immunity from liability. And that makes platforms a little less keen on editorializing for fear of being pulled into one of those exceptions. Now, when we come back, I'll talk a bit more about the ways that courts have eroded Section 230 from

its original and arguably OP status. But first, let's take another quick break. All right, guys, you know what, I guess it's about time we kind of jumped back into the present. So the nineties were fun and all, but take off the flannel, stop listening to grunge, forget about My So-Called Life for a minute, and let's all get back in the time machine. Everyone back in the time machine, you too. All right, I'll push the button.

For the first decade of its existence, Section 230 was kind of like a bulletproof vest for online platforms when it came to liability linked to user and third-party content. There are numerous incredible court cases featuring some really sympathetic plaintiffs, you know, people who inarguably suffered hardships because of someone sharing harmful or abusive materials online. And the message seemed to be that, at least in some cases, there was no real legal recourse for these

people to seek out justice. If the perpetrator who's publishing this harmful information is anonymous, it can be really hard to track that person down and hold them accountable for what they said online. There are lots of ways that people can hide their identity, including ways to hide their IP address, which means that in some cases it would just be impossible to figure out who was ultimately responsible.

And Section 230 would shield the platforms from legal action, which meant that the victim would have no real options, and that rubbed a lot of people the wrong way. Now, before I get into details, I do want to say that I acknowledge this is a very tricky situation. On the one hand, it seems unreasonable to hold a platform accountable for something that it didn't generate. It wasn't responsible

for creating it. So if I pop onto Twitter and I post harmful lies about you, it's not Twitter's fault that I did that, right? It's my fault, and Twitter should be protected from being unfairly lumped in with me on that matter. Online platforms serve millions or even billions of people, and there's just no way to filter every single post from every single person to make sure that

there's nothing harmful there. But on the other hand, let's say I post something that is demonstrably false and harmful directed towards you, and you alert the platform I've used to this problem. If the platform fails to act, to take down that post or perhaps even go further, maybe ban me from using the service, then doesn't that suggest the platform itself should be held accountable? I mean, it's allowing a wrong to continue. Is there no responsibility to

prevent harm? And if there isn't, what makes the Internet different from other things that the law covers? Because other areas of, you know, the legal world

in the United States don't enjoy this protection. But it's a really complex issue, and it reminds us that you can have a lot of things that are really important, like freedom of speech or a right to privacy or an expectation of security, and you can have those come into conflict with one another, and it ultimately means that whatever decision you make, it's not going to be satisfying

to everybody. Starting around 2008, courts began to rule that Section 230 wasn't a perfect force field protecting online services from all liability. In California, a pair of fair housing nonprofit organizations brought a lawsuit against a website called Roommates.com. They said that the site was encouraging users to post and sort housing opportunities in a discriminatory way, and that violated federal and state law.

It is illegal to advertise housing with language that indicates preference, limitation, or discrimination based on race, sex, familial status, and that kind of thing. But Roommates.com allowed users to fill out fields on all that kind of stuff.

You could create a profile where you included things like your gender, your familial status, you know, your sexual orientation, all this kind of stuff, and some people on the site started posting discriminatory notices saying things like, you know, essentially, only white people need apply, that kind of stuff.

Terrible stuff. Now, the lawyer for Roommates.com argued that the site was protected under Section 230, but the plaintiffs' lawyer said, hang on, Roommates is totally setting all this up, because it has people fill in that information in the first place. It asks people to give

those details. Now, the case was initially dismissed in a lower court, but an appeals court would take it into further consideration, and that three-judge court ultimately decided, with a very narrow focus, that Roommates.com was liable for asking questions that were allegedly discriminatory, but that it was not liable for the content that users were writing on the site, like under "additional comments." And it's a fine distinction, but it marked a small weakness

in 230's armor. Moreover, a subsequent hearing found that a website develops content if it, quote, "contributes materially to the alleged illegality of the conduct," end quote, which would mean in those cases the service would no longer be a simple publisher or distributor. It would be a developer of content, and thus 230 protection would not apply.

Subsequent court cases reinforced the idea that if a website, quote unquote, "materially contributes" to the illegality of material posted to that site by third parties, it would not, or may not, qualify for Section 230 immunity. So the parameters of protection began to change a little bit. Should a court find that a site had not just allowed users to post illegal material, but had somehow been active in that process beyond just being a publication platform,

then it could be held accountable. But while there were new parameters, they weren't strictly defined, and courts would have to interpret specific cases within the context of this kind of vague notion of restrictions to immunity. A subsequent case brought against Yahoo by a woman named Cecilia Barnes would further complicate matters. Barnes's ex-boyfriend created a fake profile on Yahoo, claiming that the profile belonged to Cecilia, and he included nude pictures of Barnes that he had

taken without her consent, which is truly horrifying. And then he also included her work contact information and before long men were showing up and trying to contact Cecilia, and that must have come as a real shock to her. I can't imagine how disruptive that had to have been

to her life. Cecilia found a link on Yahoo that explained what people should do in the event that they wanted to claim that a profile purporting to represent them was in fact a fake, and it involved sending a signed statement and a copy of their ID to Yahoo via snail mail. So not exactly the fastest or most streamlined of processes, but Barnes went ahead and did it. But the Yahoo profile stayed up. Barnes had heard nothing back from Yahoo, so she tried it again a couple

of times and still didn't receive any reply. Then she was scheduled to give an interview on local television and talk about her experience, when, miraculously, a Yahoo representative actually reached out to her. Now, that representative was the director of communications at Yahoo, and the director of communications promised Cecilia that she would take the request from Cecilia over to the proper division by hand and make certain that the profile was removed. And the profile was not

removed; it stayed up for another couple of months. So Barnes goes and sues Yahoo. Now, I'm going to skip most of the court process, but ultimately the case hinged on the fact that a Yahoo representative had made a promise to do something but then didn't do it, and that, the court found, was outside the protections of Section

230. Ironically, if Yahoo had not reached out at all, if the company had just allowed things to keep on going as they were going, with the fake profile up, and they just never replied to Barnes, Section 230 would still have applied to Yahoo. It was only because the representative had promised to do something and did not follow through that the company was found liable. Now, if Yahoo had actually pulled down that profile, there also

would have been nothing to talk about here. So if they had done what they said they were going to do, there also wouldn't have been a problem. So literally, Yahoo went down the one pathway where there still would

be liability. Barnes actually ultimately withdrew her lawsuit before it would go through the entire court process, but that earlier finding in court would hold, and it would, oddly, discourage platforms from taking a more active role in moderation, because if a site did promise to remove something and then didn't do it in a timely enough manner, they could be held liable for that, because they didn't carry through on a promise. If they did nothing at all,

they wouldn't be liable. They'd be protected under Section 230. And I don't know about you, but doing nothing tends to be easier than doing something. I mean, it's even easier than promising to do something but not doing it. Just not doing anything at all is still the easiest thing to do. So in a way, these rulings that found limitations to 230 reinforced the behaviors of companies that were reluctant to moderate the content on their platforms.

More recently, cases have brought to light that Section 230 can play a role in suppressing the voices of marginalized and vulnerable populations. So, in other words, a piece of legislation tied to the spirit of free speech could in itself be suppressing the free speech of others. For example, while there are plenty of cases of online harassment campaigns against all sorts of people, women represent a disproportionate number

of victims of online harassment. Women, particularly young women, encounter sexualized online abuse far more frequently than men do, and so there is a real issue of Section 230 providing immunity to platforms that house communities who are perpetuating an abusive set of behaviors, which is not great. And other vulnerable populations face this too. We see it in terms of race and sexual orientation, religious affiliations, political affiliations,

and more. And that harassment can have the effect of silencing the people who are being harassed, so it is a form of suppression of the freedom of speech. So courts have whittled back a bit of the Section 230 protection, and we have seen a bit more of a move towards moderating content on platforms. However, this is not out of a fear of legal liability. Instead, it's

because of consumer demand. We've seen platforms like Facebook and YouTube and Twitter get more involved in content moderation, not because the government was, you know, not going to provide them immunity. The immunity, the legal immunity, was still there. It's because users were demanding it, and a failure to

act could have resulted in users dumping the services. And these companies could still operate with an incredible amount of legal protection, but that doesn't save them from the consequences of people abandoning their business. They need those customers. And then, in 2018, Congress passed a bill that amended Section 230. It removed protections for any site that knowingly

contributes to or supports sex trafficking. While the goal of eliminating the support of sex trafficking is a really good one, one we absolutely need to focus on, the actual bill itself would receive some criticism, not for its purpose

but for its creation, like its wording. So law professor Eric Goldman wrote that, quote, "As a result, liability based on knowledge pushes internet companies to adopt one of two extreme positions: moderate all content perfectly and accept the legal risk for any errors, or don't moderate content at all

as a way of negating knowledge." So when you think of it that way, if the law says that if you know this is happening, you're obligated to stop it, or else you're going to be held liable, then that also opens up the opportunity to, quote unquote, not know it is happening. It creates an incentive to not get involved, which is like the earlier problems that I

mentioned in this episode. But, you know, any system designed by humans is going to be imperfect, right? And that brings us up to this year, where we're seeing various politicians and others calling for an end, or at least an amendment, to Section 230. President Trump appears angry that Twitter, for example, has flagged many of

his tweets as containing misleading information. He has gone so far as to call Section 230 a threat to national security, which echoes something that was actually argued back in the nineteen seventies related to the publication of the Pentagon Papers. Uh, that tactic didn't work then, and I don't think it's really gonna work now. And it doesn't help that the root of the problem, which is misinformation,

is really to blame here. There's also a misinterpretation of Section 230 going on here, because Trump has argued that platforms must be neutral in their approach, which just isn't true. It's not part of the original law at all, nor is that an interpretation that's been supported in the numerous court cases that have shaped the practice of Section 230. Nowhere does it state that a

platform has to be neutral. In fact, in some of the most famous cases in which Section 230 protections were upheld, it involved content platforms that were most assuredly not neutral, and in at least a few cases,

they were arguably downright maliciously biased. Now, last year, in 2019, Senator Josh Hawley, a Republican from Missouri, introduced legislation under which any online service with more than thirty million US users or three hundred million users globally, or with revenue of at least five hundred million dollars, would be required to take a politically neutral stance when it came to moderating content in order to qualify for

Section 230 protection. So the implication here is that the platforms have a bias against a particular political philosophy. In this case, he's arguing that they are biased against conservatives, and therefore, when these platforms moderate content, they tend to do so disproportionately to the detriment of conservative voices. Not many people are taking this particular proposal seriously, because it would probably get torn to shreds under First Amendment arguments

in court. It wouldn't hold up to scrutiny at all. Other proposals aim to follow the path of the 2018 amendment that covered sex trafficking, with the idea that you could do the same thing with other exceptions to the

Section 230 protection. But one potential problem with that approach is that it creates a real mess as far as what 230 does and doesn't apply to, and it could potentially reach a point where it's harder to tell under what conditions platforms have protection versus the conditions where they don't, and it would place a very heavy burden on the court system to determine, through various lawsuits, if the defendants, that being the platforms, had met the

legal burden to qualify for 230 protection. So, in other words, that's not ideal, either. Then there's the possibility that Congress will just repeal 230 totally, which would have been terrifying to companies back in the nineties when they were still trying to establish themselves. But frankly, 230 protection is an American thing. In other places like Europe, these broad protections don't exist in that form, and yet social networking platforms, forums, that kind of stuff.

They still operate in those places. Now, granted, they have to do so while following a stricter set of rules, and it's a pain in the butt. But we should also remember that the big tech companies that this would affect are also heavily involved in lobbying efforts in politics. So how likely is it that we're going to see Section 230 completely repealed? I honestly don't know, but I think it would be a steep, uphill battle, because you've got a lot of money from these tech companies

influencing a lot of politicians. According to a great piece in Ars Technica titled "Section 230: The Internet law politicians love to hate, explained," a law professor at the University of Maryland named Danielle Citron and researcher Benjamin Wittes from the Brookings Institution suggest that 230 should be amended so that the platforms receive immunity only if they ensure, quote, "reasonable steps to prevent or address unlawful uses of its services," end quote, leaving a lot of that

language up for interpretation in the courts. So the idea being that you should be fine, you should be immune, as long as you can prove that whenever bad stuff is happening on your platform, you're doing your best to stop it. So, in cases where a platform was notified, hey, some other user has published my private information on your platform without my permission, take it down, they would actually

go and take it down. Right now, Section 230 applies to those companies whether they take anything down or not, and that has led to some pretty tragic circumstances in the lives of people who have been affected by malicious users of various services out there. Now, as I said, this is a complicated subject. There is a real need to protect freedom of speech, because without it, if companies can be held liable for everything that users write, we're gonna see a disappearance of all of those things that

we take for granted now. I mean, social networks would be totally different. We wouldn't be able to leave reviews, because any company that didn't like a review could end up suing the marketplace for hosting that review. It would be a huge mess. So we do need something there. At the same time, we have to address that the rules as they stand right now do disproportionately affect vulnerable

populations in a negative way. We need to fix that, and we've got to figure out how to give more incentives for platforms to take an active role in moderating the content that appears on those platforms. And to me, that's a tough, tough nut to crack, because there's not a whole lot of financial incentive to do it unless, as we've seen, there's a threat of people leaving those platforms. Otherwise, there's more of a financial incentive to

keep it up there. So it's a complicated situation. But I hope that this helps you have an understanding of what Section 230 is, what it was intended to do, and what it actually has done, because, as we know, often we will create a construct planning for it to do one thing, only to see it go off and rampage through the village and, you know, throw a little girl in the river. That's a Frankenstein reference, although I think it might have been a little boy in the book.

I don't remember. I haven't read it in a long time. Anyway, that wraps up this discussion of Section 230 on TechStuff. Hope you guys learned something. I hope you haven't heard my dog shaking his head in the background. If you have any suggestions for future topics of TechStuff, reach out to me. You can do so on Twitter, where, unless they start moderating your comments, I'll see it, and the handle for that is TechStuffHSW. I'll talk to you again really soon. TechStuff

is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
