FBXL Social

(1/?)

@norightturnnz
> Will Labour take on the oligarchs?

I very much hope so, but David Parker is dead wrong when he says;

"... we in the west have made a fundamental error in providing what is in effect an exclusion of liability for third party content."

I suggest reading some of the pieces Mike Masnick has published in defence of Section 230, the US version of the limited liability for third-party content that Parker proposes to abolish;

https://www.techdirt.com/tag/section-230/

@strypey @norightturnnz Masnick is wrong because he does not consider how e.g. Musk actively curates the Twitter feed for political/economic gain. Using the algorithm to help Donald Trump get elected is not what 230 was intended to protect. Masnick is still in the 1990s with earnest moderators missing some bad posts by accident.

@vy
> Masnick is wrong

You clearly haven't read his explainer article on Section 230. Please do that, because it explains why you're fundamentally wrong about this, saving me the trouble;

https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/

@norightturnnz

Potential bad news for anyone hosting third-party postings in the EU. Hopefully this gets overturned by a higher court, or the legislation involved amended to spell out safe harbour protections for hosts.

"Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains 'sensitive personal data' about any person. But how the fuck are they supposed to handle that?"

@mmasnick, Dec 2025

https://www.techdirt.com/2025/12/04/eus-top-court-just-made-it-literally-impossible-to-run-a-user-generated-content-platform-legally/

(1/2)

"The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.

...

There is no way that this is even remotely possible for any platform, no matter how large or how small."

@mmasnick, Dec 2025

https://www.techdirt.com/2025/12/04/eus-top-court-just-made-it-literally-impossible-to-run-a-user-generated-content-platform-legally/

(2/2)

@strypey @norightturnnz i read it and he is wrong

@vy
> i read it and he is wrong

If you like, I can go through your posts and quote the bits of Masnick's explainer that address each point. But it would be easier for me, and less publicly embarrassing for you, if you just read it again and give it some serious thought.

*If* you read it, which quite frankly I doubt, since all you've posted so far are the standard anti-S230 talking points he specifically debunks, including basic factual errors about what it's for and how it works.

@norightturnnz

@strypey @norightturnnz Masnick is stuck in the 90s. Musk's use of his platform to sell rightwing politics, smear his opponents, and provide donations in kind to the Republican party is not what the authors of 230 had in mind, but that's what we have. Similarly, Meta's use of its platform to profit from scams is wrong and shielded. https://play.cdnstream1.com/s/kcrw/question-everything/how-meta-is-making-billi-440241

@strypey @norightturnnz You could argue that it's worth the price of destroying popular government and supercharging scams to protect "free speech", but there is no actual free speech on twitter, you can get banned or algorithmically suppressed or targeted for violent threats for writing things that the owner doesn't like. What's most irritating is that neither Masnick nor his fans ever engage on these issues other than to point at his essays which *miss the point*.

@strypey @norightturnnz At the very least 230 should be reformed to require large platforms that do filtering and promotion, by hand or algorithm, to publish their editorial policy, because it is an editorial policy, and open their system to monitoring. If Musk is promoting Trump on his platform, he needs to be open about it. If he is promoting angry tweets, same. And I think there is more that can be done. Also their should be Federal and state SLAPP back laws that cover all media.

@vy
> Also their should be Federal and state SLAPP back laws that cover all media.

This bit is particularly hilarious. You know what SLAPP stands for, right?

Without Section 230, and similar safe harbour protection for intermediaries, corporations could use SLAPPs against anti-corporate activists posting on any platform they (or their group) don't own and run themselves. So platforms would have to refuse to host anti-corporate speech, to avoid constant litigation.

@norightturnnz

@strypey @norightturnnz It's naive to think that allowing oligarchs to create conduits for fascist propaganda is a good idea. I don't even agree that what Murdoch does should be legal.

@vy
> It's naive to think that allowing oligarchs to create conduits for fascist propaganda is a good idea

It's naive to think that censorship isn't a tool of fascists.

@norightturnnz

@strypey @norightturnnz It can be or not. Libertarianism is a stupid ideology. People who publish client lists in Battered Women's shelters are not adding something to public debate.

@vy
> Libertarianism is a stupid ideology

Completely beside the point. As well as being another one for the list of things you comment on without knowing the basics of.

> People who publish client lists in Battered Women's shelters are not adding something to public debate

Agreed. But also completely beside the point. This OTOH, *is* relevant to the point;

https://inthesetimes.com/article/toni-morrison-peril-racism-fascism-liberation-black-women-writers

@strypey @norightturnnz Go ahead with some examples. I've been wrong before but in this case I am not. I'll deal with the embarrassment if you deliver.

@vy
> Go ahead with some examples

*Sigh*. I'll need to get the laptop out for this.

Just remember you asked me to do it, after necroposting on *my* post. You're not stuck in here with me, I'm stuck in here with you ...

A comment on the article sums up nicely the misreading of the spirit of GDPR;

"Of course, as long as the EU has legal norms that require everyone who wants to say their opinion or sell stuff online to provide their personal contact information, it’s completely ridiculous for the EU to pretend to care the slightest bit about protecting people’s privacy."

https://www.techdirt.com/2025/12/04/eus-top-court-just-made-it-literally-impossible-to-run-a-user-generated-content-platform-legally/#comment-4922199

The biggest threat to online privacy is well-meaning but technically illiterate uses of state power.

@strypey Don't bother if it is an issue for you.

@vy
> Don't bother if it is an issue for you

Like I said, happy not to bother if you want to be spared the embarrassment. But if you're doubling down on the claims you made about section 230;

https://mastodon.social/@vy/115737366379177904

... then I'm happy to debunk them.

Since you've given me some reading homework, I'll do that in the morning, reread Masnick's piece and section 230 itself, and post a thorough explanation of how far off the map you are on this.

"Twenty-five years of the Zeran reading of Section 230 have created immense 'interpretive debt' in the courts, which have been able to avoid grappling with foundational questions of common law liability online because Section 230 allowed them to dismiss nearly all (non-copyright) lawsuits alleging intermediary liability."

Brookings, 2023

https://www.brookings.edu/articles/interpreting-the-ambiguities-of-section-230/

(1/2)

@vy

This is interesting, because the way copyright takedowns work is arguably how Section 230 was intended to work. The platform isn't liable for the copyright violation, *unless* they refuse to take immediate action to unpublish anything that can be reasonably considered a violation, once they're informed of it.

(2/2)

Me:
> Without Section 230, and similar safe harbour protection for intermediaries ... platforms would have to refuse to host anti-corporate speech, to avoid constant litigation.

@vy
> No. Here's a more serious analysis

From your link;

"Limiting Section 230 would create immense uncertainty and flood lower courts with years of litigation. This uncertainty would in turn lead platforms to act far more conservatively when it comes to allowing speech on their platforms ..."

https://www.brookings.edu/articles/interpreting-the-ambiguities-of-section-230/

@vy note that they're only talking about the effects of "limiting Section 230", not scrapping it entirely. Which would obviously make platforms even more risk averse. So your link backs my claim.

Given that, as well as remaining unconvinced that you've actually read @mmasnick's piece on 230 (as opposed to strawman versions of his arguments by third parties), I'm not even convinced you've read the article you gave me to read.

It appears to me you're just spreading anti-230 FUD.

(3/?)

However, I want to make it clear I'm not a knee-jerk defender of 230 in its current form. I'm all for an updated safe harbour law clarifying the line between "publishing" and "distribution", one that gives the courts clear criteria for determining which is which in the case of digital media.

(4/4)

It is concerning that;

@vy
> Musk actively curates the Twitter feed for political/economic gain. Using the algorithm to help Donald Trump get elected

As your link says, algorithmic recommendations were not unheard of when 230 was drafted. But we have more information now about their potential (mis)usage.

I'd be OK with *some types* of recommendation algorithms moving a platform to the "publisher" side of the line. But again, this would be amending or replacing 230, not scrapping it.

(5/5)

Whatever happens with Section 230 and similar laws limiting liability for intermediaries - amend, replace, supplement with clarifying legislation, etc - it needs to be done in a way that takes into account the Manila Principles on Intermediary Liability;

https://manilaprinciples.org/principles.html

Accepting that, for the net - or indeed any information transmission system - to work at all, distinctions must be made between publishers and distributors of works.

(1/?)

So, I've done the (re-)reading. Let's go through your claims;

@vy
> Masnick is stuck in the 90s

"... from the Roommates case, to the Accusearch case, to the Doe v. Internet Brands case, to the Oberdorf v. Amazon case, we see plenty of cases where judges have made it clear that there are limits to Section 230 protections ..."

https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/

Note that many of these cases happened since the 90s.

(2/?)

@vy
> Musk's use of his platform to sell rightwing politics, smear his opponents, and provide donations in kind to the Republican party is not what the authors of 230 had in mind

See this;

If you said “A site that has political bias is not neutral, and thus loses its Section 230 protections”

... this;

If you said “Section 230 is why there’s hate speech online…”

... and maybe this;

If you said “Section 230 was designed to encourage websites to be neutral common carriers”

(3/?)

@vy
> Similarly, Meta's use of its platform to profit from scams is wrong and shielded

See this section;

If you said “Section 230 means these companies can never be sued!”

... and maybe this section;

If you said “Section 230 gives websites blanket immunity!”

> You could argue that it's worth the price of destroying popular government and supercharging scams

See Cory Doctorow's point, that overstating the influence of these platforms helps them sell their snake oil.

(4/?)

> to protect "free speech", but there is no actual free speech on twitter

You're literally admitting - right there - that you're bringing up a red herring. You know that, right?

Nevertheless, some of the stuff in this section is relevant;

If you said “If all this stuff is actually protected by the 1st Amendment, then we can just get rid of Section 230”

(5/?)

@vy
> you can get banned or algorithmically suppressed

See;

If you said “Once a company like that starts moderating content, it’s no longer a platform, but a publisher”

> or targeted for violent threats for writing things that the owner doesn't like

IANAL but I'm pretty sure death threats are a violation of federal criminal law, so you need to read;

If you said “Section 230 is a get out of jail card for websites!”

(6/?)

@vy
> What's most irritating is that neither Masnick nor his fans ever engage on these issues other than to point at his essays

I refer to this essay for 2 reasons;

1) I don't live in the US so any well informed tech commentator who does is likely to be better informed than me.

2) The essay summarises and deals with all the main anti-230 talking points that crop up over and over. Tapping the sign saves a lot of time.

> which *miss the point*

Someone is certainly missing the point.

(7/?)

@vy
> 230 should be reformed to require large platforms that do filtering and promotion, by hand or algorithm, to publish their editorial policy, because it is an editorial policy, and open their system to monitoring

This bit I actually agree with. But this has nothing to do with whether moderation is allowed or not, mandated or not, incentivised or not, or whether moderating or not moderating makes a platform liable. So it's not an argument against what 230 is there to do.

(8/8)

So, yeah. You seem to think your analysis is saying something that hasn't been said many times before. Clearly it's not.

Section 230 is not the cause of the problems you are concerned about - some of them quite rightly - and fighting it, like pushing for age-gating, is a waste of time that could be put into fighting the actual problems. See Cory Doctorow's Enshittification for details.

@strypey @mmasnick You can read the Brookings paper a tiny bit further to see
" If Congress reacted quickly by enacting a comprehensive—and, unlike Section 230, clear—liability regime, these disruptive effects would be limited. If not, they could fester for years."

This part of the Brookings essay is a realistically low expectations analysis of what Congress could get done. It doesn't in any way justify 230, or support your or Masnick's oversimplified arguments.

(1/2)

@vy
> This part of the Brookings essay is a realistically low expectations analysis of what Congress could get done

True. It's saying that in the likely case where Congress - especially in its current state - takes a long time to amend 230;

"... these disruptive effects ... could fester for years."

In the best case scenario there'd still be "disruptive effects", but they "would be limited".

(2/2)

Again, this is in a scenario where the courts are ignoring years of precedent and enforcing a narrower view of 230. Not one where it's removed entirely, which would cause all the same disruptions and worse. For the reasons given in the bit I quoted.

@strypey Is it your theory that Meta and Twitter etc. offer free speech on their platforms?

@vy
> Is it your theory that Meta and Twitter etc. offer free speech on their platforms?

Why would you think that? They host speech on their platform. "Free speech" is the concept that they ought to be allowed to do that, and that the people whose speech it is ought to be allowed to speak. Unless there is a *very* good reason why not, on a case by case basis.

So the default is, all speech is allowed. The onus is on people arguing for any limitations on speech to justify them, case by case.

The history of Section 230 is that, prior to 230, most internet services were not moderated at all, and under the law of the land the people operating those services had limited liability for anything posted on them, unless and until it was reported.

There was one early case involving CompuServe: because that service didn't do editorial moderation, it was treated like a newsstand that sells newspapers other people published, and so CompuServe was not liable for the individual things written by the posters. By contrast, there was a case involving Prodigy, I think, where the service claimed to be moderated and thus safer for young people. This opened them up to liability, because they asserted editorial moderation, and some of the content they allowed through was found to be defamatory.

Therefore, the state of the internet prior to Section 230 was this: either no editorial moderation, which limited liability for the things said, or editorial moderation, where the website provider took on liability for the things that were posted, and also took on liability for damages when they moderated.

If Section 230 were removed, this is the state things would go back to. Legally, if you never made any pretense of moderation, you would be protected from liability for the individual things that people post. And if you did choose to moderate, you would be personally responsible for everything that was posted on your website.

This is why a lot of free speech people were pushing for the abolition of Section 230 altogether: at that point, either everything becomes Usenet again, largely unmoderated editorially, or what remains becomes so fully locked down it would no longer be interactive. In other words, individuals who believe that abolishing Section 230 would result in moderation they like becoming the norm are likely incorrect. The more likely result is that, once Section 230 was removed, the remaining services that didn't immediately lock down would not have any editorial controls at all.

But I should tell you that if you are on the political left, there's an awful lot of stuff that would immediately get taken down, because no company would want to risk being treated as the speaker of that speech. In particular, accusations that a certain company did X, or that a certain individual did Y, would almost certainly be banned outright, because the companies running the platforms would not want to be "saying" such things in court.

Of course, on the political right there would be a lot of stuff taken down too, but not the stuff most lefties would want. Legally, there's generally nothing wrong with what is called hate speech, at least not in the United States. So a claim that a certain company is dumping toxic chemicals into a river may be actionable defamation, and therefore might get moderated; a statement of fact about a certain politician may be actionable defamation, and therefore might get moderated; but you can legally talk about your opinions on different races all day long.

A lot of politically neutral services would likely be up in the air as well. Things like recommendation algorithms or monetization decisions could have to change to accommodate the elimination of section 230 and the resulting legal regime.

@strypey Again, I didn't propose that.
Me: At the very least 230 should be reformed to require large platforms that do filtering and promotion, by hand or algorithm, to publish their editorial policy, because it is an editorial policy, and open their system to monitoring. If Musk is promoting Trump on his platform, he needs to be open about it. If he is promoting angry tweets, same. .. there is more that can be done. Also there should be Federal and state SLAPP back laws that cover all media.

@strypey Section 230 does not protect against SLAPP suits (as Mike Masnick should know) and it does protect corporate monopolists. The case for it relies on pretending otherwise.
https://www.eff.org/deeplinks/2022/09/its-time-federal-anti-slapp-law-protect-online-speakers

@vy
> Section 230 does not protect against SLAPP suits

How could it? They're totally unrelated. If there's some way to amend 230 so it makes SLAPP suits harder, I'd be all for that. But I can't imagine one.

> it does protect corporate monopolists

It does the opposite. See;

If you said “Section 230 is a massive gift to big tech!”

https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/

If you have read Masnick's article, you either didn't understand it, or you're intentionally spreading FUD. Neither is very impressive.

@strypey You are defending monopolists who spread right wing propaganda world wide, actively promote threats and violence, and suppress free speech of their opponents as if it were free speech. Facebook has 3 billion monthly active users. Their algorithms embody a strong editorial hand, and you pretend that protecting them from oversight and competition is facilitating free speech.

@strypey You wrote:
Me:
> Without 230, and similar safe harbour protection for intermediaries ... platforms would have to refuse to host anti-corporate speech, to avoid constant litigation.

And so I ask you whether the main, monopolistic, platforms are hosting anti-corporate speech as it is.

@vy
> I ask you whether the main, monopolistic, platforms are hosting anti-corporate speech as it is

Yes, all the time. People routinely post anti-corporate rants on platforms owned by them. For example @norightturnnz criticises corporations on their blog all the time;

https://norightturn.blogspot.com/

(BlogSpot is owned by Google)

@vy Oh FFS. I don't know which is more hilarious. The fact that you are still deeply, fundamentally wrong about how Section 230 works, see;

If you said “Section 230 is a massive gift to big tech!”

https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/

... or that you think you're talking to someone who defends corporations.

You're either functionally illiterate or concern trolling. In either case, we're done here.

@vy
> Section 230 does not protect against SLAPP suits

Me:
> How could it?

Except where a SLAPP takes the form of trying to hold an online service liable for something a user posted there, as a way to make them censor it. S230 would definitely be useful to lawyers defending against that form of SLAPP.

Otherwise, as I said, they're totally unrelated.