Ctrl-Alt-Speech

For Meta or Worse

Mike Masnick & Ben Whitelaw Season 1 Episode 96


In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover the jury verdicts against Meta in New Mexico and California, the Supreme Court's ruling in Cox v. Sony, the settlement of the Murthy v. Missouri case, and Meta's plans to use AI for moderation and user support.

Don’t forget to listen along with Ctrl-Alt-Speech’s 2026 Bingo Card and drop us a line if you win or have ideas for new squares.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw

So, Mike, I know you've been very, very deeply affected by the recent news about Horizon Worlds. You've done very well to conceal it, but I know it's really hit a deep fiber of your being: the fact that Meta will no longer be creating new games for it, and is essentially pulling all of the funding out of it. Its grand plan for a metaverse is seemingly dying. And yeah, that's gotta be tough for you as a non-user. But anyway, Horizon Worlds has a prompt for users, which is "Discover Worlds." So, as a kind of homage to this no-longer platform, I want you to tell me how you would discover worlds.

Mike Masnick

Well, I was going to say the world that I'm discovering, or at least the world that I'm remembering, is the very last message I ever received from Mark Zuckerberg, the night before the company changed its name from Facebook to Meta. He sent me a message saying that they were going to be making a really big announcement in the morning about VR and related things, and he thought I would be really excited and really interested in it, and he couldn't wait to hear my feedback. Then the next morning they made the announcement, and I was like, wait, why would he think I would care about this at all? I never used Horizon Worlds. I don't even quite understand how you would use it. I never bothered to learn. And maybe that's on me, so maybe it's on me as to why it's shutting down. But it just never got me. But what worlds are you gonna discover, Ben?

Ben Whitelaw

Well, I mean, I don't like foretelling the future, but the world I've been inhabiting this week has been the trust and safety summit here in London, which has been a gathering of the kind of great and good of trust and safety and internet regulation. Many of our listeners were there. I spoke to a number of them, and they were all saying very nice things. But there are some very interesting dynamics in the trust and safety world at the moment, which made it a world that was interesting and new to me this week. Everything in Moderation was the only media/press outlet allowed at the summit, so I think I got an interesting look at the underbelly of the industry at a very interesting time. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's March 26th, 2026. That's a tough one to get out. And this week we're talking about the major US legal cases that have been announced this week, Meta's big AI moderation push, and frankly, anything else that comes our way. Oh my God, Mike, what has happened?

Mike Masnick

This has been a week. This has been a week. And I will just say, before we get into our stories, that as we were just about set to record, we got news that the case Twitter and Elon had filed against advertisers, basically saying it was illegal not to advertise on Twitter/X, has been thrown out of court. We're not covering that; it came out like 30 seconds ago. And we have so much stuff that we don't know how we're gonna fit everything that happened this week, or really in the last three days, into this episode. But we're gonna do our best.

Ben Whitelaw

I mean, some podcasters nowadays do three-, four-hour episodes, and we've been pretty good at trying to keep it tight, you know, an hour maximum, nice and concise. But this is one of the weeks where we could have easily done three or four hours. We would've needed some energy bars and some Lucozade to do that, or Gatorade as they say in your part of the world. So yeah, we have a lot to get through. I wanted to just talk a bit about the summit, because I think it's interesting. There is the big TrustCon summit in July in California; this summit in London didn't exist three years ago. To be honest, I was kind of skeptical that a summit like this could draw folks over from the US, but we saw more than 400 senior trust and safety folks come to London for two days of really intense conversations: a bunch of streams throughout the conference, lots of workshops. Not always the most candid conversations, I would say; it was fairly high-level at times, platforms doing their best to talk through what it is they're doing to keep users safe. A very interesting thing for me was that Ofcom were not actually allowed to be at the conference. They gave a plenary, so they spoke at the start of the conference, and then for various reasons they were not allowed to stick around.

Mike Masnick

booted out.

Ben Whitelaw

They were

Mike Masnick

Get out.

Ben Whitelaw

Uh, and so I think that kind of summarizes one of the tensions, or one of the worlds, that I witnessed: do regulators count as friendly? Should they be allowed in the room? What kind of relationship should platforms have with them? And I saw a fair range of opinions: Europeans thinking that maybe being friendly with regulators is a good idea, folks in the US being a bit more cagey and a bit more worried about having conversations in front of regulators. So, very interesting. One of the many, many interesting points of the conference. Yoel Roth, a former co-host of Ctrl-Alt-Speech, was there and gave a really good talk at the start of the day about how trust and safety teams should be taking bigger swings and trying to be more innovative, which was a provocative way to start things. So yeah, great to see so many listeners in person as well. We got lots of really nice comments from people. Mike, people told me that they listen to the podcast while running, which

Mike Masnick

we gotta talk faster. You gotta,

Ben Whitelaw

Yeah.

Mike Masnick

you know, you exercise to the pace of the thing that you're listening to. So you and I have to speak faster. This is a good week for speaking fast.

Ben Whitelaw

Yeah. And also, you know, a few folks actually listen while cleaning, which is again something we talked about right back at the start. But yeah, it's part and parcel of a lot of people's weeks, which is great to hear. What have you been up to? What's new with you?

Mike Masnick

I'm just trying to get through all of this stuff that's happening. I'm about to head off to the ATmosphere Conference this weekend, talking all about the AT Protocol and all sorts of fun, exciting developments going on there. So I'm really excited about that, but I gotta pack up and get on another damn airplane. I'm seeing too much of the airport lately. And it'll be fun because, you know, TSA is not getting paid, so traveling in the US right now is lots of fun. That should be an adventure. But there's so much going on this week. It's insane.

Ben Whitelaw

Yeah, I was actually reminded of a Charlie Warzel Atlantic interview that came out a few weeks ago about people, as he put it, "wearing themselves to death": this idea that everyone's trying to consume as much information as possible and monitor all these different storylines because everything is so chaotic. And I really feel like we've seen that bear out this week with

Mike Masnick

I mean, look, it depends. There are different things that happen, and obviously there's political stuff that everybody feels like they need to be aware of. The global situation is a mess in all sorts of ways, and people wanna be aware and don't wanna miss things. It's not a FOMO situation so much as there are important things happening in the world, and it's probably good to know about them. But you can definitely be overwhelmed.

Ben Whitelaw

Yeah, and on Ctrl-Alt-Speech we obviously try to bring you the minimum viable stories, the things that you should pay attention to, and this week is no different. Along those lines, actually, we're gonna mix things up a little bit this week. Normally we do two bigger, meatier stories followed by a botched attempt to run through a couple of smaller stories. We're not doing that this week; we're not even gonna try, because we would fail even more than we usually do. So we're gonna do two packages of stories around the two big topics that we think are the most interesting things to discuss and chew over this week.

Mike Masnick

It's really that we have a bunch of stories, and we're gonna link them into two different buckets. I'll start going in on the first one, which is a whole bunch of legal stuff in the US related to intermediary liability in all sorts of ways. We're gonna work through the cases. Most of these, listeners will probably have heard of, because they've been major headlines and front-page news. We'll start with the big verdicts: two jury verdicts in two different cases, one in New Mexico and one in California, both against Meta. The California one also brought in YouTube/Google. Both were big, headline-grabbing cases where the platforms were effectively held liable for design features that were claimed to cause harm. And in both cases there were damages: in New Mexico it was a few hundred million dollars; in California it was just a few million dollars, basically $6 million total across a few different types of damages. A lot of people are celebrating these cases. A lot of people dislike Meta, and I think that's a justified stance to take, that you don't like Meta, or their platforms, or how they do things. I understand that. But I think these two cases are really, really problematic, and I think the results will be bad for everyone.

The history of these cases is long and we don't have much time to get into it. The New Mexico case was brought by the state of New Mexico, claiming that Meta was harming people with the various design choices it made. Meta raised a Section 230 defense and said: this is all about user speech, and under Section 230 we cannot be held liable by a state, by anyone outside of the federal government, for the choices we make about how we display content. That is, I think, the correct legal position, but the judge rejected it in 2024 and let the case go to trial. And part of the trial was very problematic in a lot of ways, because they used internal documents that were taken very much out of context. For example, there were people inside the company, or advisors to the company, who warned Meta about encrypting Facebook Messenger messages, saying there are safety risks in doing this, in that people may use these tools to do bad things (sharing CSAM is the major example) and we will have no way to get at that information. That was a big part of the case. But think about what that means. There are a few different elements here. One, the idea that encrypting messages could lead to massive liability, hundreds of millions of dollars, is really bad, because for most people encrypted messaging is a very important security feature. It protects most people. Yes, some people can abuse it, but the mere fact of creating encrypted messaging should not lead to liability. Furthermore, the fact that some safety people raised concerns about this is also something that we should want. We should want companies to be able to have open discussions where different people raise different things and discuss the trade-offs. The end result of that process was that, because some people raised this and some people disagreed, Meta chose to still go forward with encrypting messages, which I, again, think was the right decision.

Those documents were then used in court as proof that, oh, Meta knew this was harmful and ignored it, and therefore they're liable. The end result is that a lot of companies and a lot of general counsels are now going to tell everybody in their company: never discuss safety stuff in a way that can be written down, taken out of context, and shown in court. That means less open discussion about actually understanding the trade-offs around trust and safety. We know these conversations happen all the time. We want companies to be exploring these things; we want companies to be able to have open discussions about them. And the result of this case says to most internet companies: don't have those conversations. I think that will lead to a very, very bad result.

Ben Whitelaw

Just on that: I know that's a potential outcome, a potential unintended outcome. But doesn't it say, you know, be more mindful when somebody brings you viewpoints or research or evidence that something might be harmful? Be mindful that the decision you make as a leadership team

Mike Masnick

Sure.

Ben Whitelaw

has implications. I get,

Mike Masnick

Yeah. Well,

Ben Whitelaw

it's not saying, don't talk about it. It says like,

Mike Masnick

Well, but I think it does, because this is the thing, as we often steal from Charlotte Willner: trust and safety is all about trade-offs and sadness, right? Every decision that you make has trade-offs. Every single decision has some sort of trade-off, and there will be arguments for doing this or not doing this. Each of them will cause harm, and you have to weigh all those things, weigh other factors, and come to a decision. That's what businesses do all the time. That's what all of these companies do all the time. Yes, we can argue that if somebody raises a safety issue, obviously you should take it seriously. But here you have competing safety issues, right? Having encrypted messaging makes a lot of people much safer. It makes their messages safer, and that protects them in lots of ways. Yes, it may also lead to some people using it to do illegal things, including sharing CSAM, and companies are trying to come up with ways to deal with that as well. But there are trade-offs to all of these things, and the case only shows one side of it, which is: people raised this, and therefore the assumption is that as soon as anyone raises anything, you have to act. And there are safety ideas that people raise all the time that are not good ideas. So this case makes it much more difficult to have an open conversation where you weigh the trade-offs and talk about them in a free and open manner. Because if one part of the conversation can be cherry-picked and used against you, then you lose the openness of the conversation.

Ben Whitelaw

Okay. Let's go to the second case as well and

Mike Masnick

So the second case is probably an even bigger deal, even though the damages were smaller. It's known as the KGM case, from the plaintiff's initials. There have been many, many cases, a lot of them brought by the same law firm, arguing that social media is leading to addiction and harmful behavior. Because of all that, the courts have been trying to manage all these cases, and they set up this concept of a bellwether trial, which is basically: we're going to test these ideas in a trial and see what a jury says. And they actually set up multiple bellwethers. This was just the first one, so there's still more to come; I think the next one starts in June, if I remember correctly. So we're gonna see a few of these before there's any sort of final determination, I guess. But again, here, there were four companies in the case originally. TikTok and Snap were in it, but they settled right before trial was supposed to start. Meta and Google stuck it out. And again, they tried Section 230, and the judge basically said, we don't have to do 230 here because this is really about design. Both of these cases said: this is not about user speech, it's about design. But the reality is that no, it absolutely is an end run around Section 230. It's an end run aimed at holding the companies liable for content that people don't like on these platforms. The example that I used in my write-up was: if these platforms were full of videos of paint drying, and they had autoplay and infinite scroll, nobody would be complaining, because nobody would watch that content. It is the content at issue here, and the claim that it's a design issue, and therefore product liability, is a lie. It is only a way to get around the 230 issue and pretend that you're not trying to regulate speech on these platforms. They absolutely are. But again, here you have a sympathetic case. There's a young woman who was very, very troubled. There was a fair bit of evidence presented suggesting she had significant trauma from other sources in her life. And how do you parcel out what things are causing trauma? There was also some evidence presented that her use of social media actually helped her with some parts of the trauma, but may have exacerbated other parts. So how do you separate out those things? We've talked about this in other contexts when we're talking about mental health and social media. People use it for all different reasons, and some people struggle with it, and we should look to find those people and help them. But there are lots of people who are helped by these tools as well, and lots of people for whom it's neither helpful nor harmful. And there's this assumption in these kinds of cases that it's inherently harmful, or that if anything harmful happens to a person that can somehow be loosely traced back to social media, you can sue those companies and make millions of dollars.

What both of these cases do is open the floodgates: anytime anything bad happens and that person used an internet service, suddenly there's an opening for liability. And it effectively takes us back to the world after the Stratton Oakmont case, the case about the Wolf of Wall Street firm that led to Section 230 in the first place. In that case, somebody said there was defamation on the platform, but because Prodigy moderated to try to make it a family-friendly environment, the court said: because you're moderating, you're taking on liability for this. And the fear that Chris Cox and Ron Wyden and lots of other people had was that this would make it much harder for companies to moderate, because they could face liability for any decision they make. That's now the situation we have, if these verdicts carry through. Both companies are gonna appeal, and we'll see what happens, and there are the other cases. But the sense is that any design decision you make is now open to having to go through a very, very expensive trial. You can argue that one of the reasons Snap and TikTok settled early was that actually going through the trial is super expensive.

Ben Whitelaw

Yeah. I mean, this is one of the most interesting points of your Techdirt piece, which we'll include in this week's show notes. You've run through the argument there, and the procedural element of these kinds of judgments is particularly interesting, right? So explain what you mean when you say you worry that this could happen to more platforms now that the precedent's been set.

Mike Masnick

Yeah. This is one of the things that I think is not as well understood about Section 230. The nice thing about Section 230, and the clarity and certainty, in theory, behind it, is that you get cases rejected early on, when they're not that expensive. The numbers vary, but you're basically talking fifty to a hundred thousand dollars, which for most companies is not a huge amount; for individuals, obviously, it's a very large amount. But you can get the case dismissed on those grounds. If you have to go past that motion-to-dismiss stage, you go to what's called summary judgment, which means going through depositions and discovery, and you're already increasing the price to millions of dollars. Five to ten million is generally the range that I hear; some people will say you can do some of these cases for $2 million to get through the summary judgment stage. That's a lot more money, and it adds up a lot quicker for smaller companies. At trial, you're adding a tremendous amount on top of that. Lawyers cost a lot of money. I forget the exact lengths, but one of these trials took six weeks and one took seven weeks. That is a ridiculous amount of money just for the trial process, not even counting all of the prep work leading up to the trial and the motion to dismiss and the summary judgment stuff. These are extraordinarily expensive, many, many millions of dollars, and most smaller companies certainly can't afford that. Even if they won, these types of trials would put those companies into bankruptcy.

Ben Whitelaw

Yeah, this has been a good week for lawyers. They're the real winners here, aren't they?

Mike Masnick

I should have gone to law school.

Ben Whitelaw

What is the role of the jury here? Because there haven't been any cases heard by juries to this extent, right? And you mentioned the fact that Meta is a disliked company; a lot of people don't like Zuckerberg. To what extent has the role of the juries played into the outcome of these cases, do you think?

Mike Masnick

Yeah. I mean, this is the kind of case, especially the California case, where the lawyer for the plaintiffs is kind of a showy lawyer, demonstrative and entertaining, and those kinds of lawyers know how to tell a narrative, a story that is very convincing to a jury that isn't expert on the law. Now, the jury is not supposed to be making decisions of law. They're supposed to be deciding factual issues, right? That's the difference: the judge is supposed to handle issues of law, and the jury is supposed to determine issues of fact. But which things are issues of law versus issues of fact is often a little more subjective than people would like. And in both of these cases, you could sort of sense, and who knows, I think most judges deserve some level of respect, but it sort of felt like both of the judges also kind of felt, eh, these companies are bad, let's just send it to a jury and see what they say, let them have their day in court. But it's such an emotionally charged topic, and it's so easy... you know, the media I think has also contributed to this, talking about social media as if it is inherently harmful. You have all these stories that I think have been highly misleading, and you have bestselling books, which we've talked about, that I think are extremely misleading about these things. And there was a study a few months ago that laid out the idea that the media conversation around things like social media addiction was potentially more harmful than any actual social media addiction, because it made people think: I can't help myself, I can't do anything, I'm in the grip of this awful addictive substance. So I think the framing here is a narrative with an instinctual appeal, but the reality is much more complicated. And I really, really worry that if these cases hold up, we're going to see a flood of these kinds of cases, and they're gonna go after every platform you can think of, many of which are not going to be able to handle it. We're opening the floodgates here. A lot of people are comparing this to the tobacco and cigarette cases, but cigarettes are inherently harmful. There is no "cigarettes are helpful" side here.

Ben Whitelaw

they may look cool.

Mike Masnick

Yeah, sure. Okay. But this is not like cigarettes, which put a chemical into your body that we know does real damage. This is speech, right? Social media is speech, and the evidence suggests that it is not inherently harmful. So all of the comparisons here, I think, are really problematic. And I really worry about these verdicts, even though I understand the excitement over seeing a court say Meta is bad and it's doing bad things to people. There's an emotional appeal to that, which I totally get, and it clearly worked on the juries as well.

Ben Whitelaw

Yeah. We've talked about the product design piece and the addictive design piece more broadly, and these cases fit neatly into this new narrative that we've seen emerge, right? It was only a matter of weeks ago that the European Commission announced it was investigating TikTok over a number of its addictive design features. So it's not just in the US where product design, pro-social design, has been deemed the vector for attacking platforms. And that's a really interesting development that has happened very quickly, I would say.

Mike Masnick

It's been effective, yeah. And again, there's an intuitive appeal to it: we have product liability cases where defective design or negligent design come up, and you understand that. But here, when you break it down, the negligent or problematic design is showing you more speech that you like, and that is really worrisome in all sorts of ways.

Ben Whitelaw

The issue I have, Mike, is this. I was at the conference this week, and a number of different presentations talked about their safety approach much like a car's. They said, you know, a car has a number of safety features: it has seat belts, it has a bumper, it has an airbag. No one of these features in and of itself will keep users entirely safe, but the combination of them reduces risk to a manageable degree. And when you buy a car, you buy a brand of car that you know cares about safety. You're not thinking that you'll never crash, but you can undertake that risk knowing what you know. That is what the safety professionals are talking about. And we know that cars are regulated, and we know that they are deemed, I suppose, products that need to hit a threshold of safety. How does that chime with these judgments, and this idea that we should be thinking of product safety as speech and not as a

Mike Masnick

Yeah. I mean, obviously I think safety on social media is really important. I'm a supporter of trust and safety and of having that happen. But it's different with speech than with cars. Cars are not speech, right? Physical products are different. Physical products that can crash are a different world, and the trade-offs are not the same. The only trade-off around safety in cars is the cost of building in airbags, or whatever it is. The cost on social media is that you're harming some people instead of other people. If you're cutting people off from groups, that is a kind of harm. If you're blocking certain kinds of conversations, that can be a harm. Take the idea that eating disorder content is bad, is dangerous, so we should cut off those groups. I've discussed this before: there were all these studies showing that when you cut off those groups, it didn't stop the eating disorders. It made people go to more extreme places and made things worse, and the groups on mainstream platforms actually had more people in them helping others with recovery, because they would show up and say: I went through this, here are resources for you, here's how to get help. People who would understand you. And that actually led to a decrease in eating disorders, as opposed to when platforms started to ban that content and push it to the further edges. It's not the same thing. You can't compare it to something like: should we have seat belts or not?

Ben Whitelaw

Right. But what I'm saying is, if people in the companies are treating it as a product and not as speech, shouldn't they be judged by juries and by wider society for that? Because, with all due respect to the people at the conference, and some of them are some of the smartest people in safety at platforms anywhere in the world, they weren't talking about speech, Mike. They were talking about products. They were talking about engagement with products. They were talking about ensuring that users are safe in the context of the use of those products. If they were talking about speech, then the point you're making would make more sense. But that is the difference between what they're saying and what we're judging them for, and I always have an issue with that.

Mike Masnick

So, we have other cases we need to get onto, but what I will say is: yes, obviously there are products here and they're making product decisions, but the underlying aspect of all of these social media products, all of these internet services, is speech. And so you can't separate those things out. Yes, you want them to be thinking of these as products and design; you want design for safety; you want them to think about how to minimize the harms. But, again, every choice is helping some people and harming others. It's not the same as traditional product safety in the sense of a seatbelt or something like that. I just think it's an easy analogy to make, but a dangerous one. And the result is the cases this week, and, I think, the potential harm that will come from more such lawsuits.

Ben Whitelaw

Yeah. Before we move on, and I know you're itching to do so, I wanted to ask: is there a design-based claim that you would be happy with?

Mike Masnick

Possibly. I mean, it would really, really depend. I don't think I've seen a really good one yet, but you could in theory see a design that is deliberately created to lead to harm. I don't think any of the major platforms would do this, but could you see some sketchy platform show up that is designed to encourage direct harm? Where the design choices themselves were made not for general reasons, to support the business or to increase engagement or something like that, but deliberately, to say: we wanna create as much harm as possible, we wanna directly cause people to have eating disorders, or something like that. I think you could make a case in those scenarios, but those are always going to be really fringe and extreme sites, not any mainstream site that I can think of.

Ben Whitelaw

Okay, so pretty overt product design there. Okay, let's go through the other cases, 'cause we are already

Mike Masnick

Yeah. Yeah, I

Ben Whitelaw

30 minutes in.

Mike Masnick

Well, that actually leads really nicely into the next case, which is the Supreme Court this week deciding the Cox versus Sony case. A lot of people in the online safety world, the intermediary liability world, didn't think it was that important, or didn't even pay attention to it. In fact, a lot of the tech companies weren't paying attention to this case either, because it felt like it didn't really matter for intermediary liability for social media: it was about an ISP providing pure internet service, and it was a copyright case, and that's a whole other world of cases. You have the DMCA, you have different rules there; infringement is different than other kinds of things. And yet I think the ruling could turn out to be really, really important for Section 230 intermediary liability. The decision, depending on how you look at it, was either a 9-0 decision or a 7-2 decision. The really quick basics of the case, to give you the background: Cox is an internet service provider, how people get their broadband connection to the internet. Sony was sending notices to Cox saying some of your users are infringing; they're downloading pirated music, basically, and Cox was not cutting them off fast enough. The DMCA has some rules saying you have to have a reasonable policy to deal with repeat infringers, which works better when you're talking about a company that's hosting content, like YouTube or an internet hosting service. It's harder when you're just the connector, because you don't hold the content. The content might go through your pipes, but you have less overall say. So Cox took what I think was a semi-reasonable position, which was that if they got notices, they would send warnings to the users, and over time they would monitor. It took a lot of repeat notices before they would finally cut you off; I think in the end they cut off something like 32 people. And Sony said this is not good enough, and the lower courts found Cox liable, basically saying you didn't do enough to cut people off. There are all sorts of problems with that. Getting banned from Facebook is one thing; cutting people off from their entire internet connection is a whole other ball game, especially if it's done just on accusations. Because it is all just accusations: it's not like you've gone to court and proven that someone infringed on your copyright. That seems like a really big deal. The lower courts mostly sided with Sony, but the Supreme Court said no, in a really interesting decision written by Justice Thomas, who I don't often agree with, and who has been very clear that he doesn't like Section 230 and would potentially like to get rid of it. A few years ago we had the Taamneh and Gonzalez cases, which were about terrorism, and there was concern that those cases would involve the Court trying to attack 230. They sidestepped the Section 230 issue, but Thomas wrote the decision around terrorist content and talked about aiding and abetting, and what standard you would need to say that Twitter aided and abetted these terrorists, and basically said they would have to take some sort of proactive step. You can't just say: here's a platform, terrorists happen to use the platform, terrorists do something bad, and therefore, because you allowed terrorists to use the platform, you are now responsible for the bad thing that they did.

And in this decision, which Thomas also wrote, he basically makes the same argument in copyright law: for an intermediary to be liable (there's a whole bunch of specific language, which I'm not gonna get into), the intermediary has to actually tailor the service towards the violation of the law. You have to proactively make things for infringement, not just offer a service that people might also use to infringe; you have to make explicit decisions to help infringement. That is a view of liability that matches what he was saying in the Taamneh case. So in these worlds that are not protected by 230 (intellectual property is explicitly excluded from it, and in the terrorism cases they avoided the 230 question), you have Thomas and most of the rest of the court basically saying that to have intermediary liability, the intermediary has to take proactive steps towards the illegal thing. Which is what I was basically just saying to you in terms of what design-based cases could be a problem: the Supreme Court is effectively laying out what that would be. You have to not just have a tool that can be misused by someone; you have to actually design it specifically for that misuse, that harm, that violation of the law. So I actually think this case may become very, very important, because as 230 gets sort of wiped away by these lower courts, if an intermediary liability case comes back to the Supreme Court, you can point to this case, you can point to the Taamneh case, you can point to Justice Thomas and say: hey, you set these rules, and therefore you've almost recreated some elements of a 230 scenario even without Section 230.

Ben Whitelaw

Right. So this collection of rulings might end up replacing 230 in a world where it doesn't exist, or doesn't exist in the form that we know it now.

Mike Masnick

Yeah, possibly. You still have the worry that you lose the procedural benefit of getting a case out at the motion-to-dismiss stage, so it might still be expensive, and it's not as clean and not as nice as 230. But there were questions in the past: if 230 had never existed, what sort of common law would have developed around intermediary liability? I published on Techdirt a few years ago a chapter from a book on internet law, written pre-230, discussing intermediary liability and what this very experienced lawyer thought at the time. The chapter was totally obsoleted by Section 230, and we've now had 30 years of no development in the case law on it. So you have to look at parallel tracks, the copyright cases and the terrorism cases, and ask whether those really are parallel. But we're seeing this law develop on what intermediary liability looks like without 230. And Justice Thomas in particular seems to think, and the rest of the court seems to agree with him, that you have to actually take proactive steps towards the illegal activity. So I'm hopeful; we'll see. You never know. Sometimes copyright is its own world that never expands; nobody ever connects it to other legal doctrines. But it's interesting to me, and I think potentially a backstop against some of the other court decisions that are problematic right now.

Ben Whitelaw

Yeah. Okay, that's interesting. And does Justice Thomas kind of know that? No? Okay.

Mike Masnick

I don't think he's put two and two together, because he wrote this really good ruling... well, not really good, but there were good parts of the Taamneh ruling, where he basically lays out the reason why we have Section 230: without it, if you just have intermediary liability for everything, nobody's gonna do anything, and that would be really bad. How can you blame people for things that other people are doing? All of that's bad. And then, a few months later, there was another case where he got to mention how terrible Section 230 was, and I was like,

Ben Whitelaw

Hmm.

Mike Masnick

you understand the principle behind it, you just don't seem to connect it to this. So I'm not sure that he puts it all together.

Ben Whitelaw

It's like Section 230 has become a toxic brand, almost the poster boy for what the platforms have done over the last however many decades, right? That does seem to be where everyone's eye is focused.

Mike Masnick

Yeah. So let me go on very quickly. We still have one more big case this week that I wanna talk about, and then we get to your bucket. The other one is the Murthy versus Missouri case, or the Missouri v. Biden case, which we've talked about a lot. This is the one where two states, Missouri and Louisiana, and a bunch of random people who were moderated on social media blamed the Biden administration and said it was censoring their content. You had crazy rulings at the district court, and a slightly less crazy, but still crazy, ruling at the Fifth Circuit. And then the Supreme Court sent it all back and said there's no evidence for any of this, over and over again: no evidence, no evidence, no evidence. It actually mocks the district court, saying its ruling was based on clearly erroneous findings, and basically says there's no evidence that the Biden administration did anything; lots of stuff was read out of context, all of that. But in sending it back to the lower courts, those cases went on. There are a couple of weird quirks in here, including that one of the plaintiffs, Jay Bhattacharya, is now in the Trump administration and had to drop out of the case, because you can't be on both sides of a case, and all sorts of other things. Last week, friend of the podcast Daphne Keller testified in the Senate, and Eric Schmitt, who had started this case when he was Attorney General of Missouri, started yelling at her because she works at Stanford and he thinks Stanford was a part of this. And we got the classic line from Daphne: he was saying, just read the Murthy case, and she just said, "The one that you lost?" Which is just perfect.

Ben Whitelaw

Ah, such an own,

Mike Masnick

Yeah. So wonderful. And he completely lost his mind. But that case has now settled, and Schmitt and others are declaring victory, and that is utter nonsense. You basically have the people who brought that case now in power, so they're settling it just to give themselves a victory. And the settlement is way less than what is being reported, or what Schmitt is saying. It only involves the three remaining plaintiffs, and it only says that the administration, or parts of the administration like the CDC, cannot put coercive pressure on social media companies to moderate their content. Which

Ben Whitelaw

was always the case,

Mike Masnick

which was always the case; that is the way the First Amendment works. So that changes nothing. And the consent decree explicitly calls out that they can still talk to the companies and share information, and even alert them to content that might violate policy, which is what they were doing all along. This is a complete nothing. But lots of people are declaring victory and celebrating; X is full of people going, oh yeah. And even people who otherwise generally have a good sense of this stuff, like Reclaim The Net, had an article saying the US government has admitted that it censored people. No, that's not what happened here. So it's important that this case we've talked about is now over, but this is face-saving nonsense. It's a meaningless settlement.

Ben Whitelaw

Yep. And another reminder of the importance of reporting with nuance and a deep understanding of these issues, because yes, ostensibly those headlines will be true and factual, but they don't tell the full story. I think that's the closest we've come to a kind of Ctrl-Alt-Speech filibuster, Mike. That was incredible. I dunno, you could have probably done four hours on that on your own, I imagine.

Mike Masnick

Oh my God. You don't know how much detail and nuance I didn't even get to include in all of these stories, but

Ben Whitelaw

Yeah, yeah. When we launch the Patreon, which I've talked to a few people about this week, there'll probably be some of that behind the premium tier, I imagine. I'm only realizing now, Mike, that we really should have warned listeners up front: this is a very Meta-heavy episode.

Mike Masnick

Oh yeah, I guess.

Ben Whitelaw

Two big cases involving Meta, albeit not on their own. And this next story, the one I was particularly interested in this week, is fully, one hundred percent Meta-focused. So apologies, listeners, that we didn't flag that up front. The story that I'm really interested in, Mike, is that Meta announced at the back end of last week that it's going to use AI more for its user support and safety work. The announcement came out last week and was widely shared; unsurprisingly, the biggest platform in the world is going to use AI to do a bunch of things related to keeping users safe. The announcement is pretty funny, because it goes very far and tries to claim that AI is gonna solve everything. It says that AI is going to help, and I'm gonna list the things out: catch more violations, do that more accurately, stop more scams, respond to real-world events faster, and do all of this with fewer enforcement mistakes. It's a kind of magic wand, Mike, that Meta is waving here. And look, I'm trying to be open-minded about the potential for AI in the moderation world, right? There are lots of human moderators who, in the past, have had to watch, scroll through, and triage really terrible content online, and AI has a lot of potential, and has had for some time, to eradicate the need for that. There was a lot of talk at the conference this week about that. There's also, increasingly, potential for AI to be used to make enforcement much more consistent. There are a few really interesting startups, and there was a lot of research at the Stanford trust and safety research conference that you went to last year suggesting this has a lot of potential. So I'm trying to be open-minded, but I do have concerns. And again, we're just going on the basis of Meta's announcement here; we don't know how it's playing out in the backend, and these announcements come around every so often. But some of the examples they use in the announcement are not necessarily things that AI is newly doing. It talks about catching twice as much sexual solicitation as human reviewers, and decreasing the rate of mistakes. There's some sense that AI has optimized these systems in recent months and years, but this was probably possible before the AI boom of the last couple of years. What people are thinking around this announcement is that it's an opportunity for Meta to get rid of human reviewers, and potentially to get rid of tools and software and vendors that it uses to moderate content, and that's gonna be a cost saving. But whether this is gonna be as effective as they're claiming, and as the pilots they've done suggest, is I think really up for grabs. One of the big flashing warning lights in the announcement, Mike, was the claim that the systems it has created will be able to take moderation actions for 98% of people online. So it's claiming massive language coverage, far beyond its current systems, which cover about 80 languages. We've talked in the past about how, in certain countries, Meta doesn't have a moderation system set up that understands the language and the cultural nuance. How will it be able to judge a piece of content without that context?

It is claiming, quite shockingly, that it'll be able to do that for 98% of users around the world, which I would love to see. And in order to test that, I actually asked the Meta AI within WhatsApp, because this was the other half of the announcement: that anybody will be able to go to the Meta AI and ask, why has my content been taken down? Or, why can I not get access to this particular Facebook page? I want to change my password, et cetera, et cetera. So I asked it: what languages and niche subcultures will this new AI be able to cover that it wasn't able to cover before? And do you wanna know what it told me? It told me about this thing called MetaMind. It came back with some snippets of the announcement that I had read with my own eyes, and it also came back with a note about MetaMind, an AI system for specific industries and professions that, it claimed, will help improve the company's moderation systems and responses. And I said, tell me more about that. And this is Meta's own AI system, the thing that will more than likely be helping users understand the moderation decisions it makes. And it sent me back a Medium post from a company called Meta House, which has nothing to do with Meta. And I said, hey,

Mike Masnick

great.

Ben Whitelaw

this system that you've said will be used in this new magic moderation era you've just announced has nothing to do with your company. And it said, ah, yeah, no, you're right, that's not true. So if that doesn't give you confidence, Mike, that this newly announced AI moderation dawn isn't gonna be wildly successful, I dunno what will.

Mike Masnick

I love that. First of all, that is a wonderful example of some of the problems with relying too heavily on the AI stuff. But I will push back a little bit on it too. This is where, like, I wrote something else this week that I got yelled at about, but I think there's this overfocus on the chatbots. Chatbots get stuff wrong.

Ben Whitelaw

Mm.

Mike Masnick

And when you talk to chatbots about themselves, they don't have an accurate picture of themselves. They're just trying to have a conversation with you and make it sound legitimate, and in that case they might pull up false or wrong information, like that. But I do think that in a more bounded environment, where the situation is "you are comparing this content against these policies," the technology is much better. If you're trying to capture particular patterns and styles of behavior, inauthentic behavior or phishing or scams or whatever, the technology is much better at that. I agree that the technology has problems and will make mistakes, but I also think it's probably better than humans at a lot of these things, and that doesn't surprise me. We had Dave Willner as a guest host on the podcast talking about why he thinks AI will make my Masnick Impossibility Theorem obsolete. These technologies are powerful and can be really useful for those purposes. But we do need to recognize that even when they're bounded in a way that makes them useful, so maybe they're not gonna cite the wrong company's policies, they will still make mistakes, and we're gonna have to deal with those.
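To make the "bounded" idea Mike describes concrete, here is a minimal illustrative sketch, assuming a generic LLM API: instead of open-ended chat, the model only sees the policy text and the content, and must answer from a fixed label set. The `call_model` function and the policy text are hypothetical placeholders, not Meta's actual system or policies.

```python
# A minimal sketch of "bounded" AI moderation, as opposed to open-ended chat.
# call_model() and the policy definitions are hypothetical placeholders.

POLICIES = {
    "scam": "Content that solicits money or credentials through deceptive promises.",
    "sexual_solicitation": "Content that offers or requests sexual services.",
}

def call_model(prompt: str) -> str:
    """Placeholder: swap in a real LLM API call here."""
    return "allowed"

def build_prompt(content: str) -> str:
    # The model sees only the policies and the content: a narrow, checkable task.
    rules = "\n".join(f"- {name}: {text}" for name, text in POLICIES.items())
    return (
        "You are a content reviewer. Apply ONLY these policies:\n"
        f"{rules}\n\n"
        f"Content:\n{content}\n\n"
        "Answer with exactly one label: 'violates:<policy_name>' or 'allowed'."
    )

def classify(content: str) -> str:
    answer = call_model(build_prompt(content)).strip().lower()
    # Constrain the output: anything outside the allowed label set is routed
    # to a human instead of being trusted. This is the "bounded" part.
    valid = {"allowed"} | {f"violates:{name}" for name in POLICIES}
    return answer if answer in valid else "needs_human_review"

print(classify("Send $50 in gift cards to claim your prize!"))
```

The design choice here mirrors the distinction in the conversation: a free-form chatbot can confabulate (as in Ben's MetaMind example), whereas a classifier with a constrained output space can at worst mislabel, and off-distribution answers can be caught and escalated.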

Ben Whitelaw

And that's a good segue. I mean, I used the example slightly facetiously, and I'm not a big fan of people who pick out a single piece of evidence to, to

Mike Masnick

I am. It's too amusing not to mention, though,

Ben Whitelaw

I had to say it. I mean, the point is, you know, it will make mistakes; we know that. And the adjacent story that I'll talk about this week is evidence of that, right? The other thing I found interesting this week is that one of the ODS bodies, the out-of-court dispute settlement bodies empowered under the DSA to review content moderation decisions, released a report this week. The organization is called User Rights; it is one of the original ODS bodies, based in Germany. Over the last few years it has been reviewing moderation decisions submitted by users in the European Union, and the report looks at the data it collected in 2025: the number of people who submitted moderation decisions to it, the number that were eligible, and the outcomes of the review process for as many as it has been possible to complete. The report sums up what they found. So, we know that platforms make mistakes; you're right, Mike. But would you have thought that as many as 85% of moderation decisions might be incorrect? Because that's what the report found.

Mike Masnick

But that's not accurate either, though, right? That's 85% of the decisions that were appealed to this body. You're not saying that 85% of all moderation decisions were wrong; people who were wrongly moderated are probably more likely to appeal. So let's frame this properly.

Ben Whitelaw

Sure, sure, that's fair. But the point is that we already know AI is used in moderation workflows and in moderation decision-making; that has been happening for years. My concern is that a company like Meta is ramping up and very publicly committing to using AI in the system at a time when, in this case, a decent chunk of moderation decisions were found to have been wrongly decided according to the platforms' own policies. That is a concern. Of the roughly 3,600 proceedings that User Rights looked at, around 3,000 were deemed to be incorrect. If only a very small proportion had come back incorrect, that would give me greater confidence about Meta using AI in all of these new ways. That's not the case.

Mike Masnick

Yeah, I mean, I'm not sure, because I think that yes, AI will make mistakes, but they may be different mistakes. We don't know if it will increase the number of mistakes; it may just be a different style of mistakes. And with this report, we've seen some of the other appeals bodies and their reports, and they feel like they're under capacity right now. It feels like they could take a lot more cases at this point. I don't know that that's true, but it feels that way; they're not well utilized. So if it does lead to a certain increase, then yes, if it becomes overwhelming, that'll be an issue, but it feels like there's room there right now. And again, we'll see how these decisions are made, and whether the systems are making more mistakes, or making mistakes in a way that is more easily correctable. So I'm a little bit more enthusiastic about the possibility of using AI well on the moderation side, but it is definitely something we should watch. I think the mistakes will be different, and figuring out how we deal with that is gonna be a big deal.

Ben Whitelaw

Yeah, and we won't dive into it, but there is one more story about exactly that in The Verge this week. It's about Tumblr, and Tumblr users essentially having many accounts taken down as a result of an automated system, which is a story we've covered many times before on the podcast. That brings me to a close of my little chunk of stories. Maybe it's better without the small-story roundup, Mike. I don't know.

Mike Masnick

Well, we'll have to see. Let us know what you think about this particular episode, which is, yes, the Mike Filibuster episode.

Ben Whitelaw

Yeah. Get in touch with us, listeners: podcast@ctrlaltspeech.com. Rate, review, like, and subscribe wherever you get your podcasts. I had a few people tell me that they were waiting to leave a review and a rating, Mike. Do not delay, listeners; I dunno what you're waiting for. Who knows how long this will last? Tell us what you think, spread the word. Thanks for listening this week. It's been fun. Thanks very much, Mike. We'll see you soon.

Announcer

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.