Ctrl-Alt-Speech

A Lack of (Under)Standing

Mike Masnick & Ben Whitelaw Season 1 Episode 16

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So before AOL Instant Messenger, Mike, and before MSN Messenger, there was ICQ, which I'm told is one of the original internet messaging services. And sadly, this week it closed down, I read. And in homage to that, I wanted to ask you: are you available for random chat?

Mike Masnick:

Ben, with you, I am always available for a random chat, though I will note that the Supreme Court does not seem to be available for a random chat, because they still have not given us a NetChoice decision, though they have given us a Murthy decision.

Ben Whitelaw:

Okay.

Mike Masnick:

We will discuss that soon. But how about you, Ben, are you available for a random chat today?

Ben Whitelaw:

I am available for random chat. We're going to be chatting through a load of great stories today. I'll tell you who isn't available for random chat, or for emailing any of his execs when they have a child safety issue, and that is Mark Zuckerberg. We've got a great story on that today. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with the financial support of the Future of Online Trust & Safety Fund. My name is Ben Whitelaw, and I'm with a somewhat emotional, touch teary-eyed Mike Masnick, who has been regaling me with stories of his time using ICQ. Are you okay? Do you want to talk about it?

Mike Masnick:

I mean, I have to admit, I did not realize until this news came out that ICQ still existed, but it was very much my first instant-messaging-type app. It was what I used in college, way too much, probably.

Ben Whitelaw:

Did you do any work when you were using ICQ? Probably not, right?

Mike Masnick:

I mean, ICQ was very much for planning weekends, flirting with girls, trying to make plans, and just chatting with friends.

Ben Whitelaw:

Look at you, look at you now. Who would have thought you'd be referring to your ICQ usage decades later?

Mike Masnick:

It was, yeah, it was a thing. I can't remember when I stopped using it, but for a while, that was how I stayed in touch with a whole bunch of people at college.

Ben Whitelaw:

And now we're inundated. What was ICQ's moderation like?

Mike Masnick:

I have no idea. I have no idea if they had any moderation, or how any of that worked. That would be an interesting thing to look into. But it turns out that somewhere along the line, ICQ was definitely overtaken by MSN and AOL and Yahoo Messenger, which was big back in the day, and then all the other services came along. And I didn't realize that at some point ICQ was purchased by some Russian company, so it's now owned by, I guess, the same people who own VK. And so they're now telling everybody who was left still using ICQ to use VK. So yeah, it's quite a trip down memory lane to remember the little icon with the flower and the changing colors of the petals.

Ben Whitelaw:

Yeah. If listeners of Ctrl-Alt-Speech were also ICQ users, then get in touch and tell us about your memories of it. If you know about its content moderation policies, we'd be even more interested to hear. And you mentioned, Mike, the lack of NetChoice rulings this week. How do you feel about that?

Mike Masnick:

I am frustrated, in part because, despite the fact that you and I have been doing this every week since March, next week we are on vacation, both of us. There is no Ctrl-Alt-Speech planned. We assumed that the Supreme Court would give us our NetChoice decision by this week and we'd be able to talk about it at least by today. And they failed us. I was up early again, and no NetChoice decision just yet.

Ben Whitelaw:

I'm thinking that they've done that on purpose.

Mike Masnick:

Yeah. Oh, just to piss me off personally. I am taking this directly personally: they are thinking, we need to ruin Mike Masnick's vacation and his podcast. So yeah, I think Clarence Thomas has it out for me.

Ben Whitelaw:

Yeah, I wouldn't put it past them. It wouldn't be the first time. And so let's talk about a Supreme Court case that did come out this week, and let's get going with the first of today's stories. It's a case that's really defined the short life of Ctrl-Alt-Speech. We talked about it in our second episode, and back then you called it an interesting and thorny case, which I think still stands. This is the Murthy versus Missouri case, about the government's ability to communicate with social media companies. What has the Supreme Court said? Talk us through it.

Mike Masnick:

Yeah. So the really messy part of the Murthy case, as we described when we talked about it a few months ago, was that the record and the details were just nonsense. There were pages upon pages of weird conspiracy theory and conjecture: this person got banned, this government official made some comment to the press, and therefore there's supposedly a clear connection. The underlying issue was always: where's the line for the government between persuasion and coercion? Because the general belief, and past precedent on this, says that the government is allowed to try and persuade private companies to do certain things, but they can't coerce when it comes to speech. And we have some precedent on that, with the Bantam Books case being the main one, where a state commission effectively threatened a book distributor for stocking certain adult books, and the court said that was an attack on free speech. But the question is: where's the line between that and, say, that same body holding a press conference, complaining about those books being available, and making a big deal out of it? There's a line somewhere in there, and it would be nice if the courts had a test that made it a little clearer, so that government officials understood what they can do to persuade versus what goes over the line and interferes with the First Amendment rights of the booksellers, or whoever it is they're trying to pressure. This was, in theory, an opportunity for the Supreme Court to come in and set that standard. The problem, again, was that the record was a mess. There weren't details and facts related to this case that showed anything, in part because the plaintiffs, which were Missouri and Louisiana, two states, plus a small group of conspiracy theorists, more or less, to put it nicely (and that's about as nice as I'll be about any of them), just had craziness and nonsense, insisting that certain things happened. A lot of it was untrue; things were totally misrepresented or taken out of context, as the Supreme Court noted in the ruling, which we will get to in a moment. At one point, one of the plaintiffs in the case argued that his account was banned on LinkedIn because of this. And then it turned out it wasn't: there was never any evidence presented that the government ever talked to LinkedIn (there was to other companies), and it wasn't even his account, it was his brother's account. There was all of this nonsense that the plaintiffs, including state attorneys general, seemed to think would work: if we just throw all of this out there and make this cloud of nonsense, people will magically believe that what we claim happened (the government going to social media and saying, take down these accounts) is what actually happened. And so the way the Supreme Court dealt with this, in a 6-3 decision written by Amy Coney Barrett, was to say that because of this messy record, because the details, as the justices tried to pick through them, did not show any clear factual evidence of that kind of action by the U.S. government towards these particular plaintiffs, these particular plaintiffs had no standing. And standing is a core element of the way U.S.
litigation works: if you're going to court, you have to show that there's some case or controversy between you and the party that you're suing. And here, none of the plaintiffs could show that. The states certainly couldn't, because they're states. The states had made this sort of bizarre argument for trying to get standing, which is effectively: the people of our state have a right to read stuff, and these content moderation actions were taking away the right to read of the people of our state. And the ruling effectively says, that's not how this works. And if that was how it worked, we would have a crazy scenario where almost anyone could sue over almost any kind of content moderation effort anywhere. And that doesn't make any sense.

Ben Whitelaw:

Yeah, that begs a question then, Mike, and I know we touched on this a bit before, but how did the case even get to this point? Can you just remind us how it got here? Because some of the commentary I've been reading has basically been: how did this get here? This is a reminder of the flaws, sometimes, in the system. Not everyone is as au fait with the legal system in the U.S. as you are, so just explain a bit about that for people who are catching up.

Mike Masnick:

Yeah. So I think it actually really shows some problems with the way the U.S. legal system is working today. When the case was filed, at the very earliest stage, when it was just the states, before they had actually included the other plaintiffs, I had written something saying this was ridiculous, because obviously Missouri and Louisiana have no basis for making this a lawsuit, which is how it all turned out as well. And they had a bunch of this nonsense in the original claims, including claims blaming the Biden administration for content that Twitter moderated in 2020, when there was no Biden administration; the Biden administration did not start until 2021. So my assumption was that this is a case that would get dismissed very, very quickly by a district court. Here's where the U.S. legal system gets weird: as a plaintiff, you choose where to file. There are some rules about venue and jurisdiction that will often be fought over, in terms of which court a case belongs in and which laws apply in which courts; there's a whole thing called choice of law, not worth getting into. But what a few people have discovered more recently is that you can effectively jurisdiction shop, which is to find a judge who you think will be supportive of you. There are certain courts that have only one or a very small number of judges, and in the last decade or so there's been a concerted effort, certainly by the Republicans, to stuff those courts with more ideologically driven judges than we are used to. The U.S. judicial system, for all its faults, has traditionally been pretty good about not being ideologically driven, not being partisan. Certainly judges will lean one way or another, but what we've really seen more recently, starting from the Supreme Court down, is much more of an effort to get ideologically driven judges into seats. That has been true especially in Louisiana and Texas, where a bunch of courts have been sort of famously set up in ways that cases get directed there, because litigants know they have judges who will look very favorably on culture war issues. And this was one. It went to one of those judges, Terry Doughty, who in his short time on the bench has a history of doing these kinds of things. He released this ruling, and he did it on July 4th on purpose, because no court releases rulings on July 4th, and said it was like the biggest censorship scenario ever to hit the U.S. courts. It's just pages and pages of nonsense. And then he blocked all communications between the government and social media companies, and effectively even blocked communication with Stanford and other schools that were researching this stuff. It was just absolutely crazy. And then from there it was appealed to the Fifth Circuit. And this is the next part of the U.S. judicial system, which is crazy: the appeals courts are broken up into a bunch of different circuits, and the Fifth Circuit, which covers Texas, Louisiana, and that area of the South, has also become, in the last few years, incredibly ideologically driven and very willing to just do nonsense. In fact, one of the
NetChoice cases, the crazy one (because with NetChoice there are two cases: one is in the Eleventh Circuit, which is from Florida, and the one from Texas is in the Fifth Circuit), came from there. The Eleventh Circuit ruling was the one that was mostly correct and sane, using a century's worth of First Amendment principles. And the Fifth Circuit one was just not. It was: let's ignore a century's worth of what had been conservative views on the First Amendment, and just throw that out the window, because we don't like the way social media companies moderate. And I thought it was funny, I'd written a thing when both of those rulings came out, because both of them, the Eleventh Circuit one and the Fifth Circuit one (I know we're sort of wandering away from Murthy, but I'll get back there), were written by recently appointed, Trump-appointed, Federalist Society judges. Both had a long track record of being involved in conservative causes and conservative activism. And yet Judge Newsom on the Eleventh Circuit recognized the history of the First Amendment, the precedent that existed, and the craziness of the posture of the states, and wrote a pretty good opinion (there are a couple of things in it I have problems with), while the Fifth Circuit was just like, nah, we're going to make this up. And it was crazy. So the Fifth Circuit has this habit of doing these crazy things. And in fact, in the last couple of years, and even in just this term, the Supreme Court has smacked down the Fifth Circuit pretty strongly and pretty pointedly. Just a few weeks ago there was the abortion pill case, where the Fifth Circuit also just went nuts, and the Supreme Court was just like, this is not how it works. There are phases of this. For a while, the Supreme Court kept pushing back on the Federal Circuit, which handles all the patent cases; about ten years ago, the Supreme Court went on this rampage where it kept saying the Federal Circuit has gone rogue and is making all these terrible decisions, and eventually the Federal Circuit got into line. Then we had the Ninth Circuit, which is California and the West Coast of the U.S., and the Supreme Court hit them on a bunch of things. Now it seems to be really targeted on the Fifth Circuit. So the Fifth Circuit took the case and made its own crazy ruling, slightly less crazy than the original trial court ruling. And here the Supreme Court is again saying the case never should have gotten this far: the plaintiffs never should have had standing. Standing is the first thing you dump a case on, so it's basically saying this case should have been kicked out early, because these plaintiffs are not the plaintiffs for this case.

Ben Whitelaw:

Okay, that's really helpful; it gives us a really good sense of how we got here. Let's get into the decision itself, and also the justices, because not all of the justices agreed, and that's an interesting point following on from what you just said. So talk us through the ruling itself and some of the interesting points that you spotted.

Mike Masnick:

Yeah. So the ruling was written by Amy Coney Barrett; it was a 6-3 decision. With her you had Brett Kavanaugh and Chief Justice John Roberts, and then the three who are generally called the liberals on the court: Sotomayor, Kagan, and Justice Jackson. And it's written very clearly. Justice Barrett is a very clear writer, very strong prose; it's really well written, and it basically walks through and says, this makes no sense. It calls out that the fact pattern was erroneous, that there were all sorts of problems, and says that just on the basis of standing, this case never should have gotten very far. It does talk through a couple of different issues. In part, it says things might have been different if the plaintiffs had asked for monetary damages instead, because they only asked for an injunction, that is, a ruling saying the government has to stop these communications, stop these efforts. That, they said, was a little different. And this part is a little bit complex, and it has a few people worried, because part of the argument made in the case is that, because they're asking for an injunction, they haven't shown that this kind of conversation is continuing. With an injunction, you're trying to stop future behavior, so you have to make some sort of showing that this kind of behavior is ongoing and likely to continue, and the court said that wasn't shown here. So some people are saying, well, that's a little concerning, because it kind of suggests a government can say: if we just do it once, we can get away with it, because there's no ongoing conduct. I don't think that's really what they meant, but it could be read that way, which is a little bit concerning. And then Barrett went through a bunch of the record and pointed out how silly it was, how erroneous the facts presented were, and how ridiculous it was that the case got as far as it did. The ruling made very clear they were not getting into the merits. They were not setting a standard for how these kinds of jawboning cases should play out, for where that line is under the First Amendment. They kind of left that open, which probably means this issue will come back before the court, perhaps in the new version of this case, because it's just been sent back to the lower court, where the plaintiffs can refile an amended complaint and try to get over that hurdle. I think that will be really difficult, especially for these particular plaintiffs, but maybe other plaintiffs will try to bring similar cases. So the issue is not finalized yet; the court just said, this is not the case to bring it.

Ben Whitelaw:

Yeah, and I was interested in the personalities among the justices, with Coney Barrett being the person who authored the ruling, right? Because she was a Republican appointee, and in the couple of days since the ruling there's been a whole backlash against her. Just give us a flavor of that, and also of Justice Alito's dissent, because these are big personalities that get talked about a lot in the context of content moderation and trust and safety, and they have some pretty strong views.

Mike Masnick:

Yeah. So there is this assumption, and this goes across the political spectrum: because, as I mentioned, three justices are considered the liberal justices and the six remaining are considered the conservative ones, a lot of people think the results are preordained, and that it's always going to be 6-3, conservatives against the liberals. That's not true. Just today we saw a ruling where Amy Coney Barrett was in the dissent with Sotomayor and Kagan, and Justice Jackson joined the five other conservatives. So it's not always the same lineup. What seems to be coming out is that Alito and Thomas are kind of off on an island of their own, and they are very happy to be ideologically driven. They're happy to go to crazy town; it feels like they have been mainlining extremist nonsense content. Gorsuch is sort of borderline, but the rest of the court does seem like, yes, they definitely have their ideological leanings, they were appointed for specific reasons on that front, but on issues that are not directly culture-war specific, they're actually willing to look through and understand some of the nuances and details. Kavanaugh and Coney Barrett in particular seem much more willing to actually deal with the underlying issues and recognize the downsides of what is being pushed, of some of the craziness that has been presented to the court. Now, the dissent from Alito starts out by basically saying, well, if everything the plaintiffs said was true, then this was, as Doughty had said in the district court ruling, the biggest censorship case in American history, or something to that effect. There's a really big "if" in there, which is addressed in the majority opinion: the "if" is not there. If you explore whether these things are true or not, it becomes very obvious, very fast, that they weren't. And then he just goes on and on. The interesting thing I'll note about the Alito dissent (and again, we're saying this before we have the NetChoice decision, which will come out next week) is that he does say a few times that the companies have the right to moderate as they see fit. That's an interesting sign, because that's kind of the major issue in the NetChoice cases. So there's a chance he was pre-signaling that that is going to be the result; if NetChoice had come out first, he could have cited it. The one other thing I'll note that is really important: a few weeks back, we had the decision in another case, the Vullo case, which was heard the same day as the Murthy case, and there were some concerns that the court would conflate the two. The Vullo case involved a New York public official who was effectively threatening insurance companies not to do business with the NRA, the National Rifle Association, on ideological grounds. In that case, which was also a jawboning case, it was a 9-0 decision: all of the justices agreed that it violated the First Amendment, that the official went too far in effectively threatening the insurance companies, basically saying, we're not going to go after you on other stuff if you drop the NRA. It was a pretty clear quid pro quo kind of situation, pretty clear coercion.
And so there's been some concern with the Murthy case that, the way it's written, it means nobody will ever have standing to challenge jawboning. But I don't think that's true, because just a few weeks ago, in the Vullo case, they re-upped the Bantam Books decision and said: there's a line, and the government can't cross it. And where it's clear, as it was in Vullo, there's no problem. The problem with Murthy was that there was no evidence to support the argument in the first place.

Ben Whitelaw:

And how do we get to a place where there is clarity about the difference between coercion and persuasion? That's the big question, right?

Mike Masnick:

Yeah. So the real answer is that there will be another case in the future, a scenario with a factual pattern that raises real issues the plaintiffs can actually show, in part because government officials seem to have a hard time shutting up when they really should shut up, right? There are things the government shouldn't be doing; they shouldn't be trying to force companies to act in a certain way, and they can't resist. So they're going to keep doing it, and someone will eventually bring a case. In some cases, it might need to be the companies themselves. Given the way things are right now, if a Democratic official says something to Elon Musk and Twitter, I could see him bringing a case that might establish where that line can be drawn. So at some point the Supreme Court is going to have to take a case where they actually get to the merits and make a ruling on the First Amendment. And probably, done right (which is a really big if), they would come out with a pretty clear test: if A, B, and C are in place, that's coercion; if these other things, then it's persuasion. But we're not there yet. We don't have a really clear test. Part of the problem is that the Fifth Circuit sort of faked a test from another case, "substantial encouragement." And what is substantial encouragement? The Fifth Circuit found that to violate the First Amendment, but that's kind of a problematic test. It still technically stands as a potential test, so eventually we'll probably get something going to the Supreme Court that questions whether substantial encouragement goes over the line or not, and eventually we'll get a ruling on that.

Ben Whitelaw:

Okay. We should move on to the rest of today's stories, but just in terms of the reaction to this among people working in government and in contact with platforms, and people working at platforms, some of whom will be listening to Ctrl-Alt-Speech: do you sense that people will be relieved about this? That there'll perhaps be a moment of reset in terms of how both parties work with one another, as they see how things play out? That feels like how it's going to be.

Mike Masnick:

Yeah. I think, at this point, this is probably the best of all the possibilities. What we don't want is a world where the government does feel free to threaten and put pressure on the companies, because I don't think that is a good world; I don't think the government should be doing that, as we've discussed before. But here, it's now made pretty clear that the government probably did not clearly go over the line, at least not in a way that showed up in this case, in terms of doing things like sending information about foreign threat actors or about election disinformation, things of that nature. It probably allows that kind of information sharing: hey, we saw this tweet telling people to vote on Wednesday instead of on Tuesday, the actual election day, and we want to report it. Things like that, the government, for now at least, is free to share, as long as it's not coming with a coercive threat that you need to take this down. And I think that's good. There have been people freaking out; the Daily Mail, over on your side of the Atlantic Ocean, put out a report saying that the Supreme Court has now approved government censorship, which is not even close to what happened. And you do see some of the MAGA extremists saying this was terrible, and Coney Barrett was a failure, and Trump made a mistake, and all this kind of stuff. It's like, no: this was a pretty clear ruling based on the fact that this case was just dumb.

Ben Whitelaw:

Yeah. Interesting.

Mike Masnick:

You know, we'll see what happens, but from what I've heard, I think people are sort of relieved. Because if the Murthy case had gone weird, it would have led to some really problematic situations. You want the government and the companies to be able to communicate about foreign threats, about really dangerous scenarios, and that was effectively blocked by some of the earlier lower court rulings. Now I think it's: okay, especially as we're heading into election season, we can continue these communications.

Ben Whitelaw:

Yeah. Okay. So, one online speech Supreme Court case down, two to go. We won't be there for those on Monday, but we may jump on and record, depending on how things are going, because we're both tied up. But thanks, Mike, for taking us through that; we really delved deep into it, in a really good way, I think. And I'm going to substantially encourage us now to move on to our next story, about some really interesting reporting done by The New York Times on Meta this week. This is kind of a new story, but also an old story. You might remember that back in 2023, a whole bunch of states collectively, in different guises, sued Meta: 41 states sued them in one week. It was a kind of giant action by all these states at once, around the idea that Meta has created products that "ensnare" children, which was the way The New York Times put it, and got them to use those products above and beyond the levels that they should. These cases have been milling around in the background, and The New York Times has done a really big piece of analysis on the court filings, which it published late last week. I found it really interesting for a number of reasons, and I'm going to explain why. I think we disagree on this to an extent, so it'll be interesting to hear what you think, but for me the documents really paint a picture of a fascinating, very hierarchical company, in which Mark Zuckerberg gets a huge amount of sway in decisions around how trust and safety works. There are a couple of really interesting examples in which the filings show he was asked for resources to build out teams designed to mitigate harms, harms found by teams within Meta or surfaced by reporting from media organizations showing harm to children, and he basically doesn't respond. A few examples: in January 2018, he was told that there were four million under-13s using Instagram, and he didn't do anything about it, didn't give the resources to build out teams to help address the age verification of minors on the platform. A couple of months later, he testifies in front of Congress, I think, that there are no under-13s using Facebook. So there's a whole series of color, I guess, that we get from these filings and emails and documentation, and it really adds to this sense of Meta having a very odd approach to trust and safety, in which Mark is given so much sway. There's another really interesting example in which Nick Clegg, who is now the head of global affairs, asks for a whole load of people to build out teams for addressing the mental health of children on the platform, and he doesn't get a response; he doesn't hear back. And Mark Zuckerberg doesn't make any response to these big media reports about children being harmed on the platform. However, he does make a correction to a media report about his hydrofoil. Do you remember this?

Mike Masnick:

The hydrofoil. Oh yeah.

Ben Whitelaw:

He famously didn't respond to the Wall Street Journal story about Instagram being toxic for teenagers, but he did say: that's actually not a surfboard, it's a hydrofoil. And this New York Times report unpacks how mad Nick Clegg is about the fact that Zuckerberg is publicly responding to that, but not to harm on the platform. So there are all these really interesting details of what feels like a company that, and this is really the essence of the filings, prioritizes engagement over safety. That's a very broad picture being painted there. We know lots of people at Meta are doing a lot of really important work, and we know the difficulties of keeping billions of people safe on a platform. But I just felt this was a really interesting report, with information that's out there and has been building up over the last year. You may see it differently.

Mike Masnick:

Yeah. I mean, I more or less see it the same way, but I have a little bit of a different lens on it, a little bit of a different perspective. I agree Mark obviously has tremendous control and sway over the company, and that includes how they handle trust and safety. But I think there are a few different ways to interpret what was in that article. Most of it comes from the filings in that lawsuit, which are very much one-sided: it is the states trying to present an argument of gross incompetence and negligence by the company, so they're going to play things up that way, and some of it may be out of context or not entirely clear. Reading the article certainly suggests that Mark Zuckerberg does not prioritize safety on the platform as much as he probably should, and I get that, I don't disagree; it would be great if he actually did take it more seriously. But I do think it leaves out a bunch of the other things happening in the world, in terms of the other pressures he was dealing with, including a Wall Street that was pretty unhappy with how Facebook was doing for a while. So he's probably juggling a bunch of these different things, and I'm not excusing that. I also think there's a possibility, and I don't know if this is true or not, that some of this reflects how the records get made. Because there have been leaks, and because things show up in lawsuits when plaintiffs get access to emails, it would not surprise me if someone like Mark Zuckerberg, when he receives these kinds of emails, says: we're going to set up a meeting, Nick, and we're going to talk about this in person, where there is no record of how I'm responding to you that will show up in a legal filing at some point. That's not to say the end result was good, because there are problems. I also think even the story about him testifying that there are no under-13s isn't quite right: he didn't quite say that. He said, we do not allow under-13s, which does not mean that there are no under-13s on the platform. So there's a lot of nuance here. And a lot of the reporting takes every effort by the company and paints it in the worst possible light. I go back to the documents that were released by Frances Haugen, some of which do demonstrate real problems in how Facebook deals with trust and safety issues, but some of which also showed people at the company (not Zuckerberg, but people at the company) researching this stuff in order to present it to people higher up and say: this is a problem. I think some of these emails are the results of that. Take the Instagram study showing that some percentage of young women on Instagram feel it makes their body image worse: when you look at that presentation, it was clearly created by people within the company trying to convince higher-ups to take the issue seriously and deal with it. And yet it's painted in the media as: oh, Meta knew about this stuff and decided to ignore it, which is not entirely clear.
Clearly there are people inside trying to do stuff, and now it's being reported in a very negative light, which could lead companies to not want to put anything down on the record suggesting they're looking into this stuff at all. So I hesitate. This could be bad, and the stuff in the article looks bad, but there could be alternative explanations for what is actually happening and how it's playing out; we don't know exactly which one is really going on. It would be wonderful if the company were more transparent about these things and generally clear about what they're doing. But without that, we're getting a very one-sided picture, which may or may not be entirely accurate.

Ben Whitelaw:

Yeah. I suppose what was interesting to me was that previously there was this idea that it was people at quite junior levels, like Frances Haugen and like Arturo Béjar, who blew the whistle on this stuff, who have been very public, come out of the company, talked a lot, and tried to advocate for Facebook, now Meta, to change its approach to child safety. This tells almost a very different story, in which VPs and heads of whole units are being routinely ignored. And yes, there may be other ways of dealing with those things that are...

Mike Masnick:

Yeah. And there's a question of, again, we don't know the whole picture, and this might not be true, but maybe there were three other proposals on the table for how to deal with this particular issue, and for whatever reason he ignored the Nick Clegg one and went with one of the others. We're not getting the whole picture. It would be nice if we did. And it is entirely possible, again, entirely possible, that this is the whole picture and Zuckerberg just doesn't care about this stuff. It wouldn't surprise me if that were true. But there have been so many cases where these things are presented out of context, without all the other details, that it's tough to take this one thing, which is coming out of a lawsuit and out of a media that is, you know...

Ben Whitelaw:

...not Meta's biggest fan, let's say.

Mike Masnick:

Right, exactly, and so is willing to take these stories and run with them. So I'm hesitant to say that this is conclusive proof of what a lot of people are suggesting it is.

Ben Whitelaw:

I suppose, from a trust and safety practitioner's point of view, if you're somebody in this company, I wonder what it feels like to read that: however well you do your job, and however hard you try to advocate for safety, your work is contingent upon an email sent by Nick Clegg, God love him, to Mark Zuckerberg, which can fall flat, might fall flat. That is almost the toughest thing to read about this. I've spoken to a bunch of people who, when talking about the challenges they face in their trust and safety roles, say the issues are not the harm, not the exposure to these issues per se; it's the corporate culture. It's the way you need to really emphasize to your boss, and to your boss's boss, why this is important. It's resources and headcount and tooling and all of those things. So I think you're right that this is a partial view, but knowing what we know from listeners of Ctrl-Alt-Speech, the two things together feel, I think, like a more whole picture than we've had for a while.

Mike Masnick:

Oh, and I think the benefit of articles like this is that hopefully they put more pressure on the companies and senior management to recognize this stuff. You know, the other element of this is that Zuckerberg has full control over Meta, and he has a board that does not care about trust and safety. I mean, Marc Andreessen, who is still on the board, published this thing last year calling trust and safety part of the, the...

Ben Whitelaw:

The enemies...

Mike Masnick:

Yeah, the enemies of progress, or whatever. And that's on Meta's board. So he has cover to do this kind of stuff, and I think that's a problem. And that is the concern: you have a Wall Street that is demanding up, up, up numbers, and doesn't seem to recognize that you don't get up, up, up numbers if your platform is a total mess and people don't feel safe on it. So there is a mess here, there clearly is. And maybe articles like this give more power to people lower down, from VPs on down, to actually say: hey, look, we need to fix these things. You don't want to appear in The New York Times in an article with this headline,

Ben Whitelaw:

Yeah,

Mike Masnick:

and we can deal with that ahead of time by fixing some of these processes.

Ben Whitelaw:

Yeah. And, you know, it's often the case that stories about platforms end up being the best form of content moderation, right? A story gets published, and suddenly the account's taken down, or suddenly the processes improve. So there is this interesting dynamic of: story comes out, platform does something. But maybe we should avoid that. We've been extremely bad at timekeeping today, Mike, but I think these were two really interesting stories. Let's move on to the best of the rest. There are a few we should mention; we might not get through them all. You had a couple of AI stories, and I'm going to let you cherry-pick from them, since we may not get the chance to cover them all.

Mike Masnick:

Yeah, well, I'll start with this one in the Mercury News, which was actually interesting because it was part of a program the Mercury News runs that trains high school students to work with professional journalists and write professional-level articles. So it was actually written by a high school student, which is great. It's about how teens in the Bay Area are relying on AI chatbots and other systems for mental health support. And I can understand the immediate reaction a lot of people have to this, which is freaking out: oh my gosh, this is horrific, how could they do that, ChatGPT should not be your therapist. And that's true, because we have no idea how good or bad it is. But it gets to a point that we've been talking about with a lot of these child online safety issues, and what the evidence and the research have shown, which is that often the biggest problem is not that kids have access to social media, but that they don't have access to mental health support. A bunch of studies have suggested this, and it's not true universally across the board; there's a lot of nuance here, everyone is different, every platform is different, all the caveats in the world. But the evidence does suggest that one of the major issues, where researchers found correlation between mental health problems in teenagers and spending excess amounts of time on social media, is that it happens when somebody is having mental health issues and does not have support, does not have resources to go to, and therefore ends up on social media for lack of other options. So I thought this was interesting in that it suggests here is something else they can go to. It would be nice if we had evidence, careful research, looking at: does this help, is this better? But it does suggest that kids are looking for help and looking for support, and where they were maybe just going to social media before, and that was bad, now they're testing out AI services. Some of the kids in the article were going to services that present themselves as being for mental health support.

Ben Whitelaw:

Yeah, yeah.

Mike Masnick:

There isn't enough evidence to know whether those are good, whether they work, whether they're okay. But some of the kids were just going to ChatGPT, and they talked about how they could do that without feeling guilt or concern. They didn't feel judged, whereas if they went to an adult, they felt judged and felt concerned about it. A number of students in the article said it legitimately gives them someone to talk to, and that is helpful; just that act alone seemed to help them. And again, it would be great if we had studies, if we had research, on whether these tools are actually helpful or not; we should be very, very careful about that. But I thought it was really interesting to hear from the teens, written by a teenager, about how they were using these things.

Ben Whitelaw:

It's an interesting, kind of positive story about generative AI, for once not having an adverse effect on people's health or endangering people, which is kind of rare. More research is definitely needed around this. And it leads neatly on to another of our stories, Mike, around Snap, who have introduced a whole range of new features to keep teens safe on the platform. I think Snap are interesting because they have a pretty good history of launching products that are thoughtful in terms of keeping teens safe. They naturally have a much younger demographic, by virtue of the product and the platform, and historically they've done a pretty good job. These new features build on that work, and in particular on the in-app warnings they launched last year, in which you get a pop-up (I don't know if you're a Snap user, so tell me if I'm preaching to somebody who knows) if you get a message from somebody you don't know. That has proven successful enough that they've now added new signals: users will now get a pop-up when they get a message from somebody who's been blocked or reported by a friend or a contact, or from somebody messaging them from a region where they're not normally located. So they're adding new dimensions to this feature, which I think is really interesting and makes a lot of sense to me. They've also been thinking about friend requests: you can't send or receive a friend request if you don't have a mutual friend with that person and they're located elsewhere. So they're triangulating different data points to avoid situations where teenagers are in contact with people they shouldn't be. And it's all in the guise of trying to stymie this horrible trend that we've talked about on the podcast around sextortion, particularly financial sextortion. So I thought these were some really thoughtful additions to the product. You noted, very cleverly, that this came just a day after a new report in which Snapchat was mentioned as a platform for financial sextortion, right? Which I haven't read in full, so I'll hand it to you.
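To make that signal triangulation concrete, here is a minimal sketch of how a warning decision like this could combine those inputs. This is purely illustrative: the signal names, thresholds, and logic are assumptions made for the example, not Snap's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SenderSignals:
    """Hypothetical signals a safety system might hold about a message sender."""
    blocked_or_reported_by_friends: int  # recipient's friends who blocked/reported this sender
    mutual_friend_count: int             # mutual friends shared with the recipient
    region_matches_recipient: bool       # sender is messaging from the recipient's usual region
    is_existing_contact: bool            # sender is already in the recipient's contacts

def should_warn(sig: SenderSignals) -> bool:
    """Decide whether to show an in-app warning before opening the chat.

    Mirrors the kinds of signals described in the episode: strangers,
    accounts that friends have blocked or reported, and messages from an
    unexpected region. The thresholds are invented for illustration.
    """
    if sig.is_existing_contact:
        return False  # known contacts don't trigger a warning
    if sig.blocked_or_reported_by_friends >= 1:
        return True   # a friend has already flagged this account
    if sig.mutual_friend_count == 0 and not sig.region_matches_recipient:
        return True   # total stranger messaging from an unexpected region
    return False

# A stranger with no mutual friends, messaging from another region: warn.
print(should_warn(SenderSignals(0, 0, False, False)))  # True
```

The point of combining several weak signals, rather than relying on any single one, is that each signal alone (an unfamiliar region, say) would generate far too many false positives by itself.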

Mike Masnick:

Yeah. The timing on this was interesting. I will say, you were noting that Snap is pretty good about these things, but it also comes two months after there was a pretty big controversy when Snap released this "solar system" feature, which got a ton of criticism for really not being well thought out. There were questions about whether the trust and safety folks spent any time reviewing it before it was released, and Snap had to sort of roll it back.

Ben Whitelaw:

It was pulled pretty quickly.

Mike Masnick:

It was pulled pretty quickly, but again, getting back to what you were just saying, it required the press to go: who thought this was a good idea? And there have been a few other scenarios where Snap doesn't necessarily think through how these things might actually be used and how people might look at them. So yes, they've done some good stuff. But Thorn, with support from NCMEC, put out a report about sextortion (we had a conversation a few weeks ago about sextortion and what a big issue it is), and the report is really, really interesting. It goes deep into the reports of sextortion that NCMEC takes, gives some numbers on how much it has increased, which is massively, and describes how they're trying to deal with it. The two main platforms that seem to be used for sextortion are Instagram and Snapchat. It's very much targeting teenage boys; this is one of the things we've discussed. And in April (we mentioned this on the podcast too) Meta rolled out a bunch of tools to try and limit the impact of sextortion as well. So to some extent this is Snap rolling out similar kinds of tools and recognizing: we need to put in place things that limit the ability for these scams to work. The Thorn report goes into detail: it's often organized crime, as we discussed when we covered the story, I think back in April. A lot of Nigerian scammers who used to do other kinds of financial scams have moved on to this kind of scamming. And so Snap, I'm sure, knew that this report was coming out, and I'm sure timed the release of these new features to it. But it is still progress, right? This is a really, really horrible situation, a scam targeting lots of people, and you need tools and education to prevent those scams from being successful.

Ben Whitelaw:

Yeah. And it's also interesting that Ivory Coast came up. A lot of people know about Nigeria; the big story we talked about a few episodes ago was a financial sextortion case with two Nigerians who were then, I think, expedited to the US. Extradited, sorry. But Côte d'Ivoire is an interesting new name in this, and obviously French-speaking. It'll be interesting to see how that develops, as a country responsible for some of the harms we're seeing in other areas, because that's new to me.

Mike Masnick:

Yeah, we'll see. I also wonder if we're going to start to see these scams expand. We've talked about the pig butchering scams, which tend to be from Southeast Asia; those are slightly different, but you could see them expanding into sextortion as well. It'll be interesting to see how these kinds of crimes and scams evolve over time.

Ben Whitelaw:

We've talked a lot about the US, and the Supreme Court particularly. A little story that was in Euronews, which I wanted to mention and think is really interesting, is the lack of uptake of the DSA's transparency database. I can see you raising your eyebrows, Mike. It's your favorite topic.

Mike Masnick:

Yeah,

Ben Whitelaw:

It basically reported on comments made by Thierry Breton, nemesis of Elon Musk and the internal market commissioner at the EU. He explained in a parliamentary session that there's only one non-very-large online platform submitting statements of reasons to the DSA transparency database. As of February 17th this year, intermediaries of all sizes should have been starting to do so: they should have been sending information to the EU about decisions made around content moderation. This data is stored and made available to researchers, and anybody can go and look at how many statements have been submitted by which platforms. And only one Latvian e-commerce and fintech company, called Joom, has been doing so without being forced to, which is a fascinating thing. It's a bit of a blow to the DSA, because obviously there was this deadline, a lot of noise was made about it, and to only have one non-VLOP submitting those statements is a bit of a kick in the teeth. But we know that they have been struggling to appoint DSCs, the Digital Services Coordinators in each country, the national regulators who are essentially responsible for going to companies and getting them signed up. And Thierry Breton was pretty clear that there was a whole tranche, he said 30 companies, basically on the cusp of starting to submit to the database. So maybe we'll see more in the coming weeks and months. But yeah, four months in, only one Latvian fintech company providing its statements of reasons. Not great.
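For a sense of what "submitting statements of reasons" actually involves, here is a rough sketch of a single submission to the DSA Transparency Database's API. The endpoint path, field names, and enum values below are written from memory of the public documentation at transparency.dsa.ec.europa.eu and should be treated as assumptions rather than a verified schema; real submissions also require onboarding through a Digital Services Coordinator to obtain an API token.

```python
import requests

# Hypothetical sketch of one statement-of-reasons submission. Endpoint,
# field names, and enum values are assumptions based on the public docs.
API_URL = "https://transparency.dsa.ec.europa.eu/api/v1/statement"
API_TOKEN = "YOUR_PLATFORM_TOKEN"  # issued when a platform is onboarded

statement = {
    "decision_visibility": ["DECISION_VISIBILITY_CONTENT_REMOVED"],
    "decision_ground": "DECISION_GROUND_INCOMPATIBLE_CONTENT",
    "incompatible_content_ground": "Spam policy, section 3",
    "category": "STATEMENT_CATEGORY_SCAMS_AND_FRAUD",
    "content_type": ["CONTENT_TYPE_TEXT"],
    "source_type": "SOURCE_VOLUNTARY",        # own-initiative moderation
    "automated_detection": "No",              # found by a human reviewer
    "automated_decision": "AUTOMATED_DECISION_NOT_AUTOMATED",
    "territorial_scope": ["LV"],              # member states affected
    "application_date": "2024-06-28",
    "puid": "case-12345",                     # platform's own unique identifier
}

resp = requests.post(
    API_URL,
    json=statement,
    headers={"Authorization": f"Bearer {API_TOKEN}", "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()  # raises on 4xx/5xx, e.g. schema validation errors
print(resp.json())
```

A statement like this is required for each individual moderation decision, which is why the compliance burden Mike describes next falls so heavily on small intermediaries.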

Mike Masnick:

Yeah. So much of the coverage of the DSA has always been about the VLOPs, and very little has been about the requirements for every other intermediary. There are different stages and different levels, but some of the requirements go all the way down. Technically, a two-person Mastodon instance somewhere that is deleting spam is supposed to be submitting statements of reasons to a transparency database, which is kind of ridiculous, and that's part of my concern about the requirements below the VLOP level. Now we're seeing that effectively everyone is just ignoring it, most of the time because they don't know about it. And it feels, from this article and from Thierry's comments, like the Commission is saying: well, we have to go hand-hold and walk around to all of these smaller and midsize companies and basically tell them, you have to re-architect your systems in order to submit these things. As a very small platform, with Techdirt and the comment moderation that we do, the idea that we would ever be in a position of having to submit to a government every time we delete a spam message or deal with a troll is kind of ridiculous.

Ben Whitelaw:

You don't get any trolls on Techdirt, do you?

Mike Masnick:

Oh, you have no idea. This week has been particularly trollish. But there is this thing being exposed here about the way these things often work, which is that when you pass a regulation that doesn't really make sense, at some point a lot of people just start to ignore it, and the only way people start paying attention is if you start doing enforcement. So we'll have to see. Obviously there are midsize companies who should be doing this in some way, and we'll have to see where the enforcement side of this goes.

Ben Whitelaw:

Yeah. And from talking to people involved in the Commission and in the adjacent areas, there is this massive focus on enforcement, and a lot of money, I think, being plowed into making sure that companies are submitting and are compliant with the DSA. Which is a lot of effort...

Mike Masnick:

Yeah,

Ben Whitelaw:

...a lot of effort to get off the ground very quickly.

Mike Masnick:

And it's not even clear the VLOPs know exactly what they're doing in terms of how to handle it. The database is available; anyone can go through it, and it's sort of fun to go through. But what you begin to see is that they're filing very different things. Twitter is filing stuff, but a lot less than some of the other companies, and a lot less than they probably should be. So I think: start with the focus there, before worrying about the smaller players.

Ben Whitelaw:

Yeah. And X, slash Twitter, is still under investigation as well, so that was interesting. Great. So round us off for today, Mike, with one of a number of really good 404 Media stories this week.

Mike Masnick:

Yeah, and we'll go really quick here, because we're taking longer than we normally would. There's a really fascinating story from 404 Media about how they replaced their own site with AI: a kind of fun story about how the internet is being flooded with AI-driven speech. The result was bad, but it was really easy to do, and it raised a bunch of issues about how anyone dealing with moderating speech is going to have to handle an absolute flood of AI speech coming down the road. Keeping track of that and understanding it is going to be really important. So I'm not going to go deep on it; we're running low on time.

Ben Whitelaw:

Are you planning to replace Techdirt with AI?

Mike Masnick:

I saw that story and I was like, oh, this is fascinating. But the real key takeaway was that it was terrible, right? It was only ever going to get drive-by traffic, and my whole belief is that good media is about building loyalty with an audience, which means delivering quality content, not just nonsense quantity.

Ben Whitelaw:

Yeah, no, I won't be finding a freelancer to recreate Everything in Moderation anytime soon. Okay, cool. So thanks very much, Mike, as ever. We have run a bit long, but it was really good to get the details on Murthy versus Missouri, and to slam Mark Zuckerberg whenever we get the chance. So, a good episode. If you enjoyed today's episode as a listener, get in touch with us via email, go to the website, ctrlaltspeech.com, and send us your thoughts. Also rate and review us on your podcast platform of choice. We won't be here next week, unless we do something very ad hoc and ruin both Mike's and my holiday. But stay tuned for that. Thanks for listening as always, and we'll speak to you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
