Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Judge, Jury and Moderator
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Social networks can’t be forced to filter content for kids, says judge (The Verge)
- Judge Rejects Yet Another Attempt By Texas To Police Online Speech (Techdirt)
- Telegram apologizes for handling of deepfake porn content in S. Korea (Yonhap)
- Brazilian Supreme Court panel upholds X ban (Axios)
- Elon Musk’s Starlink backtracks to comply with Brazil’s ban on X (The Guardian)
- With Musk’s X banned in Brazil, its users carve out new digital homes (AP)
- The Internet Archive Loses Its Appeal of a Major Copyright Case (Wired)
- Posts That Include “From the River to the Sea” (Oversight Board)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So Mike, there's an AI tool called Pi, right? It was created by a company called Inflection, designed to be a kind of shoulder to cry on for people, meant to be a kind of empathetic AI. Right?
Mike Masnick:I, I cry on its shoulder all the time.
Ben Whitelaw:I thought that's what this podcast was for. Um, but I'm going to give you what it says on the homepage screen when I access Pi. Okay, so this is what it says. It says: take your time. Know that I'm here to listen. What's been happening?
Mike Masnick:Well, Ben, thank you. Thank you for listening to me. Uh, what has been happening this week, it seems, is that courts have been throwing out big, important rulings left and right and keeping me very, very busy. So now take your time, and know that I am here to listen as well. Ben, what, what is happening with you?
Ben Whitelaw:I have been following along with the new reviews that we've got for Ctrl-Alt-Speech on the back of your plea last week, Mike. We've had listeners respond in their droves, or at least in a few cases, and, uh, I'm really grateful for that. So, that's been this week, and we'll talk more about that in today's episode. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, a weekly newsletter about online speech and content moderation, and I'm joined by the founder and editor of Techdirt, Mike Masnick. Mike, you're here, you're listening. What would I do without you, eh? Without our weekly...
Mike Masnick:I'm, I'm not an AI yet. I'm sometimes accused of being an AI.
Ben Whitelaw:You could, there could be a Mike AI.
Mike Masnick:Yeah.
Ben Whitelaw:Is there a ChatGPT with all of the Techdirt articles from the past 20-odd years in it yet?
Mike Masnick:I mean, I assume that, that somehow they've been sucked up into all the different AI systems. I did have one organization reach out to me and say that they would create a Techdirt-specific AI and feed it, I don't know how many, I think we're at like 75 to 80,000 articles from, from Techdirt over the years, and they were going to feed all of them in and create a Techdirt AI. But I never really followed up with them. I should.
Ben Whitelaw:Yeah, you could turn that into a kind of life-sized Mike that follows you around and tells you what's happening with the latest court cases in the US.
Mike Masnick:Oh gosh, I, I...
Ben Whitelaw:One for the future. Um, we've got a lot to get through today. We've, like you say, had a number of really interesting late-breaking stories as the week's gone on. But yeah, we wanted to pay tribute to the readers, um, and listeners who've sent in their reviews and given us five-star ratings on various different platforms this week. Turns out there are people out there, Mike, and they listen. They're empathetic to, to our pleas, which is nice. Um, are there any that caught your eye?
Mike Masnick:Yeah. I mean, first of all, thank you. Obviously it's, it's great to get the feedback, even if I had to hound you last week. Um, we're really happy with the reviews. We got a bunch of five-star reviews. We got one four-star review, but it was still very nice, and so that's fine; we're not telling you what you have to do. But people said some, some nice stuff, calling it a must-listen, which is great, and people talked about how we cover a pretty wide range of issues having to do with technology and speech. So I was pretty happy with the reviews. We'll also note we got some critical feedback, which we also appreciate; we're always happy to hear that stuff. I think we wanted to just mention, you know, one of the things that we are trying to do with this podcast is not just have it be about speech issues in the US and Europe. We always try and make sure that we're getting things from around the globe, and highlighting that the world is bigger than just the US and Europe, even though that is where Ben and I are located. Um, and that also means that sometimes we won't have as much direct knowledge of the specifics or nuances of a particular country or region. We absolutely welcome constructive criticism on that, or nuance, or specifics. You know, we try to talk to people in those countries when we can, and we'll potentially try to have some more folks from around the world on the podcast in the future as well. But sometimes we, we don't know everything, and we are open to folks letting us know. And so we did get some constructive criticism, I would say, last week about our coverage on Brazil, and, um, we appreciate it, and we appreciate any other feedback that people have as well.
Ben Whitelaw:Indeed. And if you want to send us an email with your thoughts, it's podcast@ctrlaltspeech.com. If you're interested in becoming a co-host for an episode of the podcast in the future, drop us a line. We've already had a couple of different co-hosts who got in touch directly who aren't people we know. So there's really lots of opportunity to be involved; we want as many people with a diverse range of experiences and views on the podcast as possible. So consider this your opportunity to get in touch. As per your summary, Mike, we've got a range of stories on today, some from the US, some from further afield, a big story that I'll be talking about in South Korea as well. So, lots to get into. But yeah, thanks everyone for the recommendations, for the reviews, and for the stars. So let's start off, Mike, on your side of the Atlantic, with a story that, if you're a kind of pencil aficionado, you might think is about, about pencils. It's not. HB 18 is, is a...
Mike Masnick:...a reach. Okay.
Ben Whitelaw:That's my, that's my kind of graphically inspired intro. Um, talk us through HB 18, also known as the Securing Children Online Through Parental Empowerment Act, as it's named there.
Mike Masnick:Yes. Yeah. This is a law that was passed in Texas, actually passed last year, and it is one of a number of laws that states have passed focused on protecting kids online. Obviously there's been a lot of debate about that around the globe, and in the US at the federal level there's been the ongoing debate about the Kids Online Safety Act, KOSA. In the interim, while that debate has been going on, a bunch of states have passed different laws. We talked about California and its age-appropriate design code. Texas last year passed HB 18, which, um, is also a law that is designed, they say, to protect the children. It had a bunch of requirements in there around age verification, and then around how certain social media websites would have to deal with different specific types of content that the state deemed to be harmful to children, whether it was eating disorders or grooming, harassment, a variety of things of that nature. That law was set to go into effect September 1st, this month, just last week, this week, I guess, depending. And a little over a month ago, CCIA and NetChoice stepped up and filed a lawsuit to try and block it. And so, right before the law was set to go into effect, it was in fact blocked by a district court judge in Texas, who found that it almost certainly violated the First Amendment.
Ben Whitelaw:And just before we go into why, Mike, the lateness of the block: how come NetChoice and CCIA waited so long? How come it's taken until now to happen?
Mike Masnick:I don't know for sure, but what I can say is, with all of these laws, so, one, there are more than a dozen laws in different states that both CCIA and NetChoice mainly have been filing lawsuits to stop, and that is a lot of work. Both of those organizations are relatively small organizations, and so they have to keep track of the dozens of laws that are being passed, and a lot of it is just looking at when they go into effect. I have obviously no inside knowledge on this at all, but my sense, given the timing of the various lawsuits, is that they are looking at when the laws are scheduled to go into effect and scrambling to make sure that they push for an injunction before that happens. We had the same thing with a law that was supposed to go into effect earlier this year in Ohio; a few weeks before it was supposed to go into effect, they filed there. So I think it's just based on the timing of when the law is supposed to go into effect, and then just the general busyness of the organizations and their legal teams. My sense is that it was the September 1st deadline, even though the law was passed last year, the fact that it wasn't scheduled to go into effect until September. Some of it also may have been that they were kind of waiting to see what happened with the NetChoice cases at the Supreme Court and other things, in terms of whether that gave them anything useful for blocking these laws; I would imagine that was part of it. So they filed, I think, at the end of July, and then right before the law was set to go into effect, the ruling came out, and the court put the law on hold and basically said that it is likely to violate the First Amendment. I see this case as... there are a bunch of other issues here, right? A few years ago, Texas had passed its social media content moderation law, which was part of the NetChoice cases at the Supreme Court, which we now refer to as Moody, it's not worth getting into why, but those are the Moody cases, which also impact Texas, even though Moody is Florida. Texas also had this other law around age verification for adult content websites, or websites that had more than 30 percent adult content, depending on how you determine that. Both of those laws were originally found unconstitutional in the district courts; the Fifth Circuit, in the way that only the Fifth Circuit can, reversed on both of them, making a whole bunch of nonsense in the process. So, two years ago, there was the social media moderation one. That case went to the Supreme Court, the one that we had a few months ago, Moody, basically saying the Fifth Circuit was absolutely crazy, sending the case back: please reconsider. That is now happening. The second case was the age verification for adult content: again, thrown out by the district court as unconstitutional; Fifth Circuit, no, this is totally fine. Now the Supreme Court is hearing that case. It was recently taken up, it'll be heard this fall, and who knows when, but we'll get a ruling sometime next year. Some of that is backdrop to this case, because there is an age verification component to HB 18, and how the courts will handle it will probably depend very much on what happens at the Supreme Court.
But the other stuff is around filtering content and saying kids can't access certain content. Here again, the judge, and I will note that this is the same judge who had the case about the first law, around moderating social media. And I actually find it interesting that very few people mention that the original law around moderating social media in Texas was one that basically said social media can't moderate; it's against free speech for social media to do any moderation was the overall message of that law. This law is: oh my God, children, how could you not be moderating? And so there is some sort of mental conflict there that you would think some people would notice, but nobody does. So apparently moderation is bad for adults, unless it's like really adult content, in which case there has to be moderation. Or unless they're kids, in which case all bad content must be deleted.
Ben Whitelaw:Yeah, try and thread that needle if possible.
Mike Masnick:Yeah, but it's interesting to see. It was a pretty strong ruling, pretty thorough. I think this judge knows what's likely to happen at the Fifth Circuit, which is that, again, it's probably going to do something stupid, but I think the judge wants to set a pretty clear record and wants to be very clear about it. One thing he also did, which I thought was interesting and we hadn't seen in some of the other cases, was he said that this law is preempted by Section 230. This has come up in a bunch of the different state laws. Section 230 is very clear that no state law can basically undo Section 230; any state law that is in conflict with Section 230, Section 230 wins. It's been brought up in other cases, and the judges have generally ignored it or just sort of said, I'm not going to deal with this now. But most of these laws around content moderation that tell companies how they have to moderate, whether to take down more stuff or keep up more stuff, violate Section 230 and should be preempted, with the court saying: this law can't go into effect, because the federal law blocks it.
Ben Whitelaw:Interesting. I'm going to, I'm going to play my dumb Brit card now. Um, and,
Mike Masnick:That's what, that's why you're here.
Ben Whitelaw:...or, you know, dumb person card, however you want to see it. Um, why do any states pass laws that they know are going to be preempted by Section 230 eventually anyway?
Mike Masnick:Um, well, one, either they don't understand Section 230, which is a common theme that goes through everything in the US around Section 230: people just don't understand it. The second element is that they are hoping that, because there is a sort of judicial dislike for Section 230 right now, judges will ignore it. And as I said, a lot of courts have effectively not paid attention to the preemption arguments, partly because of the timing of the cases; preemption is the kind of issue that might come up later in some of those cases. Or they just think they can get around it, they think it's something different, they say it doesn't conflict. I do remember there was this funny moment where one of the authors of the original, the first social media content moderation bill, the one that was at stake in the Paxton NetChoice case, went on Twitter and said, like, we wrote this law so that it doesn't conflict with Section 230. And he had this interpretation that basically rendered the law useless: the law goes into effect, but if it's used in a way that conflicts with Section 230, then that's not what the law is meant to do. And I was like, the only thing the law does is in conflict with 230. It was a very weird moment. But I think a lot of it is just that people don't understand Section 230, and so they think that they can pass these laws and get around it. Or they don't care, because it's all about culture war messaging, grandstanding, being on the front page of the paper, protecting-the-children kind of stuff.
Ben Whitelaw:Okay, fine. And are there particular aspects of this law that the ruling pulls out? Because it's obviously very broad in scope. Can you give us a sense of what the court had particular issues with? Because I suppose that's where we maybe see some patterns with other rulings as well.
Mike Masnick:I think it was actually really interesting. I mean, the judge is very thorough and kind of went through it, and basically said that there was a lot of stuff that was vague. This is always an issue when you're talking about regulating certain types of content: first of all, that is going to require First Amendment scrutiny, and people seem to think that, oh, well, we can just ignore the First Amendment because this content is bad. But the court made it clear that you can't just put in vague stuff. So, for example, the law said that you had to stop content that promoted, glorified, or facilitated certain types of harm. And the judge was like, what does that mean? Promote, glorify, facilitate, could be all sorts of stuff, right? There's a line that said: "to glorify" potentially includes any content that favorably depicts a prohibited topic, leaving no clear answer on what content must be filtered. Do liquor and beer advertisements glorify substance abuse? Does Othello glorify suicide? These are the types of things where, when a lot of people write these laws, there is this sense, and this is true all the time of people who comment on social media, content moderation, and trust and safety issues, there's this sense of: well, it must be easy, you stop the bad stuff and you let the good stuff go, as if that's really clear. And it's not. So it's easy to say that anything that promotes, glorifies, or facilitates eating disorders, for example, or self-harm seems bad. But, as the judge says, Othello, is that glorifying suicide? Arguably, yes, right? But we think that that is artistic and not something that should be blocked. And so that's where a lot of the First Amendment issues come in here, and it was good to see him calling all of those things out.
Ben Whitelaw:Right. So this is kind of a law that falls foul of the awful-but-lawful distinction, to an extent. And you mentioned there how this is the third kind of major ruling that Texas has had go against it. But you also, before we started recording, talked about how this is not the first state to bring a kind of child safety law and have it blocked. It's happened in other states as well, right? We kind of...
Mike Masnick:Yeah. I mean,
Ben Whitelaw:another pattern.
Mike Masnick:Yeah. One of the things that I've noticed over the last few years with the attempts to regulate the internet is, it's funny how the states and the federal government are all sort of feeling their way around and testing different things: can we do something that probably violates the First Amendment without it violating the First Amendment? Everybody's trying to find the secret path through to regulating internet speech that doesn't violate the First Amendment. And in the latest round there are sort of two elements to it. One is these kids code ideas, where it's like, well, kids are different, and therefore we can put in regulations that require sites to treat kids differently, which first of all leads to questions about age verification and privacy for kids and all of that, and that is now being somewhat tested at the Supreme Court; we'll see what comes of that. There's also this sort of trying to pretend that we're talking about design as opposed to content. We saw that with California and the age-appropriate design code that we talked about a few weeks ago, which got thrown out, again, for First Amendment reasons. The other angle, and this leads us into the next thing that we want to talk about a little bit, is transparency. People think, well, if we just require the companies to be more transparent about their moderation, maybe that is a way to get around the First Amendment. And in fact the Texas and, mainly, the Florida law that were part of the Moody case had a transparency element to them, which the courts had said was okay; the 11th Circuit in particular said the transparency part is fine. So California also passed a similar law that had a transparency mandate for trust and safety, for terms of service, in terms of how platforms moderate certain types of content, and it required different reports. California claimed that this was not about taking down First Amendment-protected speech; it just wanted you to send the Attorney General how you were dealing with specific categories of potentially problematic speech. And that law was also thrown out this week, by the Ninth Circuit, the appeals court. There have actually been a few different folks who have challenged it. The original challenge came from users on Twitter who, I would say, are not great users: you have the Babylon Bee and Tim Pool, uh, who's in the news this week for something else that we might talk about later,
Ben Whitelaw:Indeed.
Mike Masnick:who sued California saying that this law was going to pressure companies to take down their content, which actually might be true. But the court in that case said that they didn't have standing to sue, and so dropped that lawsuit; it was a very poorly done lawsuit. And I think what happened is, almost immediately after it got thrown out, a few weeks later, Twitter, or X as we are now unfortunately forced to call it, filed suit challenging that law, using Floyd Abrams, a very famous First Amendment lawyer, one of the big names in First Amendment law in the US, arguing that the transparency requirements violated its First Amendment rights. The lower court rejected that and said, nah, it's fine. But now the Ninth Circuit, on appeal, said: yeah, actually, this law clearly violates the First Amendment, and mostly threw it out; there are parts of it that it sent back for further analysis. It basically says this law is mandating speech, it's compelling speech by the company that it might not want to engage in. It points out that there's this very confused standard in the US. There was a case in the '80s, Zauderer, on mandating transparency around advertising for professionals, where the court basically said: if it's advertising, and it's non-controversial factual content, you can mandate it. But a lot of courts and a lot of lawmakers have taken that Zauderer ruling to say, oh, transparency is fine, we can make any company be transparent, which is, I think, a very broad reading, and the courts have sort of not dug that deeply into it. This one starts to, but does so in a slightly weird way. It doesn't go into the full Zauderer analysis, but actually says that the Zauderer ruling is dependent on the idea that commercial speech is okay to regulate more readily than non-commercial speech. In this case, the court said it's not commercial speech, that this is speech about speech, which is an interesting phrase. This is not the company speaking for itself for commercial reasons; this is the government forcing it to talk about its speech on the platform and how it moderates it. Therefore it does not qualify as commercial speech, and so it is still compelled speech, not for commercial reasons, and we don't even need to do the Zauderer test. And then, because it's compelled speech, it requires what's called strict scrutiny to review it, and the law doesn't pass strict scrutiny, in that it goes way overboard in terms of what you can compel a website to reveal. And...
Ben Whitelaw:Interesting.
Mike Masnick:Between these two cases, you have the two main ways that a lot of lawmakers and activists, frankly, think that they can get around the First Amendment: with design codes and with transparency laws. And we have two courts this week saying: no, actually, we see that you're trying to get around the First Amendment, and really you're sort of trying to tap dance in a way that forces or pressures companies to take down certain types of content. That requires First Amendment scrutiny, and you're not doing enough to meet it.
Ben Whitelaw:Yeah. So kids codes and transparency: not having the desired effect for some states bringing these laws. It's interesting; obviously in Europe, within the DSA, both of those things are present, and we're therefore starting to see, I guess, where the fault lines are between the European approach to regulation, which is several years further ahead, I'd say, and where the US is landing on this stuff. Where do you think states will start to push a little harder? If it's not these areas, do you see other areas emerging, other rulings that might tell us more?
Mike Masnick:Yeah. I mean, I think they're still working on this and trying to figure out the path through, and some of it is still going to be determined by how all these Supreme Court cases come down. Obviously the age verification part is a big piece, so that case is going to be super important. And then the Moody case is going to bounce around again, because we still haven't quite figured out what is and what is not allowed. I think everyone is just going to keep pushing and testing and trying to figure out how this goes. I would imagine that California is going to try and appeal this particular ruling. You know, it is interesting that it was X that brought this case, because my sense, and this is totally vibes, it may or may not be true, my sense was that the big tech companies were actually perfectly happy with this transparency law. They knew that they were in perfect compliance with it, and if anything, all it did was create a compliance burden for smaller companies, and the big companies don't really care about the smaller companies that much, and actually don't mind the fact that there's a bigger compliance burden, because that's a sort of competitive advantage for the bigger companies. So it is interesting that it was X that was willing to challenge this law, because without that, the transparency aspects of these laws might have just gone into effect without anyone seriously challenging them until much later. And so it's interesting to see how that plays out. You have a lot of civil society folks and certainly academics who, for totally understandable reasons, are really big on pushing transparency and saying: we want transparency. And from my standpoint, I keep arguing that transparency is a really good thing and we do need more of it, but I worry about mandated transparency. I want more transparency, but when the government says you have to be transparent about this, I actually think it can create real problems. And in fact the California law, AB 587, was, I think, a terrible transparency law, if transparency was what you wanted, because among other things it had these rules about how often you could change your terms of service. Anyone who does trust and safety knows that changing policies happens all the time, because bad actors change how they act. And also you have to be kind of careful about what you publicly reveal, because bad actors use that as a roadmap for getting around your policies. If you have to be really public about it, and you can't change it that quickly, you're giving bad actors not just a roadmap but a time period. You're saying: here's the roadmap, and I can't do anything about it for, like, a month. So have at it.
Ben Whitelaw:The worst of both worlds.
Mike Masnick:Yes. And so I think that there are ways to incentivize companies to do transparency better without mandating it, and I wish that that was the focus, incentivizing rather than mandating, but we're not there yet.
Ben Whitelaw:Really useful unpacking, and it helps knit together some of those rulings we've talked about in the last few weeks, Mike. We'll move on now to our next story, which is one I've been following for the last couple of weeks out of South Korea. I'm going to explain a little bit, Mike, how this story came to be, because there have been a couple of developments over the last few weeks. Essentially, last week the president of South Korea announced a seven-month campaign to combat what the country calls digital sex crimes, the kind of epidemic that has been taking place there over a number of years now. The latest version of it has seen deepfake images of women being created and traded and, in some cases, sold via platforms and messaging services. And the reason the president talked to the media and spoke publicly about this is that in the last eight or nine months, they've seen a significant uptick in the number of crimes related to deepfake images, from a small, small base, but significantly higher than the last few years. It's also been discovered that there are large Telegram channels, of up to 220,000 people in some cases, using, creating, and trading these deepfake images. And that's bad enough, you know, but this comes on the back of a rough few years for this story in South Korea. I hadn't heard about it until I started digging into it; Mike, you might have heard about it. There was a story that came out in 2020 about the use of Telegram to trade real images of women and girls that had been taken without their consent. It was called the Nth Room: a collection of Telegram channels discovered by journalists. And essentially the scam was that several men, and then a group of men as it got bigger, persuaded women who were interested in becoming models to share intimate imagery with them, sometimes consensual imagery, which was then used to blackmail them, and the images and photos were then sold, often via Telegram, often using cryptocurrencies. So this was a huge network of blackmail and NCII, and it prompted a massive public outpouring. It heralded a number of new laws in South Korea that made the practice illegal. And the guy who organized the Telegram channels, who called himself GodGod, which kind of tells you all you need to know about this guy's character, was sent to jail for, I think, 42 years. So there has been a longstanding history in South Korea of this kind of thing happening. And molka is this idea that still exists in South Korea where women are taken advantage of, have videos taken of them in voyeuristic situations, and there's a huge amount of misogyny in the country. The thing that's happened this week, on the back of that story where the president gets up and says that he's going to combat this scourge, is that Telegram, one of the central platforms in the story over a number of different years, has apologized.
They have taken down 25 different pieces of content that the South Korean media regulator asked them to take down over a number of different months, and they've set up a hotline where the government can go directly to Telegram in order to have more, I guess, violating images pulled down, which is something the Korean government had struggled to do previously. So this is a big U-turn, and in and of itself it's an interesting story. We talked last week about states versus platforms, and this is another part of that puzzle. But it's also really interesting in terms of what's happening in France with the arrest of Telegram's CEO, Pavel Durov. I've unpacked that over a fair bit of time, because I think it's interesting for a number of different reasons. It's interesting because, all of a sudden, following the arrest of the CEO of Telegram, they've started to react; whether or not the two are related, who knows. And I wonder if this is a path that we're going to see that case and those charges follow. What do you think about this, Mike, as a kind of adjacent story to the Durov case?
Mike Masnick:Yeah. I mean, that was the thing that really stood out: that this happened right after the CEO was arrested, even though, as we said last week, it's still not entirely clear what the exact charges facing Durov are in France. Telegram had been forced to put out a statement saying, you know, we comply with all laws, even though that is perhaps questionable, very questionable. And so here was this other story bubbling up in South Korea that was directed very much at Telegram, and suddenly they're just like: oh, we're really sorry, we're taking this content down, and by the way, here's a hotline, here's how to reach us. I don't think that happens if Durov is not arrested in France. And so I think whoever is currently running Telegram, and I guess Durov is not in jail right now, but must remain in France, so he might have had something to do with it, but whoever is running it is saying: huh, maybe we should actually pay attention to governments when they're complaining about illegal content, and figure out a way to deal with it. You can't say what would happen in the alternative, but it certainly seems that this is Telegram reaching the realization that most companies reach a little bit earlier, which is that if you are hosting content, and some of that content is illegal, then you need to have some way to deal with it. The sort of "ah, come after us" viewpoint, now that France has come after Durov, is making Telegram wake up and maybe change its policies somewhat.
Ben Whitelaw:Yeah. I mean, it's interesting as well, the slight differences between what's happening in France and what's happened in South Korea. Because in France, it took the judiciary, as you talked about in last week's episode; there was a case that actually caused some political embarrassment for Emmanuel Macron, who has long courted Telegram and, I read this week, was trying to get it to move its headquarters to France. The judiciary pulled his pants down a little bit by bringing these charges. That doesn't seem to be what's happened in South Korea. There's been no judicial case brought against Telegram; you've got the president talking almost directly to Telegram and imploring them to do something about it. So there are slight nuances here, which I think are interesting. I mean, I don't...
Mike Masnick:Yeah. I was going to say, I think in the context of what happened to Durov, all of these things appear differently, right? I think Durov, honestly, is not unlike lots of other CEOs of speech platforms who, when they first come in, think: oh, we're just going to allow everything, and screw the government, and all this kind of stuff. And it's kind of a joke to them until, I think, the arrest in France made it real. And so now Telegram is like: oh, wait, we actually do need at least some sort of response. So if the next area where issues related directly to Telegram are popping up is South Korea, I'm sure they were just like: well, we've got to figure out something to do here. And hey, maybe that means taking down this clearly illegal content, issuing an apology, and setting up a way for the country to reach us. So I think it totally makes sense. The question is: where does that go from here? Does Telegram set up a hotline for every country? Then you run into all sorts of issues. This is the slope that makes all of these issues very, very tricky: you have the US letting companies know about problematic content, which then was turned into a whole thing here, where it's like, oh, the government is violating the First Amendment and telling companies how to moderate. Which isn't always the case; or is it just alerting them to illegal content? Where do you draw the lines on all of these things? It becomes really tricky. And it's something that Telegram, if it wants to remain a viable company that is not just a rogue, lawless entity, is going to have to start to figure out: what are its policies on talking to governments and handling complaints about certain content? Because it's one thing to take down content that is clearly illegal, and CSAM is the obvious version of that, and then it just gets trickier and trickier in terms of which content is content that the government just doesn't like, versus content that is directly illegal. A smart company, a thoughtful company with real trust and safety thought behind it, comes up with policies and ways to handle this. Telegram seems to be figuring it out as it goes right now.
Ben Whitelaw:A billion users in. Better late than never. And it's certainly going to increase the costs per user, right? Pavel Durov was saying in an interview with the FT last year that it only takes 70 cents to service each user per year; it's incredibly cheap. And if you set up a hotline in every country to deal with these kinds of harms, that's going to change. So...
Mike Masnick:Yeah. And again, there will be countries that will try to abuse that line, and so it's going to be on the company to figure out how it deals with that.
Ben Whitelaw:For sure. But yeah, it's a good continuation from last week of the squaring off between public and private, between sovereign states and platforms. And I imagine states this week could have claimed victory, not just in this South Korea story, but also in what's happening in Brazil.
Mike Masnick:Yes. Nice transition.
Ben Whitelaw:Better than my pencil one. A lot has happened in the seven days since we recorded last week. Um,
Mike Masnick:So
Ben Whitelaw:...it's probably worth doing a bit of a tick-tock of what's happened, right? Just to bring listeners up to speed.
Mike Masnick:I'll walk through it really quickly. Basically, as we were recording last week, it was before the judge, Alexandre de Moraes, had issued the official order blocking X; that happened basically right after we recorded. We'd already mentioned that he had started seizing some of the assets from SpaceX, and that continued. So the original order to block X was kind of an interesting one, in that it required a bunch of different entities to do a bunch of different things. It required Google and Apple to remove X from their app stores in Brazil. It required ISPs to block access to it. It required backbone providers to block access to it as well. So basically, all up and down the stack, they were saying everyone is required to block, and the Brazilian telecoms regulator was in charge of enforcing that, figuring out a way to block access. Now, there were a couple of elements of it that were eyebrow-raising for a lot of people. The first was that, beyond telling Google and Apple to block the X app, it also said they had to block access to VPN apps that people could use to get around this, which is all of them, basically. The whole point of a VPN generally is to appear as if you're coming from somewhere else; it could be within your country, but it can also be some other country. And Apple actually had started to block VPN apps in its app store a few days earlier, which is curious, and I have not gotten an explanation for why. So in the Brazilian iOS app store, VPNs started disappearing before this order came out. But a lot of people were like, that seems really aggressive; obviously there are lots of perfectly legitimate uses for VPNs, in fact very valuable uses, and so that order seemed very broad. A few hours later, he issued an amendment basically saying: I'm putting that part, just that part, on hold temporarily until the parties are heard from. I'm not even sure who the parties are exactly in that scenario. Is it X? Is it Google and Apple, who are being told to block things? Is it the VPN companies? I don't know, but it was sort of: okay, we're not going to do that part. The other part that raised a lot of eyebrows, and people got confused, they thought the part that he backed away from on Friday was this part, but it wasn't, was that there were potential fines for users if they used a VPN to access X. Not just for using a VPN; it had to be to access X. And the fines were effectively like $9,000 a day; it's 50,000 reais, and I'm not even sure how to pronounce the Brazilian currency, so I apologize for that. But that is also not unprecedented. A few years ago, bringing back Telegram, Brazil blocked Telegram for a few days, and they had a 100,000-reais fine, so basically twice as much, for anyone who used a VPN to access it. Though that block only lasted a few days, and nobody ever enforced the VPN part. But a lot of people were kind of up in arms about the VPN part, which I think is legitimate to be concerned about. Then, on Monday, a larger panel of the Supreme Court, I think it was five justices, reviewed the order and unanimously approved it. Moraes didn't quite walk back the fines for users of VPNs, but basically said something to the effect that they would only be enforced against egregious uses of a VPN.
I do know that a bunch of generally right-wing Bolsonaro supporters, including a few senators in Brazil, have been very clearly using a VPN, posting "we are using a VPN," to sort of provoke and see what happens. We'll see what happens there. There was one justice on the panel who had proposed modifying the order to say that you could only be fined if you used a VPN to get on X and then posted racist, fascist, or other criminal content, but that wasn't approved by the other justices. There is now a push by some more right-leaning justices on the Supreme Court for the entire court to rehear this, to see whether or not they think it's legal. And so then most of the ISPs did what they were told and started blocking X; it became much harder to get to X very quickly. The one that held out was SpaceX, which has Starlink. Starlink is used, not hugely used, but it has a fair number of customers in Brazil. Famously, when Starlink launched in Brazil, Elon Musk had flown there, did a whole thing with Bolsonaro, and said he was providing 19,000 Starlink terminals to go to schools around the country. It turns out that never happened. A reporter in Brazil revealed that it was announced but never happened, even though on, I forget, Monday or Tuesday, Elon reposted a tweet that somebody posted saying: look how great Elon was for Brazil, he provided 19,000 Starlink terminals. Which apparently just didn't happen, but he's happy to take credit for it, because that's...
Ben Whitelaw:all the time. Take credit for stuff that I didn't actually do.
Mike Masnick:but, you know, so at first they announced publicly that they weren't going to block and Ilan made some noise about like, sort of come and get us. He also declared that the U. S. should seize, the, Brazilian president's plane because they had just seized the Venezuelan president's plane. And he said like, we're going to get them to do that as retribution for seizing SpaceX assets, which, you know, I mean, maybe if Donald Trump wins, then like, he'll take orders from us. So who knows, but like, there's all sorts of craziness. And then just as that was coming to a head, SpaceX announced that yes, they were actually going to block X and we're implementing it though. They were doing so under protest. What someone else pointed out was that they could have protested. Under the law, they could have filed a legal protest and they missed the deadline to do that, which is, you know, when you fire all of your lawyers and all of your people who know how this works, that's something that might happen. Um, and right now it's just, you know, it's still mostly blocked. People are getting around it with VPNs and then sort of the Brazilian micro blogging, universe, which, you know, It was a very large contingent of people, I've seen different numbers for how big they were on, on X, anywhere from 20 million to 40 million, but still a significant number of users uh, a lot of them have moved on and there has been a big influx and the reporting is a huge influx to threads and to blue sky. And here's where we. Give a disclaimer. I am on the board of BlueSky. I may be biased in anything I say here, but please be aware of that. BlueSky had a huge leap. They've gained, last I checked, it was like 2. 3, 2. 4 million new users. I think 85 percent of them were from Brazil. Portuguese has become a very big language on the platform over the last week has been a massive influx. and so it's been interesting to see how that's gone. The president of Brazil, Lula has set up a blue sky account and I will note set up a, the, the. Personal domain, you know, this is the way blue sky works for verification that if you have a control of a domain, that you can use that to verify that you're really real. we also saw the Supreme court of, Brazil set up their own with their own domain as well on blue sky. And so it's interesting to see the sort of, diaspora aspect of what happens when X gets blocked in that
Ben Whitelaw:And then the Starlink thing is interesting, right? Because Musk made a whole bunch of noise, said that he wasn't going to block X, and then did a U-turn,
Mike Masnick:Well,
Ben Whitelaw:and that's to do with the fact that the Supreme Court blocked Starlink's assets, right? We talked about it in last week's...
Mike Masnick:Yeah, it's a little unclear. Moraes had ordered that Starlink assets be seized, and Musk had responded to that by saying everyone in Brazil gets free Starlink, well, that was happening because they couldn't take payments. So that had something to do with it. And then apparently they were seizing equipment; there was also some report that someone in Brazil shared with me that there was an order to even maybe start seizing the gateways, the thing that you put up if you have a Starlink. Like they were going to go around, I don't know, home to home, seizing the, the, uh...
Ben Whitelaw:putting them in a giant bag.
Mike Masnick:Yes, seizing the, the satellite receivers. And I don't think Musk ever commented on the fact that SpaceX announced that they were going to comply with the law. My sense is that Musk just did what Musk does, which is speak, and then some cooler heads at SpaceX in the compliance and legal departments were like: you know, I think we kind of have to comply with this, otherwise a lot of really bad stuff is going to happen. And so they agreed to it. But as of right now, it's still blocked, and people don't really know what's going to happen. It's a big open question.
Ben Whitelaw:Yeah. And all the while, as you might expect from Musk, he has been putting out court documents under the handle Alexandre Files. He's made a big noise, shared a whole bunch of files and legal documents, claiming that there is no transparency from the court and that people are being denied recourse to appeal. So there's a whole new Musk psychodrama that's been unfolding this week, and it'll probably continue over the coming weeks, right?
Mike Masnick:Yeah, the one thing I'll say is that some of the stuff he's revealed in the Alexandre Files does look concerning to me, just in understanding how things are done. But it's one-sided, and as we've seen with the Twitter Files, I don't know that we can trust what we are seeing. They're not showing actual court documents; he's showing sort of their interpretation of the court documents, saying that certain people had to be blocked, without it being revealed which tweets violated what law. So there's no transparency there, and that is concerning, including potentially having to take down content from a sitting senator in the Brazilian legislature. So I understand why that's concerning, but again, we don't have the details, and so it is a little tough to say.
Ben Whitelaw:Yeah. Okay. So two states this week kind of winning out over the platforms, although maybe not winning out in terms of the effect on users in the long run, which is, I guess, really what we've got to keep an eye on. We're going to do a couple of additional smaller stories, Mike, as we often try to do. What else do you want to flag this week?
Mike Masnick:Quickly, I think this is an important case. It might not seem that way for people specifically in the trust and safety and online speech space, but the Internet Archive lost a copyright case in the Second Circuit this week around its Open Library. This was a project where they scanned books and would let you take them out. A lot of people looked at this and had a sort of knee-jerk reaction that clearly this must be copyright infringement, and the court said that it is. I think the argument is a little bit more complex. I think the court actually got this wrong, but I'm not the court, and the court is the law. What the Archive was trying to do, and they had it very clearly laid out, was treat it very much like a library. They would get books, either by purchasing them or by donation; they would scan the books, make a digital PDF, kind of an ugly one, not a particularly useful one; and then they would lend it out. They would put the physical books into storage, but they would only lend out on a one-to-one basis. So it was effectively no different than a library: they had purchased the books themselves, they had possession of the books, they could have lent them out to people locally, but they were making them available to people online. And it was very useful for lots of people, especially for researchers who just couldn't get access to books otherwise. It's a huge library, and it gave access to a bunch of people. The publishers got upset and sued, and the Archive lost at the lower court, and now, on appeal, they've lost as well, with the court basically saying it wasn't fair use. And there's something to be said, which often gets lost in these questions about online speech, about how much copyright actually is one of the biggest regulations of online speech; people often don't think about it as a speech regulation, and yet it is. One of the uses that came up in the case, which I thought was relevant and made some of this point, and which the court totally did not care about, is that the Open Library allows people, when they're writing online, to link to books, or to link to a page in a book, just like you would link to another website. That's a really useful feature that is now done away with. It was presented to the court as an example of how this is useful and transformative in ways that regular licensed e-books are not, and the court just dismissed it out of hand. I think it's an example of where copyright really can come in and hinder open internet speech. I don't want to go too deeply; there's a lot more to that case. I was really concerned about some of the language in the ruling, where the three-judge panel basically said: well, we're ruling this way for the public benefit, because if people could borrow books for free, authors would no longer want to write. And it's like, do they not know the centuries-long history of libraries? How do you ignore that? But we don't need to get into it here. It is worth remembering how copyright intersects with all this, and that many of the legal disputes we're dealing with today are echoes of copyright fights from one to three decades ago. So, just noting that as a major case in the Second Circuit. I do expect that there will be an appeal.
I don't know if it'll go to the Supreme Court, and if it does, I don't trust this particular Supreme Court to get it right. So I'm not happy with how that ruling came out, but it's one worth paying attention to.
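(Another aside for the show notes: to make that one-to-one lending model concrete, here's a purely illustrative sketch of controlled digital lending's owned-to-loaned ratio, the mechanism Mike describes. All names here are made up for illustration; this is not the Internet Archive's actual system.)

```python
# Hypothetical sketch of controlled digital lending: digital checkouts are
# capped at the number of physical copies held in storage, so the scans
# circulate the way a physical library's copies would.
from dataclasses import dataclass, field

@dataclass
class Title:
    owned_copies: int  # physical copies purchased or donated, then warehoused
    checked_out: set[str] = field(default_factory=set)  # user IDs with active loans

class LendingLibrary:
    def __init__(self) -> None:
        self.catalog: dict[str, Title] = {}

    def add_title(self, isbn: str, owned_copies: int) -> None:
        self.catalog[isbn] = Title(owned_copies)

    def borrow(self, isbn: str, user_id: str) -> bool:
        """Lend the scan only while an owned copy is idle: at most one
        digital loan per physical copy, the one-to-one ratio."""
        title = self.catalog[isbn]
        if len(title.checked_out) >= title.owned_copies:
            return False  # every owned copy is "out"; wait for a return
        title.checked_out.add(user_id)
        return True

    def return_title(self, isbn: str, user_id: str) -> None:
        self.catalog[isbn].checked_out.discard(user_id)

```

The legal fight turns on whether that cap makes the scan a fair-use substitute for the physical copy; the sketch just shows how literally the model mirrors physical lending.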
Ben Whitelaw:Yeah. Okay. Well, one to keep an eye on and do some more reading on; we'll include a couple of links to that in today's show notes as well. And just finally, an interesting story that I noted this week: a pretty hefty policy decision out of the Oversight Board, the independent but Meta-funded group of experts who rule on content moderation decisions. As many of you will know, this is a decision it made this week on the phrase "from the river to the sea," which is something that many of us have come across in our doings online since the October 7th attacks last year. Almost a year on, it's good to see that the Oversight Board has made a judgment on this. What they've done is review three cases involving the use of the phrase, and the board found that it doesn't believe the phrase breaks any of Meta's rules on hate speech, violence and incitement, or its dangerous organizations and individuals policy. This is obviously really notable because the conflict is ongoing. It's a very emotive topic, it's something where people obviously have strong perspectives, and by no means is the Oversight Board the be-all and end-all when it comes to judging on this, but it's really important, I think, for us to note and cover it. They say that the content they reviewed contains what they call contextual signs of solidarity with Palestinians, but no language calling for violence or exclusion, and no glorification of or reference to Hamas, which is one of the organizations banned under Meta's DOI policy. And as such, in and of itself, as a standalone phrase, "from the river to the sea" is okay to use. Obviously, the use of the phrase in conjunction with other words that violate these policies, the Oversight Board is very clear to say, would not be acceptable. But it's notable that in and of itself, it's okay. This comes on the back of a long, quite checkered history that Meta has with moderating speech related to the conflict. The Sheikh Jarrah protests, which we've talked about on the podcast in the past, triggered an independent review, which found that Meta was not sufficiently well placed on its platforms to moderate Arabic speech, and that has not really changed a great deal over the last few years. So it is interesting, again, that this is the way it's landed. There's more to dig into here; we haven't got time today to really get into this particular ruling, but it's an interesting thing for the Oversight Board to have come back and said.
Mike Masnick:Yeah, the one thing I'll say about it is that it's a really good, very fraught representation of the impossibility of content moderation on these issues. Because there are people who hear that phrase and say there's no way that it is not violent, because the only way you could read it is that you're trying to wipe out the Jewish people of Israel. So it's one of these things where the context of the speech is very, very important, and people hear it differently based on where they are and who they are and what their situation is. And there's a lot of speech like that; this is just a very clear example of it, where again, it feels easy to say: take down the bad stuff, leave up the good stuff. But this is very good to some people and very bad to some other people, and it very much depends on their perspective. There's no right answer that says this speech is always okay, or it never is. And what the Oversight Board is doing here is trying to walk a balance that is itself impossible, because no matter what they say, there are many people who are still going to say that phrase is absolutely violent, and there are many people who are going to say it is not violent at all, it is a message of solidarity. So, to me, it's just a really great representation of the impossibility of having to moderate speech like this.
Ben Whitelaw:Yeah, and that is with a group of people who are experts in the field, making their best judgment with the evidence they have. That's probably a useful note to end on, Mike: the impossibility of moderation and the difficulties of online speech. Thanks as ever for your time and for bringing those stories today. I really enjoyed talking through what will forever be known in my head as the pencil ruling. And I'm really glad to have talked through some of the issues in South Korea as well, which I think we will come back to in forthcoming weeks. I appreciate everyone joining us today. Thanks as ever for your time, for listening, and for your reviews and ratings on our platforms. If you liked the podcast, if you enjoyed today, don't forget to spend a bit of time rating us; it really does help us get recommended and found on the platforms. And we'll speak to you next week. Thanks all for joining.
Mike Masnick:Thanks.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.