Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
ChatGPT Told Us Not to Say This, but YOLO
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Daphne Keller, the Director of the Program on Platform Regulation at Stanford's Cyber Policy Center. They cover:
- Ninth Circuit Rules in Favor of NetChoice Over California’s Age Appropriate Design Code (Ninth Circuit)
- Governor Newsom and Attorney General Bonta on appellate court decision regarding California’s Age-Appropriate Design Code Act (State of California)
- Regulating platform risk and design: ChatGPT says the quiet part out loud (Stanford Cyberlaw blog)
- X says it is closing operations in Brazil due to judge’s content orders (CNBC)
- Indigenous creators are clashing with YouTube’s and Instagram’s sensitive content bans (Rest of World)
- Teens are making thousands by debating Trump vs. Harris on TikTok (Rest of World)
- Forced to rethink this Patreon (Chris Klimas Patreon)
- Ninth Circuit Allows Case Against YOLO Technologies to Proceed (Ninth Circuit)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
So Daphne, we start off each week, with a prompt from social media, and I know that you are a big LinkedIn user, and one of the prompts that they use there, which is a little weird, but I'm going to run past you, is try writing with AI. So would you like to do that this week?
Daphne Keller:Why, yes, as it happens, I did try writing with AI this week and I got some really interesting answers from ChatGPT about what some laws like KOSA in the Senate and the Digital Services Act in Europe actually require. So I'm pretty into writing with AI this week.
Mike Masnick:All right, well, let's explore that. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. I am Mike Masnick, the founder and editor of Techdirt. And as we mentioned at the end of last week, Ben is off this week and instead we are joined by Daphne Keller, the Director of the Program on Platform Regulation at Stanford's Cyber Policy Center, and also former Associate General Counsel at Google. So I am hoping that we can explore some platform regulation issues this week, since you are the expert.
Daphne Keller:And there are so many platform regulation issues this week.
Mike Masnick:Yes, in prepping for this, we were having trouble cutting down on stories because there are so many, and we're going to bounce around the world a little bit. But welcome to the podcast. Thank you for stepping into Ben's shoes. You don't have an English accent, but I think your expertise will fill in where we are lacking the British accent.
Daphne Keller:I can't even fake one. Sorry.
Mike Masnick:Uh, so I want to start with one that we mentioned right at the end of last week's podcast, because it came out while we were recording, which was the Ninth Circuit's ruling in NetChoice versus Bonta, or is it Bonta versus NetChoice? I always get confused which direction those go in. I guess they usually change at the Supreme Court, so I think it was still NetChoice versus Bonta. And that was looking over California's Age-Appropriate Design Code. And I had spent a lot of time writing about and warning about this law, and California politicians ignored me, as they are known to do. And then NetChoice challenged the law, and it was thrown out by the district court, and then it went to the Ninth Circuit on appeal. And so, last week at this time, the Ninth Circuit came out with their decision. And it's very interesting, and I think it could have pretty wide-ranging impact. But Daphne, do you want to sort of summarize what the Ninth Circuit said, and kind of get into what it means?
Daphne Keller:Sure. So first, thank you for being such a canary in the coal mine on this one. You and Eric Goldman were pointing out the problems with this law way back. Also, to contextualize, this is one of almost half a dozen cases going on around the country, almost all of which were brought by the trade association NetChoice, that are challenging child safety laws passed by states. And most of these laws have, I think, some pretty obvious constitutional flaws. But NetChoice v. Bonta, which is the California case, is the first one to have reached a court of appeals. So this is a really important ruling and hopefully it will affect the other cases. So the law, the California Age-Appropriate Design Code, is this big sprawling thing with a lot of pieces, and some of the pieces might be totally fine and innocuous. They look to me like normal privacy requirements to, you know, not collect more data than you need, or have a disclosure about your privacy policy, that kind of thing. But then mixed together with that are a whole lot of things that raise complicated constitutional questions, and then something that I think raises an uncomplicated constitutional question, which is what the court ruled on. So the law says that platforms that might be accessed by children have to make Data Protection Impact Assessments, or DPIAs, and the DPIAs are supposed to assess anything the platform does that might create a risk of harm to children, including risk of things that are detrimental to their physical health or mental health or well-being. And then after writing up this assessment of the risks to children, the platform also has to create a timed plan to mitigate or eliminate those risks. So it's platforms talking about the kinds of risks that are created by online content, like content that makes kids feel bad about themselves, or that scares them, or that encourages eating disorders. This is bad stuff, but it's definitely about content, and it's definitely about content posted by users. And it seems to me that it's also about content that the state doesn't have the authority to regulate, even though it's bad. It's about lawful but awful content. And so a law requiring platforms to assess and mitigate the harms of that kind of speech seems to me obviously unconstitutional, and the Ninth Circuit agreed. In this case, they said that this was subject to strict scrutiny, and they said that it, quote, deputizes private actors into censoring speech based on its content, unquote. They were quite firm in saying that this part of the law is unconstitutional, and they were not at all fooled or distracted by this formulation the lawmakers tried to make about how, well, we're just asking platforms to assess risks that arise from their data management practices. Well, I mean, that means risks that arise from everything about platform operation, because everything platforms do involves data management. So, you know, the poor lawyer for the AG's office, for the California AG's office, defending the law before the Ninth Circuit just had a terrible time, because they kept pressing her on how can this not be a law regulating content? And she really didn't have an answer. So the ruling focuses on that, says that's unconstitutional, and it remands a bunch of other pieces for reconsideration.
But the most important part of the ruling, I think, is super straightforward and should tell us a lot about the constitutionality of things like the KOSA draft law in the Senate or a lot of these other state laws.
Mike Masnick:Yeah, and I think that's important because there's this sort of dance that a lot of regulators and lawmakers have been doing over the last couple of years, in which they are trying to frame laws that are clearly intended to regulate content and speech, but they frame it as, this is just about data management or privacy or process, you know, we just want best practices or things like that. But the underlying issue always goes right back to the fact that what you really want is to regulate content. You just know you can't say that directly. And I think the nice thing here is the lower court, I think, saw this as well, and then the Ninth Circuit says, like, no, come on. The only way that you can really enforce this is to take down certain kinds of content.
Daphne Keller:So the lower court, I think, said some things that alarmed privacy advocates for good reason, and I'm happy that the Ninth Circuit did not replicate the lower court's work in this respect. Because the lower court kind of went down this road of saying, oh, well, maybe all data is information and all data is speech, and therefore a law that regulates how you use data, like, you know, private information you have in your logs from users' usage practices, maybe that violates the First Amendment, invoking a Supreme Court case called Sorrell. And so I think there's a reason that so many amicus briefs came in from academics and from privacy organizations saying, no, hey, wait a minute, you know, the basic privacy regulation parts of this are okay. And that's pretty much what the briefs from the ACLU and CDT and EFF said also: look, privacy regulation is okay. But a big chunk of this law is something else. It's obviously speech regulation and it's unconstitutional.
Mike Masnick:Yeah. I mean, it was funny because I actually attended the oral arguments at the district court. I watched the ones for the Ninth Circuit, but at the district court, the judge had just recently been reversed by the Ninth Circuit. And she said a few times during the oral arguments, I don't want to be reversed.
Daphne Keller:Oh,
Mike Masnick:And so, like, the Sorrell part of it, she was really focused on Sorrell, and that was a whole case around, like, medical information and whether or not it could be regulated in terms of how that was shared, and stuff like that. We don't need to go down that path, but I think she was really influenced by that case. So it is interesting that she was sort of partially reversed here, not, you know, remanded back to the court. But part of that also is, this is coming off of the other NetChoice Supreme Court cases, which people now refer to as Moody, which was the Florida and Texas laws, where the Supreme Court sent those cases back, but basically did so because they said NetChoice brought these as a facial challenge, and there are different requirements for a facial challenge, which were not met in those cases. And the Ninth Circuit kind of does the same thing here. Is that a fair statement?
Daphne Keller:Right, I think this case is very much influenced by Moody, by that part of Moody that said, when the state passes a big sprawling law, you have to look at all the little pieces of it. Maybe you have to look at all the pieces in isolation and look at how each of them might apply in different circumstances, because it might be that there are enough legitimate applications of, you know, some or all of the law, that it shouldn't be struck down in its entirety, that it can't be subject to a facial challenge, which is the kind of challenge that NetChoice brought both against the Texas and Florida laws and here in Bonta. And in oral arguments at the Ninth Circuit, the judges were really interested in just, like, the mechanics of how to do what Moody is telling them to do. Like, how many isolated questions do you have to assess separately? And I think they actually did a pretty nice job of kind of bucketing the pieces that obviously could be dealt with right away, struck down facially, and done, and the parts that needed further attention from the lower court. But on the other hand, because this law has so many different pieces in it, like, there's an implicit age verification requirement, there are some disclosure requirements that are a lot like the transparency mandates that are being litigated elsewhere in the country, there's just a lot there. And it's expensive for NetChoice, but also for the state, to have to litigate all of those different things. Like, we should not be grateful to our legislators for passing a big, messy, confusing law that has so many different constitutional questions embedded in it.
Mike Masnick:Yeah. It struck me that also the judge in the district court had asked both NetChoice and California to focus on the DPIA requirement, this, you know, explaining each of your features and how you're going to mitigate any potential harm. And there had been a discussion at the district court on severability. Like, could we cut out this part of the law and say that goes away, but the rest of it can go into effect, which is slightly different than the issues with the facial challenge, as I understand it. But that seems to get really deep into the legal weeds that I have no interest in ever learning about.
Daphne Keller:Yeah, one of the questions is federal and the other one is state.
Mike Masnick:Oh God. Okay. Let's stop before we go any further on this, because I don't want to know. But there are these questions. And NetChoice argued that the whole law is sort of tied up in the DPIA stuff. I think you're right, there are some of these other things. I mean, the age verification stuff really just didn't come up that much in the lower court. But I kind of feel that, on remand, that might also get thrown out on First Amendment grounds once it's actually briefed and once it's actually discussed.
Daphne Keller:But there's an age verification case before the Supreme Court right now, the FSC case about Texas's age verification law for porn sites, or sites with more than a third pornography. So if I were a lower court,
Mike Masnick:Right.
Daphne Keller:I would wait and see what the Supreme Court says there.
Mike Masnick:Yes. Well, we just had, in the Seventh Circuit, that Indiana had a similar law to Texas's law, and NetChoice again, no, not NetChoice, uh, FSC, the Free Speech Coalition, had challenged that law, and the lower court had thrown it out as obviously unconstitutional because of the age verification stuff. And then the Seventh Circuit just said, well, the Supreme Court is looking over the Texas law, and while they're looking it over, they did not put an injunction on the law, so the Texas law went into effect. And in fact, Paxton is already suing using that law.
Daphne Keller:Yes, and the Seventh Circuit's like, yeah, okay, we'll let ours come into effect.
Mike Masnick:Yes, yes. Yeah. Like, all right.
Daphne Keller:that one, I think?
Mike Masnick:Yeah, that's the Indiana one.
Daphne Keller:And so Pornhub, I think, pulled out of Texas completely when the law came into effect, and I assume they'll now do the same.
Mike Masnick:They've, I think they're now pulled out of 17 states. I mean, it's just a block, where if they recognize your IP address as being in one of the 17 states that has passed a similar law, you'll get blocked from accessing it.
Daphne Keller:yeah. And one of their arguments is, if we're not there, people are definitely still going to find porn. It's just going to be from offshore sites that are even worse. Uh, which, they're not wrong.
Mike Masnick:Yeah. I mean, there's a whole, yeah, issue to go down there, where it's like, yes, every other porn site tends to not pay attention to these laws, and that's in part because they have nothing in the U.S. that could be gone after. We're getting a little further away, but yes, all of these things are connected, and it's kind of an interesting time because of that. Um, but so, the big part of the law, the DPIA requirement, the injunction stays in place, so that's gone, and the rest of it gets sent back to the lower court. One thing that frustrated me, and I wrote about this this morning, was that California Governor Gavin Newsom and Attorney General Rob Bonta put out a statement claiming victory out of this. And I found it remarkably similar to when the Moody decision came out in the Supreme Court and Ashley Moody, who is the Moody in the Moody case, declared victory, because, she said, you know, it threw out the Eleventh Circuit ruling, so we won. Which is not how any normal human being would read the Supreme Court's ruling in that, which basically said the law is almost clearly unconstitutional, there are all sorts of free speech problems with it, but the lower courts just did it wrong because they didn't do it as a full facial challenge, as we sort of discussed. And so to see Newsom and Bonta basically do the same thing, they declare victory because the case gets sent back, and they completely ignore the fact that on the key part of this law, the DPIA section, the court was like, this is clearly unconstitutional, this is clearly an attack on free speech. And so having politicians and attorneys general claiming victory over cases they clearly lost frustrates me. It feels like either they don't understand it, which can't be true, or they assume the public is too stupid to be told the truth. And I don't think that's a good look for anyone. And mimicking the way Florida does anything is probably a bad thing for California to be doing. Uh, and so that was frustrating to me.
Daphne Keller:I agree, it's very frustrating. And it's just so dishonest and keeps us from having a real conversation about these laws. And it's almost like this echo of how dishonest the laws in the first place are, you know, where there's this sort of pretense of regulating privacy as a feint in order to create content restriction rules. You know, we'd be much better off if we could just speak clearly about what rules we're trying to create. Um, but that's not what's going on.
Mike Masnick:Yes, but that sets me up perfectly to segue to the next thing that we wanted to cover, which was something that you did this week with ChatGPT to explore some of these laws and figure out which parts are hidden and which are not. And so this is something that you wrote up on the Stanford Cyberlaw blog, around ChatGPT saying the quiet part out loud. So do you want to describe what it was that you found and what you did?
Daphne Keller:Sure. So this was so, like, fun and exciting for me. I was really surprised by the answers I was getting from ChatGPT, and I wound up, you know, staying up late playing with it. So I started with a version of ChatGPT that was put together by someone who was at a trust and safety vendor, called the Trust and Safety Regulation Expert ChatGPT. And what I think that means is that in addition to having whatever the background training materials are that ChatGPT has, there was also additional information entered as prompts to sort of set up this one to have more or better answers than regular ChatGPT. So I asked it questions like, what must I do in order to comply with the California Age-Appropriate Design Code, the draft of KOSA that's in the Senate right now, Europe's Digital Services Act? And the answers should be, if you listen to the proponents of all of those laws, the answers should be like, build good systems and risk mitigation practices, you know, things that are kind of hand-wavy. But this is an instance of ChatGPT that's trying to tell you how to do compliance. It's aimed at the people doing the work, and they need something more concrete than that. And it turns out that something more concrete than that is ChatGPT saying, take down the following categories of content. And so, for the U.S. ones, it lists sexual content, which is the one kind of content that the Supreme Court has consistently said can be legally restricted for children, and you can have reasonable laws trying to achieve that, and then we can debate what a reasonable law is. But then it also lists violent content, which we know from the violent video games case is something California has failed to restrict from children before. Dangerous challenges, cyberbullying, self-harm and suicide, eating disorders content, hate speech, also something the Supreme Court has recently emphasized is generally constitutionally protected speech. So it lists these things that, like, yeah, that's terrible content. You know, like, I don't particularly want kids seeing that stuff, or at least the bad version of that stuff. Of course, if you read the list literally, it would exclude all kinds of educational material, classic literature, things that are taught in high school, comic strips, you know, like, lots and lots of things. But what was striking is just that it is a literal list of legal kinds of speech that ChatGPT thinks you have to take down, or you have to stop kids from seeing, in order to be compliant with these laws. And that's exactly what the AG's office in Bonta was trying to insist the law didn't mean. And, you know, ChatGPT, I think, arrived at the same answers that anyone trying to comply with this law would probably arrive at. This isn't some kind of hallucination. This is just being realistic about what these laws actually require, and probably reflecting what a bunch of, uh, online coverage or recommendations of best practices for vendors or, you know, whatever, say you're supposed to do to comply. So I think it really is saying the quiet part out loud, and it's refreshingly blunt and straightforward.
Mike Masnick:Yeah. And this is, you know, one of the things that comes up in these discussions: how would you comply with these laws? And as much as the politicians and regulators say that it wouldn't require removal of content, the reality is that a lot of companies are going to read these laws this way, in part just to avoid even having to litigate it, right? The problem is that so many of these things are vague and vaguely written, and you don't want to have to be in court defending this, because that would be crazy expensive. And the way to avoid that is just to remove all this content. And then you get to this point where, effectively, the law is requiring you to take down protected speech. And that's what the Ninth Circuit saw. It was interesting to me also, because you did sort of two separate questions on this. You talked about the AADC and KOSA, but you also did it with Europe and the DSA, where, as everyone in Europe always reminds me every time I talk about European regulation of the internet, there is no First Amendment. And they have different standards, but there are lots of European officials who have insisted to my face at times that the DSA is not intended to regulate speech and is not intended to require sites to take down speech. Yet, as we have talked about, Thierry Breton, who is, uh, effectively in charge of the DSA, seems to think otherwise, and ChatGPT seems to agree with him?
Daphne Keller:Yep. Yeah. So in Europe's defense, you know, they have human rights instruments that protect free expression. They just draw the line between what's protected and what can be prohibited under law in a different place than the U.S. would. And so I think the fundamental question is identical in Europe versus the U.S. It's, for whatever that slice of speech is that's lawful but awful, you know, that's not prohibited by law but that lawmakers wish they could prohibit, or that, you know, most people find offensive or harmful or morally abhorrent, for that slice of content, what can laws get away with doing? And the part of the DSA that I think comes closest to being like the AADC or KOSA, and being like, oh, here's a sneaky way of getting platforms to take down lawful but harmful content, that part is the risk assessment and risk mitigation rules for the very large online platforms in Articles 34 and 35 of the DSA. Those say, you know, you have to do an assessment of the risks that your platform poses to this whole list of really important societal values, safety of children, privacy, free expression, violence, civil unrest, you know, et cetera, and having assessed the risk of everything that might go wrong in a society, then you say how you're going to mitigate that risk. And is the answer that you as a platform have to mitigate that risk by taking down lawful speech? Well, Thierry Breton seems to think so. Um, I was recently at a multi-day event in Brussels about this specific issue, like, just these articles of the DSA, and a bunch of experts there were like, no, actually, obviously, they didn't even think it was a question, obviously you're supposed to take down this harmful but lawful speech. But that is absolutely not what the Commission was saying the law was supposed to mean when it was being drafted. I have a really strong quote on that in my blog post of how they were framing it, and I think sincerely framing it. I think they truly wanted this to be a law that is about systems, but that didn't give government the authority to push for removal of specific kinds of harmful but lawful speech. Well, turns out ChatGPT, like Commissioner Breton, uh, thinks that is what the DSA requires. And so, you know, it told me to remove, quote unquote, harmful speech, which has sort of been the buzzword for this lawful but awful category in the DSA discussion. And it also told me to do something that crosses an enormous, important, bright line in EU law, which is, it's been a legislative rule for a very long time that governments can't compel platforms to do, quote unquote, general monitoring of users' communications. You can't make them go out and search for every time a user says something that violates the law.
Mike Masnick:hmm.
Daphne Keller:And the highest court of the EU has said repeatedly, this is not just a legislative rule. It also protects users' free expression rights from the over-removal that comes from filtering speech, and it protects their privacy and data protection rights. This is a really important rule. You can't make platforms engage in general monitoring. Now, there are some holes in that rule, which we can talk about another time. But ChatGPT just blows right past that and says, go out, do proactive monitoring, you know, use the tools vendors want to offer you, um, find the bad stuff and take it down proactively. So it's really going against, it's crossing two really important red lines. One is telling platforms to take down lawful but harmful speech, and the other is telling them to do general monitoring.
Mike Masnick:Yeah, and there is a response that some people can have to this, which is like, well, this is ChatGPT, it's not a lawyer, ChatGPT is prone to hallucinations. But I think the important point here is that this is a fairly standard way that these laws will all be, and in some cases already are, being interpreted. And the fact that ChatGPT always seemed to come to that same conclusion with all of these laws, and these are the ways that people warned the laws could be interpreted. And, you know, this sort of gets left out of the debate, but I think it is an important point, which is that companies are liability-averse, right? They don't want to risk this stuff. And so if you give them something where they might face liability, and you give them an easy out to avoid that liability, they are probably going to take it. We had a post, uh, about a month ago about KOSA in particular by Matt Lane, who was talking about, like, the laziness of companies in terms of how they will deal with these laws. And it is like, they don't want to fight it out, so they're just going to remove the content, because that is the easiest and that avoids even having a lot of these questions.
Daphne Keller:Absolutely. And there are so many shortcuts like that. You know, for example, if you are a company and you don't want to deal with the slight differences in hate speech law or privacy law between France versus Sweden versus Romania, you just expand your terms of service to prohibit all of it.
Mike Masnick:Right.
Daphne Keller:And then that's, you know, much cleaner and easier. And that's the same kind of shortcut that we should expect in the face of an ambiguous legal mandate. I also think the biggest platforms, the very biggest ones, have lawyers who can parse this carefully and who know what the Commission has been saying this means, et cetera. Anyone else looking for advice on this, they're going to look at whatever the secondary literature is on the web, and that's the same secondary literature I think ChatGPT must have mostly trained on, and it says these sloppy things about taking down harmful speech.
Mike Masnick:Yeah. And just to note, because I think there might be some people listening to this who are like, well, what's the problem here? If we're talking about harmful content, why not just remove it? And I think I'll just point out that that turns out to be a lot trickier of a problem than people make it out to be. Because what counts as harmful content, and harmful to whom, and harmful how? And do you include content that might be useful LGBTQ content, that some people consider harmful and some people consider life-saving? Determining these things is really difficult. I talk about the eating disorder content issue quite a bit, because that comes up as one of the examples of harmful content that people talk about a lot. And yet, many of the studies have found that when that content is removed from platforms, it actually makes the problem worse, not better, because people continue to seek it out, and they tend to seek it out in deeper and darker places that have fewer protections and, importantly, fewer mitigations. When that content is found on platforms like Instagram or TikTok, they will often include some sort of mitigating content, content, you know, warning people or pushing them towards resources for recovery, whereas the deeper, darker places of the web don't do that. And so I always like to raise that, because sometimes you get into this discussion and some people, I know, are listening saying, like, but no, that's bad content, it shouldn't be allowed at all. And I just want to remind people that it is often trickier than people think it is.
Daphne Keller:It's tricky, and it's so inextricable from this sort of culture wars backdrop. You know, these are laws that have a vague concept of child safety, and then someone has to be the enforcer to decide what content is or is not safe for children. And so the concerns that LGBTQ groups have been raising, I think, are incredibly compelling, about conservative AGs saying what child safety means, or conservative FTCs under a Trump administration saying, you know, what's harmful to children is information about reproductive choice or information about transgender identity. And Marsha Blackburn very much said the quiet part out loud on this, saying that we need to protect children from, quote unquote, the transgender. And so the, you know, parts of the left that are paying attention have been rightly very alarmed by these laws. But recently we've seen parts of the right appropriately get alarmed in, you know, the exact same way. We have Rick Santorum and pro-life organizations saying, wait, what if a liberal attorney general or enforcer wants to override our values and preferences about what our children should see? And yeah, they should worry about that.
Mike Masnick:Yeah.
Daphne Keller:I'll say one more thing about the speech at issue, moving away from the kind of fraught culture war questions. There's this big, beautiful brief that I just love, filed in the lower court as an amicus brief in Bonta by the New York Times and the Student Press Law Center. I link to it in my blog post. And the second half of the brief is just, like, moving stories about student activism and journalism, and how important it is for teenagers, especially, to, like, see the upsetting things that are going on in the world in the news and not be protected from that. And I just think it's such an important point, and it's so impossible to isolate the good and productive upsetting information in the world that kids need to see from whatever it is that these laws will have platforms taking down.
Mike Masnick:Yeah. And I'll use that as a transition point to our next story, which is also somewhat related to all of this, which is that it was announced in the past week that X, the platform formerly known as Twitter, was shutting down all of its operations in Brazil, in response to an ongoing battle that is happening in Brazil that started earlier this year, where there is a Brazilian judge, and I'm going to try and fail to pronounce the name properly, Alexandre de Moraes, something along those lines. I apologize if I got it wrong, um,
Daphne Keller:But I think he's on the Supreme Court and maybe is the equivalent of the Chief Justice. Like, this is a very powerful person.
Mike Masnick:It's a very powerful person who had ordered Twitter, I'm just going to call it Twitter, to remove certain content earlier this year. And Elon Musk sort of went very public with it, opposing it and saying that he would not do that, and then there were threats that were passed back and forth, and then, like two days later, very quietly, Twitter said that it would enforce those orders and it would pull down that content. And now this is months later, and apparently that content wasn't removed, even though they said it was. And so there were new threats, and there were threats about daily fines and potentially arrests of X employees in Brazil. And so Elon just said, screw it, we're going to shut down all of our operations in Brazil so that there won't be someone there to arrest and fine. And I thought this was interesting. This is not unprecedented. We've had companies leave other countries in the past. Usually those have been authoritarian countries. You know, I know Google shut down its operations in Russia at one point, and going way back, at one point Yahoo shut down operations in China over similar concerns. There have been other examples of companies pulling out. This still struck me as kind of interesting, kind of newsworthy. I mean, the battle between Brazil and Elon struck me as interesting in a few ways. And I've spoken about this before, where Elon has claimed, though he doesn't live up to this, I mean, like lots of things that he claims, he doesn't live up to it, that he believed that free speech means that which the law allows, which is a very strange interpretation of free speech. Uh, Daphne just rolled her eyes, but I want to make sure that is on the record for the podcast. And so if that were true, if he believed it was that which the law allows, then he would obey the ruling in Brazil. And yet he is protesting it. I think he's right to protest it, because I think that some of what the judge in Brazil was ordering seems really questionable. It did strike me as interesting that he chooses to only fight, or appears to only fight, that which the law allows when he doesn't agree with that country's political leadership, whereas in other cases, such as in India and in Turkey, where he is much more aligned with them ideologically, he seems to have no problem taking down the content that is demanded of him. But here he's taking another step, not just fighting it that way, but actually shutting down operations. So did you have any thoughts on Musk making this decision?
Daphne Keller:I mean, it's interesting that it's happening in Brazil specifically, because Brazil does have a history of threatening to arrest, maybe actually arresting? I can't remember. Uh, the sort of in-country executives. Once it was from Google, but the more high-profile one was about WhatsApp. They wanted WhatsApp to turn over information about users to the police that WhatsApp didn't want to turn over. So that one was a fight about surveillance. This one is a fight about speech and what content gets taken down. But the sort of Brazil being the country that's willing to really threaten to put people in jail is a recurring pattern. And I would think, if this were old Twitter, I would think this was the biggest and most interesting story of the week. But what's going on in India? You know, like, I just can't imagine that there have not been situations in India that could, and maybe should, have led to the same kind of standoff. Um, and yet we don't see Musk and X picking fights with the Modi government in India. So, you know, I think there's a piece here that is about Musk's political preferences and which speech he values. There's a piece here surely that is about markets and commercial interests of both X and other countries. And so, tragically, this just feels like another front in the war of, well, Musk against other big important men. He does seem to like to have these public standoffs. Yeah, but you know, I feel terrible for people in Brazil who use Twitter and have relied on Twitter, and presumably if Twitter pulls out, you know, the service will still be available until Brazilian ISPs are ordered to block the site and the app, which seems like a pretty plausible thing that could happen if Twitter's refusing to comply with the law there. So, you know, as somebody who's seen Twitter go from being something incredibly valuable to me down to something that I reluctantly still use for my 30,000 followers, all of whom have bikinis in their profile pictures, you know, it's tragic to me to see that demise, and I think to all of us. But to have it just, like, poof, it's gone overnight for people in Brazil, that will be very unfortunate.
Mike Masnick:Yeah. You know, it strikes me as interesting too, because with Brazil, it feels like there's sort of a pendulum, where they push for and sometimes pass laws that are very supportive of open internet and civil liberties and civil rights kinds of concepts. And then they do things like this,
Daphne Keller:Yeah.
Mike Masnick:they're threatening to arrest executives for not pulling down speech.
Daphne Keller:Yeah. I mean, Brazil has some of the most sophisticated thinkers and advocates on intermediary liability for platforms in the world. Um, their law, the Marco Civil, is one of the most pro-free expression, pro-privacy, anti-surveillance laws out there, but it's under constant attack. There are constant threats to change it. Um, it is not necessarily being applied in lower courts. Like, the reality on the ground and the likely future in Brazil is not what it might be if they took advantage of, again, this just, like, tremendously rich body of expertise that they have in country.
Mike Masnick:Yeah. Well, that actually is a good segue to our next story, which is also about Brazil, but in a different context. And this was a story from Rest of World, which was talking about indigenous creators in Brazil who are having trouble with both YouTube and Instagram, in that they are often being moderated or banned due to, in theory, violating sensitive content or nudity rules or things of that nature. So do you want to talk a little bit about that story?
Daphne Keller:Yeah. So this is a pattern of indigenous content being taken down, as you say, probably for violating nudity rules. And it jumped out at me in part because, you know, you know that I follow so-called must-carry cases, the cases where somebody tries to force a platform to reinstate content. One of the earliest ones that I came across, probably, like, eight years ago now, was this exact same issue in Brazil, with an indigenous woman asserting a right under the constitution to, it's like protection of cultural identity or something like that, that she was taking to a human rights body to get an image, I think it's a topless woman in traditional attire in the Amazon, reinstated on Facebook. And that case kind of fizzled out, and I don't think anything came of it, but this is a longstanding issue, and it's an issue that's important, and it's an issue that has some analogs, I think, in the U.S. and around the world, in this question of, like, well, the government can't, shouldn't, I think, come along and tell platforms how to moderate speech online generally, but if the claim is this moderation is racially discriminatory, or is discriminatory in other ways that we do have other laws for, I actually think that's kind of, you know, different. There's plenty of precedent for the idea that a law that's there to combat discrimination can have some incidental effect on speech. And the classic example is, you can't have a job sign in the window that says only white people need apply. You know, there are some things that work differently when discrimination is at issue. And I'm by no means saying I think there's some valid discrimination claim that could be brought, but the ways this has come up in the U.S. so far, including some kind of, like, junky and poorly substantiated cases against YouTube by Black and LGBTQ+ content creators saying they'd been discriminated against, they just didn't tee up the question very well. You know, it's a question I think governments around the world need to be thinking about a little bit differently than the questions that are about, can you silence someone based on their advocacy or their political perspective?
Mike Masnick:Right, right. Yeah, the other thing that I thought was interesting about this particular story was that it discussed how indigenous creators are effectively self-censoring themselves. Because of these rules, they know that they can't show certain content, or they can't talk about certain content, and so they're changing the way they present themselves. And that is something that is interesting and worth thinking about, and one that isn't as widely discussed: how these rules change the way people actually act online. And seeing that struck me as very interesting.
Daphne Keller:Yeah, I mean, in a way it's analogous to the ways that, you know, sex workers have changed how they talk about themselves online. There's a whole body of sexual content where people are self-censoring in various ways, but it usually doesn't have this overlay with, and this is how we are perpetuating our cultural tradition that is gravely threatened by the modern world. You know, that is a whole different category of harm that's happening. Sad and troubling.
Mike Masnick:Yeah. So then I want to move on to, uh, other stories that I'm sort of going to merge together, because I think they're a little different but semi-related, and they have to do with people making money in different ways online, having to do with their content, which I actually think is interesting. One is also from Rest of World, which is a great source if you don't follow it, talking about teenagers who are making thousands of dollars by debating on TikTok. And TikTok has this feature, which I did not realize until I read this article, though it's been around for a little while apparently, where you can sort of livestream a debate for five minutes against one another. And a lot of people are using it for political fights. So there's like Trump versus Harris, where one person, you know, argues in favor of Trump and one in favor of Harris, and all sorts of other things. And the way it works is that there's a five-minute timer that counts down, and people watching effectively vote on which side they prefer. But the voting is giving virtual gifts, which have cash value. So the story is about kids making thousands of dollars a month by debating in these livestreams and making money from it. And this struck me as just fascinating for a variety of reasons. But the sort of trust and safety element of it all struck me as, like, if I were a trust and safety person and someone showed me this feature, I would freak out, because it raises, like, every alarm bell related to trust and safety. There's livestreaming, there is political content, fighting, arguing, there is sort of just, like, disinformation, there is the monetary component, because as soon as you get money involved in any of this, then it leads to questions of fraud and other issues. And you have people donating lots of money, and how do you track that? There are a couple stories in this article of people spending thousands of dollars supporting the TikTok creators that they like. There is one story, which is a little weakly sourced, about somebody who stole $1.2 million, where apparently some of it was to support creators on TikTok. So all sorts of stuff that just struck me: there are all of these crazy questions that come up around trust and safety with a tool like this, that I'm surprised this tool ever got released in the first place to some extent. But it's also kind of a neat idea to have people debate in these ways and get people conversing in this way.
Daphne Keller:Yeah. So, I mean, my first response was, couldn't this be a rap battle? I want this to be a rap battle. But like, hear me out, I think that would help, because presumably these teenagers pretending to be Trump and Harris are going to spew a whole bunch of misinformation, because they're just trying to be entertaining. And like, that's bad. We don't like misinformation. On the other hand, Saturday Night Live sketches and political cartoons are also full of misinformation that is part of the, like, commentary and entertainment content. So if it were a rap battle, Mike, then it would be clearer to everyone that this is, like, an artistic take and you shouldn't see it as the news.
Mike Masnick:Yeah. Okay. I like it. I like it. We'll suggest it to TikTok, force them to rap. All right. The other one that has to do with monetary support for people is the story about Patreon and Apple. Patreon recently announced that they're changing some of their policies in terms of how their subscriptions work, in response to Apple's demands on Patreon. And there is an element of this that a lot of people talked about that is, to me, actually somewhat less interesting, which is that Apple demands that if someone is paying an app through iOS, there's a 30 percent charge that has to go to Apple. And so they've pressured different companies. There've been legal battles over this: Spotify and Apple fought about it, Epic and Apple fought about it. So there were a bunch of different back and forths with that. And here Apple is forcing Patreon to do it. Patreon is telling creators that they have to choose whether or not they bump up their price by the, like, 30%, or if the creator themselves will eat the 30%. And so that got some attention. But someone pointed out, which I thought was really interesting, that part of this is that Patreon now has to match the way that Apple does subscription payments, and that means that they are getting rid of the ability to pay per creation. And this struck me as a big deal, because that was how Patreon started. The whole idea that Jack Conte, who started Patreon, had. He made all these, like, really elaborate YouTube videos, and so he set it up where people would commit to pay a certain amount, a dollar, $5, $10, whatever it might be, per YouTube video that he released of a new song. And that was the beginning of Patreon. I thought that was a really brilliant business model. If you're a content creator, and you only create every so often, but you have people who want to support those creations, if you get them to back each creation, that's fascinating and a really unique and interesting business model. I recognize that Patreon has mostly moved to a per-month subscription model, which is what a lot of creators wanted. But now the fact that they're effectively getting rid of this per-creation model, not because they want to, or not because users don't like it, but because Apple's forcing them to in order to meet the requirements of being in the App Store, that just struck me as an interesting change and one worth noting.
Daphne Keller:Yeah, absolutely. I mean, you're more of a content creator than I am, so you probably see this more clearly, but that just seems like it rewards a certain mode of cultural creation that is routinized, that can come out the same way every month, and maybe that depends on having a bigger organization. And it harms other modes of more, as my colleague Evelyn Douek would say, stochastically produced content, podcasts, et cetera. You know, that seems to just really flatten out what it's possible to do with Patreon and what speech and communication and projects you can do.
Mike Masnick:Yeah. I mean, it also gets a little bit at, you know, the impact that a company like Apple has on other areas of the creator space, just by the nature of their policies for the App Store, which is worth thinking about. But on that, I wanted to go to our final story. We started this podcast out talking about a Ninth Circuit ruling in the Bonta case, and we're going back to the Ninth Circuit for another ruling that I think may be somewhat impactful, which is about YOLO, which is perhaps either the best or worst name in this case. There was a company called YOLO Technologies that seems like an awful company, and I'll state upfront that everything about this company appears to be awful. But they built some sort of add-on to Snapchat. There were already concerns about how Snapchat is used among kids, but they created this add-on that allowed you to post questions or polls on Snapchat where people could respond anonymously. There are some areas where they could choose to reveal themselves, but basically they could respond anonymously. Not surprisingly, some people used it and then were harassed and bullied. In one tragic case, at least, someone appears to have died by suicide after being harassed and bullied via the app. So some of the people who were harassed and bullied, including the family of the child who died, sued YOLO and made a bunch of claims. Some of them were product liability claims, which have been very popular over the last few years for apps that a lot of kids use and occasionally abuse. But part of the claims were that the site itself, YOLO, the service, said that the way that they dealt with bullying and harassment was that if they caught you harassing someone, they would reveal who you were, so you would no longer be anonymous if you bullied and harassed people. And the way the story plays out here is that these children who were bullied and harassed then reached out to YOLO for that purpose, to try and unmask who was harassing them, and they got no response, like literally no response, even in one case where the legal contact was reached out to. So basically, it appears that YOLO had no intention whatsoever of ever living up to this stated anti-bullying feature and service. And so the lower court had dismissed the case, and now the appeals court let some of it back in. So, Daphne, do you want to sort of summarize the finding of the Ninth Circuit?
Daphne Keller:Yeah, so the Ninth Circuit says the plaintiffs can't proceed on their products liability theories. Um, and so that makes this one of so many different cases going on right now where there's some products liability theory probing the edges of Section 230. And in, um, child safety cases, including the multi-district litigation consolidated in the Northern District of California, courts have actually said, yeah, there are some things, like infinite scroll, for example, that aren't user speech exactly. And, you know, we can debate that, but they're kind of finding edges. Um, but this court did not find that kind of an edge. Instead, it embraced this idea that YOLO had made a promise to users and then it had breached that promise, you know, basically a breach of contract, and that's not immunized by Section 230 because the cause of action arises from their promise, the court uses the word promise a lot, rather than arising from the user content. And I want to say, I think there's kind of a continuum of cases here that I'm more and less sympathetic to. Kind of stepping away from the law a little bit, from the specific doctrine in these cases: Section 230 was supposed to allow platforms to moderate content and not get in trouble if they made mistakes in moderating content. So, like, we want them, Section 230 wants them, to moderate content. We also generally want them to disclose what the rules are, and often those rules are mentioned in their contracts. And so if the idea is every time a platform says something about content moderation, they expose themselves to a contract claim, that just guts 230, right? Like, it's ridiculous to say that that kind of case should be allowed to go forward. And the Ninth Circuit did allow a claim kind of like that to go forward in a case called Calise, which I think is actually a very problematic ruling, and I'll come back to why in a minute. But this case, YOLO, feels a little different to me, because if the facts alleged are true, you know, when you sign up for the product, there's a big notification saying, hey, don't harass or bully people, if you do, we're going to disclose who you are and we're going to block you. I think also, you know, it's this sort of big, noisy representation about what the service is. And then they apparently had a staff of 10 people, and there was no way they could possibly do this, and they didn't respond to inquiries, et cetera. And that, again not as a matter of legal doctrine, but of hypothetical better legal doctrine, feels almost more like fraud, right? Like, they're selling their service as being a thing that it just isn't. And I would like to think that there can be a line between those two kinds of cases, where something with sort of an extreme commercial representation can itself stand outside Section 230.
Mike Masnick:But I think the fear that I have, and I agree with all of that, the fear though, is that companies make marketing promises all the time, or they'll say, like, we don't allow bullying or something like that. And now does this open up the possibility that people could sue over things like that? Or, you know, the sort of classic example I'm thinking of, where Twitter had executives claim that we are the free speech wing of the free speech party, and we're not going to take down content, and then they did, because you have to. Would that open up a lawsuit for saying, well, you said you were never going to take down this content, and then you did?
Daphne Keller:Yeah, so I think there would have to be a lot of limits on what kinds of representations could be actionable. I do think that companies should be careful and say, we try hard to do X, rather than sounding like they're guaranteeing they're going to take down all of a certain kind of content, because they're definitely not going to succeed. I mean, this kind of gets to one of my worries. So the lawsuit that was most focused on the Twitter representation about being the free speech wing of the free speech party was Taylor versus Twitter. And that was brought by Jared Taylor, who's a white nationalist. His brief was really interesting. It was basically like, I'm not rude to anyone. I'm a gentleman racist. I'm just presenting my scientific theories. They shouldn't have banned me. But also he said, Twitter, this was a bait and switch. They said they were a free speech platform, and then they changed their rules to prohibit more content, and that made my business and, you know, my use of Twitter no longer viable. And the district court was kind of sympathetic to that at first. Like, the idea that there's a bait and switch in the commercial offering doesn't really sit right with judges. So I don't know exactly where the line should stand. But just to come back to Calise, this, um, ruling that I think is so problematic from the Ninth Circuit earlier this year: I think that court's reasoning, which was if you were harmed by content, or in this case harmed by ads for defective products, and you think that was a breach of the contract, then you can sue on breach of contract, has a mirror image, which is, if you're Jared Taylor and you think that you had a right to share your white nationalist content online and that taking it down was a breach of contract, probably you can sue under Calise also. You know, it just opened the door to contract claims about every kind of content moderation dispute, including the must-carry variety. So I find that ruling kind of troubling.
Mike Masnick:Yeah, I think it's a concern. I mean, I think a lot of this goes back to a case that both of these cases cite, which is the Barnes case, Barnes versus Yahoo. I always keep trying to say YouTube, because I think with a Y, but it's Yahoo, because it's an old case. It goes all the way back to 2009. And that was one where there was content that somebody didn't want, and they called up somebody at Yahoo, and that person told them directly, I will get this taken down. And in that case, which is mostly a good Section 230 case, I think there's actually a lot of good Section 230 language, they said that that part, the promissory estoppel part, because this person directly promised to take care of it and didn't, is outside of Section 230 protections. And so, like, when that part came out, I was worried about it, and I'd written something saying, like, this is kind of scary. But that hadn't really come up. We'd seen people sort of try and use Barnes in other cases, and not successfully. And now in Calise, and now in this YOLO case this week, it seems that suddenly judges are using that. And so I sort of, you know, feel like that Barnes case from 2009 was this ticking time bomb that was always sitting out there, and now there are judges that don't like Section 230. And in the Calise case in particular, the judge makes it clear that he's no fan of Section 230 and would like to
Daphne Keller:Quotes Clarence Thomas on the subject.
Mike Masnick:Quotes Clarence Thomas, and it's pretty clear that they just think Section 230 is bad. And they're now sort of using this element of, like, the promise, and turning it into a duty or sort of a contractual requirement. If you say something, you must do it. And it just opens up this wide area that I think a lot of lawyers are going to make use of in making claims in and around the Ninth Circuit, and say, we can get around Section 230 because someone at this company, or some marketing material from this company, promised X and they didn't deliver on it. And so that worries me, at least.
Daphne Keller:Yeah. I mean, so I think Barnes in principle should be distinguishable, because it was this one-to-one communication with a specific promise about a specific piece of content. I assume that plaintiffs have tried to use Barnes this way a lot and that courts have found a way to distinguish it, and I assume it's something like that, but to be honest, I don't remember. When I used to teach Barnes, uh, at Berkeley back in 2012, I did a very practice-oriented version of my class, and I would point out that the person who made this representation was Yahoo's director of communications. And so the lesson from Barnes was, don't let your PR people do trust and safety work; those need to be separate undertakings.
Mike Masnick:Yeah, but, you know, it'll be interesting. I think that these cases, the Calise and the YOLO case, are going to lead to a flood of new cases. There is good language, I think, in the YOLO case about the product liability side. I think people have been very aggressive in making these product liability claims, and YOLO makes it clear that those, in this case certainly, are blocked by 230. But I worry about where these cases go, and we're going to see a lot more of these in the future. So something to pay attention to. But I think that is it for this episode of Ctrl-Alt-Speech. So Daphne, thank you very much for stepping in and filling in for Ben this week and doing such an excellent job with so much interesting and relevant expertise and information for us.
Daphne Keller:I hope you're having a good vacation. If you're listening, I hope you're not listening. Uh, but this was a lot of fun. Thanks for having me on.
Mike Masnick:We will be back with another episode next week.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.