Ctrl-Alt-Speech

Hypocritical Infrastructure

Mike Masnick & Cathy Gellis Season 1 Episode 58

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Cathy Gellis, an internet and First Amendment lawyer. Together, they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick:

So Cathy, as you know, personal newsletters are really big these days. Everybody's got like a Substack or something. There have been some controversies about Substack policies, and people have been looking for really good alternatives. And one that's really great that a lot of people talk about is the open source Ghost. I've heard amazing things about it, and on its website it says that last week, 6,219 brand new publications got started with Ghost, and today it is your turn. So Cathy, what is it your turn to do today?

Cathy Gellis:

Today it is my turn to sit in the hot seat and talk to you on this podcast.

Mike Masnick:

Excellent, excellent, excellent. And it is my turn to, uh, try and figure out how to manage the podcast, which I usually rely on Ben to do. And welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulations. It is May the 15th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. This week, in theory, we are talking about some AI, some antisemitism, maybe some more about antisemitism in AI, some online speech issues in the UK and India, and then some usual problems with Meta and Elon, and maybe a few other things as well. We have a lot of stories to get through. I am Mike Masnick. I will note that last week we were off because I was at the Media Law Resource Center's Legal Frontiers in Digital Media conference, which was a lot of fun. And I greatly appreciate that when I was on stage, one of the questions that I got from the audience was from, as I found out later, a listener to this very podcast, who said that he wanted to throw in a literary reference in his question. That is obviously in reference to our ongoing quest to have people write reviews of this podcast with literary references in them. I know that we've received at least one more that matches that brief, but we need more. So please rate, review, subscribe to the podcast, all that usual stuff, but leave reviews with literary references, and I promise you, on a week when we have more time, I will start to read through some of those reviews and see if we can spot the literary references. I'll also note that that same questioner mentioned that he apparently somehow got his high school aged kid to listen to the podcast, and maybe even got some other friends of theirs in their high school to listen to the podcast. So a shout out to all of our high school listeners as well. I hope that we are arming you with thoughtful arguments regarding the use of technology and the internet as kids. And of course, Ben is now out for a bit to learn about raising a kid on his own. So we have a bunch of wonderful guest hosts lined up for the next few weeks. And this week we are starting with our wonderful guest host, my colleague Cathy Gellis, who is an internet and free speech lawyer who has represented Twitter in various matters and frequently writes for us on Techdirt. So Cathy, thank you for joining us.

Cathy Gellis:

Thank you for having me. I hope it will be the best of times and not the worst of times.

Mike Masnick:

Let's see what we can do. We have a huge list of stories to get through today. So many, in fact, that I'm only going to very briefly mention right now, in this opening, that KOSA was reintroduced in the US this week. I did write something about it. It is incredible to me that we are reintroducing KOSA at a moment when all of the concerns that were raised about the bill, about how it could be abused for censorship, were sort of brushed aside, with people saying, ah, you know, you're not being serious. And yet now we're seeing how much this particular administration is abusing every tool it can find for censorship. So it's amazing to me that KOSA has been reintroduced in this particular environment, but we don't even have time to talk about that.

Cathy Gellis:

Although this is good to know for our high school listeners, who do need to know this and be reminded, so that maybe they can talk to their representatives and say, hi, we like to speak online as well. So.

Mike Masnick:

Yes. And do not do this in our name, I think, is an important statement. But let us start, let us dive into the actual stories, with a story that you, Cathy, wrote for us on Techdirt this week about a recent report that was pre-published. Uh, and maybe we'll get into the thinking behind pre-publishing the report. And this is the third section of three in a series of reports coming from the Copyright Office, but there's some important relevance to online speech. And so let's start with: can you just tell us the background of what this report is and how it came about?

Cathy Gellis:

So it came about because the Copyright Office periodically does studies about some issue that is pressing up against the bounds of copyright law and copyright policy. Sometimes I think they come up with them on their own, but other times Congress says, we think there's an issue, Copyright Office, go study this issue. I think this may have been one of them. So Congress says to the Copyright Office, AI is an issue, we understand copyright may interface with it somehow, go research this. And so what they end up doing is they call for comments and public participation, so that people who have thoughts about how copyright and AI interface with each other can submit them to the Copyright Office. And then the Copyright Office goes through all those comments. I think they do some of their own legal research, looking at cases that are evolving, cases that are already on the books about other things that may get applied to AI, and they try to produce a report explaining, here's how copyright law may interface with AI. And it's a complex issue with a lot to look at, so they broke it into multiple reports. There have been two already. I am forgetting one of them, but one of them really had to do with the output of an AI. Like, if an AI produces something, is that copyrightable? Is it copyrightable by anybody? And who would that anybody be? Is it the person who sort of drove the AI to go do something? Or is it the people who made the AI? You know, all sorts of questions. But that's not the one that we're talking about. The one that we're talking about is the most recent installment, which is looking at: are there any copyright issues salient to the training of AI? And what that basically is, is as people train the AI, it goes and consumes, in some way, a whole lot of preexisting things that are out there in the world. And by consuming it, it learns stuff, and then it is ready to use that as we start to deploy the AI to do various things. It's arguably, and we made the argument in the comment that we wrote and submitted to the Copyright Office for part of this, a separate question, the training versus the output, including whether output might infringe a copyright interest if it happens to output something that looks an awful lot like something that came before. Have your copyright inquiry around that part. But the point we made is that especially if you don't like AI and you want to throw some law at AI, you really don't wanna throw copyright law at AI, because it wasn't designed for this purpose. And if copyright law could reach it, you're kind of gonna break copyright law, and you're gonna have a whole lot of other problems that you're gonna now create, 'cause you've just changed copyright and made it do something it wasn't designed to do. So one of the points we had made in our comment is: don't do that. Um, make sure that you're looking at things separately, output issues separately from input issues. But in terms of the input issues, you know, we kind of looked at it and we're like, that's something different. This is basically: people have the right to read. You get to go to a library and read all the books you want. You could maybe even sit in the bookstore, although they're gonna get mad at you and want you to buy the books. But copyright law isn't going to be what gets mad at you for sitting in the bookstore and

Mike Masnick:

Right. Reading a book doesn't hit any copyright right.

Cathy Gellis:

Right. A copyright owner gets to say no to a whole bunch of things, like copying the work, performing the work, but it doesn't say you can control who can read the work. Copyright is really about getting to control who makes a copy, and reading isn't about making a copy. Now, that isn't necessarily true in the technical world, because reading sometimes sort of makes some sort of copy in some way, but not necessarily even in the AI context.

Mike Masnick:

A quick, temporary copy,

Cathy Gellis:

Sometimes ephemeral copies, they call them. But even in terms of AI, they aren't necessarily making a true copy. They're sort of learning about what they read, kind of similarly. I mean, the more that it's actually an artificial intelligence, the more it's like how people learn things anyway. And even for people, you know, you can use tools to help you. Sometimes you might need tools to help you. Like, if you can't physically read, then you need software to help you do the reading. And so we send out our

Mike Masnick:

So something like a screen reader, for example. People who are vision impaired may have a screen reader that helps them read, and that doesn't infringe on anyone's copyright.

Cathy Gellis:

It doesn't infringe on anyone's copyright. And one of the important points that I'm spinning up to is that the First Amendment also includes, built into all of its rights, the right to read: no law can be made to keep people from reading. So reading is something that, if we're gonna have free expression, really needs to be something that people can freely do, and people also need to be able to use the tools that make sure they can do it. And so we pointed out that, you know, AI training is really just sending your bot out to go do the reading, just the same way you would stoke a natural intelligence. If you're gonna stoke an artificial intelligence, it still has to go learn stuff about the world before it can go forth and do something. And that invokes the same rights, because even now, when it's not truly artificial intelligence, it's still doing it on behalf of a person. They're using their tools to go make their bot a little bit smarter, because then the bot can help them out at some point down the road. So that was the point that we made, which was basically completely ignored in the report that they came out with. So this report has apparently been controversial for a variety of reasons, but we think there's a very reasonable one, which is: how did it manage to be a report about copyright and AI training, which arguably is a fair use, and fair use is one of the ways that we make sure that copyright comports with the First Amendment? How do you write a whole zillion-page report? It was very thick, it was very juicy, well footnoted; I lost track over 400, they had a lot of references. And yet the words First Amendment don't appear in it at all. The right to read didn't appear in it at all. Most of the points that we made in our comment did not appear at all. And so we think that's unfortunate, because otherwise it's a very thorough and, you know, well reasoned report. They did highlight one thing that we and some other commenters had noted, which was: if you don't let an AI train on the widest universe of material, you're going to build in biases, because it will think it knows the world, but it will know too small a subset of the world, and whatever bias is creating that limitation is going to be stuck in whatever it outputs later. You won't be able to get anything broader. But other than that point, it missed a lot of really important points, and I think that's disappointing, and I think that is a legitimate reason to have issues with the report.

Mike Masnick:

Yeah. So what is the importance of the report? It's just a report from the Copyright Office. And I think there's some confusion over this. You know, if a court rules on something, then that is the law within whatever jurisdiction it covers. The Copyright Office is not a court. So what is the actual importance of this report?

Cathy Gellis:

I think much less than people think, but not zero. The Copyright Office is, in theory, the preeminent authority on how copyright law is supposed to work in the United States. But they don't write the law, they don't administer the law, they don't particularly interpret the law. But their interpretations have influence for the courts and for Congress, in that when Congress writes the law, or the courts interpret the law, they will take seriously whatever the Copyright Office had to say, especially when they do well-reasoned, well-cited reports that trace back and show the work about how they concluded this, that, or the other thing. It's helpful, it's influential, but it is not the law. So if you don't like it, that's not the end of the world, because you can still write your briefs to the courts when an issue is before them and say, here's how the Copyright Office was wrong. And you can go to Congress and say, don't do what the Copyright Office said, or do something the Copyright Office said, before it becomes something that is going to rule our lives. There are a lot more steps to take. So we can criticize it, but, I think, you know, within the bounds of reason. And I am not entirely sure that all the criticism that's been spewed forth is necessarily within the bounds of reason.

Mike Masnick:

Yeah. So, I think it's interesting to me, and just a little bit of further background on this for folks who aren't as deeply in the copyright space, or not necessarily in the AI space: there are a whole bunch of lawsuits going on right now about copyright and AI. And almost all of them, not all of them, but almost all of them, are about this issue of training, and whether or not the process of training these AI systems, these LLMs, infringed upon the copyrights held in the material that they were trained on. Those cases are all ongoing and in various states, and there have been a few sort of early rulings, but nothing concrete, and some of it is contradictory, and there's a lot of back and forth. And so there's a potential that this report could impact some of those cases and sort of influence a judge in one direction or another. Actually, in my read of the report... the media coverage of the report is like, oh, it was a straight loss for the AI providers. I actually didn't read it that way. I think, and this is my read of many of the copyright reports over the years, and I am, you know, one of the sick people who will read these reports

Cathy Gellis:

We like you.

Mike Masnick:

Yeah, when they come out. Uh, historically, the reports that the Copyright Office puts out, and this is frustrating to me actually, often just kind of are a mirror. All these people submit these comments. The AI companies submitted comments. The copyright holders submitted comments. Just random people submitted comments. There's all these comments, and a lot of what the Copyright Office does is they basically just reflect back the comments. So you could read this report and see things that agree with the AI companies. There is a whole section on how training is often transformative, and being transformative is a key part of the fair use analysis. So if you're arguing for fair use, you could say there's stuff in this report that is actually useful for you. There's other stuff in the report that is not. And the report itself doesn't necessarily have a single point of view. It is basically reflecting back different pieces of the different comments that were made in the process of creating this report. And so I think some people may be overreacting to sort of the weight of this report. On top of that, we also, I have to keep saying, live in a different world than we did in the past. And in this case it's specific, because last year the Supreme Court in Loper Bright said that the weight of federal agencies matters next to nothing, you know, compared to the way that the judicial branch will analyze these things. Before Loper Bright, you could point to Chevron deference, this concept that the courts should give deference to what the various administrative agencies have to say, whereas after Loper Bright, the Supreme Court said no, the courts can effectively ignore what the agency says. So even if this report had some weight, I think the courts actually have a lot of freedom to ignore it. So, you know, I think people overreacted a little bit to this report. I think you are right that it's sort of sad, but it is not all that surprising, given the history of the Copyright Office, that they will ignore the sort of First Amendment issues and the speech issues associated with copyright. That's kind of the Copyright Office's MO. You know, it's unfortunately the way they've acted.

Cathy Gellis:

So I do think it's unfortunate. I think it's unfortunate because I think there had been some movement in recent years to greater acknowledge the First Amendment. But, you know, it is a mirror. I think it's actually supposed to be a mirror, because it's built around the public comment. So I think the disappointing thing is it's not more complete in terms of what else was represented to it. But they had a lot of material, and, uh, they turned it into a pretty juicy report, just not quite juicy enough.

Mike Masnick:

Yeah. And just to close out on this, I'll note, because this is what a lot of the headlines have been about, that the report came out as a pre-publication report, which is not standard. They basically said, we're gonna release the final report, which won't be very different from this, in the next few weeks. Which raised a bunch of eyebrows, because why would that happen? The context matters, because the day before that, Trump fired Carla Hayden, who is the Librarian of Congress, which is effectively unheard of. And the Copyright Office is part of the Library of Congress. And so then the very next day, the pre-publication report came out Friday night, and Saturday, the Register of Copyrights, Shira Perlmutter, was fired as well. And immediately everyone turned this into a story of, well, they put out this report on Friday night that was critical of the AI companies, and then Elon Musk stepped in and made sure that Shira got fired. I don't think that's entirely accurate. I think there was a plan to fire them no matter what. And it does feel like the Copyright Office probably did rush out the report, realizing that there was probably going to be an attempt to try and block the report from coming out. But I think, you know, it wasn't about the report so much as the report was sort of collateral damage within this process in which they were going to fire the Register of Copyrights no matter what. And now, well...

Cathy Gellis:

They needed to get it out as long as there was somebody they knew was going to be able to sign off on it. So it probably did create some deadline pressure.

Mike Masnick:

Yes, I'm sure of that.

Cathy Gellis:

I am just thinking in terms of pre-publication. My dad always hated the term pre-boarding, like, time to pre-board the airplane. He's like, either you're boarding it or not. There's no pre; you're doing it or not. So pre-publication, yeah, we don't really know what it means. They published it, so.

Mike Masnick:

Yeah. And it is kind of funny, because publication has a specific meaning within copyright law too, and I think this qualifies as publication under copyright law. But let's leave that aside. Let's move on.

Cathy Gellis:

Yeah.

Mike Masnick:

So from the Copyright Office, we are going to talk about Elon Musk, our favorite person to talk about on this particular podcast. There were a few stories about Elon Musk this week, and we may hit some of the other ones later, but there was a big story about what is happening in India. And just as background, we've talked about this in the past as well: India has often asked Twitter/X to remove content over time. And the old regime at Twitter, before Elon Musk came in, would often push back on requests from the Indian government to remove stuff. And in fact, at one point during the pandemic, the Indian government raided Twitter's offices in India, and there was all this sort of back and forth, and there was a lawsuit, and all these things. And then Elon Musk comes in, takes over the company, and he says, I am bringing free speech back and I'm gonna fight for free speech. But he also says, to me, free speech is what the law allows. Which is not how free speech works, because the law is often designed to stifle free speech, and you have to push back on that, which, again, the old Twitter did, and specifically in India. And then there were a bunch of stories over the last few years since Elon took over in which India continued its demands to remove certain content from Twitter, and Elon caved each and every time, and when pressed on it said, well, you know, it's what the law allows. Now, this was funny because if you compare that to other places, the story was very different. It basically seemed to depend on whether or not Elon Musk supported the regime in charge. So Turkey had a bunch of requests to take down content, and Twitter and Elon took down that content. Brazil had a bunch of requests to take down content, and suddenly this was an important free speech fight. Elon Musk refused to take down those accounts, and in fact, as many people remember, Twitter itself was entirely banned from Brazil for a period of about a month because of this fight, before they did eventually cave and take down that content. A similar sort of thing happened in Australia, where Australia had some demands to remove certain content, and Elon turned it into this big fight where he was fighting for free speech. In that case, actually, the government of Australia backed down. And there are interesting debates and discussions to be had about this, but it was really odd. Or, I mean, it was understandable if you understand the mind of Elon Musk and his willingness to support people he likes and not support people he doesn't like, but he would turn certain things into free speech martyrdom, and on certain ones he would bow down and say, this is fine. So in India, as the conflict between India and Pakistan has increased and obviously turned into violence and attacks over the last few weeks (as we're recording, I believe they're in a ceasefire; hopefully that stays), India requested that Elon Musk's X remove a whole bunch of accounts. And in fact, many of the accounts were of journalism outfits. Not just individuals, but entire media entities were banned. And somewhat surprisingly, X put out a statement basically saying, we disagree with this. And this was the first time I've seen them push back on a demand to remove content from a country that is friendly with Elon. The geopolitics of this is bizarre, the idea of countries being friendly with individuals, but Elon is a country unto himself. And so they pushed back and said, we think this is a problem.
We don't agree with this. But eventually they did cave, and they did agree to remove, I think it was like 8,000 accounts. But they're sort of doing so under protest and have indicated that they're planning to sue. I haven't seen any details on whether or not an actual lawsuit has been filed yet.

Cathy Gellis:

I think some of the people who were affected were planning to sue.

Mike Masnick:

That may also be the case. But, uh, there was some indication that X itself might sue. And again, you have the sort of geopolitical questions here, because Elon needs Indian support for a number of his different operations and companies. They're trying to get Starlink stuff happening there, and Tesla obviously. And so there are a bunch of different questions around it, but I thought it was interesting and slightly surprising that they were willing to come out and say in this case that they disagree with it. Though they did still cave very quickly. They didn't put up the big martyrdom fight that he did with Brazil and with Australia. Then, the funny thing too is, you know, I mentioned Turkey, that he's been willing to take down accounts in Turkey. So there was another story related to that this week, where they took down the account of a prominent presidential candidate who is opposing the ruling regime in Turkey. That account was ordered to be removed, and Elon did it. And somebody questioned him about it, this is just this morning as we were recording, and Elon posted: X must follow the rules of a country within that country or will be blocked by that country. And he said İmamoğlu's account can be viewed outside of Turkey or by using a VPN within Turkey. I don't know how they feel about him telling people how to get around the block. And then he says: do you believe the Turkish government's demand to limit the visibility of İmamoğlu's account within Turkey is contrary to Turkish law? Which, again... he jumps back to this idea that whenever people accuse him of caving, he says, well, you know, I'm just following the law, and I believe free speech is what is under the law. But he doesn't do that consistently. He didn't do that in Brazil, didn't do that in Australia. The sort of selective interpretation of free speech is something that I always find kind of amusing.

Cathy Gellis:

Well, it's a very Trumpian view of how free speech works, and I wonder if over time, because he is hanging out with so many of the Trumpy people, he's picking up a lot of their reasoning and actually somewhat adopting it as his own. There really is this view of, like, what the state says is what is okay, and all the freedom exists within that parameter. And that really does seem to be the understanding of how free speech works in the Trump administration, and it doesn't sound like he's inconsistent with that. It's sort of where he was already going anyway, and it seems to be getting reinforced.

Mike Masnick:

Yeah, I mean, I do think it is interesting that there was at least even this marginal pushback on India. And I'm kind of wondering, where does that go? Does that, you know, expand? Or does it just fade away, like it has in some other cases?

Cathy Gellis:

Being willing to do the takedown within the jurisdiction and not worldwide, I do think, is an important line that they still drew. And then I...

Mike Masnick:

Well, they did that for Turkey. I'm not sure if they've done that for India.

Cathy Gellis:

I think they did it for India. Uh, I...

Mike Masnick:

That it's just within India.

Cathy Gellis:

I might have misread, but I think they said yes. And then it seems like some may have been temporary, but I didn't quite follow why. Like, why are you just trying to inconvenience people, or are you really trying to take them down? Either is bad, but the effect is slightly different.

Mike Masnick:

Yeah. And I'm looking now, and it does say that they withheld them in India, so I think that is true. It will be interesting to see if the demands expand, because I know in the past India has asked for worldwide takedowns, and under Musk they did comply in the past. So it'll be interesting to see if that bubbles up as well. I think that's kind of an interesting question to watch. But, you know, again, it's one of these things where the theory of free speech that many people have is so disconnected from the reality. When you're in a position to actually make these decisions, you understand that government demands will often be contrary to free speech, and you need to have an actual understanding and conception of what free speech is, and the principles behind it, to actually make decisions that apply to that. Which I think could, to some extent, maybe lead into our next set of stories, or all of our stories today. Since we're talking about Elon Musk, we're going to have another Elon Musk related story, and since we were talking about AI earlier, we're going to merge those two stories. If you were to give an AI bot the prompt to mix the two stories, something about Elon Musk and his bad conceptions of free speech, and artificial intelligence, what would you come up with? I think it might be our next story, which is that, for the last day or so, Grok, which is Elon Musk's slash Twitter's AI bot, which apparently people have said has gotten pretty good, I certainly have no interest in using it, but people say the technology is actually on par with a lot of other AI systems, has suddenly started responding to almost every prompt, no matter what the prompt is, with a discussion of South Africa and the concept of white genocide. It is hard to believe if you have not seen it, but over and over again, no matter what you ask it, it will turn the conversation into something saying: the claim of white genocide in South Africa is highly debated; some argue that white farmers face disproportionate violence, with groups like AfriForum reporting high murder rates and citing racial motives. And that is in response to any question, no matter what. And the obvious, likely explanation is that at some point Elon Musk got annoyed that Grok was pointing out the silliness of... there's been this sort of trumped up effort, and I use that term directly, to claim that whites in South Africa are being discriminated against and targeted, sometimes claiming people are calling for genocide, which does not seem to be an accurate statement in any form whatsoever. And yet Elon Musk, who is from South Africa, seems to want to support that message, and has decided that Grok should have its underlying system tweaked to push that message. I don't know if he meant for it to show up in every request, no matter what you were asking. I saw there was one where somebody was asking about Max Scherzer, the baseball pitcher. Somebody had asked about Max Scherzer, and it responded with like one sentence about Max Scherzer before saying, this reminds me of white genocide in South Africa.

Cathy Gellis:

So I have a question. I didn't test it, but what would Grok say about Little Steven Van Zandt? And the reason I ask, deep cut here, is he was like one of the lead singers on "I Ain't Gonna Play Sun City," which was the anti-apartheid song from the 1980s. So I wonder what Grok would have to say.

Mike Masnick:

Who knows? Who knows? I mean, this is just such a crazy, weird story. It is reminiscent of the time when Elon Musk got very upset that one of his tweets did not perform well, and apparently sent like a 2:00 AM email to the entire Twitter team that they had to fix Twitter to make sure that his posts performed better than other people's. And the team, you know, spent all night on it, and when people woke up in the morning and opened their Twitter feed, every single post was an Elon Musk post, and nobody could avoid it, because they just hard-coded in the fact that Elon Musk's posts must show up at the top of everybody's feed. This feels like the same sort of thing, except with, like, racism. I mean, it's...

Cathy Gellis:

Team, we are not racist enough. We need to correct this further.

Mike Masnick:

Yeah. But you know, this is sort of an amusing story, but it is commentary on kind of the world that we live in today, which takes us into the next story, a related story to this. And this may seem like not an online speech story, but we're going to show you why it is an online speech story. This week, the Trump administration also brought in a bunch of white Afrikaners from South Africa as refugees, at the same time that they've been shutting down refugee programs for basically the rest of the world. And in that process, they brought in, in particular, one guy who is just an out and out antisemite, racist, horrible person. All sorts of terrible tweets. We're not going to read any of them, because they're absolutely horrific and terrible and scary. And yet they have brought that person into the US as a refugee. And this comes just weeks after the US Department of Homeland Security announced that they are now monitoring immigrant social media specifically for antisemitism, and that they won't allow in anyone for, what they really mean is, posting pro-Palestinian messages, not actual antisemitism. And so there's this direct contrast, where you have this issue of the US government watching people's social media if they want to come visit the US or immigrate to the US. But even just visiting, this covers tourist visas. They won't let you into the US if you've praised Palestine in any way.

Cathy Gellis:

The, um, the man that we were referring to, beyond just being racist, he was explicitly being antisemitic in a not at all subtle way. It was very clear that he was...

Mike Masnick:

This, this is...

Cathy Gellis:

Being antisemitic. And yet the goal of this policy is, thou shalt not post antisemitic things on the internet if you wanna come into the United States. Except for this guy, for whom we're laying out the white carpet. So.

Mike Masnick:

Yes, yes. The white carpet. You know, it is kind of incredible. Again, it sort of gets back to the same sort of thing we were talking about with Elon Musk, where this is just the double standards at work, right? If you're talking about speech online, they're coming up with arbitrary rules that they think are clear rules. They think they have some sort of principles behind them, but their real principle is: we will let in who we'd like, you know, and we'll block anyone we don't. And it's the same sort of thing that we see when people who don't have any experience with online trust and safety or content moderation assume that there's some easy principle here, that we can just set these rules and they're easy to enforce. And they're effectively doing the same thing, but from the immigration side of it, and saying, well, you know, we're gonna let in the people we like and not let in the people we don't like, and we're gonna say it's about banning antisemitism, when it's clearly not. And it strikes me as this weird sort of example of content moderation gone bad, but at the immigration level, right? It's sort of like DHS doing content moderation, to some extent. You know, they're not moderating the content, but specifically the actions of what people are allowed to do based on their social media posts.

Cathy Gellis:

Yeah, I mean, it's content moderation, except it's really human moderation, and they're using the speech to evaluate the people. I mean, some of this is, content moderation is hard, because if you squeeze a little bit one way, different things happen the other way. But a lot of this, these two stories linked together, is out and out hypocrisy. This isn't that, oh, content moderation is hard. This is that being reasonable is apparently hard, and...

Mike Masnick:

That is a very good clarification. Yes. Yeah. I wasn't trying to imply that these are thoughtful people making thoughtful policy and discovering, like, oh, the edge cases. These are stupid people making stupid policy for stupid reasons.

Cathy Gellis:

Exactly, exactly. But I guess if we were to segue into some of the other things we wanna talk about, there are some stories where it is a harder call, where they were trying to do things on the edge cases. Although, you know, even in some of these stories, they weren't really on the edge. They were kind of driving a truck down the middle, and now saying, oh, okay, bad things, unfortunate things have happened from driving our truck down the middle. So.

Mike Masnick:

Yeah. So for the next set of stories, I wanna combine three separate stories into one. They all sort of touch on the same thing, and really they're related to the theme that we're talking about, which is the policy around moderation. Now we're gonna get to Meta, finally. And as we have talked about, and as everybody knows, Mark Zuckerberg announced this big change in policies, and they have a new approach, and it's more free speech. And this may sound familiar from what we were just talking about with Elon Musk, where, you know, it's like, okay, we're gonna take our hands off and we're gonna allow more things in. And so there have been a few stories this week that sort of show the impact of that, and also call into question how serious Meta is about this, oh, we're taking a hands-off approach. The first one was that Kanye West came out with a very horrible song this week. Uh, I believe the title is "Heil Hitler," which should tell you all that you need to know about the song. I have not heard the song. I have no interest in hearing the song. And apparently a lot of different platforms have taken down the song when he uploaded it, and you can understand why. But one place that hasn't really done that is Instagram. They are taking down some, but they're leaving up a ton. And so 404 Media had an article about how Kanye's Nazi song is all over Instagram. And we're sort of trying to figure out the reasoning why, and whether or not it has to do with the new hands-off policy. And Meta came out with this statement that was effectively saying, well, you know, in some cases we allow it. It's like: we recognize that users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse. And it says this includes reporting on, neutrally discussing, or condemning dangerous organizations or individuals or their activities. And this has always been a debate: if you are condemning a terrorist attack or some sort of problematic content, you know, how do you deal with that? Showing, you know, a terrorist attack is sort of one of the quintessential examples.

Cathy Gellis:

human rights abuses,

Mike Masnick:

Exactly, right. Like, one...

Cathy Gellis:

You're condemning it, but you're also showing the picture of it, but differently than somebody saying, look what I just did, versus, oh my gosh, look what that guy just did. But it's the same picture. Yeah.

Mike Masnick:

Right. It can be the exact same thing, but within context it has a very different implication. And so here they're sort of trying to say that they're doing the same thing with the "Heil Hitler" song. And what 404 Media found was, like, no, a lot of the videos are clearly people celebrating it. I mean, they have one that was like a guy dancing in front of a bunch of swastika flags, and that was somehow allowed. And so, is...

Cathy Gellis:

Maybe they think it's The Producers, like they're redoing The Producers. It's all "Springtime for Hitler." Can't be.

Mike Masnick:

Yeah. Now, the next story that I wanted to cover in combination with this one is a story from the New York Times, which is that Instagram and Facebook are regularly and rapidly blocking posts from abortion pill providers. So they're very quick to take that stuff down and block that, but somehow they're not so quick to take down the Nazi song. And now, as you and I will always say, all this stuff is impossible to do well. Everyone is going to make mistakes. It's impossible to pick one particular example and say, well, look, they like this content, they don't like that content. But here there does seem to be a really clear pattern, where their hands-off approach of, well, we'll let everybody discuss the controversy, is applied to the "Heil Hitler" song, but somehow posts from abortion pill providers are completely verboten. It feels really political. It feels like, gee, there's a lot of culture war issues, and it's funny how Meta keeps coming down specifically on the side of the party in power right now that is driving the culture war.

Cathy Gellis:

Yeah, I mean, sometimes I could imagine situations where one might be technically easier, where you could write some sort of script or algorithm to say, well, what was an ad? Okay, well, the ads are all going down. So I could see collateral effects of where it is hard. But it does seem to reflect a judgment call to say, we won't worry about too much of the abortion related content coming down, but we will worry about too much of the Kanye Nazi stuff coming down. So there does seem to be a choice being made, and it's a questionable choice.

Mike Masnick:

Yeah. Again, content moderation is always impossible to do well. There are always gonna be mistakes made. But it does feel like there is some overtly political decision making going on, and I think that was clear from the way that Zuck discussed the policy changes that they made. And it's interesting to me that we haven't heard counterexamples. You know, maybe they do exist and we just haven't heard them, and I would be curious to hear them. But we have heard a number of stories about content being taken down still under these new policies, and every time it's culture war related issues. It's the kinds of content that the Trump administration would like taken down and would like removed. Which, to me, always seems to call into question the actual principles behind this. Is it really about free speech, or is it just about doing what Trump wants? And I think we're just seeing more and more evidence that it's always been about just doing what Trump wants, no matter what Zuckerberg said.

Cathy Gellis:

Yeah, I mean, he has the right, and so does Musk, to make his editorial policy for moderation be: whatever the big guy wants, I'm down with it. He has the right to make that decision. But it would be nice if there was some realization about the amount of power he wields by making it, and the collateral effects that happen to other people's speech when it's made. And there just doesn't seem to be enough of an understanding. There wasn't necessarily enough understanding from the people who were complaining about jawboning, where it's kind of the same thing, and they didn't understand it. But I kind of would hope that people who run platforms for their businesses, and have staked their fortunes on them, would have a better idea of how this all works.

Mike Masnick:

I mean, I was gonna make that point about the jawboning stuff. Because you're right that private platforms have the right to, and we've discussed this many, many times on the podcast, they have the right to make their own editorial decisions, even if those editorial decisions favor a particular party or anything like that. But they can't do it if it's coercion, right? If the political parties in charge are coercing them into doing this, then that's not legal. That is a violation of the First Amendment. It is generally referred to as jawboning, and I know that you've taken a big interest in how the law is often used for jawboning lately. You know, so I do think there is an open question, because, again, when Zuckerberg made these changes, Trump was suing him. He had published a book saying he wanted to put Mark Zuckerberg in jail. There is a point at which, like, does this cross over to jawboning? Not that he's gonna challenge it, but it seems worth noting that all the people who were so concerned about jawboning in the Murthy case, you know, that supposedly the Biden administration was putting pressure on these platforms to do things, have gone completely silent now that...

Cathy Gellis:

In the face of even greater pressure.

Mike Masnick:

Yes, exactly. Where it seems very clear and very obvious. And so, again, it seems like a big theme of this episode is the lack of principles among many of the people in power right now.

Cathy Gellis:

Well, yeah, boy, we could just keep talking and talking and talking about that. But one other thing that comes to mind, which actually we didn't discuss before, is the story that, and I believe I'm doing this from memory, I think CISA, or somebody similar, has taken down their RSS feed of vulnerabilities and started to use only one open medium. I think they're doing it by email, but otherwise it's X: you have to be an X subscriber in order to get the vulnerability dispatches. And that's a huge change. That is not an open technology. That is not, in theory, a government platform, but all of a sudden it's kind of a government...

Mike Masnick:

Yeah, there have been.

Cathy Gellis:

You know what I mean?

Mike Masnick:

Yeah. And there've been a few other concerns too about government agencies using X as official communications. I don't wanna go deep on that, 'cause we still have a couple more stories and we don't have that much time, so I'm gonna put a pin in that one. It is interesting. It is important. But I do wanna move on to Wikipedia, which has made it clear that they're now going to challenge the Online Safety Act in the UK, which is very good and very important. We've talked about the Online Safety Act quite a bit on this podcast, but I thought it was notable that Wikipedia is the first one out there saying, hey, this is really problematic, it creates a huge burden on us, and it's not going to be helpful. So I don't know if there's that much to talk about there, but I did think it was really notable that they, in particular, sort of regretfully said that they were going to challenge the law in the UK.

Cathy Gellis:

One thing that's worth highlighting from that story is that what they're challenging, I believe, is the way that platforms are categorized. A lot of regulators are like, oh, no, no, no, we only wanna regulate the big ones. But then there's a question of how do you define big. Because you have something like a Wikipedia, which has, you know, an infinitesimally small budget relative to everybody else that we think of as big, but a gazillion users. So in that sense, it's huge. And I think what they might be challenging is the categorization that they're falling into. I think they're saying it's vague, but they're ending up subject to regulation in a category that is completely implausible and impossible for them to comport with, instead of one where, in theory, the regulation might have been trying to carve them out in the first place, but didn't. And that's causing a problem for them. And that shows up when American regulators try to regulate, too: oh, we only mean to do it to the big ones, but big is really hard to define.

Mike Masnick:

Yeah, it's incredibly hard to define. And actually Eric Goldman and Jess Miers together have a really great paper about how, if you're trying to regulate based on size, here are all the problems and challenges you're going to face. I think a lot of American laws have traditionally, and some European laws too, tried to get around the Wikipedia problem, which is specific to this, by saying this doesn't apply to nonprofits. Like, that's their way of getting around it. Though it's interesting too, because some folks in the US have been trying to challenge Wikipedia's nonprofit status, which is a whole other issue, which, again, we're not going to get into. But I will tie this to another story, which was on Tech Policy Press, which was really good and very, very interesting actually, talking about what attacks on Wikipedia reveal about free expression. It posits the idea that we could create sort of a freedom index based on which countries treat Wikipedia well and which ones are attacking it in some form or another. And I was sort of wondering, based on that, if now we should view the UK as going down the freedom list, because the Online Safety Act is an attack on Wikipedia's freedom. And I thought that was, uh, sort of an interesting way of thinking about things.

Cathy Gellis:

I mean, I think that answer has been yes for a while, 'cause when you try to regulate speech, how do you have free expression if it is regulated expression? It's always gonna be an uncomfortable fit.

Mike Masnick:

Yeah. All right. And then our final story of the day. We started on one that you wrote for Techdirt; we are ending on one that I wrote for Techdirt, which just hasn't gotten very much attention at all, but which I actually think is potentially a big deal. Missouri's attorney general, this guy Andrew Bailey, who is just terrible in all sorts of ways, he was, you know, part of the Murthy case, which was brought by Missouri. It was brought by his predecessor, but it still went on once he became attorney general, and he was sort of leading the charge in that case. He announced that he was promulgating a rule that it would violate the state's unfair practices law if social media apps did not allow you to choose your own moderator. And he argued that this was in compliance with the ruling in the Moody case, uh, which was a different Supreme Court case, not the Murthy case, but the Moody case, which was about the laws in Florida and Texas. Now, the ruling in that case was very, very clear and very, very explicit that states have no business regulating the content moderation of social media websites. They were very clear on that, over and over. So I was surprised to see Attorney General Bailey come out and say, well, based on my reading of Moody, I can promulgate this rule. Not even the legislature; he alone can put out this rule and decide that he can demand that any social media company allow third parties to moderate. And I should note here, in our last episode, Ben unveiled our bell, which he has. I don't have the bell. He has the bell, and he is off. So I will just note, ding, ding, ding: potential conflict of interest. I am on the board of Bluesky. Bluesky is the only social media company that currently allows you to choose your own moderators and pick alternative moderators, through our labeling system. So in theory, when he comes along and says every platform should have that, I should like this, because I like that. I wrote a paper that says you should be able to choose your moderation service. So him coming out and saying it should be something I agree with. But I do not agree with it, because I think companies should be free to make that determination on their own. If they want to do it, I think it's a good idea. I think it's a great idea. I think the way we're seeing it play out on Bluesky is very nice. But for him to come in and say the state can order you to do that, or say you are violating the state's deceptive and unfair practices law, I think that is wrong. That is dangerous. I think that is blatantly unconstitutional. And I think any real reading of the Moody case, or just an understanding of the First Amendment, says that he cannot do this, even as he is trying to. So I dunno if you had any thoughts on me ranting about that.

Cathy Gellis:

Oh, as usual, I generally agree with your rants. Um, I mean, one important point is: may is not the same as must. So, you know, to advocate that platforms start to use sort of modular content moderation tools in whatever way, yeah, I'm all on team that. Like, I think Bluesky's got a good thing going on in terms of how they're approaching it. But that's not the same thing as forcing every platform to do it. And what's kind of weird is how, except for Bluesky and maybe Mastodon, nobody's gonna be able to cope with it. Like, nobody else has architected themselves in a way that allows for that modularity. And although it'd be nice to encourage it, and maybe even incentivize it, you know, do it with a carrot, not with a stick. And this is a big stupid stick.

Mike Masnick:

And you know, the thing that gets me too is that Bailey has shown that he is very much on team Elon Musk. I mean, he launched investigations against Musk's enemies. Like, he launched an investigation, and got thrown outta court on First Amendment grounds, against Media Matters, who criticized Elon Musk, and who Elon Musk is suing, and all this stuff. And I'm wondering, does he think Elon wants to be forced to have alternative moderators on X?

Cathy Gellis:

I'm not sure Elon necessarily knows it's against his interest. He frequently does not have a good assessment of what actually is in his interest or not.

Mike Masnick:

as I think we just discussed earlier in this podcast.

Cathy Gellis:

Yes.

Mike Masnick:

But anyway, I think we'll wrap it up on that. We went rapid-fire through a whole bunch of stories that were sort of all linked in some form or another. And, uh, sometimes we didn't necessarily clearly segue from one story to another, but I think it all made sense in some form or another. Cathy, thank you so much for joining us and giving us your thoughts and wisdom on these issues, on internet speech and all of that kind of stuff.

Cathy Gellis:

Thank you for having me.

Mike Masnick:

And thank you, everyone else, as well. And we'll be back next week.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
