Ctrl-Alt-Speech

Move Fast and Mistake Things

Mike Masnick & Ben Whitelaw Season 1 Episode 65
Ben Whitelaw:

So you will have heard, Mike, that the big buzzy app in the Silicon Valley world is Cluely, right? This won't have passed you by. It's the kind of app that helps you cheat at work, helps you cheat at school, and it does that by, like, listening to what you're doing on your laptop, reading everything that goes on your screen, and then feeding you intel. It's pretty nuts. I dunno if you use it, but I know you know about it, and it's been at the center of some internet drama this week, which we're gonna come onto. But I wanted to use it to start today's podcast, 'cause they're a kind of crazy bunch of, you know, nuts, cracked devs. One of the prompts on their marketing pages is, like, take the short way. As in, do the easy thing. So what's your easy thing, Mike? How are you taking the short way?

Mike Masnick:

Well, I was gonna say, I think a lot of our stories this week are about what happens when you take the easy way, and why maybe it's not such a good idea to always take the short way: you may discover that certain nuances and complexities matter. So that's gonna be my little preview for today. What about you? What short way are you taking, Ben?

Ben Whitelaw:

I was gonna say the same. You know, I'm noticing it in the big story that I'll cover today, about how Meta was taking the short way by literally just pulling down anything that looked like CSAM, and we're gonna see the problems of that approach very shortly. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation and internet regulation. It's July the 10th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. This week we're talking about accusations of AI censorship, a user uprising against Meta, and good old-fashioned spam texts. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with my trusty co-host, Mike Masnick, founder of Techdirt. Mike, how are you? How deeply have you been embroiled in the internet drama of this week?

Mike Masnick:

It's been really, really fascinating to watch. I mean, clearly it's, I mean, it's such a stupid company, right? Like, you know, it came out of this guy who created an app to cheat on the programming tests that a lot of the big companies use. And, admittedly, some of those programming tests I think are pretty silly. But the company that is interviewing students takes over their screen so that, supposedly, they can watch how you code. And yet he created this overlay that is invisible to that. And he got kicked out of Columbia University, where he was a student, for doing that. He published a video showing him, like, getting an offer for an internship at Amazon after cheating. But everyone's like, wow, this guy's an asshole, and he's really leaned into that as, like, the marketing thing. And recently they raised a ton of money from Andreessen Horowitz, and at one point he said, I'm only hiring for two types of jobs: engineers and influencers, no other roles. Turns out it might be helpful to have someone who actually understands the law, uh, because,

Ben Whitelaw:

Yeah. What did they do this week? I didn't really understand the kind of psychodrama, but I saw

Mike Masnick:

So, so,

Ben Whitelaw:

the reaction.

Mike Masnick:

There's a security researcher who I know a little bit, named Jack Cable, who's a good guy; he does a lot of really good computer security work. And he reverse engineered the system prompt that they use for their AI, and that's always an interesting thing. So he figured out a way to reverse engineer it, and he posted what it was, and it was sort of revealing about how Cluely works. And then very shortly after, he received a DMCA takedown notice. We don't talk about it that much here on Ctrl-Alt-Speech, but the DMCA and copyright law is sort of the original content moderation tool that

Ben Whitelaw:

Yeah.

Mike Masnick:

every company learned, you know: you can send a notice claiming that someone is violating your copyright, and the platforms are heavily encouraged to then remove that content to avoid facing liability.

Ben Whitelaw:

It's like the big red button, right? It's like the kind of nuclear option.

Mike Masnick:

Yeah, I mean, you know, for a while it was the only sort of legal mechanism to remove any kind of content, and therefore it was also wildly abused by anyone. Like, people who didn't like a picture of themselves online would file a DMCA notice. Because of the sort of structural, legal nature of it, it was a way to remove content, and therefore it became sort of a tool for censorship, backed by the weight of potential threats of legal action.
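To make that incentive concrete, here is a minimal sketch of the expected-cost math behind notice-and-takedown over-removal. The dollar figures and probability are invented for illustration; only the $150,000 top of the statutory damages range for willful infringement is real, and the point is the asymmetry, not the numbers.

```python
# A minimal sketch (not any platform's real logic) of why DMCA
# notice-and-takedown tilts platforms toward removal: removing promptly
# preserves the safe harbor, while leaving content up risks statutory
# damages. All numbers are made up for illustration.

def expected_cost(remove: bool, p_claim_valid: float) -> float:
    COST_OF_REMOVAL = 1.0          # an annoyed user, maybe a counter-notice
    STATUTORY_DAMAGES = 150_000.0  # top of the statutory range per work
    if remove:
        return COST_OF_REMOVAL     # safe harbor applies either way
    return p_claim_valid * STATUTORY_DAMAGES

# Even if only 1 in 10,000 notices would hold up in court,
# removal is the "rational" choice for the platform:
print(expected_cost(remove=True, p_claim_valid=0.0001))   # 1.0
print(expected_cost(remove=False, p_claim_valid=0.0001))  # 15.0
```

With stakes that lopsided, take it down first and ask questions later is the rational platform response, which is exactly what makes bogus notices effective.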

Ben Whitelaw:

Yeah.

Mike Masnick:

Jack received a DMCA notice, which I think was sent to X as well, because he had posted the screenshots to X. And then Roy Lee, the founder of Cluely, responded to Jack Cable and said, we never filed this, bro. I mean, it's very, you know, tech bro speak, with laughing-crying emojis, saying, like, we didn't file this DMCA. And then he even said, like, it made me double-check with legal. So apparently they do have some sort of legal; maybe it's not all engineers or influencers.

Ben Whitelaw:

unless it's a legal influencer,

Mike Masnick:

it's possible.

Ben Whitelaw:

which, maybe there are a few of those out there.

Mike Masnick:

And he sent, like, I think it's a Signal chat, where he texted some legal person, who sort of joked about it and said, like, I don't think it's a good idea to C-and-D, cease and desist, this guy, 'cause he'll just post it and clout chase from it. Which does sound like influencer speak. But then, so the funny thing is, Jack responded and pointed out, like, yeah, look, the DMCA notice did come from someone at your company; it was signed by someone from your company who's an engineer. And then that guy, Kevin Grandin, posted on X and said, oh yeah, I did file a few of these. So, like, some random engineer at this cheating company sent a bogus DMCA notice to a security researcher, had the CEO deny that they had done anything, and then he comes out and is like, oh yeah, my bad, in hindsight I realize I probably shouldn't have. Like, you shouldn't have your engineers sending legal notices that have liability attached to them. You know, there is a part of the DMCA, even though it's rarely used, where if you send a false DMCA notice, you can face liability, and

Ben Whitelaw:

Wow. Kevin's in the shit, basically. Um, maybe I'll clout chase myself by sending him this episode of Ctrl-Alt-Speech and say, bro,

Mike Masnick:

Bro, yes,

Ben Whitelaw:

get up to speed on how online speech works.

Mike Masnick:

But it's, you know, it is an interesting example of how, again, copyright still underlies all of these things, and you have the AI stuff, and just all of this. I thought it was a very, very amusing story. And, you know, one reason why maybe you should hire more than just engineers and influencers, because, uh, yeah.

Ben Whitelaw:

Talking of which, Alice Hunsberger, who writes the Trust and Safety Insider newsletter, has done a really good piece this week about hiring fractional trust and safety heads, which is a bit of a trend at the moment, where as a startup you basically hire someone for, like, one or two days a week, or one or two days a month. Again, something that Roy and Kevin maybe could do with right now is an experienced trust and safety professional to guide them through these early challenges at Cluely.

Mike Masnick:

But, you know, the Silicon Valley bro culture these days is like, no, trust and safety is bad, man.

Ben Whitelaw:

Yeah, yeah, yeah. They've taken the, uh, Andreessen Horowitz juice; they've drunk too much Kool-Aid.

Mike Masnick:

enemies of progress.

Ben Whitelaw:

The enemies of progress. Well, you know, we are very pro progress here on Ctrl-Alt-Speech, and we have a whole raft of stories today to get through. Thanks, uh, for unpacking that drama for us, Mike. We have an awful lot to get through. I will say that if listeners log off before we do our roundup of stories at the end of the episode, the kind of quick and dirty stories we always fail to get to, then you'll miss my prompt and my ask to rate and review the podcast wherever you get your podcasts. It's super important for us to be discovered and to grow the listenership of Ctrl-Alt-Speech. We haven't had a review in a while, so do take a minute out of your busy days; we really appreciate it. Now, we don't cover the DMCA a lot, Mike, but we do cover how governments are trying to tell companies how to manage speech online, and the first story for today is really the latest in a long line of those stories, but it's almost like the AI edition of that. Um, when you mentioned that Andrew Bailey had got involved in the latest story about US content moderation, I was thinking of the Bank of England governor, and I'm glad to say that this Andrew Bailey is not the same one who sets interest rates here in the UK. Tell us, I mean, he might know about online speech, to be honest. Tell us what's been happening.

Mike Masnick:

Yeah, it is a common name. I think there's a Major League Baseball player named Andrew Bailey as well. So it's a fairly common name. This Andrew Bailey is the Attorney General of Missouri, and he has always been a sort of very MAGA-y type of person, very aggressive. He famously was the one who launched the investigation into Media Matters, when Media Matters published their story about advertisements on X appearing next to neo-Nazi content. And when Stephen Miller said, like, oh, this is criminal, Andrew Bailey rushed to the defense of Elon Musk and said, yes, we will investigate this for you.

Ben Whitelaw:

The Stephen Miller that's now Deputy

Mike Masnick:

As chief of staff

Ben Whitelaw:

of Staff at the White House.

Mike Masnick:

It all comes back together. So Andrew Bailey's been, uh, Attorney General of Missouri for quite some time now. He did not initiate the case; his predecessor initiated the case that was Missouri v. Biden, which then morphed into Murthy v. Missouri at the Supreme Court. This was, you know, the big case, which we talked about quite a lot last year, regarding the idea of whether or not the government could talk to social media companies about their content moderation practices, right? The whole crux of that case was that mere communication, any communication, between government officials and social media companies about their ranking and moderation was so far beyond the pale a violation of the First Amendment that it had to be blocked. And in fact, Missouri won in the district court under Judge Terry Doughty, who's also very MAGA, a judge sort of famously willing to rule in favor of any Trump issue, who released his ruling on July 4th, when the courts are normally closed. And it was this, like, patriotic nonsense thing about how Biden officials just communicating with social media companies, FBI people, CDC, anyone else communicating in any way, was the most egregious, biggest form of censorship he had ever seen.

Ben Whitelaw:

It got a lot of Republicans hot under the collar,

Mike Masnick:

Oh yeah. Yeah. And he issued this broad injunction that blocked basically any government official from ever talking to any tech company, and also even blocked talking to, like, the Stanford Internet Observatory, these non-parties. It was very, very, you know, Doughty said it was arguably the most massive attack against free speech in US history. Okay, keep that in mind. And eventually, right, this went up to the Fifth Circuit, which sort of rolled it back and said, like, nah, maybe not quite that far, but still left an injunction in place. That became Murthy v. Missouri, where the Supreme Court, in an opinion written by Amy Coney Barrett, was like, no, there's nothing in here that is an abuse of the First Amendment. There were some procedural aspects to it, but basically, this was a case where they totally misrepresented the evidence and everything. And you had both Justice Kagan and Justice Kavanaugh saying, like, wait, no, government officials talk to companies all the time, and that's natural; as long as there's no coercion, there's no problem. But anyway, all of this is framing, which becomes important, right? Like

Ben Whitelaw:

I'm excited. I'm really excited.

Mike Masnick:

the premise of this is that, you know, government officials can never do that. So, same Andrew Bailey, who, you know, didn't start the case, but he ran most of the case and talked over and over and over again about it. He puts out a press release, and he sent letters to four companies that run AI tools: Google, OpenAI, Microsoft, and Meta,

Ben Whitelaw:

Yeah.

Mike Masnick:

based on "research" and... you can see me, but people listening can't see me: I'm putting quote marks around "research," from

Ben Whitelaw:

Is this like Jim Jordan-style research?

Mike Masnick:

Yeah, this is even more ridiculous than Jim Jordan. Like, these guys aspire to do the level of research that Jim Jordan does. It's a far-right-wing extremist, nonprofit, nonsense-peddling group. You know, I was going through their webpage and it's just pure nonsense. They did "research" which consisted of going to six different chatbots and asking them a single question, saying: rank the last five presidents from best to worst, specifically in regards to antisemitism. So basically saying, rank the last five presidents based on how antisemitic they are. Three of the six engines that they went to ranked Donald Trump last,

Ben Whitelaw:

As in he was the least?

Mike Masnick:

The most. The most antisemitic.

Ben Whitelaw:

He was the most. Got it. Okay. Good to...

Mike Masnick:

Yes, yes, good to clarify. Two of them ranked him at the other end, saying he was the least antisemitic, and one of them said, I am not touching that. And

Ben Whitelaw:

Pass.

Mike Masnick:

I'm sorry, but I cannot respond to such an, you know, obviously loaded question. So Bailey sends letters to four of the companies: the three that ranked Donald Trump as the most antisemitic, and the one that refused to answer at all, though he gets confused in his letters about who's who.

Ben Whitelaw:

It's a lot. There are a lot of tech companies, I guess.

Mike Masnick:

This was not done thoughtfully or carefully. And he goes through this whole thing about how ranking Donald Trump last is objectively wrong, is consumer fraud on the people of Missouri, that it is anti-free speech, and that it is the companies trying to inject their worldview, and that because of that, they may lose their Section 230 protections.

Ben Whitelaw:

Because that's a full house of crackpot responses.

Mike Masnick:

It is beyond crazy. And the two companies that said Donald Trump was the least antisemitic were, perhaps not surprisingly, and perhaps we'll discuss more on this later, uh, Grok from X, and then also DeepSeek, the Chinese provider.

Ben Whitelaw:

Yeah. Okay.

Mike Masnick:

The one that refused to answer at all was Microsoft's Copilot. However, he did send a letter to them, and it's funny, because the letter to Microsoft says Copilot ranked Trump last and some other nameless engine didn't rank him at all. It says: of the six chatbots asked this question, three, including Microsoft's own Copilot, rated President Donald Trump dead last, and one refused to answer the question at all. No, it was Copilot that refused to answer the question, but they're so stupid they don't even realize that, Bailey or whoever wrote this letter for him.

Ben Whitelaw:

Poor guy. Poor guy. He was so close to landing a slam dunk.

Mike Masnick:

Yeah, though...

Ben Whitelaw:

If only he could count, or...

Mike Masnick:

It seems to imply that even saying, like, I won't answer that question, is some sort of consumer fraud. And they're claiming fraud, but it's not even explained; the letter doesn't even suggest a theory of fraud. It just kind of says, this is bad, and now I need you to give me all of this information. It says: did you ever have a policy or practice to design or coach your algorithm to disfavor or treat in a disparate manner anyone? You know, it asks for all the documents created around designing your AI system: provide all documents and communications about training and weighting anything resulting in ranking President Donald Trump unfavorably.

Ben Whitelaw:

Well, so what is his kind of constitutional claim here? Like, what is he suggesting, that the First Amendment should be kind of baked into these companies' code and outputs? Is that where he's going?

Mike Masnick:

You are assuming way too much: that there is, like, a legitimate theory behind all this, as opposed to just using the power of his office to be a jackass. The only things that he talks about, and it's funny, because he talks about this more in the press release than the actual letter. I think somewhere along the line they probably realized that with the letters to the companies, lawyers will actually look at them and respond to them, and if we go as far in the letters as we go in the press release, we're gonna be in trouble. But, you know, in the press release he talks about the Missouri Merchandising Practices Act, about deceptive business practices. So there's this sort of implicit argument that you are promising a neutral rating, and the next step to it is that any neutral rating of antisemitism among the last five presidents would have to rank Donald Trump as the least antisemitic. And he presents his argument for why, which is, I believe, cut and pasted directly from that crazy right-wing group that did this research in the first place: basically saying, like, Trump has done a few pro-Israel things, therefore he can't be antisemitic.

Ben Whitelaw:

Right, right, right.

Mike Masnick:

So, basically, at one point it even says, effectively, that it's an objective fact that Donald Trump is not antisemitic, so claiming that he is more antisemitic than the other four presidents is a violation of neutrality and is fraud. Because the implication, it doesn't go quite as far as saying this, but the implication is that you promoted publicly that you were neutral. And then he sort of ties it to the Section 230 argument, which is just also completely ass-backwards: that, like, 230 says you have to be neutral. So he's saying, because you are no longer neutral, you will lose your Section 230 protections, because you ranked Donald Trump this way. Which is not how Section 230 works, because you don't have to be neutral. Like, they actually put quote marks, as if it's from the law, around "neutral platform," which is not in the law. You don't have to be neutral, and in fact the whole point of the law is that you're not supposed to be neutral. Like, Chris Cox, the Republican who co-wrote Section 230, has said this, said this directly to me; I interviewed him recently, and he told me the whole point was to encourage platforms to moderate and not to be neutral, and to say, this kind of content is okay for us, this kind of content is not. That's the whole point of Section 230. And yet Bailey here is implying that you could lose your protections, which is not a thing; you don't lose them. Like, there are actions that are covered by 230 and there are actions that are not, or content that is covered by 230 and content that is not. There's nothing you could do to, like, lose your 230 protections.

Ben Whitelaw:

Right. Okay. So this is interesting. So there's kind of no basis to this. Is it gonna go anywhere, do you think?

Mike Masnick:

That's the question, right? Like, a lot of this is purely performative. And so, you know, he is playing a part; maybe he wants to get appointed to the Trump administration in some form or another, and he's making his play here as, like, the most aggressively stupid person you can be. But, you know, he can launch an actual investigation. He does sort of frame this as, this is not a demand, this is a request for voluntary help. But, like, when you receive something like that, the lawyers are going to get involved. I assume that they'll respond. But it could, you know, as he did with Media Matters, escalate into a CID, a civil investigative demand, which is a subpoena, at which point they would have to go to court and sort of try and prevent it. And then it becomes this fight: do you wanna be fighting with the Attorney General of Missouri? And it just gets in the news, and it allows him to get in the news and claim that these AI companies are biased against them. And here's the thing, too, and I think this is important to bring up, and the couple of stories I've seen on this aren't necessarily even talking about it: I would bet you that if you kept asking a lot of these AI engines the same question, you would get different answers. Because, again, they generate content. Sometimes they make up stuff. Sometimes they will rank him one way; sometimes they will rank him another way. You know, they're not pulling from a database. They're making up their answer based on stuff that they have access to, and they can think through stuff and they can reason. And it wouldn't surprise me if I actually did run that exact query on a few of these engines and got mixed results. But when I, I forget which one I asked, maybe it was ChatGPT, it gave me back a long response. It did also rank Donald Trump as the most antisemitic, but it came with a long list of reasons why, you know,
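To illustrate Mike's point about run-to-run variation, here is a toy sketch of why the same prompt can come back with different rankings: chat models sample each token from a probability distribution rather than looking an answer up in a database. The vocabulary and probabilities below are invented for illustration.

```python
import random

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token. Higher temperature flattens the distribution;
    temperature near zero approaches deterministic argmax."""
    # Rescaling probabilities by p ** (1/T) matches softmax(logits / T).
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Pretend the model is picking which of five presidents to rank last:
probs = {"A": 0.45, "B": 0.25, "C": 0.15, "D": 0.10, "E": 0.05}
print([sample_next_token(probs) for _ in range(5)])
# Different runs print different sequences. An AG treating any single
# sampled answer as the company's official "ranking" is chasing noise.
```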

Ben Whitelaw:

Right. Okay.

Mike Masnick:

with links,

Ben Whitelaw:

And that's obviously helpful to some degree. There's a risk, right, that attorneys general in other states will do something similar. This could be a kind of recipe for making a point and signaling to the Republican administration that, you know, maybe I'm ready for the next step up.

Mike Masnick:

Yeah.

Ben Whitelaw:

that's, I don't wanna be talking about this on every single podcast,

Mike Masnick:

Yes.

Ben Whitelaw:

but I feel like we already talk about the kind of Twitter Files-style, dreamt-up dramas that, you know, politicians make in relation to platforms. But I can see this being replicated time and time again.

Mike Masnick:

Yeah, and the thing is, too: if he gets away with this, there's no reason why it has to only be one side, other than, like, shame, right? You know? You could have, say, an attorney general from a Democratic blue state going after these companies for providing anti-abortion information, or, like, pro-gun-rights stuff, or something of that nature. Where, you know, they'd say, oh, you're promoting violence through information about guns, for example, right? And then Republicans would be upset about that. The whole idea is that the government's not supposed to be engaged in policing speech in any way, which was, in theory, though it was ridiculous, the point of the Missouri v. Biden case. The level of hypocrisy in this is that he cites that case as the reason why he's sending these letters, even though what he's doing in these letters is way worse than anything that they accused the Biden administration of, which, again, was sort of thrown out by the courts. So everything about this, the only principle here is: free speech is when you say nice things about Donald Trump, and against free speech is anything that is against Trump. It's an insane level of nonsense.

Ben Whitelaw:

Yeah, and you forget, Mike, that politicians, unlike some of these LLMs, have very bad memories and, you know, will disregard Supreme Court judgements if it suits them to do so. So this is really interesting, and I'm glad to know that Andrew Bailey, Governor of the Bank of England in the UK, is not involved. That's very helpful. Or that any baseball players are not involved

Mike Masnick:

Yes.

Ben Whitelaw:

either. That would make things much worse. Um, okay, onto our next story. Now, this is another AI-related story, but this concerns platforms that many people will know. It's about Meta, and about dozens and dozens of users who have complained to the company and to the media about having their accounts taken down for breaching rules on child sexual exploitation, or child sexual abuse material, CSAM. These people, a hundred of whom have spoken to the BBC, which reported the story this week, say that they have been incorrectly accused of posting CSAM on Instagram and Facebook and have had their accounts taken down, to great stress. Some people have talked about feeling incredibly isolated, being very shocked about this: just getting a notification that they've posted CSAM, not really knowing what that means, and then having the account disappear and not being able to log in as a result. Some people have also had business accounts taken down, and for them that's a means of making money, part of their livelihood. And so there's a big group of people, seemingly because of something that Meta has done with its CSAM AI tooling, who've been on the wrong end of this. It also comes at the same time as around 27,000 people have signed a petition asking Meta to fix its AI moderation and to have a better appeals process. And so what we're seeing is a kind of swelling of users who are not happy with, basically, moderation decisions that have been made, and that is a big deal in itself. We haven't really seen anything to this extent, I would say, certainly in the years that I've been covering this for Everything in Moderation since 2018: this many people basically being clued into what, you know, should and shouldn't be allowed when it comes to content moderation decisions. There's a couple of other parts to this that I think are really interesting, Mike, but what did you make of these stories? And do they feel new to you in the way that they do to me?

Mike Masnick:

No, to be honest, this sort of felt inevitable to me in some sense, right? So, like, this is the nature of content moderation, which is that you're constantly making errors, and it's type one or type two errors, false positives, false negatives. And, you know, the pressure over the last few years, for obvious reasons, and we should be really clear on this, for obvious reasons, right? There's pressure on the platforms to deal with CSAM, which is a problem that is on these platforms. But the natural end result of that is that, because every few months there's a story in, let's say, the Wall Street Journal saying that, oh, Meta is not taking CSAM seriously enough, they're going to keep cranking the dials to make sure that nothing is missed. And when you do that, when you crank those dials, you're going to catch other people who are not actually sharing CSAM, but you're going to accuse them of sharing CSAM. And so these are the trade-offs, right? Trust and safety is trade-offs and sadness, right? Like, this is one of the things: either you're catching too much or you're catching too little, and you're gonna get in trouble for either of them.
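A minimal sketch of the dial-cranking trade-off Mike describes, with invented scores: imagine a scanner that assigns each upload a risk score, and the platform removes everything above a threshold. Lowering the threshold catches more real CSAM and, at the same time, sweeps in more innocent users.

```python
# Invented scores for illustration; the pattern is general.
violating_scores = [0.95, 0.88, 0.72, 0.60]              # actual violating uploads
innocent_scores = [0.05, 0.12, 0.30, 0.55, 0.65, 0.70]   # innocent uploads

def errors_at(threshold: float) -> tuple[int, int]:
    false_negatives = sum(s < threshold for s in violating_scores)  # missed
    false_positives = sum(s >= threshold for s in innocent_scores)  # wrongly accused
    return false_negatives, false_positives

for threshold in (0.9, 0.7, 0.5):
    fn, fp = errors_at(threshold)
    print(f"threshold={threshold}: missed {fn} violations, "
          f"falsely flagged {fp} innocent users")
# threshold=0.9: missed 3 violations, falsely flagged 0 innocent users
# threshold=0.7: missed 1 violations, falsely flagged 1 innocent users
# threshold=0.5: missed 0 violations, falsely flagged 3 innocent users
```

There is no threshold in this toy data that gets both error counts to zero, which is the "trade-offs and sadness" point: every choice of dial setting picks which headline you'd rather see written about you.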

Ben Whitelaw:

But I think what is different about this, I would just say, is the extent to which people seem to be mobilizing. Like, I don't think I've ever seen a petition where, you know, anybody really, let alone 27,000 people, has called for something to be done about this. At least, from memory; there may have been...

Mike Masnick:

Yeah. I mean, there are petitions around; like, people definitely get mad about bad content moderation decisions, and you do see complaints, or you see, like, change.org petitions and stuff like that. So, I don't know. I mean, the difference here is that people were accused of CSAM, which is, you know, horrible, and, you know, I'm almost surprised there haven't been defamation lawsuits, which wouldn't work in the US but might work elsewhere, around this kind of thing. Because, like, why'd you lose your Facebook account? Well, Facebook says I was sharing CSAM. I'm like, that's pretty bad.

Ben Whitelaw:

Very, very damaging for people's careers and for their lives. I mean, there's a kind of line at the end of the stories, which many of our listeners will know about, where, where there are cases of child exploitation, the platforms have to report it to the National Center for Missing & Exploited Children, NCMEC, in the US,

Mike Masnick:

we, we've talked about on the podcast

Ben Whitelaw:

We talked about it at length, and we've had some very good guests come and talk about it. And one of the questions I had was: if a user's having their account suspended or taken down for a period of four hours plus, and it's often weeks in some cases, are those people then referred to NCMEC? And to what extent are their details on a database of people who potentially might be accessing CSAM? Do you know the answer to that?

Mike Masnick:

Not directly, and not clearly, but it does seem that it is likely, right? Meta is the largest reporter to NCMEC by far. This is often used against them, even though it's, like, evidence of them actually doing what the law requires. But they are very clearly very quick to report content to NCMEC, including details about the accounts that they're accusing of sharing CSAM. You know, it's a little unclear, and we don't know from this reporting what is making the determination here, but it does seem that if someone uploads something that they think is CSAM, they will send that to NCMEC, including details of the accounts associated with it. And there's been all this talk, including, we talked about it on the podcast, there was the report that came out of Stanford about NCMEC having trouble understanding which reports are real and serious and which are not, and how that problem is only getting more and more serious in the age of AI. Because they're getting AI-generated reports, which are still serious, but serious in a different way: there isn't a child that needs to be rescued. And sort of determining which one is which is important. But now, if you're also getting a situation where Meta is reporting people who aren't actually sharing CSAM, then NCMEC and/or their law enforcement partners have to investigate someone based on a bogus report, and that's a huge waste of resources as well. And again, it takes them away from the actual problems.

Ben Whitelaw:

Yeah, I mean, this is an obvious point, but you do remember back in January, when Mark Zuckerberg recorded a video...

Mike Masnick:

How could I forget, Ben?

Ben Whitelaw:

in which he said that, uh, this year and ongoing, this was gonna be the year of more speech and fewer mistakes.

Mike Masnick:

yeah. About that.

Ben Whitelaw:

And about, yeah, about that. So even, you know, this is sad in a way, but even the kind of violative content that he said in that video he was gonna focus on, that his platforms were gonna focus on, the illegal stuff, CSAM, terrorist content, even that stuff there are difficulties detecting. So there's a kind of grim irony to this story as well. I am somewhat excited about the user organization, though. I'm excited because it comes alongside this potentially game-changing article in Europe, Article 21 of the Digital Services Act, which we've talked a little bit about before, and which Alice has written about on Everything in Moderation. Essentially, it's an article that allows users to appeal moderation decisions in a kind of out-of-court situation. So rather than having to go to court and pay expensive legal fees and get lawyers to challenge these decisions, there is a lightweight, albeit non-binding for the platform, cheap way to challenge decisions about account takedowns such as this. And, again with great irony, this week we saw two of the biggest out-of-court dispute settlement bodies, the Appeals Centre Europe and User Rights, expand the countries and languages that they cover, and also the platforms that they cover. So you have these two forces at play: you have seemingly more people being caught by AI filters and systems and having accounts taken down, and you have this growing, I guess, ability for users to have those decisions addressed through the out-of-court dispute settlement bodies. So I'm wondering how that will play out. I'm kind of excited by that, in a way.

Mike Masnick:

Yeah, no, I think, you know, this is the advancement of things, where mistakes are made, but figuring out how to deal with those mistakes becomes really important. And so we're seeing that evolve over time. And, you know, there's always this push and pull between, like, how powerful are the companies versus how powerful are the users. In the long run, I'm always gonna be on the user side and putting more power in the hands of the users. And so I think that is an interesting and important development to watch.

Ben Whitelaw:

Yeah. Okay. Onto our next story. Now, this is also an AI story. It's a familiar name and a familiar platform to folks, but I'm afraid this week you've got another "Grok does something weird and, you know, racist" story, I'm afraid. Um,

Mike Masnick:

we had this

Ben Whitelaw:

talk us through what it's done now.

Mike Masnick:

We had the discussion before we started of, like, could we not cover this? But it feels like you sort of have to cover it, right? So, Grok, which is X's, or xAI's, which is now the same company, AI tool. Elon Musk announced that it was being improved. He had complained recently that it had gotten too woke; it was giving answers that were criticizing Donald Trump and right-wing extremism, and he kept jumping in saying, this is wrong, we can't have this. And so they made some edits to the system prompt. And similar to back in May, there was an incident where it suddenly linked everything to white genocide.

Ben Whitelaw:

Mm,

Mike Masnick:

Um, yeah, it suddenly went full-on Nazi, and it literally said to refer to it as MechaHitler, and just started spewing really pretty horrific antisemitic things. And anytime people were calling it out on it, it was like, I'm just telling the truth, man. It was, like, the extreme sort of Daily Stormer-esque, 4chan-esque, you know: why are you complaining about my antisemitism? I'm just talking about what I see. And it was as if the whole thing was completely trained on these crazy right-wing forums, Gab and the Daily Stormer, and it was pretty bad. I mean, to the point where it was like, yeah, call me MechaHitler. And eventually they pulled it down, and they posted a note saying, like, there were some mistakes and we're correcting them, and they pulled the whole service down. But this is the thing that happens. And I wrote a little bit about this as well, in terms of, like: it's important for people to realize that any AI system that you use has biases in it, and that's always gonna be the case. Like, there's bias in the training data, there's bias in the system prompt, there's bias in how the system is set up. And there's a really, really good book about this, by the way, called The Alignment Problem, by Brian Christian. And it talks about how there's always bias, but the question is, how aware are people of the bias, and how do you counteract it? How do you deal with it? And any change that you make to try and make it unbiased biases it in some other way. This is an extreme example of that, where in trying to make Grok less woke, it turned into a Nazi. But it's, like, this reminder that centralized systems are under the control of someone, and that someone can tweak the dials in ways that may not be conducive to good stuff. And this is still free speech; it's not a thing that, say, a Democratic attorney general could then go and send threatening letters about, though European officials have already indicated that they might send angry letters, 'cause the laws are a little bit different in the EU. But it's worth thinking about. As horrific as the speech was that it presented, it is a very clear reminder that these tools are not neutral, that they are biased, that there are biases implicit in everything that they do, and it's important for users to understand that as well.
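For a concrete sense of the dial-tweaking here, this is a minimal sketch of how a single system-prompt edit steers every downstream answer. `query_model` is a hypothetical stand-in for any chat-completion API, and both prompts are invented for illustration (the second loosely echoes reporting on the Grok change, but don't treat it as the actual prompt).

```python
# Hypothetical sketch: `query_model` stands in for a real chat API call.
def query_model(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError("stand-in for an actual LLM call")

BASELINE_PROMPT = (
    "You are a helpful assistant. Present multiple perspectives "
    "and cite sources where you can."
)
TWEAKED_PROMPT = (
    "You are a helpful assistant. Do not shy away from claims that are "
    "politically incorrect, as long as you consider them well founded."
)

question = "Summarize the debate over content moderation."

# Same model weights, same user question; the only change is one
# sentence of system prompt. That single sentence shifts the framing
# of every answer the model produces, which is why who controls the
# system prompt matters as much as the training data does.
for prompt in (BASELINE_PROMPT, TWEAKED_PROMPT):
    try:
        print(query_model(prompt, question))
    except NotImplementedError:
        pass  # no real model wired up in this sketch
```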

Ben Whitelaw:

Yeah. And in the same way we talk about kind of platform liability, you know, we are kind of moving, as we've talked about in all of these related stories, towards kind of AI output liability, where there is a push for companies to be responsible for the outputs of their systems. And buried in that story, that FT story you talk about, Mike, there's actually a line about how Turkish courts blocked access to some Grok-generated content because it said,

Mike Masnick:

It

Ben Whitelaw:

you know, insulting things about President Erdogan. Great guy. Love his work.

Mike Masnick:

The thinnest-skinned... Erdogan may be even more thin-skinned than Donald Trump. And so, but yes, they ended up blocking all Grok content, because it criticized him as well.

Ben Whitelaw:

Yeah. So, so yeah, we are, you know, in this kind of shift, and there are, I think, increasing similarities between the world that we've inhabited for a while, the kind of platform world, where we were expecting platforms to be neutral, and then we realized that they weren't. And now we're kind of speedrunning our way through the fact that we thought AI systems and LLMs might be neutral, and hey, guess what? They're not.

Mike Masnick:

And I think, like, I think this is important, right? Because there are people who treat them as if they are neutral, that, you know, oh, you can ask it a question and it'll give you a neutral answer. And I think this is part of the theme of this podcast: they're not, and they're never going to be. That's not a thing that is possible. Everything is going to be biased in some form or another, and understanding those biases is important. Most of the time it's very subtle, and, you know, that's where it's much riskier. In this case it's so extreme that it becomes obvious. But we shouldn't let that pass by without using it as a lesson: all of these tools have biases built in, as do humans, as do companies, as does everything. And assuming that there is some sort of concept of neutrality here is a mistake in the first place.

Ben Whitelaw:

Yeah, agreed. We've got a couple of kind of, you know, safety tech stories to wrap up with today, Mike. One of them is a little bit of a crow by me, I will say. A little bit of a...

Mike Masnick:

Yes. Take credit.

Ben Whitelaw:

Yeah, I think this is fair. This is a story that one of our listeners alerted me to. Thanks, Toby, for pinging me on X. I hadn't logged into X for a while, but, uh, I'm glad I did. So this is a story from Crikey, an Australian publication. You might remember, a few weeks ago I was talking to Bridget Todd about an early view of a big age verification report coming out of Australia that the eSafety Commissioner was kind of overseeing. It was an overview of all the different age verification tech in the country, and it was gonna give a view about how it could help facilitate the social media ban that's coming in later this year, and a bunch of other things. And I said, as Toby kindly points out, that it felt like a kind of report that was marking its own homework; the kind of report that you would put out before a much longer pat-on-the-back kind of report. And that report has now been leaked to Crikey, and I'm afraid to say I was right. Um, so a draft of this report has led some experts to say that the preliminary findings, the ones that Bridget and I talked about, were overstated, and that the trial's claims verge on misrepresentation. So again, this is, like many things, a word of warning about safety tech in general, and, you know, about the need, I guess, for regulators to show progress in terms of setting up safety tech regimes and pushing forward in a way that is politically very appealing for voters. You know, that's been a big thing in Australia over the last few years: voters are very pro underage users not having access to certain types of content, and age verification is a way to get there. So allow me this couple of minutes, Mike, to say that my spidey sense was correct, and that, actually, we need to make sure we dig into the full report before we decide.

Mike Masnick:

Yeah. And, you know, uh, to some extent, right, I mean, I'll be upfront: this report confirms my priors, in terms of, like, I am skeptical of age verification technology and the quality of it and the reliability of it. And the interesting thing to me, out of the details that Crikey has here, is it's suggesting that the study that was done does not really support the idea that this technology is reliable, and yet...

Ben Whitelaw:

Mm.

Mike Masnick:

they're trying to present it as, oh yeah, we tested it and it's great, and then the details are not so much. There is a large constituency of people who really want to believe that this technology can work, and over and over again, tests keep showing, like, not really. And, like, it may be useful in some cases, but people are trying to push it to do a lot of things that it cannot. And as much of the rest of this podcast has suggested today, relying too heavily on technology to handle these kinds of things can lead to really bad results. And, you know, I think people should be more honest about the limitations of these technologies.
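As a rough illustration of why "mostly works" isn't good enough here, consider the arithmetic at population scale. Both numbers below are assumptions for illustration, not figures from the Australian trial.

```python
# Back-of-the-envelope arithmetic on age assurance error rates at scale.
accuracy = 0.95           # assumed per-check accuracy, for illustration
daily_checks = 1_000_000  # assumed checks per day across platforms

misclassified_per_day = daily_checks * (1 - accuracy)
print(f"{misclassified_per_day:,.0f} wrong calls per day")  # 50,000

# Some of those are minors waved through; others are adults locked out.
# Either error type lands on a real person, every single day.
```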

Ben Whitelaw:

Yeah, certainly. It's almost like we thought about which stories we should put together in this podcast, Mike,

Mike Masnick:

What?

Ben Whitelaw:

who'd have thought it. Um, and the way in which we've ordered them...

Mike Masnick:

no, no. This all comes naturally. This just flows.

Ben Whitelaw:

Don't tell people that's how it works.

Mike Masnick:

Don't pull back the curtain.

Ben Whitelaw:

Um, talking of the kind of murky world of third-party technology and solutions: you found a story from Ireland that is very similar, but it's about old-school SMS. I love it when SMS comes up in online speech conversations.

Mike Masnick:

Trust and safety and content moderation affect all kinds of speech.

Ben Whitelaw:

Yeah, every vector and surface going.

Mike Masnick:

So, yeah. In Ireland, you know, scam texts are an issue, right? And so everybody wants to do something about scam texts. So in Ireland they have a crackdown on scam texts, where there's a requirement for labeling texts that might be scams. So people are receiving texts now that say "likely scam." And, you know, I have this on my phone, where phone calls will come in with "likely scam," and text messages too will sometimes be labeled as potentially spam. I love the fact, by the way, that somehow I got on some politician fundraising list, and my SMS provider has decided that those are mostly spam. Wonderful. Perfect. But this is about flagging them as scams, and there's been all this concern, because it's a mess. Because how do you label something as a scam? And is it right? Or, you know, how can you trust it? And so people are saying that legitimate messages from, like, medical providers are being labeled as scams, and that's making people not pay attention to them, which is a real problem. And it just gets at this whole thing of, like, making these decisions, content moderation, is impossible to do well at scale, and here you're trying to do it within these kinds of, uh, text messages. People are responding to it and getting angry that legitimate ones are being labeled scams. And then I also found this article by Paul Walsh, who was warning that the scam labeling was gonna create problems because it would lead to excess trust. If you receive a text message that isn't labeled as a scam, people are gonna think, well, oh, okay, it's not flagged as a scam, therefore it must be legit. And there's now this wide-open spot for actual scammers to abuse that: figure out ways to send messages that aren't labeled as scams, leading to more trust, because people become over-reliant on the labeling system. And so there are problems in both directions. It's back to type one, type two problems, right? Like, all of these things have trade-offs. And training people that the technology is magic, that you can take a shortcut and cheat your way through the system, if the system says this is labeled as a scam then it must be a scam, and if it doesn't then it must not be, leads to both of the kinds of problems that we're talking about: legitimate texts are being labeled as scams, and then actual scammers are gonna be able to increase the power of their scams, because they can send messages that aren't labeled that way.
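Paul Walsh's over-trust worry can be put in numbers with Bayes' rule. Every rate below is invented for illustration; the point is the shape of the result, not the figures.

```python
# Question: if a text is NOT labeled "likely scam", what's the chance
# it's still a scam? All rates are assumptions for illustration.
p_scam = 0.05              # assumed: 5% of all texts are scams
p_flag_given_scam = 0.80   # assumed: the filter catches 80% of scams
p_flag_given_legit = 0.10  # assumed: 10% of legit texts mislabeled

# P(not flagged) = P(not flagged|scam)P(scam) + P(not flagged|legit)P(legit)
p_not_flagged = ((1 - p_flag_given_scam) * p_scam
                 + (1 - p_flag_given_legit) * (1 - p_scam))

# Bayes: P(scam | not flagged)
p_scam_given_not_flagged = ((1 - p_flag_given_scam) * p_scam) / p_not_flagged
print(f"{p_scam_given_not_flagged:.1%}")  # ~1.2% of unflagged texts are scams
```

The catch is the second-order effect: as scammers learn to evade the filter, the catch rate drops and the share of scams among unflagged texts rises, while users, trained by the label, trust the absence of one even more.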

Ben Whitelaw:

Yeah, and there are some really good examples in this piece of people who've bought tickets for, like, a sports game, who then get a text to say, here are your tickets, or here's your confirmation. But because it's come from a third-party provider that the sports team is using, and that provider is unregistered on this new SMS system, that's why it gets labeled as likely spam or scam. And so, like, again, it's the way that these kind of safety tech companies get inserted into the process of trying to create a safe environment for users, but it's just never that straightforward. And, like you say, that text gets deleted, and then you've gotta go and find your tickets for your sports game.

Mike Masnick:

Yeah, it's, you know, it's a mess. And it's just this nature of, like, we become over-reliant on the technology, and people sort of assume that the technology, whether it's neutral or not, knows something and must be right. And we become over-reliant on it in ways that become problematic.

Ben Whitelaw:

Yeah. You haven't mentioned media literacy, but I feel like you normally would at this stage, so I'm gonna...

Mike Masnick:

I got yelled at for mentioning media literacy last week. There was a comment thread on Techdirt from someone who was very mad at me about saying media literacy, because he says, we don't have time for that. And I said, well, that's... yeah, but you gotta start now, right? I mean, like, there is no magic wand. All of these things are really difficult problems, and if you say, well, media literacy takes too long, you're basically just delaying the inevitable, which is that eventually you're going to have to get back there, and media literacy is going to be a part of the solution, and you have to get there.

Ben Whitelaw:

Yeah, I love that. It's like, it's gonna take too long, so don't bother starting.

Mike Masnick:

Yeah. Yeah. So frustrating.

Ben Whitelaw:

Okay, well, um, that's a little call to action for listeners. If you want to shout at Mike or me, go to Techdirt. There is a specially designed thread, and a very excellent comment system, actually, that Mike has honed over the years, where you can shout at us, and Mike may or may not shout back at you, if you're lucky, depending on how upset he gets. And, uh, you know, we're also available on other platforms for you to shout at us too. Anything else you wanna call out today, Mike? Any other stories? Any other pieces that we missed?

Mike Masnick:

Uh, no, I mean, I think that's good. I mean, there's always other stories; we could keep going. But I think we covered a lot of stuff today. It did have a theme, and just to be clear, we don't plan out, like, a full theme, and we didn't plan out a full theme; it just worked out that a lot of the stories this week sort of matched a theme. So.

Ben Whitelaw:

It's the magic of Ctrl-Alt-Speech. Um, I will say that we have a very exciting live edition of Ctrl-Alt-Speech taking place in a couple of weeks,

Mike Masnick:

Yes. TrustCon...

Ben Whitelaw:

that's worth noting.

Mike Masnick:

If you are attending TrustCon, we will again be the closing session on Wednesday afternoon. I think it's four o'clock in the main ballroom. It was so much fun last year. This year, Ben, you will not be there, but we have other guests, and they're going to be wonderful, and we'll have a really, really fun discussion.

Ben Whitelaw:

Yeah, it was a real blast. I'm gutted not to be there, but it's my wife's birthday and, you know, my marriage is at stake. I can't miss it twice in two years, so I will watch from afar.

Mike Masnick:

And you have a new baby, and, like, there's no reason for you to travel halfway around the world. And coming to the US right now is a little bit fraught anyway,

Ben Whitelaw:

Yeah, yeah. I'd have all my social media profiles, you know, scanned and searched within an inch of their life. Uh, maybe I wouldn't be allowed in anyway. Um, brilliant stuff. Thanks, Mike. Thanks for taking us through those fantastic stories. Big shout-out to all the outlets that we've drawn on today and that we talked about: the BBC, the Times, Crikey, the Independent. You know, we've covered a whole bunch of different countries. Make sure you subscribe to Everything in Moderation, and go and check out Techdirt; Mike has written a couple of excellent pieces on some of the stories we've covered today. They'll be in the show notes, and we'll see you all next week. Thanks for tuning in. Take care.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
