Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Let Fly the Claudes of War, with Casey Newton
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Casey Newton, founder and editor of Platformer and co-host of Hard Fork, a podcast that makes sense of the rapidly changing world of tech. Together, they discuss:
- After a deadly raid, an AI power struggle erupts at the Pentagon (Washington Post)
- Following: Anthropic vs. The Pentagon (Platformer)
- Anthropic Drops Flagship Safety Pledge (TIME)
- Hackers Expose Age-Verification Software Powering Surveillance Web (The Rage)
- Discord is delaying its global age verification rollout (The Verge)
- Reddit fined £14m by UK data watchdog over age verification checks (BBC News)
- How to evaluate Trust & Safety vendors (Everything in Moderation*)
- Regulate platforms, not children – Commissioner urges caution over social media bans (Commissioner for Human Rights)
- MPs reject total ban, want data housed locally (The Star)
- Exclusive: US plans online portal to bypass content bans in Europe and elsewhere (Reuters)
Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
So Casey, how familiar are you with your very early stories that you wrote on The Verge?
Casey Newton:You know, I would like to think I'm relatively familiar with them, but it would probably be possible to stump me on one or more of them, I would say.
Ben Whitelaw:Okay, here's my attempt to do so. We're going to use an early story that you wrote on The Verge back in 2013 as the start of today's Ctrl-Alt-Speech. It was an excellent story, a fantastic story, you did about a notes app called Drafts. It's still going, surprisingly, still going strong, and has won multiple awards. I don't use it myself, but you know, you probably brought it a lot of users with your Verge story. We're gonna use the prompt that it has on its user home screen, which sounds a bit like a Donald Trump foreign policy call to arms, but I promise it's directly from the app. Okay, you ready for this?
Casey Newton:Let's do it.
Ben Whitelaw:The prompt is "capture everywhere." So I want you to think of what you would capture, now or in the future.
Casey Newton:Yeah, well, I would've said Nicolás Maduro until recently, but, um, the US government beat me to the punch there. I did capture a fiancé over the weekend, which was really great. I got engaged, so,
Ben Whitelaw:Wow.
Casey Newton:so that's been a very happy, happy thing in my life.
Ben Whitelaw:Congratulations.
Casey Newton:Thank you. But you know, really, Ben, I'm just trying to capture the moment, whether it's in Platformer or Hard Fork. We're living at a time that feels quite urgent and scary and bad a lot of the time, and so I'm trying to capture that, but hopefully in a way that at least some of the time can calm people and make them feel empowered instead of making them feel anxious and hopeless. So that's what I'm trying to capture, day to day. How about you?
Ben Whitelaw:Well, I think "capture everywhere" is a forewarning of one of the stories we talk about today. I mean, I'm not gonna take a similar approach to Persona, the infamous, somewhat controversial age verification tool. But, you know, I wouldn't be surprised if one of their mottos is "capture everywhere," because boy, have they got in trouble for that.
Casey Newton:Oh yeah. Watch out for those guys.
Ben Whitelaw:Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's February 26th, 2026, and this week we've got a platform-focused week. We're talking about Anthropic rapidly rewriting its policies, Discord hastily doing a U-turn on age verification, and a not-so-fast UK fine for Reddit. My name is Ben Whitelaw and I'm the founder and editor of Everything in Moderation. As you will have already heard, Mike is jet-setting again. He sends his regards, but in his absence we're very lucky to have Casey Newton, who is the founder and scoop machine at Platformer, a very, very excellent newsletter, and the co-host of the Hard Fork podcast, which many of our listeners will have heard about and listened to. Welcome to the podcast, Casey.
Casey Newton:Thank you so much, Ben. It is great to be here, and, you know, anytime I can sort of drown out Mike Masnick by replacing his voice with my own, it feels like a triumph. So I'm thrilled to be here.
Ben Whitelaw:You were the number one Californian we could replace Mike with. You know, we couldn't have another Brit at this point; we needed some contrast. You were the first person I thought of.
Casey Newton:I, well, I'm glad to be your backup Californian today.
Ben Whitelaw:Well, we've got a lot to get through today. I'm really glad to have you here. What's been happening in your world before we dig into today's stories?
Casey Newton:Yeah, let's see. I mean, honestly, we're going to be talking about the thing that is biggest in my world, which is Anthropic versus the Pentagon. Before that moment happened, I would say what's been happening in my world is that the world is just waking up to some of the issues I've been trying to raise in my own work for a few years now. One of those is just that AI can be incredibly dangerous and harmful. It can be harmful in ways economic and military, and I think we've seen both of those over the past few weeks. To me that is an extension of a step change in capability that we saw in November with the release of Claude Opus 4.6. But Claude was not alone: Gemini, made by Google, and OpenAI have also released very powerful models in recent months. So I just see an increasing rate of acceleration in AI that is starting to have ripple effects in the real world, and that's where my head is at now most of the time.
Ben Whitelaw:Yeah. And you talked about the lens you've been applying to your coverage over the last few years on Platformer. Just give listeners a sense of how you try to cover the broader technology and social media platforms, because you go back a long way in terms of covering these companies, and we're obviously starting to see a new set of behemoths, technology behemoths. How do you think about that shift now?
Casey Newton:Yeah, I think that, you know, any publication evolves alongside the news cycle, and whatever you started writing about probably won't be exactly the same five years later. Platformer is now a little older than five. I think if I had to draw a through line through everything, it's that Platformer is always interested in frontiers in tech policy. What is the hard question that no one has had to answer yet? How does the arrival of a new technology suddenly make that question very urgent? What are the trade-offs that will be involved in answering it? What would be good ways to answer it? What would be bad ways to answer it? That's how I see my work most of the time, day to day.
Ben Whitelaw:Mm. And you've had some brilliant stories recently. There was an excellent story about OpenAI last week, and some of the personnel changes in the safety team there. I'm a proud subscriber to your work, and I encourage everyone to go and do the same. I think the independence with which you approach the topic is great. And actually, we're gonna be talking about you next week, because Mike and I have already recorded next week's episode, and you're actually mentioned,
Casey Newton:Is that right?
Ben Whitelaw:in that context. Yeah. So next week we don't have a regular episode of Ctrl-Alt-Speech. Mike and I have done a pre-recorded episode about the three eras of content moderation coverage in the media, and Casey has been a big part of that. We chronicle, from around 2003 to the modern day, the three eras that I feel have evolved over that time. It's all part of a new book chapter that I wrote for an upcoming book that's published in the next couple of weeks. So, Casey, you are here today in person. You're here next week in
Casey Newton:yeah,
Ben Whitelaw:and it's great to have you here on both accounts.
Casey Newton:That's great. Well be careful, you know, if you say my name three times, I will appear. So just keep that in mind as you discuss me on your episodes, Ben.
Ben Whitelaw:Okay, great. Um, especially if Masnick's getting on my nerves again. Um, how do you feel about bingo, Casey?
Casey Newton:I, uh, am not particularly good at it, but who doesn't love a night at the Bingo Hall?
Ben Whitelaw:Exactly, I agree. And so that was why we created our Ctrl-Alt-Speech 2026 Bingo card. Our listeners can play along. I'm sure, Casey, you play along when you listen to the pod.
Casey Newton:Absolutely.
Ben Whitelaw:Never winning, though. You can go and play along, listeners, at ctrlaltspeech.com/bingo, and that will be an ongoing part of the listening experience here. We've done enough intro, we've done the preamble, so let's get into today's stories. Casey, one of the things you write at the top of every edition of Platformer is the disclaimer, the famous disclaimer. Do you wanna give listeners that disclaimer in full? Because it very much relates to our first story today.
Casey Newton:Very much, and it unfortunately does relate to many of the things we'll be talking about this week. My fiancé does work at Anthropic. He does not give me any inside information about the company. I arrive at my views independently. Our finances are separate. You can read my whole ethics disclosure at platformer.news/ethics, which I encourage you to do. But yeah, I always wanna make sure people know that before I start running my mouth about these things.
Ben Whitelaw:Yeah, I like that. Is that the first time you've said "my fiancé works at Anthropic"?
Casey Newton:Unfortunately, it is the second, as we did record an episode of Hard Fork this week. But you know what? It feels good to say it every time. I love saying "my fiancé," so I welcome any opportunity to do so.
Ben Whitelaw:Great, I like that. Okay, well, let's get into the story then. This is a story that has a couple of different parts. It's got the US Department of Defense, it's got one of the world's largest AI companies, it's got safety rollbacks. Give us why this story matters, Casey, and what your take is on it.
Casey Newton:Yeah, so there are two stories that you mentioned, which I think have some relation but are probably best discussed at least a little bit separately. So I'll lead with what I just think is the biggest news of the week, full stop, which is that the Pentagon is trying to get Anthropic to renegotiate a contract that it has already signed, to enable what the Pentagon calls all lawful use of Anthropic's technologies. Anthropic, for its part, has said: we're willing to do that, but we want to carve out two exceptions. We want you to agree, Pentagon, not to use any of our technology to conduct mass domestic surveillance, and we also don't want you to use our technology to create autonomous killing machines that have no human in the loop. The Pentagon has responded by saying that that is unacceptable, and they've now given Anthropic a deadline of 5:01 PM on Friday. If Anthropic does not comply, they've raised two scary possibilities. One is that they would designate Anthropic as a supply chain risk, a designation formerly reserved for our adversaries, like Chinese and Russian companies who we think may be trying to do Americans harm. This would have economic implications for Anthropic. Not only would they lose a $200 million contract with the military, but all other military contractors, like Lockheed and Boeing, would not be able to use Claude in their work. But the really crazy possibility that Pete Hegseth has raised is that the United States would invoke the Defense Production Act, a Korean War-era law, and would essentially force Anthropic to create a custom version of Claude that would do anything the Pentagon wanted, including domestic surveillance and autonomous killing. So to me, that just represents truly one of the craziest stories in content moderation I have ever seen. Maybe the craziest, full stop.
Ben Whitelaw:Yeah, this one has everything. This is full-on Wild West territory. Did you see this coming? I mean, everyone kind of knew that Anthropic had this big contract with US defense, and they seemingly do very well with the US government, the federal government more generally. But did you foresee this becoming a pinch point in the way it has, the fact that they have this giant contract and that it's flipped into the Pentagon trying to force them to change the terms?
Casey Newton:No, honestly, this has been a total blind spot for me. I had basically forgotten that last summer Anthropic became the first company to embed its systems in the classified systems of the US military, which is one reason why this issue has come to a head, right? Some people might say, well, why doesn't the Pentagon just cancel the contract with Anthropic and use Grok to create murder bots? And the reason is that Anthropic's technology is the only one that has so far been approved for classified use. Candidly, if I was aware of that, I had completely forgotten it. And you know, there's been some reporting about what might have brought this to the attention of Pete Hegseth. I don't think we really know for sure why this has become such a pressing issue, but it has.
Ben Whitelaw:Yeah, I mean, I read a Washington Post story that involved Palantir. I don't know if you heard this, but this is the gossip, I guess: basically, Palantir and Anthropic executives were talking to one another about the capture of Nicolás Maduro. There was this idea that Claude had been used in the targeting of him, and also of some civilians who were killed, and some military personnel. And in the course of having that conversation, the Palantir exec thought, actually, I'm gonna snitch on this guy. I'm gonna go to the Defense Department and say, hey, Anthropic are potentially a bit worried about the fact that their technology has been used in this case; maybe they're not fully on board. Which just strikes me as an additional wild detail in an already crazy story.
Casey Newton:Yeah, and we should say that Anthropic has denied that this had any role in what's going on, which is one reason why I say we're not totally sure what forced the issue. We know there have been simmering tensions between Anthropic and the US government for a while now. It is just the case that, of all the major AI labs, Anthropic has been the slowest to express fealty to the Trump administration. They have not presented him with a golden statue. They have not agreed to supply money to fund the new ballroom at the White House. Whereas other AI companies have been lining up to give the Trump administration what it wants, right? So OpenAI, Google, xAI, they've all been happy to sign this provision that says all lawful use of our technology is enabled. And so in that sense, I think it is not surprising; we sort of knew eventually that an issue like this would probably arise.
Ben Whitelaw:Yeah. There's the fact that Google, xAI, and OpenAI have done this without necessarily getting the press coverage that Anthropic has, right? Because they've just signed over the terms, they've agreed to the contract. There's something kind of hilarious about that. You know, the story should be that those companies have agreed to the defense terms. What does that say about the nature of the AI market more generally, and these companies, and the growth that they're seeing?
Casey Newton:Unfortunately, this is just sort of how news consumption works. As humans, we're drawn to conflict, and so our attention goes where the conflict is. If there is no conflict, it may, as you say, in fact be the bigger story, but it just tends to seem less interesting to us. It's probably worth talking a bit about why it's important that a company like Google or OpenAI would agree to all lawful use of their technology, because on paper that doesn't look bad. Hey, okay, as long as you keep it legal, everything is fine. Well, the problem is there are very few laws regulating how AI is used. In particular, there is basically no law regulating how AI can be used in the cases of domestic surveillance and autonomous killing. And we know that the Trump administration has a robust domestic surveillance operation that is currently in operation, right? So this is why this sort of thing matters. This is not really about whether you are going to agree to do lawful things or not. It is really: are you willing to give unconstrained power to the US military to do whatever it wants? And so far, only one big company has said no.
Ben Whitelaw:Yeah, it's quite shocking when you put it like that. What's gonna happen in regards to this deadline now? We're recording on Thursday, late UK time, early-ish California time. What's gonna happen over the next 24 hours, do you think? And where do you see this playing out?
Casey Newton:Yeah, I find this one very hard to predict, because the Trump administration really does not negotiate; it only knows the language of domination, intimidation, and threats. And it's hard for me to imagine a world where Pete Hegseth and Dario Amodei just sort of shake hands and leave as friends. What I am hearing is that people believe that, assuming the conflict does not de-escalate, the government will invoke the Defense Production Act and will tell Anthropic, essentially: you just sort of have to do this. And this will then create a court battle. Perhaps at some point during that court battle, one of these other labs will get permission to run their systems on the Pentagon's classified systems, and the Pentagon can then make the switch, and Anthropic will no longer have to do that work. So that's one potential off-ramp here. To me, the nightmare scenario, though, is that we make it through the end of this legal process and the Supreme Court rules that yes, the president does have the authority to tell Anthropic: you have to make a version of Claude that kills and conducts domestic surveillance. I mean, there truly could not be a more grim outcome for Anthropic, which was a collection of people who left OpenAI because they thought that company was behaving irresponsibly and was not taking a responsible enough approach when it came to safety.
Ben Whitelaw:Yeah, I can only imagine that feeling. One thing that they have done as a company in the last couple of days, though, is change their responsible scaling policy. Let's talk a little bit about that and how it plays into this broader story. What is this RSP, as they call it, and why does it matter?
Casey Newton:Yeah. This one is tricky and nuanced; I will do my best with it. Anthropic had the idea, earlier in the development of AI, to create what it calls its responsible scaling policy, where it tried to designate different levels, I think it was five different levels, of AI capability. One of the commitments that it made was: we are not going to train certain kinds of systems if we cannot guarantee in advance that they will be safe in these particular ways. Essentially, there will be a stopping point for us. And one of the goals behind this was to get the industry on board with them. They were hoping that folks at Google and OpenAI would come to a similar view, that if AI was moving too quickly, the industry might agree to pause and not put out these very, potentially dangerous models. The company has now said, during this whole Pentagon controversy, that they're gonna back away from that promise. They're going to say: no, actually, we are no longer gonna try to guarantee in advance that these systems are safe. In fact, they're basically saying these systems are actually just gonna be unsafe in various ways, and we're gonna create a bunch of other safety-related commitments, but we are not going to unilaterally withdraw from what has become a kind of arms race between all of the companies. So that is the change that they made.
Ben Whitelaw:And do you buy this idea that it's not related to the Pete Hegseth demands? Because on paper, and from the blog that they've written, it does give them a lot more latitude, a lot more freedom than previously, right? And if you layer in the fact that they are seen as the kind of safest AI company, it's a reversal of some of those ideals that comes at a very weird time if it's not connected to these demands, right?
Casey Newton:It comes at a very weird time, and yet I don't really understand what extra credibility this might buy them with the Pentagon, right? Like, if the argument is "aha, Anthropic is backing away from its safety commitments so that Pete Hegseth will think they're cooler and therefore maybe not force them to make a Claude murder bot," I think the chain of logic just kind of completely falls apart. So I think it's an important and newsworthy change and people should pay attention to it for a bunch of reasons, but I don't totally understand, if there is a relation to the Pentagon story, what that relationship is.
Ben Whitelaw:Yeah. Okay. So potentially not a big enough scrap of meat to keep Pete away from the bigger prize that he's clearly got in his
Casey Newton:I mean, I guess if you really wanted to stretch, you could say, well, maybe some folks at the Pentagon will be reassured that Anthropic will continue to develop state-of-the-art models, even though they will likely be dangerous in some ways. But again, the message we're getting from the Trump administration is that if Claude weren't so good, they would be saying "the hell with you guys." So it's just not clear to me why that would impress the Pentagon.
Ben Whitelaw:Yeah. And I will say, following this story a little bit this week, there's the irony that the Trump administration was pointing fingers at EU regulators and saying that EU regulators were causing American citizens to lose their freedoms and become censored. And then all of a sudden you have Trump and his Republican buddies really showing what threats actually look like: threats to life, threats to citizens. That whole charade about stopping EU content moderation and trust and safety experts coming into the country feels like even more of a charade now, right?
Casey Newton:Oh yeah. I mean, it's like the worst thing that the Trump administration can imagine happening to an American citizen is getting their X account removed. But simultaneously it will create an operation to monitor everyone's X posts and use it to block their immigration status, and also simultaneously work on a whole surveillance program and potentially autonomous murder weapons. So yeah, to say that this is logically inconsistent is really to understate the case.
Ben Whitelaw:Yeah, for sure. I mean, that's a very nice segue into our next story, I would say, the other big story this week, which is related to Discord and its age verification process. We actually talked in depth about Discord two weeks ago, and that was really fun. To be honest, I didn't think there was gonna be a story of this nature for a while yet, but what's happened in between is also fascinating and shows just how quickly things can change and how quickly things can go badly for platforms. Basically what's happened is that Discord, as per the global internet regulation that's being rolled out in places like the UK and Australia, explained and started the process of rolling out age verification across its platform. It's calling it "teen by default," and it basically involves an element of face scanning and ID verification for a proportion of its users. It did not do a good job of explaining this. In the first instance, there was a huge hoo-ha amongst its users, who were not happy whatsoever, and subsequently Discord tried to clarify that it was gonna be one in ten Discord users who would have to go through this process. But to be honest, by that point the cat was out of the bag, people were pissed, and things were in a bad shape. Then, last Friday, some researchers did some work, published by The Rage, an independent financial surveillance publication, that found that parts of the age verification tool that Discord had tested in the UK were actually found on a kind of US government server. Now, we'll include the link in the show notes. It gets pretty technical, I don't know if you're the man to explain this, Casey, but basically the age
Casey Newton:Wait, we're we're including a link to the Google Drive where you can just download everyone's information in the show notes.
Ben Whitelaw:it's, it's a classic, you know, we do that every week in
Casey Newton:Okay, cool.
Ben Whitelaw:It's one of the perks of being a listener.
Casey Newton:Yeah.
Ben Whitelaw:Um, but the fact that these researchers discovered this technical aspect of the implementation means that, hey, the age verification isn't as tight or as secure as Discord potentially thought it was, or as they explained to users. So the last five or six days have really snowballed into a big story. The Verge has covered it subsequently, and Discord has had to put out a long blog post, which is as full and frank an apology as I've seen from a CEO in a situation like this. They're essentially saying: we're gonna be delaying the rollout of age verification on Discord until the second half of this year, and we're gonna be making a bunch of changes to reflect the fact that this Persona issue has happened and the fact that they didn't explain it very well to users. They've tried to draw a line under it. But what did you make of how this story has shifted in the space of a week? Like, it was bad last week. Discord had age verification and users were unhappy, but now they don't.
Casey Newton:I was so surprised that this became the story that it did. You know, over the past year we have just seen the march of age verification around the world. And of course, one of the accurate critiques of age verification forever has been that it infringes on people's privacy. Courts in the United States had previously ruled, in a different time, that it was a sort of insupportable infringement of people's speech rights, and that argument is now on the wane. So we're implementing these systems to verify people's ages, and I think what we're discovering is that a lot of the age verification technology we have is really shitty, and has security lapses, and is now causing problems for companies like Discord. So, you know, candidly, this is one where I'm torn, because I do see the value in age verification for certain channels to prevent certain kinds of harms. And I think it's also just clearly the case that the technology that we've built to do it really kind of sucks a lot of the time.
Ben Whitelaw:Yeah. The other aspect to this, which gets a bit conspiratorial a bit quickly, is that one of the major investors in Persona is Peter Thiel, the Palantir co-founder, good friend of Mr. Donald Trump, and generally not a very nice guy. And so when these researchers found that the system potentially allowed the US government to match data that it collected with a bunch of other data in a bunch of different databases, they started to quickly think that this could be used for nefarious means. To what extent is that connection an important one in this story, in your view?
Casey Newton:I mean, I think for some, Peter Thiel is just kind of a boogeyman, and to see him connected to a company like this raises questions. Those questions may be valid; they may also just sort of verge into conspiratorial thinking. Look, is there a possibility that some large repository of personal data about people's identities could be accessed by a government for nefarious means? Of course. And that is a major reason why courts historically have looked skeptically on these kinds of age verification systems. So I think that's a hundred percent valid. Is it also the case that Peter Thiel has made a million investments across a million things and may not have talked to these founders in years and may never talk to them again? Yes, that is also quite possible. I think, you know, the problem if you're the median Discord user is you just don't know which of those two things is the case. And so you're just gonna get on social media and say, screw this, I don't want to verify my age with you jerks.
Ben Whitelaw:Yeah, "incredibly rich man has some investments in surveillance technology" would not be a particularly new story. Um, but it's an interesting detail nonetheless. I wanna talk a bit about the Discord blog from its founder and CTO. I felt like it was as honest and heartfelt a piece of corporate comms as I've seen in a long time. You know, he commits to putting things right. He suggests that he's gonna be much more transparent about the third-party vendors that Discord brings onto the platform. They're gonna do a lot more of this stuff themselves in-house, to be able to create a means of age assurance that works for Discord users, who are probably more privacy-savvy than the average internet user. What did you make of the tone of it?
Casey Newton:I think, you know, as you say, there was a sort of open-heartedness. Often that is the sort of transparency that emerges in the wake of a major platform catastrophe: all of a sudden, people realize what they should have done in the first place. I don't know. I just keep coming back to wishing we had better technology. I'm somebody who really likes the idea of device-level verification. It's like Face ID on your iPhone: that information does not leave the phone, it is not stored in the cloud, but it does let my iPhone know that I am me. I would love to see a similar technology be used so that if you're a 13-year-old, your iPhone knows that, but nobody else needs to know that. And it still creates some nice protections for you on the internet and stops you from accessing inappropriate materials. So that's where I land on that one.
Ben Whitelaw:Yeah. And, you know, there are a number of trust and safety leaders, people working for some of the platforms, who also say the same. Who say, you know, it's very difficult for us to comply with all of these regulations, with all the different age verification aspects to those legislations, for example. Why don't you just raise it a level, to the operating system level, and have that be done by Apple or Google slash Android? I mean, we don't need to go down that path too much, but I wanted to ask about the other companies that use Persona. 'Cause if I was in the shoes of some of these other platforms that have been using Persona for age verification, age assurance, I'd be a bit concerned. And the blog by Discord's founder says, you know, hey, Roblox and Reddit also use Persona, potentially suggesting that they were in good company. Do you foresee, with your kind of magic news hat on, where that storyline is going? Do you see the platforms doing a bit of digging themselves into what Persona was used for?
Casey Newton:Hmm. That's a good question. I don't really know how that one plays out, 'cause I don't have a good enough understanding of the space of vendors here and which ones are doing a better job than others. I'm sure that some of them are better than others. But if I can just sort of fly off the handle and say something based on no reporting: I just sort of suspect that all of these companies are kind of shitty, because I don't think it's that ambitious of a field to go into, really. And if you're really into age verification, it's just kind of a strange rabbit hole to be down right now. So it probably is just a lot of B2B software where it's the barely working version of the thing, because there aren't that many competitors, and you'll probably be able to sell your software whether it's good or not. So if you work at an age verification company and I've just horribly slandered you, you're free to email me. Again, there's probably at least one player in this space that is significantly better than the others. But, um, again, I don't think we should be doing this at the level of the cloud. I think it should be on the device. So basically the game sort of feels like it's over before it's begun for me, to some extent, with these folks.
Ben Whitelaw:Yeah. Yeah. It's like they've had their chance and they fluffed it.
Casey Newton:They fluffed it. And let's say that. Let's not be afraid to say that they have completely fluffed it.
Ben Whitelaw:Yeah, exactly. I can't imagine many people queuing up to use Persona for whatever age assurance technology now. Yeah, dead in the water. What are your views generally, which you hinted at earlier, on age verification and the trade-offs at play? You've written in Platformer at length about the harms that come from social media, and you've documented that through some really robust reporting. Where has that left you personally?
Casey Newton:Yeah. So, I mean, this is one where I suspect maybe I'm out of step with some of your listeners. I know I'm out of step with Mike on this. Um, I just have increasingly come to the view that, while I will always be a free speech guy in my heart, and free expression is extremely important to me, and it is under massive assault right now and we have to rally to support it, and I try to do that across all the dimensions where I feel like I can, I've also become convinced that we have a product safety problem on our hands. I do not believe that Instagram or TikTok or Snapchat are safe for 13-year-olds. I just don't. And I've talked to a lot of people at these companies who've worked on these issues. I've talked to the executives at these companies. I know how much they care about it, and it is not enough. And so until that changes, I am just going to be on the side of: these products are not safe. And if you're in a country that's like, what do we do about this, and it involves passing a law to be like, you have to put out a report three times a year about how many predators you caught? It's not enough, sweetheart. You need a better solution. And so, again, I understand I'm completely out of step with my friend Mike on this. I just know the people working on this too well. I know how much they care, and it is not enough. And until it is, I am actually on the side of people that say, if you're under 16, sorry, you don't get to stare at the endless hypnosis machine as long as you want.
Ben Whitelaw:Yeah. Yeah. That's a kind of European sensibility, I would say. That's something a lot of people on this side of the pond are more likely to think, right? And we saw a couple of weeks ago the European Commission bring an investigation into TikTok for addictive design. What did you make of that? Because the addictive design question is pretty hard to measure, right? What constitutes addictive? You know, the features that they believe TikTok had that were addictive were things that loads of other websites and apps also had. When the rubber hits the road, how do you define that product safety piece? 'Cause I agree with you, it feels like it should be a much bigger thing than it is.
Casey Newton:Yeah, I mean, it's really tricky, but we have enough empirical data about some of the core, what we used to call social apps. We might need another term for them now; I increasingly just think of them as hypnosis apps. The thing about these hypnosis apps is we know that they will send children push notifications day and night. We know that that disrupts their sleep. So a regulation that says, look, if you're under 16, you're not allowed to be sent a push notification after 8:00 PM local time? That does not seem to me to be a horrible infringement on speech. That just seems to me to be a common sense way of saying: we understand that your financial incentive is to get all of your users to look at the hypnosis app for as long as they can stand every day, but we're gonna put our hand up and say, actually, not after 8:00 PM. Right? So that's the kind of regulation that I can see potentially being of some value here, one that just addresses a real issue. Now, there are trickier things around, how do you protect a child from a predator? Right? Like, a common story that happens is we learn in retrospect that these recommendation algorithms were just linking up predators and children left and right. And the platforms say, oh, you know, sorry about that, we're gonna try to do better. I think that's a little bit harder to solve at the design level. And maybe this would be a good chance to talk about this other story that you and I were talking about offline, with this human rights commissioner. They had a piece this week. Who was this? I should name this person. This is Michael O'Flaherty, who's the Council of Europe's Commissioner for Human Rights, and he's urging caution on imposing these sweeping bans. Right.
And again, I am sympathetic to this, right? There is value in young people being able to express themselves online. There is value in young people being able to connect to others online. I was a gay teenager once, before we had apps like, you know, Instagram. It would've been really cool for me if I could have met other people who were like me, who were my age. I also understand now it would've exposed me to tremendous risk, and I'm actually probably, on net, grateful that I was not exposed to those risks. So why do I bring all of this up? Well, when the problem is that you learn many years later that there was an algorithm you weren't aware of that was matching up predators and children, I don't think any regulator can create a regulation that will prevent that from happening.
Ben Whitelaw:Yeah. Okay.
Casey Newton:The regulation is always going to be behind the technology, because these platforms are always transforming. Think about how different Instagram is today than it was even five years ago, right? That is always going to continue. And so, if you are a regulator, is the logical move to say, let's just kind of see how it goes, let's just take this cautious approach and trust these platforms to get it right, like this time it's gonna be different? Or is the actual cautious approach to say, you just can't use it until you turn 16? Again, I understand why in most cases you don't wanna use such a blunt instrument, but there has been so much suffering and so little accountability that I've just become increasingly persuaded that this is the way to go.
Ben Whitelaw:Yeah. Spoken as a man who's seen too much. You've reported on too many stories.
Casey Newton:I've seen too much. I really have. I really have.
Ben Whitelaw:Yeah. And I think that's probably an important point in many ways: when you're given the insight and exposure to some of the teams and companies that you have, you have as good a knowledge as anyone right now. You know, I think there are regulators trying to extract information from platforms who don't have the level of insight you do. So I think that's an important take, and I'm glad I asked you. Anything else to say on the Discord U-turn before we go into our quicker roundup stories?
Casey Newton:I mean, again, I think it just speaks to the fact that platforms need to understand that people really, really hate age verification. And you gotta be super careful with how you roll it out. And maybe just don't go with the default vendor that everyone else used, if it sucks. There's actually an opportunity to innovate here: figure out how you can do privacy-preserving, device-level verification. Because if you're able to figure that out, maybe you can even start selling that as a SaaS product. Right? I think there's probably an opportunity for people here who could get entrepreneurial.
Ben Whitelaw:Yeah. Alice Hunsberger, who writes a newsletter on Mondays for Everything in Moderation, recently did a guide to RFPs, to bringing on vendors, and she didn't include lines like, does it have a connection to Peter Thiel, or does it systematically connect data from IDs to other data? So I'm gonna get her to update that in light of this story. So, great. We've covered those two big stories this week. A story that links to that, which we'll start off this roundup of the rest with, is this interesting story from the UK. It's another regulator, but this time the data regulator in the UK, the Information Commissioner's Office. They have been running an investigation for the best part of a year now into a few platforms, and this week announced that they were issuing a large 14 million pound fine against Reddit. This is interesting, I think, for a number of reasons. In announcing the outcome of their investigation, they said that they found failures in Reddit's approach to age verification stemming all the way back to May 2018 and leading up to July 2025, which is when some of the UK's age assurance mandates came into place and Reddit made some changes. So for a seven-year period, basically, Reddit weren't doing enough in terms of assuring the age of users. And that meant that some of those users were exposed to harm and danger in ways that the ICO found unacceptable. So again, Casey, it's using those kind of product safety specifications that are increasingly popular. What's weird to me is that it's not clear why the ICO have decided to investigate and fine Reddit now. If this was happening for the best part of ten years, and the Children's Design Code, which is this product safety piece of regulation, was brought in in August 2020, then why now?
It's a bit of a weird one, but it does show that in the UK, at least, there are some regulators, not always Ofcom, who are trying to hold platforms to account in new ways.
Casey Newton:I mean, tell me if you think I'm wrong about this, but this does feel like an example of the classic American critique of European regulation, which is that you come in like three years after the fact and tell us that something was illegal that we didn't know was illegal. Right? Um,
Ben Whitelaw:Is that, how is that how Americans think about us?
Casey Newton:I mean, that's some of the nicest things they think about us. They'd say other things; I'm not gonna say them on a family podcast. Um, so, you know, I don't know. When I think about all of the harmful platforms out there, Reddit probably doesn't make the top five. To me, to the extent that there was harm here, it was probably underage people looking at porn. But, you know, the UK has signaled that they don't want that to happen anymore. So I imagine that this was done in that spirit.
Ben Whitelaw:Yeah, I think there's probably something in the fact that child safety is an increasingly important thing in the UK; we've talked about that on the podcast at length. And Reddit over the last couple of years has seen massive growth generally, but also amongst younger users. And there is this idea, I'm not sure if it's true or whether you'd agree, that people are going to subreddits to connect with people in more human ways, in ways that feel more authentic. So perhaps there's a slightly political element to this: it's the right platform at the right time, in a way that can't necessarily be disputed. But Reddit is still gonna appeal, so I dunno whether they'll have to pay the money in the end.
Casey Newton:It also feels like a relatively small fine in the grand scheme of things. This is kind of like a speeding ticket for a platform like this.
Ben Whitelaw:Yeah, yeah. It's not exactly 6% of global turnover, like some of the DSA and OSA fines. But nonetheless, fines for platforms are rare nowadays.
Casey Newton:By the way, Whitelaw, you know, you British people love to say turnover, whereas we in America say revenue. So every time you say turnover, I think of the pastry. Do you also have that problem, or does that not even come into your head?
Ben Whitelaw:Um, not necessarily. I'm actually not a big fan of turnovers, for what it's worth. So,
Casey Newton:Are you a fan of revenue?
Ben Whitelaw:Big time. If you wanna sponsor Ctrl-Alt-Speech, get in touch. Um,
Casey Newton:There you go.
Ben Whitelaw:Um, other small stories worth mentioning, Casey. We try to keep Ctrl-Alt-Speech as global as possible. Kenyan MPs have this week rejected a ban on TikTok, which again is interesting, because we've seen this before in the US. But they have explained that they're going to be regulating the platform much more strictly. They've come out and said that they're going to be pushing for local data storage, again in the same way that we saw in the US. They want some of the AI models that the platform uses for recommendations to be trained on local dialects and local languages. They want human moderators who understand the Kenyan context. So although that's a story that's happening in the Kenyan parliament and hasn't gone any further, that's an interesting development, I think, because the idea of TikTok being banned in Kenya has been rumbling along for the best part of two or three years.
Casey Newton:Yeah, I thought this approach was interesting. And here I can sound as if I really care about speech, because I was somebody who did not want TikTok to be banned in America, in part because American regulators could not seem to state a theory of the case that sounded even remotely constitutional. I think that TikTok is an important platform for speech, and it deserves to exist in Kenya and other countries around the world. Now, the questions of, to what extent could the Chinese government potentially affect algorithmic recommendations, and what might they be able to do with user data if that data were stored in China? I think those are really valid questions, and may eventually be urgent questions. So I think it's totally appropriate for Kenya to be thinking through that, just as I think it was appropriate for the United States to be thinking through that. I would note that the United States' solution was essentially to ultimately ignore those questions for the most part. Yes, the data for Americans is now stored in the United States, but they are licensing the recommendation algorithm from ByteDance, which I think in practice just means that TikTok is gonna work just the same as it always has. And I'll be curious how Kenya ultimately decides, or decides not, to think about that problem.
Ben Whitelaw:Yeah, I mean, the nature of geopolitics means that Kenya is probably not a big market for ByteDance, probably not a big political player for them. And so they're obviously not gonna get the same kind of treatment as the US has with the new US entity, but it's still gonna be important for them. But I approve of and respect the attempts being made here, if nothing else.
Casey Newton:Me too.
Ben Whitelaw:Let's wrap up then on a story that, I think, mirrors what you do on Hard Fork by finishing with a bit of fun, something a bit jolly, because it's not always a fun space to cover. This is a story that came out last week, but I think it merits inclusion today.
Casey Newton:Yes. So, I cannot get enough of this story. Uh, Reuters reported February 18th that the US State Department is developing an online portal that will enable people in Europe and elsewhere to see content banned by their governments, including alleged hate speech and terrorist propaganda, a move Washington views as a way to counter censorship. So, Ben, what do we think of this one?
Ben Whitelaw:I mean, this is great, isn't it? This is like EU regulation being sent on a U-turn and sent right back at them. All of this data that's been made available about content moderation decisions is now going to be weaponized and used to, I guess, juice up Americans as to their speech rights. It all plays into that digital sovereignty piece that we've seen play out in the last few weeks and months. And, yeah, it can only end in good things, I think.
Casey Newton:I mean, you know, everyone listening to this podcast already understands all the ways in which this is ridiculous. But over just the past couple of weeks, we have seen CBS pull an interview that Stephen Colbert did with an American politician, over fears that the Federal Communications Commission would consider it a violation of equal time rules, which have not been enforced in decades. Right? Jimmy Kimmel was pulled off the air earlier this year. The federal government has a social media surveillance operation where they look at people's social media posts and look for pretexts to deny them visas. They want to ban people from immigrating to this country if they've ever worked in content moderation. So the idea that they are now going to rally around the principle of free speech so that they can show terrorism to Europeans is like the most empty-headed idea I have heard coming out of an administration that is full of empty-headed ideas.
Ben Whitelaw:Yeah, I'm speechless. I mean, we billed this as a fun story, Casey; I'm not even sure it's that anymore.
Casey Newton:I know that well. Yes, it's fun only in a dark way. Here's what I'll say: if you're a European and you're nervous that your government is cracking down on Elon Musk and is gonna prevent his Grok from creating sexualized images of people without their consent, have no fear, because soon you'll be able to see all of them at freedom.gov.
Ben Whitelaw:I will be refreshing that almost every five minutes to see that, and then refreshing X to see what kind of anger and spite and vileness is coming as a result. Um, great stuff, great stuff. Um, Casey, it's been an absolute pleasure to have you on the podcast today. That brings us to the end of today's stories. How have you found it? It's not quite Hard Fork, but it's okay, right?
Casey Newton:I would say that in some small way, this was one of the greatest experiences of my life, and I thank you, Ben, for sharing it with me.
Ben Whitelaw:I love that. I'm the kind of tin-pot Kevin Roose. That's what I've tried to build.
Casey Newton:You're the British Kevin Roose.
Ben Whitelaw:Wow. You know, that's something, isn't it? And, uh.
Casey Newton:It's huge.
Ben Whitelaw:I look forward to listening to the next episode of Hard Fork. I'm a big fan of your work, and a lot of our listeners are too. Go and subscribe to Platformer, everyone, and go and read and subscribe to all of the outlets we've mentioned today: the Verge, Reuters, the BBC, Axios. They've all done some really good work, and it's important that we support good media, 'cause that allows us to talk about it on the podcast.
Casey Newton:And go subscribe to Everything in Moderation, which is one of my favorite newsletters, and I enjoy reading it every week.
Ben Whitelaw:Thanks, Casey. Check's in the post. That's everything for this week. Um, if you enjoyed the podcast today with Casey, and in other weeks as well with Mike, like, follow, subscribe, rate and review us wherever you get your podcasts. It all helps us get discovered, and one day we'll be as big as Hard Fork. That's the goal. Thanks very much, everyone. Take care. Thanks, Casey. We'll speak to you soon.
Casey Newton:Thanks.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.