Ctrl-Alt-Speech

The Silence of the LLMs

Mike Masnick & Ben Whitelaw Season 1 Episode 99


Ben Whitelaw

So Mike, spring has very much sprung in London where I'm based. And, uh, that doesn't mean I'm out in the garden, which is my, my hobby when the weather is nice, but I've got a new plant app, which is, yeah, ripe for Ctrl-Alt-Speech opening prompt territory. Are you much of a gardener, by the way? We've never talked about your green fingers or

Mike Masnick

no. My wife is an amazing gardener, and I am occasionally a helper. But, uh, she is the green thumb in the family.

Ben Whitelaw

Got it. Got it. you're the sweeper and the mover of

Mike Masnick

That, that is, that is the exact description that fits my role in all of this. Yes.

Ben Whitelaw

Yeah, a very underrated task, I'd say. So this app is from the Royal Horticultural Society, I should say. It sounds fancy, very British. But the prompt as you go into the app, where you can put in plants that you may or may not recognize, is "search for or describe a plant." So I'd like you to, to, to search for, you know, do your best or your worst.

Mike Masnick

Well, you know, I was away last week and I was on an actual vacation. Most of my travel lately has been for work, but I was on vacation last week and I was hiking in the high Utah desert. So, uh, yeah, all of last week, which was amazing and wonderful. And the plant that I saw the most of was a whole bunch of cacti out in the

Ben Whitelaw

Oh,

Mike Masnick

and, uh, some, some pretty amazing scenery. I will say, I think the rocks and the views from climbing up all sorts of rocks and getting scraped up and cut and hurt, the views of the rocks were more interesting than the cacti. But it was a very nice trip.

Ben Whitelaw

that sounds, sounds lovely. It sounds very

Mike Masnick

Yes, it was indeed. What about you? What plant would you like to search for or describe?

Ben Whitelaw

Yeah. I'm going to kind of take plant to mean front or, you know, kind of stooge in a way, and I'm going to talk about some of the kind of AI lobbying types in today's episode. But, you know, there is an ongoing, I think, storyline about people in AI companies who care about, or purport to care about, online safety, who might not always be as interested in their companies being liable for what the models output. So plant in a different sense. But, you know, yeah, both prickly characters, let's say.

Mike Masnick

There we go.

Ben Whitelaw

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's April the 16th, 2026, and this week we're discussing whether Anthropic are the good guys they make themselves out to be, Europe's somewhat muddled thinking when it comes to child safety, and a whole bunch of other small stories that we'll get to as we go on through today's episode. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation. And I'm back in the chair with Mike Masnick, who is suitably scraped. Um, did you get a tan when you were, you know, hiking through the desert,

Mike Masnick

uh, I will say I put on an awful lot of sunscreen. We were warned multiple times by different rangers in the park that you are way up, much higher than you normally are, and at certain points you were getting much more UV radiation. I probably got a little bit tanned, but I thankfully did not get sunburnt or anything. But I was pretty careful. I'm, I know what I'm doing.

Ben Whitelaw

Yeah. Yeah. It's not your first rodeo. I mean, I only asked because I am in Perugia, Italy this week, and have just come from a short session drinking an Aperol Spritz in the sun, as you are legally obliged

Mike Masnick

was going to say that is required. Yes.

Ben Whitelaw

Um, and I think I am somewhat burnt from that. You might not be able to tell. But yeah, I mean, I'm here for the International Journalism Festival, which I was last at two years ago. And I remembered that it was probably our fourth or fifth episode of Ctrl-Alt-Speech that I recorded from this fair city. And I went back and listened a bit to it, and boy, have we got a lot better.

Mike Masnick

Yeah. Yeah. I

Ben Whitelaw

I mean, I don't, I don't want to speak for our listeners, but you know, if you were there in the early days, um, I apologize, you know,

Mike Masnick

Thank you for sticking it out with us. We have improved over time.

Ben Whitelaw

Yeah, a hundred percent. I think that, um, I don't want to listen to that again, and nor do I want the listeners to. Um, but yeah, I'm here for the festival. It's a great event, bringing together not only journalists but people working in civil society and academia, and it has a whole range of tracks and topics. Some of the sessions actually kind of touch on issues that we talk about. And I went to a screening of a new film, a relatively new film from the BBC called The Darkest Web yesterday, which is about child sexual abuse material and the efforts of some law enforcement agencies in the US to tackle that very thorny problem. So it's been good to come back to Perugia after a couple of years to connect with folks and actually, you know, see that this topic that we care about very much, and which we talk about every week, is actually proliferating into, you know, media in new and interesting ways as well.

Mike Masnick

Yeah. Yeah. No, it is. I actually got to see that documentary as well. and it is very interesting and obviously very relevant for lots of people who probably listen to this. It is, you know, we normally focus on the platform side of things more than the law enforcement side, but it is very interesting to see the law enforcement side and in fact, how it often parallels. I mean, there's all the stories of the harms to content moderators in terms of having to be exposed to all sorts of awful content. and as this documentary shows, that's also true for some people in law enforcement as well.

Ben Whitelaw

Yeah, indeed. And I will happily admit that it is a very emotive topic, naturally, and the way that the documentary is done is actually kind of very harrowing. And I, I got emotional, you know, I shed a tear. It's a very, very powerful story. And I think that they're doing a lot of promotion of the documentary, and I think it's something that more folks should see and hopefully kind of discuss as well. So,

Mike Masnick

I will say if you do watch the documentary, make sure you make it to the end and one of the cards at the end reveals something that, I think there's another potential documentary to be made out of, at some point. So.

Ben Whitelaw

Yeah. Not to give the game away there, but, uh, well, well worth watching. Good news as well, Mike. While you were away, I was banging the drum to drum up interest in our podcast via reviews and ratings. You might've heard, I gave an impassioned plea to our listeners to not sit on their hands and to rate and review wherever they get their podcasts. And there have been some very nice reviews, though there could always be more, one of which was very nice. And I'm going to read it out, because I'm pretty proud of this one. This user said: "I teach speech and platform regulation issues at Stanford, and Ctrl-Alt-Speech is my number one go-to podcast to a) make sure I don't miss anything important and b) get new insights from the always thoughtful hosts." I think she means, this person means, me or you there.

Mike Masnick

One or the other, but not both.

Ben Whitelaw

yeah, exactly. You know, probably differs week to week, I'm sure. Um, and this person says that we also have great guest hosts. So that's a, it's a perfect summary, I think, of what we're trying to do. I'm very proud of that one. So, any idea who it might be from?

Mike Masnick

I mean, there are only so many people who teach speech and platform regulation issues at Stanford. So I think we might have some suspects, including one who may have been a former great guest host on this podcast. Just, just maybe, I don't know, just throwing it out there. It's possible whoever wrote this review has previously been a wonderful guest host at some point.

Ben Whitelaw

Yeah. And has a very good sense of humor, uh, about the latter part, not about the fact that the podcast is good. But yeah, thank you. Thank you to that person and to everyone who has rated or reviewed us, wherever they get their podcasts. Again, cannot thank you enough. It's an important part of how we get discovered and reach new audiences and new listeners. And the milestone is an important one. Like, we are about to cross the hundredth episode threshold of Ctrl-Alt-Speech. Next week is our hundredth episode, which is, it's quite the milestone. We've got some exciting and important changes that we're going to announce next week. We won't go into that today, but, you know, you definitely want to tune into that. It's, it's a big one for us. How are you feeling on the cusp of the podcast turning a hundred?

Mike Masnick

Yeah. I mean, it's good. I mean, as you mentioned, having gone back and listened to an earlier episode, I think we've improved a lot over the course of a hundred episodes, and we're excited to get to a hundred more, or many more than that as well. So, definitely tune in next week. I think, um, you know, as we will talk about next week, we've been really excited and happy with where the podcast is going, and we're trying to set up stuff to make sure that we can continue to do it in a useful and fun way.

Ben Whitelaw

Indeed. Um, but you know, we can't wait around. We've got a podcast to do, and the 99th episode is as important as any other. Um, so we're going to jump in and talk about one of the big stories this week. Anthropic has really kind of dominated the news agenda in a couple of different ways this week, and its new kind of advanced frontier model has scared the hell out of a lot of people, frankly. There's that story, which you're going to talk about, but there's also a kind of secondary story, which you highlighted, which is interesting and relevant, potentially as important as well. Um,

Mike Masnick

Yeah, so, there were actually a whole bunch of different potential Anthropic stories this week. They were making news all over the place. The big one, which I think a lot of people probably will have heard of, is this Mythos model, which is a much more advanced model than even their current Opus model, which is the most advanced model that they have publicly released. And the thing with Mythos that got a lot of headlines was that Anthropic effectively claimed that it was too dangerous to release publicly, but that they were actually giving access to certain organizations, companies, a lot of big tech companies and some nonprofits and other organizations, for one very specific reason: that this model is apparently very good at finding security vulnerabilities within software. And so they had run it on a bunch of, like, open source projects and other things, and found in one case, I believe, a vulnerability that had been there for 20-something years that no one had noticed before. And so obviously the point that people were making about it being too dangerous to release publicly was that it could be used by bad actors to go in and find vulnerabilities and do bad things. And so the initial release is only to certain organizations, in the belief that they're trustworthy and they will use it to secure and patch their own work. And that is a big deal. And I've spoken to a few people at various companies. Some people reacted to this very cynically, like, oh yeah, you have a model that's too dangerous to release, what a great marketing hook, and, you know, it's all nonsense. But I've spoken to enough people at different companies, who have reason to know what's going on, who say this is serious. This is real. Mythos is an incredibly good cybersecurity tool for finding all sorts of different security vulnerabilities. And so the hope here is that it will actually lead to better security.
That if it's going to different companies to be used in a responsible way, we can find and patch a whole bunch of vulnerabilities. It'll be interesting to see how far that rolls out, and then whether or not it eventually somehow gets out and gets abused. But the really more interesting part of that story is that one of the organizations that desperately wants access to it is the U.S. government. And as you may recall, Ben, the U.S. government currently has a ban on using Anthropic, because they claim it's a supply chain risk, based on it not letting Pete Hegseth go and kill people or spy on Americans. That's a whole background story that we don't have to go too deep into, except that when all that came down, there had been some leaks and, you know, anonymous sourcing quotes from people within a whole bunch of U.S. government agencies saying the Anthropic AI tools are kind of necessary, they're really good. And now this one is even better. And so they're trying to figure out ways to skirt around the order which blocks the federal government from using Anthropic, and there are a couple of ways that they can do it. There are two different lawsuits going on, one of which blocked the supply chain risk designation; the other one upheld it. So it's a little confusing, because you have this sort of Schrödinger's situation where, depending on which court you look at, the ban either goes into effect or it doesn't. There is also a six-month timeline before the ban technically goes into place, to allow, you know, different agencies to transition. So, like, some are getting in through that. And apparently some government agencies have gotten access to it. But it kind of highlights the ridiculousness of the ban in the first place, and the idea that it is a risk.
And now, like, the actual cybersecurity experts are seeing it as the opposite of a risk, as, you know, necessary to secure federal government systems, if they can get access to it in the process. So I think that story is really interesting. It's one of these things where, like, these AI tools, especially the more advanced ones, are changing the face of the internet and changing how all these services work. I think this is a big story that's going to play out a lot over the next few months and years.

Ben Whitelaw

Yeah. And there's a good New York Times op-ed about how the internet is kind of changing as we speak, like this is kind of changing everything, because these systems are able to detect these vulnerabilities in underlying technology at a rate that's never been done before. So these kinds of tools that Anthropic has applied are really, really important to kind of let these underlying technologies catch up. I found it kind of interesting, Mike, that these trusted partners that Anthropic has allowed access to Mythos are not all necessarily companies that your average person would class as trusted. Um, you know, there's, I'm not going to cast aspersions on which

Mike Masnick

Oh, go, go, go cast aspersions, Ben. We are here to cast aspersions.

Ben Whitelaw

I mean, you know, you've got kind of JPMorgan Chase, you've got Google, you've got Amazon Web Services, you've got Cisco. You know, naturally, they are the kind of owners of some of these technologies; they're responsible for them. It's a kind of interesting, I guess it's an interesting lens to look through this story: should these companies be given access to patch these problems that they've had years to fix? I found it a bit kind of icky, in a way, that that was happening, but you also can't underestimate the power of the technology.

Mike Masnick

Yeah. I mean, look, I think the focus is on technologies that impact many, many people, and the focus really is on protecting as many people as possible. And, you know, not all the companies that make the technologies that underlie everything that everyone uses every day are necessarily good companies, but the impact they could have, if there are vulnerabilities spotted, I think is major, and therefore it makes sense to do this. And also, like, the idea that yes, they should have patched these things before. But the reality is that there are always vulnerabilities. Like, there is no perfectly secure technology. The problem is, like, you know, vulnerabilities are often very difficult to track. And you have both good and bad security researchers who spend all their time trying to find them. And Mythos lets you get ahead of some of these vulnerabilities that people just hadn't spotted in a long time. You can blame companies if they are alerted to a vulnerability and do nothing to fix it. I would never blame a company for not finding a hidden vulnerability that they just didn't notice, because these are really, really tricky to find at times. And the fact that Mythos can hopefully get people ahead of that is potentially a good thing, if it continues to be used responsibly.

Ben Whitelaw

Yeah. Very interesting. And we'll come on to, I think, talking about Anthropic's comms approach, because I think that's a particularly interesting aspect of this story. But do you want to talk a bit about the other story that you kind of noticed this

Mike Masnick

So, you know, speaking of using things responsibly, one of the big discussions that we've had for the last few years, and, you know, back in February I was on a panel in D.C. on this very topic, was how the liability regime should work for AI, and especially frontier AI models. And it's often framed as, like, does Section 230 apply to AI? And then there's the broader question of should it, you know, whether or not it does, should it?

Ben Whitelaw

Mm,

Mike Masnick

And my argument that I've made for years is that 230 is written in a way that parts of it might apply to AI and parts of it might not. And it's a fairly complicated question, as opposed to a simplistic thing. But in the long run, what we really do need is at least some sort of clarity, something that companies can understand about what is the liability regime they face. It's the uncertainty that is the real killer, that makes companies not build certain tools or stay away from things. The value, in theory, of Section 230 for most of its history, though it's been going away over the last few years, was the certainty of: we can do things, we can do content moderation, we can do trust and safety, and we can have people make decisions based on what we think is best without having to worry about the legal liability that comes with it. The AI space does not currently have that, and some people are pushing for laws that sort of lead that way. And in fact, there is a law that has been proposed in Illinois that is effectively a kind of 230 for AI, which is interesting. I mean, it's interesting that it's a state law, and that's partly because of the dysfunction of the American Congress right now, that we have no ability to do federal laws. And so an Illinois legislator came up with a bill that apparently has no chance of passing. But the really interesting thing here is that two of the biggest AI labs, OpenAI and Anthropic, are on opposite sides of this. You might think, as with 230, that most of the big tech companies would support it, or that's the story that is told. But here we have OpenAI saying, yes, we need this liability shield and we support this bill in Illinois, and Anthropic saying, no, we don't support this, we think it would be bad, and we think that AI companies should have liability and should face responsibility for harms that are caused by the AI. In this case, the law is, like, very specific, only to, like, mass casualty events.
You know, somebody creates a bioweapon. And I think, I'm guessing, to some extent Anthropic believes that they've built in safeguards that would protect them against it, and so they're supportive of it. But it is an interesting sort of fight that is different than the 230 fights in the past, where for the most part you had tech companies line up and say, like, we need these kinds of protections. And here we have two of the biggest AI companies on completely opposite sides saying completely different things. And I'm wondering if some of that is, this is the cynical take, right? That, as we've seen, like, Meta in particular has really gone backwards on Section 230 and has been more than willing to, they supported FOSTA-SESTA, they have supported other reforms to 230, they have pushed for some things that clearly would undermine Section 230. And I have argued for a long time, this is because they realize that they can survive the lawsuits that come with that, whereas smaller players cannot. And so there is a part of me, again, the cynical take, that thinks Anthropic is sort of setting it up where it's like, you know, there are a few of us with a lot of money, and we're starting to see competition on the low end. Some of these open source AI tools are really, really powerful, and may be eating into the lower end of their business. But if they can put in a liability regime that says you need to have billions of dollars to deal with the lawsuits, then that wipes out the low-end competition.

Ben Whitelaw

Yeah.

Mike Masnick

the cynical take. You know, the more optimistic take is that Anthropic truly believes that if you put a liability regime on the AI model creators, they will build safer AI tools. It may be somewhere in between those two.

Ben Whitelaw

Yeah.

Mike Masnick

To me, it's really fascinating and different than the 230 fight because of the way it's shaping up in terms of who lines up on which side.

Ben Whitelaw

Yeah. Although, like, as you say, kind of fundamentally, it's this idea that the big players will almost surprisingly advocate for regulation where they think it's going to help their business in the long

Mike Masnick

Yeah. I mean, I've described it this

Ben Whitelaw

different and the same.

Mike Masnick

Yeah, I mean, you know, there is this element of, like, regulation is a kind of moat. And if you believe that you're in a business environment where you have no moat, and any new startup can come along and take away your business, then supporting regulation has a twofold impact: one, it creates that moat that limits competition, and two, it makes you look, to the public, to the media, and to regulators, like, oh, of course, we're happy to be regulated. Again, very cynical. I think that's an extreme. I don't think that's a fully accurate picture of it, but I'm sure that's some of the thinking that goes into all this.

Ben Whitelaw

Yeah, I wanted to ask about that kind of public face of these AI companies, and the extent to which kind of Anthropic is winning the narrative game right now. I was speaking to a kind of trust and safety comms person, and she said we're going to look back at Anthropic as a case study in how to present yourself as a company at a time when so much is changing and there is so much growing pushback against AI, as much as it's being adopted at a hugely fast rate. So are they really playing the kind of blinder that, you know, you think they are? I mean, their competitors are obviously not doing as good a job. OpenAI have had a lot of criticism as a company so far. There was a big New Yorker profile of Sam Altman this week asking about whether he's a good guy, whether he should be trusted. No one's asking that about Dario Amodei. Um,

Mike Masnick

people are

Ben Whitelaw

you know, yeah, but that's not to the same degree. And I just, I wonder whether this is, again, part of kind of owning the narrative, and the importance of managing the message, which we've seen, I suppose, in the social media Web 2.0 era was important for a period of time, and continues to be so when you have big companies like Google and Facebook kind of impacting policy and regulation in increasingly subtle ways.

Mike Masnick

Yeah. I mean, I think that the comms part of it is a big deal. And I do think that Anthropic has done a generally good job of presenting itself in a way that makes them look like a good player in the space. And there are certainly plenty of people who disagree with that and claim that they're just as bad, and Dario in particular is just, you know, another Sam Altman. I've definitely seen that discussion. But I think the general narrative for the public, if you just ask a person who isn't deep in the weeds on this, they will say that, yes, you know, Anthropic seems to be focused on being a more responsible player. And that is a huge communications win. With all these things, right, it's like the marketing and communication side versus the truth. There's always some element of truth within the communications and some element of exaggeration and hyperbole. But, you know, yes, I think that many companies going forward would do well to take a lesson from how Anthropic has done their communications and how they've positioned themselves, especially over the last few years, where in the tech world there has been this move away from, like, let's listen to our users, towards, like, we're going as fast as we can, and we're going to dominate, and how dare you get in the way. And, you know, I think that has turned off a lot of people, and a lot of the public backlash to the entire AI space is built on that, which I think is sort of adjacent to Trumpism, MAGA, zero-sum thinking of the world. And yet Anthropic has been able to sort of navigate that communications pathway where they sort of are still doing a lot of that, but presenting themselves in a way that suggests they're not. You know, they actually do care about the impacts of what it is that they're building, whether or not that's an accurate statement of reality.

Ben Whitelaw

Yeah, and not only the portrayal of these companies' priorities to the public, but there's also an internal benefit when you tell the story

Mike Masnick

Oh, yeah,

Ben Whitelaw

of how you're doing it, right? Because OpenAI has stories every week about disgruntled OpenAI employees who disagree with the kind of approach to safety, the general direction of the company. There's that kind of classic type of tech story, which is employees posting on the internal messaging system about how unhappy they are. And that's a, you know, that's a noisy type of story that's going to really detract from internal priorities if that keeps cropping

Mike Masnick

there's a large number of people I know within the tech world who have said that they would never work for OpenAI, but going to work for Anthropic is really appealing,

Ben Whitelaw

Yeah,

Mike Masnick

because of that sort of public messaging. And in fact, I think just yesterday, one of the either product leads or technical leads for Threads, for Meta's Threads, announced that he was leaving to go to Anthropic and work on Claude Code. So, like, as a recruiting tool, positioning yourself as the good guy in a world of horrible people who are only focused on growth and move fast and break things, I think it's a great recruiting tool.

Ben Whitelaw

Yeah, I agree. Really interesting, and something that we will continue to pay close attention to. Talking of, I feel like we're always talking about good guys and bad guys. We're also talking a lot about child safety at the moment, and the next kind of series of stories that I was reading this week contains that topic again. Two stories in particular that I think show either a tension at the heart of Europe's approach to child safety, if you're being cynical, which we often are,

Mike Masnick

I will take the blame for the cynicism on this podcast. You can be pure and earnest. I am the, the beaten down, jaded, cynical one.

Ben Whitelaw

yeah. Okay. Normally I'm the cynical one in the room, so I think that's why this podcast works. But yeah, so either you can see these stories as a muddled approach and some confused thinking about child safety, or you could see it as the whole strategy, and I'll kind of explain a bit about what that means. The first story is from Politico. It's a fairly short story about comments made by the Estonian education minister about why they will not be pursuing a kind of social media ban.

Mike Masnick

for, for children

Ben Whitelaw

children

Mike Masnick

entirely.

Ben Whitelaw

not yet, not

Mike Masnick

Yeah.

Ben Whitelaw

but Estonia are kind of one of only a few, probably the only country in Europe, actually, who has come out to say that they won't be pursuing a social media ban for teens. We talked about the increasing number of countries, including France and Spain, who are moving towards a situation where they're likely to ban social media for either under-fifteens or under-sixteens. And the kind of gist of the argument this education minister makes in these comments is that the country doesn't want to make it a kind of individual problem that, you know, users have to deal with. They would prefer it to be a platform issue, and for platforms to take responsibility through the existing regulation, you know, the Digital Services Act and the Digital Markets Act, among others. And this education minister doesn't mince their words, Mike. They say, "We want to actually take this power and start regulating the big American corporations as is." And so that's why they're not moving forwards with this ban. Now, you could say that Estonia is a very small country in the EU, that they kind of don't matter when you compare them to the big boys, but I think it's interesting for a couple of reasons. First of all is that they are known for their digital ID system.

Mike Masnick

were first.

Ben Whitelaw

you've written about this at length, I'm sure, on Techdirt over the years, but they have a digital ID that's been kind of rolled out for 20 or 30 years now, and it's used to deliver a lot of public services, and they're very kind of forward thinking when it comes to digital ID. That means two things. One, that they are definitely not anti-tech. Um, you know, you might think actually this is a country who might be pro-ban or pro-age verification. Turns out that's not really the case. And the other thing is that they are fairly influential in EU circles as a result, you know, so they have a lot of sway. So for them to come out and say this is a fairly kind of interesting development. Do you think, first of all, that we're going to see more countries take the same approach as Estonia as some of the kind of discussion and the debate fleshes out a bit more?

Mike Masnick

Yeah. I mean, it'll be interesting. I think there is this recognition among some that this idea of banning children from social media, there's no universal agreement on. Obviously, you know, I keep raising objections to it and the problems with it, and there are plenty of people, you know, including plenty of experts, who are calling out the problems of these bans and the harm that they've done and where they've failed. And so plenty of smart people recognize that these are not great. And Estonia, as you mentioned, has always been ahead of the curve. I mean, the joke was that it was E-stonia, like, you know, in the early days. They were the ones who were leading the pack in terms of experimenting with how do we build electronic systems that serve the public. And they've always been very thoughtful and open about it. I don't always agree with the decisions that they make, I don't always agree with decisions anyone makes, but, like, I think they're really thoughtful. And as you said, because of their influence and experience with electronic services, I do think that other countries, where there may be some more people who are on the fence, will take notice of Estonia saying, you know, these bans are not really good. I do think also what was really interesting in the statement, and an important point that I've raised before but that certainly should get more attention, is the idea that when people do these social media bans for kids, it takes away, you know, people sort of wash their hands of it. It's like, okay, we've now fixed safety for kids. And that takes away the responsibility from the companies and others to actually put in the work to make systems better for kids and to, you know, as I always talk about, to educate kids on how to use these things responsibly and how to use them in an age-appropriate way. Whereas if you just do the ban, it's sort of like, okay, well, we don't have to deal with that.
and I think that was a really important point that was made here and I hope that others will start to pay attention to it.

Ben Whitelaw

Yeah, indeed. And so that story, I think, could be encapsulated as: government wants platforms to be regulated under the kind of existing regime, thinks that's enough, doesn't want to go down an age verification type route. At the same time, in the same week, you have the EU launch its much awaited age verification app. So this is an app that all EU users can download. They can provide some ID, whether that's a passport or a national ID or bank verification, that establishes the age of a user. And this essentially can be used by major tech platforms to verify users' age in a way that means they don't have to, as you kind of alluded to, Mike, do the work themselves. The statement from the EU Commission president, Ursula von der Leyen, and the VP, Henna Virkkunen, makes out that this is holding these platforms accountable, which I think is an interesting framing; you can see that very differently. But it is very clear to say that this is a way of providing parents, teachers, and caretakers the ability to make children safer. So there's a slight tension here, of holding platforms accountable but at the same time putting the onus onto parents, teachers, and caretakers to hold those platforms accountable. The key point, I guess, is that none of the platforms have to use the app for age verification, but they have to have alternatives that are, in quotes, "as good". We don't know exactly what that means now, but I'm sure we'll find out in due course. So you have a situation where a state within the EU wants to go down one route, and that differs from a bunch of other bigger countries, and you have the EU pushing an age verification app, which is clearly a slightly different approach. Is this the strategy? Is the strategy to kind of create, I guess, a kind of social media ban whilst the enforcement of the DSA and other regulation grinds along in the background?
Or is this muddled thinking from a bloc of states that don't really know where to go next?

Mike Masnick

Yeah, I think it's all a mess, right? Because the DSA has some good parts and it has some questionable parts, but none of these things are going to fix larger societal problems. The real issue is that we keep passing these sort of technocratic laws and thinking, oh, if we only had this, if we only had a ban, if we only had age verification, if we only had a thing that holds companies responsible, if we only had a thing that made them design in an age-appropriate way, if we only did this, we would fix all these problems. The problems are societal problems. Technology can exacerbate them, absolutely, and we should be thinking about that, and about how you design things better. But these laws are never going to fix the underlying problems. And if we don't tackle the underlying societal problems, there are still going to be problems in the world, because that is humanity. With however many, 8 billion, I don't know how many people on earth right now, we are always going to have societal problems. And this is the frustrating thing about this sort of regulatory approach, which is, you know, they spent all these years working on the DSA and the DMA, and then they don't give it even a chance to see how it works and where it doesn't work, and then think through thoughtful ways of how to actually approach and solve the underlying societal problems. Instead, they're just saying, okay, somebody's getting hurt, we must do something, we have to get headlines, we have to talk about this. And so now everybody's rushing to ban social media for kids. Nobody wants to take a holistic, thoughtful approach to these things. And even when they try to, nobody will give them time to actually see where they work and how to improve them. And so I'm frustrated and cynical, but this is what you get when that is the approach, and when you're driven by sort of political urges rather than by how do we actually fix this?

Ben Whitelaw

Yeah, it does feel like a bit of a, um, a stop gap, or that essentially the EU can kind of trumpet it as a win that doesn't actually wade too deeply into the geopolitical mess between the US and the EU, which we have documented over the last 18 months or so. It's a kind of discreet intervention, but it's unclear what it bridges to in the

Mike Masnick

Well, you know, and there are a few problems here too, because when the EU releases this tool and says, here is an age verification tool, a lot of companies are just going to use it, because, you know, if you don't want to get beaten down by the EU, use the damn tool that they're providing, since then they can never say you're not doing enough. But I got a text message literally as we started recording, so as you've been talking, I've been looking at this. So the EU itself just released this age verification tool, and a security consultant has already hacked it and pointed out huge problems with it, like how to get around it, and all the privacy concerns where it is storing private data, pictures of your ID, in unencrypted ways that are accessible. It has other problems too. And literally just seconds ago, as we're recording, this is real time stuff, this security researcher, Paul Moore, released a Chrome extension that will get you around it, you know, say that you're whatever age you want. I think, um, sort of, once you do this, you can continue to use the web as normal if you install this Chrome extension. So for all of the announcement here that this is the safest, best tool, it's already been hacked. You know, he has a post where he says, I've hacked it in under two minutes. I mean,

Ben Whitelaw

I mean, the irony of that is that one of the things that Ursula von der Leyen said was that the app has the highest privacy standards

Mike Masnick

Yeah. Yeah. Not so much.

Ben Whitelaw

Um, yeah.

Mike Masnick

And this is the thing. Eric Goldman wrote this paper last year all about how everybody claims that these systems are safe and secure, and every one of them is not. And, you know, maybe if we let Mythos go to work on these, you know, they'll start to realize that. But it took a security researcher just a few minutes to figure out how to hack the system and build an entire workaround. And so, no offense to EU officials, but I have little faith that they have the technical knowledge to build a truly secure age verification tool. And, you know, this one shows that maybe not.

Ben Whitelaw

Yeah, bit of egg on the face there. Um, okay, Mike, well, we've gone very, very deep on those stories. I think they're worthwhile and speak to two very important topics. I'm going to be super strict with the, uh, the small stories; this is going to be rapid fire. Start us off with a company that we don't talk about as much as we probably should.

Mike Masnick

Yeah. So this is Apple, which is one of the big companies out there, obviously, and they have content moderation and trust and safety issues too. And there were two separate stories that showed up this week that I thought were worth talking about briefly, quickly, in our lightning round, both having to do with the App Store and what apps they allow. The first was that there was a fake Ledger app that made it through the process and showed up in the App Store, and people put it on their iPhones and promptly lost millions of dollars in cryptocurrency. Ledger is one of the big cryptocurrency wallet companies, and they do a lot of things to try and keep your things secure. I actually saw this story originally because of an article about the musician G. Love losing four hundred thousand dollars. I don't know if you know G. Love at all. This takes me back to

Ben Whitelaw

Not very

Mike Masnick

my college days. G. Love and Special Sauce were very big

Ben Whitelaw

Oh yeah. Listen, they've got some really good

Mike Masnick

Yeah, and they've got some amazing songs. Actually, when I saw the story that he had lost half a million dollars, I suddenly went back and started relistening to a whole bunch of G. Love and Special Sauce songs. My roommate in college was a huge fan and played it all the time. It was really good; revisiting it was good. But he lost about half a million dollars, and he posted about it on X and talked about how he got tricked. And it's really funny, because one of the, uh, crypto publications obviously has no idea who G. Love is, and wrote this whole story about how, oh, you know, $9.5 million were taken by this fake app that Apple let into the App Store, and they talk about "this user named glove" on Twitter complaining about losing half a million dollars.

Ben Whitelaw

That's so

Mike Masnick

But, you know, it shows you how difficult it is. I don't know what process these scammers went through to get this app in and get it to show up when people were searching for Ledger. That's a huge failing on Apple's part, and kind of surprising, because Apple is pretty strict, famously strict, about what apps they allow in. Which then takes me on to the second part of this story. A few months ago there was the big story about X and Grok and nudifying, putting people into bikinis, and all the stories that you heard. Lots of people were saying, hey, doesn't this violate the App Store requirements? And why aren't Apple and Google removing them from their app stores? And there was a lot of pressure, and there was no commentary. And now there was an NBC News story that revealed, in a letter that Apple sent to some senators who had asked about this, that in fact they did say that both the X app and the Grok app failed to meet their standards, and they threatened to remove both of them from the store. And Apple and xAI went back and forth, and some changes were made, some of which we heard about publicly, though it was never fully revealed what all the changes and limitations were, which were designed to appease Apple. And then they went through this back and forth process where eventually they said the X app was okay, but the Grok app was still failing to meet the standards, and they again threatened to remove it, and eventually more changes were made. The App Store content moderation questions don't get as much attention as, like, Facebook and X and all those, but they can be really big, and in some cases really consequential. And I'm sure that Apple didn't want to act, because think of the story that would have been if they had removed X or Grok from the App Store. And so I'm sure they worked with the company to try and resolve this.
But the fact that they found that they did violate their rules, and they threatened to remove it, is a pretty big story to me.

Ben Whitelaw

So does this explain the kind of weird dealings that happened around that time, where Grok, xAI, announced that they'd made some changes and would only allow paid subscribers to use the kind of nudify tools, and then quickly kind of changed tack and said, oh, actually, we've stopped even paid subscribers being able to do that? Do you think that was

Mike Masnick

I think that was entirely about trying to make sure that Apple did not pull them from the App Store. And the sense from the letter is that that is exactly what happened: it was this back and forth negotiation. They did the only-paid-subscribers thing, and then Apple reviewed it and said, not good enough, we're still going to ban you. And then they did the next step. So each of those moves was probably driven somewhat by the demands from Apple, and possibly Google; we don't know what Google demanded at the same time.

Ben Whitelaw

Yeah. Okay. Great. The second story is a kind of middleware story as well, but of a different nature, and covers, uh, some interesting research coming out of India recently.

Mike Masnick

Yeah. This is about how they're handling internet censorship, which is really fascinating in that there are rules about what things are and are not allowed, but not rules on how ISPs are to stop that. And so what the research found was that basically every ISP takes its own approach, which has some interesting side effects. You know, some of them spoof DNS, some of them just block at the DNS level; there's a whole bunch of different approaches. Which, they said, actually makes it difficult for people to challenge the blocks, because depending on which ISP you're on, you get a different result. The content may still be blocked, but it's not as clear how or why. I thought it was really interesting, but I do disagree with one aspect of the research, which is that it suggests this is unique to India and different from the way that China's Great Firewall or Russia or Iran are blocking different internet services. But this is the way that China started. The Great Firewall actually had an almost identical process in the early days, which was that there were messages to the ISPs telling them, effectively: don't allow anything bad. If you do allow something bad, you will hear from us and we will fine you or punish you. Which led to ISPs doing the same thing, figuring out on their own what it is they were going to block and how. Over time, that did eventually centralize more and more as the Chinese government wanted more and more control over it. And, you know, we've seen similar things as well. There was this whole issue a few years ago in Pakistan, where they had a similar thing where you had to block certain content, and one ISP ended up taking down all of YouTube in the process of trying to block a single video, because it was this sort of ham-fisted attempt.
And so I do think that what is described as unique to India is not that unique, but it is interesting, and it is something to watch and to learn from, because I do think a lot of censorship regimes start this way: you basically tell intermediaries, don't let this happen, and let them figure out how, and they take different approaches. And then, as people figure out which ones are effective and which ones people can get around, they start to get a little bit more direct, like use this method instead of that method, or they may take more centralized control over it over time. But it is really kind of fascinating to see what is happening in India.
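[Editor's note: the measurement idea Mike describes, comparing how the same domain resolves across different ISPs, can be sketched in a few lines. This is a hypothetical illustration, not code from the research being discussed; the resolver names, addresses, and block sentinels are all made up for the example.]

```python
# Sketch: classifying DNS-based blocking from cross-ISP measurements.
# If different ISPs' resolvers return different answers for the same
# domain, each ISP is likely implementing the block its own way, which
# is exactly what makes the blocks hard to challenge.
# All resolver names and responses below are hypothetical test data.

BLOCK_SENTINELS = {"0.0.0.0", "127.0.0.1", None}  # spoofed or dropped answers

def classify_domain(answers):
    """Given {resolver_name: ip_or_None}, classify how a domain is treated.

    Returns 'open' (every resolver returns a real address),
    'blocked-everywhere' (every resolver returns a block sentinel),
    or 'inconsistent' (resolvers disagree, the hard-to-challenge case).
    """
    blocked = {r for r, ip in answers.items() if ip in BLOCK_SENTINELS}
    if not blocked:
        return "open"
    if len(blocked) == len(answers):
        return "blocked-everywhere"
    return "inconsistent"

# Hypothetical measurements for one domain across three ISP resolvers:
measurements = {
    "isp-a": "203.0.113.7",   # real answer
    "isp-b": "0.0.0.0",       # spoofed to a null address
    "isp-c": None,            # query dropped or NXDOMAIN
}
print(classify_domain(measurements))  # -> inconsistent
```

The "inconsistent" result is the case discussed above: the same site looks blocked on one network and fine on another, so users and site owners can't easily tell how, or by whom, the block was imposed.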

Ben Whitelaw

And this kind of concept of plausible deniability that the research talks about, it's kind of like the Spider-Man meme, isn't it? Where you have an empowered state actor and an ISP, and both of them are pointing at each other as to the cause of the takedowns, whatever form they take. It's like, who was it, who cast the first stone, you know? And actually, if there are so many different ways of doing it, there is no data to

Mike Masnick

right.

Ben Whitelaw

show what it was and who pushed it in the first

Mike Masnick

Yeah, it makes it much, much harder to track. You know, users can't quite tell, and sites can't quite tell, and you may get different responses on different ISPs. And so it is effective, but confusing, and harder to actually figure out what's going on, and that has advantages for a censorship regime. So it is really interesting. And, you know, this argument over these kinds of blocking has gone on for years. I mean, this was what the SOPA fight was about in the US, 14 or 15 years ago now, where they were going to require ISPs to do DNS-level blocking, but again, not giving a specific technical approach and just having the ISPs figure it out themselves. So this is an argument that has gone on for years. And in fact, we just saw in France in February, they had a ruling saying that third-party DNS companies had to block content as well. And so, you know, a lot of this has sort of been below the surface, but it is really interesting in figuring out how different censorship regimes work, whether it's for copyright or for misinformation or just, you know, content the government doesn't want you to see.

Ben Whitelaw

Yeah.

Mike Masnick

there are different technical approaches, which all happen kind of beneath the surface, but it's important to understand the different approaches.

Ben Whitelaw

Yeah. And in a world in which more and more states and governments are taking autocratic routes to managing information on the internet. Let's round up on, I was going to say a slightly more jovial note, Mike, but this story is about horror films.

Mike Masnick

Yeah. It depends on how you view horror films. I

Ben Whitelaw

I actually am not good with horror

Mike Masnick

am, I'm terrible. I cannot watch horror films. I'm just, I, I, you know, when I was a kid, I saw a few and I just decided I do not like this. so I avoid them.

Ben Whitelaw

Would it matter if it was about content moderation?

Mike Masnick

I, I, I, you know, I see enough of that in my day job, as they say,

Ben Whitelaw

Yeah, every day is a content moderation horror story. Um, but this is the kind of fun story from Variety, who have noted the fact that horror films are starting to use content moderators as plot lines and as characters, and they're essentially becoming a new genre of cinematic horror. Which is kind of not that surprising when, I guess, you're in the weeds of it. But I suppose I'm really interested, Mike, in the way that these topics are portrayed in non-news-based forms. You know, we talked at the start about the screening of that documentary, but in Everything in Moderation, on a regular basis, I've highlighted, you know, authors who are covering content moderation, theater directors who are placing content moderators and these topics at the center of their stories. And now we're seeing this new trend for horror stories. So there's three, basically, all coming out at once. The directors have all kind of used moderators as the core, and I've watched a couple of the trailers and.

Mike Masnick

Yeah. And I will not watch the movies, but

Ben Whitelaw

you've, you've come out from behind your sofa.

Mike Masnick

But, but yeah, I mean, it is interesting. It's sort of this cultural moment. I mean, you can see why; it's not hard to think through how the stuff that content moderators have to deal with, as we talk about on a regular basis, makes for good plot devices, right? And so, you know, it's funny, because a couple years ago there were a few different novels that came out where the protagonist was a content moderator. And then, like, two years ago or one year ago, I forget now, there were multiple plays that showed up that had content moderation and trust and safety as a plot point. And now it's moved into the horror genre, and you can totally see why. And in fact, as I mentioned to you before we started recording, I think it was last year I had been approached by this semi-well-known Hollywood director, showrunner person, about a TV show, a limited series, that they were working on, where they wanted to use content moderation as a major plot point. And they asked me to consult, which eventually did not work out, but I had discussions about it as a plot point. And so there's something in the air where this is a culturally relevant thing that a lot of people can understand, because it touches on things that they touch on, and they have this sort of vague awareness that there is a group of people out there who deal with all the horrible stuff. And then you can project all sorts of interesting cultural stories onto that, and lots of people are.

Ben Whitelaw

Yeah. I'm looking forward to the, uh, content moderation ballet that will

Mike Masnick

Yes.

Ben Whitelaw

pop up at some

Mike Masnick

Yeah. We need opera and ballet. So that's the

Ben Whitelaw

Yeah, the Black Swan, but for, uh, you know, internet safety. Um, so that's our best attempt, listeners, at trying to create a bit of levity at the end of today's podcast. Um,

Mike Masnick

I do want to say, if any of our listeners go and see any of these horror films, we want reviews. I won't go see them, Ben. I don't know if you can be convinced to go see them, but we do want reviews. So send in your reviews of these films.

Ben Whitelaw

Agreed. Alice Hunsberger, who writes for Everything in Moderation, did a good review of American Sweatshop, which is the other big film recently about content moderation. So yeah, all reviews and feedback very welcome at podcast@ctrlaltspeech.com. Mike, we'll wrap up there. We've tried our best to keep it snappy. We've gone through a lot of stories: NBC News, Tech Policy Press, Time, Politico, Variety, Wired. We've done the whole gamut this week. Just to remind listeners that next week will be a different episode for our hundredth episode of Ctrl-Alt-Speech, our anniversary edition. Please do tune in. It's a big one for us. And, uh, yeah, thanks for listening. Take care. See you soon.

Announcer

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.