Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Age Old Questions
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- It's official: Australia's teen social media ban isn't working, yet (Crikey)
- Social Media Minimum Age - Compliance update (eSafety Commission)
- Blunder from Down Under (Ctrl-Alt-Speech)
- April 3 could create a dangerous gap in child safety across Europe (Thorn)
- Commissioners pile pressure on Parliament to pass child sexual abuse bill (POLITICO)
- Weeks After Denouncing Government Censorship On Rogan, Zuckerberg Texted Elon Musk Offering To Take Down Content For DOGE (Techdirt)
- What Is YouTube’s Dominance Doing to Us? We Asked Its C.E.O. (The New York Times)
- This Episode is Broadly Safe To Listen To (Ctrl-Alt-Speech)
- Meta will "substantially reduce" describing Instagram teen accounts as PG-13 (Engadget)
- Rated R for Ridiculous (Ctrl-Alt-Speech)
If you’ve got Elon Musk in your Ctrl-Alt-Speech 2026 Bingo Card this week, you’re in luck.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw: So this week's prompt, Mike, couldn't be more perfect for the topic and the country we're going to discuss today. Surf is a new social platform from the makers of Flipboard. Did you ever use Flipboard much?
Mike Masnick: Yeah, I used Flipboard a bunch back in the day, and it sort of fell off. But I will say I have been playing around with Surf, because I had beta access to it from a while ago, and it was just released today.
Ben Whitelaw: Yeah. So it's a brand new social app. It allows users to make feeds and to follow other people who make feeds. You can kind of think of it as souped-up hashtag pages, if you were a Twitter user back in the day. That's a bit of an oversimplification, but it's a bet on the famous hashtag. Anyway, we're gonna use the Surf prompt as the start of today's podcast. You've already done this on the platform, I think, Mike, but it asks you to make a feed about something. So what are you gonna make a feed about?
Mike Masnick: Well, the thing I was gonna say is, I spent last weekend at the Atmosphere Conference, and there was so much energy about what is going on in the open social web, which is part of what Surf is enabling. So I would make a feed about some of the very exciting new developments, new services, and incredible creativity that came out of my experience last weekend. What about you, Ben? What would you make a feed about?
Ben Whitelaw: I would probably make a feed about how to stay calm during a house renovation project. I'm back on the house renovation train.
Mike Masnick: Oh, no.
Ben Whitelaw: It's bad. It's real bad. I'm doing the podcast surrounded by rubble, with dust swirling around me. It's a hell of a time, but we plough on. If anyone has any house renovation tips, get in touch with us. I'm all for it; I'm gonna need them over the next couple of weeks. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of major stories about online speech, content moderation, and internet regulation. It's April the second, 2026, and this week we're talking about how Australia's social media ban is going, why an EU vote potentially puts children at risk, and more content moderation shenanigans from two major platform CEOs. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm in the chair with Mike Masnick, founder of Techdirt, and recently back from a trip across the border to Canada.
Mike Masnick: Yeah, they let me back into the country, Ben.
Ben Whitelaw: You didn't get left out in the cold?
Mike Masnick: No, no, everything worked fine. It was funny: when I landed in Canada, I texted my wife that they let me in, and she said, go seek asylum.
Ben Whitelaw: Which hopefully says nothing about the state of your marriage.
Mike Masnick: Right, right. Well, she's looking out for me. But no, I didn't have any problems getting into either Canada or then back into the United States. I did have problems on flights: I had connecting flights, and basically all four of my flights, both there and back, had some sort of problem. So it was kind of a rough trip all around. But I made it out of the country and back in. Lovely Canada. Canada is wonderful. Vancouver is wonderful. It's the first time I've been to Vancouver, and it's a very nice city.
Ben Whitelaw: Hmm.
Mike Masnick: And it was great. The conference was held at the university there, which is a little bit outside the city, or on the edge of the city, basically a little bit separated from it. Beautiful campus, beautiful event, really wonderful discussions. And it was really energizing. A bunch of people have written up their takes on the Atmosphere Conference, and just for folks who don't know, the Atmosphere Conference is basically a developer conference about the AT Protocol, which is the underlying protocol that powers Bluesky (ding, ding, ding: I am on the board of Bluesky). But the really neat thing about the conference, honestly, was that very little of the discussion was about Bluesky. I was noting on Saturday that we went hours without anyone even mentioning Bluesky at all. Almost all of the discussion was about amazing things different developers are doing that are totally unrelated to Bluesky. People are building, you know, just a better internet using the AT Protocol.
Ben Whitelaw: What was the big one that made you think, okay, this is cool? Like, something you didn't expect to see appear when you wrote the paper, back whenever it was.
Mike Masnick: Oh man, there's so much. It's almost hard to describe. So Streamplace is like the big video streaming platform that has been around for a little while and has been really successful, and they're launching video on demand now. So rather than just streaming, streaming being more like Twitch, they're now gonna start to do YouTube. They actually were set up and filmed every talk at the conference, and are now putting them up on their new video on demand platform. That's really cool. And somebody had hacked together, literally days before the event, this app called You and Me, which became really popular at the conference. As you met someone, you would open your You and Me app, you would scan each other's thing, and it would bring up their AT Protocol profile and say, oh, these two people met. So you could track how many people you met. Some people got very competitive about it. I missed out, but I saw people running around like, oh, I haven't scanned you yet. So there were...
Ben Whitelaw: ...style.
Mike Masnick: Yeah, there was that, and there was just a bunch of really creative stuff, people thinking through all different things. You had a lot of talk about long-form content, so there's blogging and newsletters. There were people talking about specific areas of building communities, smaller communities, larger communities. It was all sorts of creative thinking about how you build tools on this open protocol, and it really was very exciting.
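A minimal sketch of the profile lookup at the heart of an app like the one Mike describes. The Bluesky AppView endpoint below is real and public, but the badge-scanning flow, the `met` set, and the `scan_badge` helper are hypothetical illustrations, not the actual app:

```python
import requests

# Bluesky's public AppView endpoint for profile lookups; no auth required.
# The endpoint is real; everything around it is a guess at how such an
# app might work, not a reconstruction of the real one.
APPVIEW = "https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile"

met = set()  # DIDs of people already scanned at the conference

def scan_badge(handle: str) -> None:
    """Look up the AT Protocol profile behind a scanned handle and log the meeting."""
    resp = requests.get(APPVIEW, params={"actor": handle}, timeout=10)
    resp.raise_for_status()
    profile = resp.json()
    met.add(profile["did"])  # DIDs stay stable even if the handle changes
    print(f"Met {profile.get('displayName') or profile['handle']} "
          f"({profile['handle']}); total met: {len(met)}")

scan_badge("bsky.app")  # any public handle works
```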
Ben Whitelaw: Great. And Surf, which we started the episode with, is actually built on, or pulls in content from, various federated platforms.
Mike Masnick: Yeah, Surf is really cool in that it builds on AT Proto, but also ActivityPub, which is what powers Mastodon, and RSS, which is the old-school syndication side of things. Surf is cool because it pulls in content from all of those in a way where you don't have to know the difference. You just create an account on Surf and you can build feeds that pull in content from all of these things, and it becomes a way to consume and follow different kinds of content on different kinds of topics, including videos; they bring in YouTube videos as well. What's cool is people are beginning to realize there are ways to do cool things on the internet. A lot of people kept talking about how it reminds them of the early days of the internet itself, when people were trying to figure out how to build websites. And these are still websites and apps that people are building, but with this social protocol layered on top, which begins to open up fun new possibilities, like being able to scan each other's badges and stuff. Even the tickets to the conference themselves, the schedule and the tickets, were built on the AT Protocol. To get your badge, you log in with your account, and then it shows on your account. Now, in my PDS, in my collection of data, which I control and can move to any other server, it says that I attended the Atmosphere Conference. All this data is under your control, and you can begin to do really cool things with it. So there's a lot of fun, creative thinking, and it was really exciting. We also saw other networks. People always talk about Bluesky because it's the biggest, but we're seeing others, like Blacksky, which has gotten larger and larger. Eurosky, I think, has now surpassed Blacksky in how many users they have. There's Northsky as well. So we're seeing other parts of the ecosystem growing and showing what you can do when you have an open protocol.
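To make the "one feed, three protocols" idea concrete, here is a minimal sketch of pulling items from RSS, ActivityPub, and the AT Protocol into a single reverse-chronological list. The endpoints are real public interfaces, but the merging logic is a simplified assumption, not how Surf itself is implemented:

```python
import requests
import feedparser  # pip install feedparser

def rss_items(url: str) -> list[tuple[str, str]]:
    # Old-school syndication: feedparser handles RSS and Atom alike.
    return [(e.get("published", ""), e.get("title", ""))
            for e in feedparser.parse(url).entries]

def activitypub_items(outbox_url: str) -> list[tuple[str, str]]:
    # ActivityPub servers (e.g. Mastodon) expose a public outbox as
    # ActivityStreams JSON; outboxes are usually paged.
    headers = {"Accept": "application/activity+json"}
    outbox = requests.get(outbox_url, headers=headers, timeout=10).json()
    first = outbox.get("first")
    page = (requests.get(first, headers=headers, timeout=10).json()
            if isinstance(first, str) else (first or outbox))
    items = []
    for activity in page.get("orderedItems", []):
        obj = activity.get("object")
        if isinstance(obj, dict):
            items.append((obj.get("published", ""), obj.get("content", "")))
    return items

def atproto_items(handle: str) -> list[tuple[str, str]]:
    # AT Protocol via Bluesky's public AppView; no auth needed for public posts.
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed",
        params={"actor": handle}, timeout=10).json()
    return [(item["post"]["record"].get("createdAt", ""),
             item["post"]["record"].get("text", ""))
            for item in resp.get("feed", [])]

# One feed, three protocols: merge and show newest-first.
# (A real app would parse the timestamps into datetimes before sorting,
# since RSS and AT Protocol use different date formats.)
feed = rss_items("https://www.techdirt.com/feed/")
feed += atproto_items("bsky.app")
# feed += activitypub_items("https://mastodon.social/users/Mastodon/outbox")
feed.sort(reverse=True)
for timestamp, text in feed[:10]:
    print(timestamp, text[:80])
```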
Ben Whitelaw: What's Northsky? People who like, or live in, the north of the country they live in?
Mike Masnick: It's a Canadian-based service, but it's not just for Canadians, as I understand it. I believe it's really focused on making sure that LGBTQ folks are welcome on a platform that's designed with that community in mind, and on being very supportive and welcoming. And they're building in some other features. They've plowed ahead very quickly with trying to do permissioned data, which is private data, which is something Bluesky has not done yet. Again, this is one of these things about having an open protocol: it's extensible, and other people can build on it. There was a lot of talk about building out permissioned data and how it's gonna be done, and Northsky is one of the services that's kind of leading the way there. It's just all sorts of creativity, and people willing to hack and build and create, not locked into some silo under an evil CEO's control, as we may be talking about later in the podcast. It was really like everybody was buzzing. Almost all the conversations I had, people were excited. There was this excitement that hasn't been around on the internet in a little while.
Ben Whitelaw: Yeah, that's great. Sounds very hopeful, very positive. Did you get stopped for any selfies?
Mike Masnick: No, no selfies, I think. But a lot of people knew who I was, which was an interesting experience. It was nice to meet a lot of people who I'd interacted with online, and to have conversations with people that I only knew online. It was a jam-packed event where there were three different tracks going on all at once, and going through the schedule was impossible because, basically, out of every time slot I would want to go to at least two out of three of the sessions, and usually three out of three. And then there was a whole hallway track, where you're just skipping sessions and talking to people in the hallways. So I just kept running into people, meeting people, or people wanted to talk to me about this or that thing. It was nice. A couple of people were like, does it feel weird to be at this conference that can sort of be traced back to a thing that you wrote, with all these people and all this stuff going on? And yes, that is very weird.
Ben Whitelaw: But nice, I imagine. Very nice. Talking of online interactions, to move on from the conference experience: we had a couple of Spotify comments this week. Not loads, but they weren't as nice as people were in person at the conference.
Mike Masnick: Oh, we got mean comments.
Ben Whitelaw: Well, not mean so much, but people, I think, took umbrage with your critique of the Meta trials last week. We did a great episode, you were on good form I'd say, about the California and New Mexico trials that occupied a lot of the news agenda last week. And one of the comments said, wow, good thing Masnick wasn't on the jury.
Mike Masnick: Okay.
Ben Whitelaw: Somebody else stuck up for you, in fairness. But...
Mike Masnick: I mean, they would've kicked me off the jury so fast. Come on, I've gone through the jury selection process, and I know I wouldn't have made the cut.
Ben Whitelaw: Yeah. Okay, so it's wishful thinking on the part of the commenter, but it's good to get some comments. We had a couple of emails as well saying it was a great episode. Thanks, everyone, for listening, and obviously get in touch with us at podcast@ctrlaltspeech.com, and rate and review us wherever you get your podcasts, as ever. We are seeing a growing number of listeners, and that's in part down to you spreading the word, getting the word out there about the podcast, and sending us lots of your feedback and your thoughts. We have a lot to get through this week, Mike, as ever. We spent more time than usual, I think, figuring out what to talk about this week, because there's a whole lot. After a very US-focused episode last week, we're taking a step away from the US, covering Australia and also the EU in various forms. We're gonna start in Australia, where obviously surfing is a very well-known pastime (there's my link). But another pastime that has sprung up over the last few months is checking if your child is still on social media when they shouldn't be. The social media ban that came into effect on December the 10th last year has been a long-running story, and it's a piece of regulation that's now being copied around the world. So it was very interesting this week to have a new report come out on the first three months of the enforcement of the ban. You've gone through it in fine detail, Mike, to bring us the best of it and save our listeners the hassle.
Mike Masnick: Yeah, it's interesting. The eSafety Commissioner, I think, is required to put out reports on it, looking at compliance, and it's kind of funny to read, in that basically they're admitting that it's not really working, but they're trying to put a happy face on it. Both a happy face and a threatening face, right? Because they sort of admit, yeah, a bunch of accounts were banned, and it is harder for kids to create new accounts, but there are still a ton of kids on these platforms, and that's bad and needs to stop. So there's this vague threatening tone: we've started investigations into various platforms, and we're gonna ramp up our pressure on platforms that we think are not doing enough to keep under-16 kids from accessing the content. I'll mention that there was another study just a few weeks ago that I had written about, which used external data to try to figure out how many people under 16 still had accounts. They actually said there was a dip, but it looked like the equivalent of the dip at the end of summer, when kids log off to go back to school. So there was a dip, but not a major one, and I think that's reflected in some of this report. It's almost a passive-aggressive report, in some ways, in that it's trying to play up, oh, we're hearing all these good things and kids are getting banned, but it also feels like kids are getting through. Some of the things that stood out to me, beyond the fact that they're basically saying, hey, now we're gonna get serious and really start to crack down: they went through these key observations about how kids were remaining on the platforms, or really how the platforms were not doing enough to keep kids off. And these struck me as bad, as not really dealing with the nuances here. For example, they complained that kids who had signed up for platforms before the ban went into effect, and who had indicated they were under 16, were urged to undergo age assurance in the lead-up. To me that sounds understandable, but the eSafety Commissioner is complaining because, they say, when this person signed up, they said they were this age; you should assume they are that age, that they are under 16, and your nudge to get them to do age assurance is really a nudge to get them around the ban. They have this example of someone who signed up and said they were 14, and then the platform pushed them to do age assurance, which did a face scan, and the face scan said, you're 16, so it let them stay. So the eSafety Commissioner's argument is, oh, this is a route around the ban. But no: you pushed them to do age assurance, and they did age assurance. The fact that the age assurance didn't work, just like so many people warned you it wouldn't, because that technology is not that good, I don't think you can fault the platforms for. That just feels wrong.
Ben Whitelaw: Yeah, that feels like there's a bit of exceptionalism there, right? You're right that you can't have it both ways. You put a ban in place, the platforms put a bunch of age assurance and age verification in, and there are gonna be some people who, as we all know, evade that in various ways. But in all of the reporting and analysis about the ban so far, I feel there's been this kind of exceptionalism, where people find individuals who've evaded the ban. Either journalists speak to them directly, and they say, without much evidence, yeah, I'm on TikTok, or I'm on Snapchat, or I'm sending messages to my friends; or they find a parent; or, in this case, they find somebody who's evaded the ban with the help of some quite poor age verification technology, it sounds like. And this report doesn't really add much, in my sense. It still feels quite exceptional; it doesn't feel very grounded in data. There is this big number of 4.7 million accounts where the platforms have prevented access for under-16s. But it still feels very early to be drawing too many conclusions, doesn't it?
Mike Masnick: And it gets back to the whole impossibility theorem concept here, where there's this assumption that if the platforms aren't doing it perfectly, clearly they're making mistakes. One of the other examples was, they said, oh, if someone fails an age assurance test, some of the platforms allow them to try again. And it's like, well, yeah, because the technology's not good. You can't do a one-and-done with this kind of weak technology, and yet they act like that's some sort of big scandal. The other thing that bugs me about this is that it heavily relies on parental surveys. They did parental surveys, and parents are saying, oh yeah, my kid still has access to certain platforms, or whatever.
Ben Whitelaw: Yeah.
Mike Masnickwe know that's true? Are they just saying that? What do they mean? And also because the system is, you know, for like YouTube for example, you can still watch YouTube videos in Australia for under 16. You just can't have an account. So do parents know the difference? Do kids know the difference between that? Do parents know the difference between that? So if you, if a kid is just watching YouTube without an account but you ask that parent. even the kid, they're probably gonna say, yeah, I still have an account because they're still watching YouTube. And so there are distinctions here that I think the numbers are a little bit, unclear. the one set of numbers that I did think was interesting and to me was a little bit of a surprise. And, and they admitted, you know, that there were concerns raised that, Children who are under the age of 16 who were facing abuse or threats or harassment or something else, some sort of problematic thing online, we're not going to report it because of, fear that, oh, I'm gonna be discovered as being under 16 on these platforms. I'm gonna lose my access. So there were concerns raised that it would lead to fewer reports of abuse, which could be really dangerous. The eSafety Commissioner, they don't give numbers. So, I don't know for sure, but they, they indicate that the, the reporting amounts seem about the same, that there hasn't been a noticeable change in the number of reports of cyber bullying or, or whatever. so maybe that's like one hopeful takeaway from this. but I still feel like I would imagine that there still are kids who won't report stuff because of this very reason.
Ben Whitelaw: Yeah, that's another potential unintended consequence which, again, research and better analysis of data will bear out over time, I guess. It's not just the survey that causes me concerns with this report, Mike; it's the research methodologies more generally. The parent survey is one of them, but everything else also feels potentially cherry-picked. They use public reporting from media and researchers, and we know what media is like when it comes to reporting on the issues we talk about each week. There is some desktop research and testing of platforms by the eSafety Commission themselves, where you're only gonna look for particular instances, I guess. There are other web forms that have been completed by parents or users. Again, they're all surfacing the exceptional circumstances, and you're probably not seeing the average experience of most users.
Mike Masnick: Yeah. And there are other reports too. There was a report, maybe about a month ago, looking in particular at disabled teenagers who had relied on various communities on social media and had lost access to those. That seemed very serious; it was deeply reported, and a worrisome finding. And this report doesn't touch on that at all. There is something about trying to provide resources for kids who are upset by losing their account, which is not the same thing as what that report from a few weeks ago was talking about. The way it's presented in this report is basically, oh, those addicted kids who can't handle losing their Instagram or whatever. But no, the earlier report was about people who really relied on the communities they had built, and this report doesn't deal with that at all. It doesn't address it even remotely.
Ben Whitelaw: Yeah. And the eSafety Commission should be commended for putting this out and being transparent about the information. It's not just an exercise in sharing information, though. What they've said, as a result of the analysis they've done, is that they're now going to bring enforcement cases against five of the ten platforms that were under the regime in the first place: Facebook, Instagram, Snapchat, TikTok and YouTube. They're in the process of deciding what that enforcement looks like. So within the next couple of months, Mike, we might see enforcement against those five platforms for a failure to make reasonable adjustments to how the platforms work for children, which feels very quick, I must say.
Mike Masnick: Yeah, and to me it also demonstrates the problem, because this reasonableness standard comes up in internet-related laws all over the world. Reasonable is a very subjective term, and it basically means: if anything happens that we don't like, we can go after you. I think we're seeing that. It's only been a few months and they're saying some of these platforms are not being reasonable. I already described some of the examples; and okay, again, you don't want me on the jury, I have my positions on these things. But if you require age blocking, and then you complain about a platform offering age assurance to someone they think may or may not be underage, which strikes me as a reasonable approach, and the eSafety Commission comes out and says it's not a reasonable approach, then it's basically: get everything right or we're gonna come and do enforcement. And that doesn't seem like a good system.
Ben Whitelaw: In fact, in one of the key points of the report, very early on, there's a bullet that kind of admits to this question of reasonableness being vague. It says, basically, that reasonableness is a question for the courts to decide once we present the evidence to them. So there's an acknowledgement, I think, that that is the direction this social media ban is going: there is gonna be enforcement against probably at least one of these platforms in the course of the next six months or so, and that will be a thing for the courts to decide. I think we can already see the direction of this, Mike, which means we will probably return to it on a future episode of the podcast.
Mike Masnick: I am sure the experiment happening down under will be a repeat focus of the podcast going forward.
Ben Whitelaw: Yes, exactly. Let's go on to matters in the EU now. Mike, when we talk about stories on the podcast, they always do one of two things, I think. They either explain or break down a big story that has been in the news but maybe isn't covered in as much depth as we'd like, or we surface stories that maybe people haven't heard of, where us mentioning them or talking about them is a starting point. And this second story is in the second category. It's something that hasn't got a lot of coverage. I know you've written about it this week, but it feels incredibly important and very relevant to our listeners. It's also something that came up a lot at the trust and safety summit I attended last week. I was caught in the hallway by a bunch of people who said, you haven't really talked about this story that's consuming a lot of me and my team's time and energy; are you aware of it? It had been bubbling on my radar, but this week it really became very, very important. So let's talk a bit about CSAM scanning in Europe. Many of our listeners will know about hash matching, so let's start there. Hash matching is the digital fingerprinting that allows platforms to identify known CSAM so that it can be taken down or removed, to avoid users being harmed. It's a technology that has worked well for a long period of time. There are various databases that platforms feed into, in order to create as much of a centralized database of CSAM hashes as possible, and that's what has allowed CSAM to be removed at the rate it has. What I didn't know, Mike, is that in the EU, that process, that voluntary removal of CSAM, is enabled by an exemption to the ePrivacy Directive, often known as the cookie law.
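For anyone who hasn't seen hash matching up close, here is a minimal sketch of the core idea: fingerprint an uploaded file and check it against a shared database of known-bad fingerprints. Real systems use perceptual hashes such as PhotoDNA or PDQ, which survive resizing and re-encoding, rather than the exact-match SHA-256 used here, so treat this as a simplified illustration:

```python
import hashlib

# Fingerprints of known abuse imagery, shared across platforms via databases
# maintained by groups like NCMEC and the IWF. The single entry below is just
# the SHA-256 of an empty byte string, so the demo can exercise the match path.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(data: bytes) -> str:
    # Exact cryptographic hash: change one byte and the digest changes entirely.
    # Production systems use perceptual hashes (PhotoDNA, PDQ) so resized or
    # re-encoded copies of the same image still match.
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> bool:
    """True if the upload's fingerprint is in the shared known-bad database."""
    return fingerprint(data) in KNOWN_BAD_HASHES

if check_upload(b""):
    print("match: quarantine the file and report it")
else:
    print("no match: allow the upload")
```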
Mike Masnick: Yes. Well, actually most people blame the GDPR for cookies, even though, you're right, it's actually the ePrivacy Directive, which is why you have to deal with the cookies. Yes.
Ben Whitelaw: Yeah. So this voluntary system has been enabled by basically an exemption to this law, which has been extended over a number of years. And again, I had no idea about this, despite nominally being in the EU...
Mike Masnick: At some point.
Ben Whitelaw: Yeah, in the not too distant past. But it was last extended in 2024, and since then there has been a process to create a more permanent framework for child safety in Europe. So we have this kind of two-track process going on. The extension deadline is looming, but at the same time, European MEPs are trying to create a permanent solution that balances privacy and child safety in a way that is set up for the future. This has been going on, and last Thursday, just when we were wrapping up the conference, there was a big vote, called a Hail Mary vote, for another extension of this exemption to the ePrivacy Directive. And it did not pass. Right?
Mike Masnick: Right.
Ben Whitelaw: All of a sudden you have a situation where platforms are in a very, very difficult position. They either keep scanning for CSAM, which would now be illegal in the EU and potentially make them liable for privacy violations under this directive, or they abide by the rules and stop scanning for CSAM. What a terrible position to be in. And as of tomorrow, April the third, the current exemption runs out. So platforms, as of tomorrow, will have to pick one of those paths, and this is what was really getting some of the trust and safety professionals at this conference in a rage: how am I meant to decide? What do you expect me to do? How has it come to this point, you know? How much have you been following this in Techdirt's coverage over the last months and years?
Mike Masnick: Yeah, I have been following it, and I did write about this this week as well. This is one of those genuinely hard ones. There are no good answers, no easy answers here. There are challenges across the board, some of which we struggle with in the US, but the European situation is really, really tricky. It's important to break down that there are trade-offs to every approach, right? The ePrivacy Directive is designed to protect the privacy of people in the EU, and as part of that, it makes it difficult for companies to scan private communications and private data. At a first pass you're like, yeah, that seems good; that seems like something most people would want to support. But then you have the flip side, which is that, as we all know, any network that has any kind of sharing ability, especially for private data, eventually gets used for CSAM. That's why we've built up these giant hash matching tools and other tools to scan for it, identify it, quarantine it, and notify the various reporting groups around the world, the most famous being the CyberTipline in the US, though other countries have similar things. There are these systems, generally considered voluntary and cooperative, between platforms, tip lines, and law enforcement, that allow them to identify this stuff, and if there is a law enforcement issue, if someone is being abused or someone is being abusive, to allow law enforcement to go deal with it. And there are different responses in terms of how well that works. We've reported on NCMEC and the CyberTipline in the US getting overwhelmed with bad reports or not having the right information, and law enforcement not having the resources to actually deal with these things. All that is true, but there are also reports of these tip lines and the cooperation system being useful for law enforcement to actually go after real perpetrators. So you have to figure out a way to balance those things. In the US the balance is very fragile, and there are attempts to break it. But in the US it's voluntary, right? They can't put into law that a platform has to scan, because if they do, if scanning is at the behest of the government, then it's covered by the Fourth Amendment, which means you need a warrant, which requires probable cause, before you can do the search. And if you do require scanning and you haven't gotten a warrant, the remedy is that the evidence is excluded. So if you required platforms in the US to do scans, you would basically make that evidence useless in court, which would be really bad. If they found evidence of abuse and then it couldn't be used in court, that would be bad. This is why, currently, there's a private class action lawsuit trying to force Apple to scan, and a separate lawsuit in West Virginia claiming that Apple should be required to scan iCloud storage, both of which would create massive Fourth Amendment problems. But the situation in the EU is kind of the reverse: you have privacy activists claiming that this exemption for scanning is bad, that they want the ePrivacy Directive to stand as it does without the exemption, and that companies shouldn't be allowed to do general scanning without direct suspicion.
And there's a part of me that's like, you know, I tend to agree with privacy advocates on lots of things, and I see that. I certainly see the slippery slope concern: when you have an apparatus in place that allows you to scan all of your storage and communications, usually that doesn't remain focused on just one item, but can become a much broader scan and much broader surveillance, which would not be good.
Ben Whitelaw: Yeah. I think the issue for me here, and I get that it's very complex, is that there wasn't a solution ready. MEPs have pushed forward with a more permanent framework, and according to people I've spoken to, the way the vote happened, and the timescales from late last year when the extension votes took place, were designed to force agreement on the permanent framework. There was an emphasis on getting a permanent framework sorted. What that's led to is this gap now. So kind of...
Mike Masnick: The failure to have anything. Yes. Yeah.
Ben Whitelaw: Right. Procedurally, I can see that you want to move towards a world in which privacy and child safety are aligned in a framework that everyone understands and the platforms can take and use. What it has meant in practice is that there's now gonna be a gap where platforms are in limbo, in a bit of a no man's land. The other thing that I don't think covers the EU political system in glory is that there were actually two votes that took place: one for a clean extension to the ePrivacy Directive exemption, and one for a narrower version of it, as they move towards that permanent framework, which doesn't allow platforms to be as broad in their scanning. And they both passed. Because they both passed, there was no cohesive position, and it kind of collapsed; you can't move forward, right? If anything, that shows the complexities and the different views within the Commission about what should be done. But on the ground, people are saying that this is gonna make their lives very difficult. It's gonna put them in a position where they might not be able to protect children as much as they would like on their platforms, which is obviously what no one wants.
Mike Masnick: Yeah, it's definitely a tricky situation where there's no perfect answer, like many things related to online trust and safety, and especially child safety in particular. There are serious trade-offs, and different people can weigh those trade-offs in different ways. On the procedural stuff, they've been arguing over this for years trying to find a permanent solution, and I can understand the people who are tired of kicking the can down the road every time by just extending the exemption for a longer period. But now we're in this situation where they can still scan, but they could face liability depending on what comes out of the scan, or people could sue over it, which means, for the most part, they probably will stop scanning. And that's probably not a good situation. Like...
Ben Whitelaw: Yeah. Yeah.
Mike Masnick: ...talk with anyone who works in child safety stuff, and I think, generally speaking, this is not a good scenario for the EU.
Ben Whitelaw: No. But it has happened before. I was reading up on how essentially a similar thing happened in 2021, when there was a seven-month gap before the extension was agreed. So again, platforms had to take the risk of whether they scanned or didn't scan. And one of the only ways you can tell what the effect of that was is that there was a decline in the number of reports to NCMEC, the National Center for Missing & Exploited Children, and it was significant: 58%. So you had a situation where there were fewer reports coming through the system, and without that pipeline, it's impossible to then address any serious issues from law enforcement's perspective. I get the issues on all sides, but it's a really tough one. A really difficult situation for everyone.
Mike Masnick: Yep.
Ben Whitelaw: So let's go now to our roundup, Mike. We're gonna do a bit of a roundup that relates back to some old episodes of Ctrl-Alt-Speech. There's a glut of stories and interesting reads this week that relate to things we've covered in the past or people we've spoken about. And we're gonna start by talking a bit about our good friend, longstanding Ctrl-Alt-Speech punching bag, Mark Zuckerberg. You wrote a bit about him this week and some texts he's been sending to his pals.
Mike Masnick: Yeah. So this was kind of funny, and somewhat random almost, because it came out of the lawsuit that Elon Musk has filed against OpenAI, which is a whole other issue; we're not even gonna go into the details of what is happening in that lawsuit. But there was a filing where Musk's lawyers were trying to get certain evidence that was pulled during discovery suppressed, basically saying OpenAI is just looking to embarrass Elon Musk. Which, you know, is fun for everybody. They were saying all these things are irrelevant to the case, and they showed a few examples, some of which are fairly embarrassing to Elon Musk. But one of the examples was a text message exchange between Mark Zuckerberg and Elon, which took place in February of 2025, soon after the new administration came in and Elon Musk was in control of the government. As some of you may remember, that happened for a few months in the new Trump slash Musk administration. And Zuckerberg texted Musk, like, hey, and I forget the exact phrasing, but basically: looks like DOGE is coming along great, and just so you know, we're all set up to take down anyone who tries to dox or threaten DOGE employees. There's a lot of context here that is important. That was just a quick message, and then the exchange went off in other ways; I think Musk liked it. This came 24 days, because I counted, after the infamous Joe Rogan appearance, which itself came just a couple of days after Zuckerberg's silly expensive-watch, fancy-chain video where he's like, we're done with content moderation, we're getting rid of fact checkers, we went too far. And then he went on Rogan and was like, oh, the Biden administration cursed at us and tried to make us take down content, and we're done with that, and we're never going to remove truthful information again; we don't believe in removing truthful information. Twenty-four days later, he is proactively texting perhaps the most powerful person in the US government saying, we are all set and ready to take down the content that you don't want online. Which, I mean, the level of sheer hypocrisy, just brazen and obnoxious. The whole Rogan thing was a performance; I wrote 10,000 words on why it was a nonsense performance, but this is just insane. Some people will push back and say, well, he said doxing and threatening, and that can't be allowed. Sure, threatening, fine. But doxing? These were public officials who were working for the government; there's no doxing in naming them. The other bit of context here is that the day before that text message exchange, Wired had published the first of multiple reports identifying some of the DOGE kids, because they were mostly kids. So it was really about that: oh my gosh, reporters are doing journalism and identifying federal employees who are going through the government and shutting stuff down and pulling funding, and journalists were reporting on them, and Zuckerberg is like, please, government, are you happy? I'm going to remove any information on these guys. It seems really, really bad, first of all, and just completely the opposite of what he was performatively saying on Rogan less than a month earlier.
Ben Whitelaw: Yeah, it's so brazen. And it's the text of such a wet, sycophantic CEO. Like, oh, it looks like DOGE is going well. Shut up. Stop licking Elon's ass. You're embarrassing. I would be embarrassed if this text came out about me, I honestly would, and that doesn't even take into account the fact that it shows I've got no morals or principles, that I say one thing in public and do something else in private. All aspects of it are...
Mike Masnick: Remember, it wasn't even that long before that that they were challenging each other to a cage match duel, where they were gonna fight each other, right? I mean...
Ben Whitelaw: I mean, that was embarrassing enough, like...
Mike Masnick: Yes.
Ben Whitelaw: ...the pent-up frustration, the pseudo-masculinity of it all. And then to text each other and be like, oh, your government work's going well.
Mike Masnick: Yeah, and I'm gonna pull down accounts revealing all the things that you're doing. Yeah.
Ben Whitelaw: It reminds me of the episode where we talked about the relationship between media and platforms. Here we have a Wired journalist doing solid reporting, really good reporting, on a platform CEO who also happened to be in the government at the time, and you have another platform CEO puffing out his chest and saying, I'll stand up to the journalists for you, Elon.
Mike Masnick: Yeah.
Ben Whitelaw: It just goes to show that these guys can't face any kind of inspection without crumbling in terms of their morals, or values, or whatever it is they claim to be.
Mike Masnick: It's just ridiculous and obnoxious. The rest of the text exchange was Elon trying to convince Zuckerberg to join him in trying to buy OpenAI. But the whole thing is just, ugh. The whole thing is sickening.
Ben Whitelaw: Yeah, it gets my goat up. Another CEO who hasn't covered himself in glory this week has given a fascinating interview to the New York Times. Neal Mohan, who has been YouTube's CEO for a number of years now, gives a long and extensive interview to the NYT in which he's asked a number of times about the way YouTube does content moderation. YouTube is obviously a gigantic platform now. It is bigger than a lot of cable channels, and it is actually viewed mostly on TVs, which is an interesting development. They are everywhere, and part of the reason for doing the interview was because it is such a behemoth now. So it was very interesting to see him fail to answer the questions about the platform's policies and how it enforces them, and to get the answers so badly wrong. I dunno if you read this, Mike, but he...
Mike Masnick: Yeah.
Ben Whitelaw: He kind of bodges these answers. He skirts around them. When asked about whether he should have brought back Donald Trump, he makes a whole bunch of, like...
Mike Masnick: It's so bad.
Ben Whitelaw: ...odd statements.
Mike Masnick: It's so bad. It's someone who is not prepped for that question. It's so bad. Yeah.
Ben Whitelaw: He wanted to talk about creators, and about growth and MrBeast and all the weird stuff that goes on on YouTube. He didn't wanna talk about how the trust and safety team decides what goes up and what doesn't.
Mike Masnick: You can tell from the entire interview that all he wants to do is be like, look at MrBeast, look at all these cool creators on the platform, look at all this wonderful stuff, we're the new TV, blah blah blah. And as soon as the interviewer starts asking tough questions about trust and safety, he shuts down. He's clearly not prepared for it, and he gives these non-answers where he's just trying to tap-dance around things and making it worse and worse. On the Donald Trump question in particular, he keeps going back to, well, you know, I don't really remember what our policies were back then. Come on. I'm sorry, but that is ridiculous. That is not an acceptable answer. You can take on those things; you can explain why you've changed policies over time; you can explain why you did those things at the time. But to say, I don't remember what the policies were at the time, that is just completely unbelievable, from somebody who's clearly tap-dancing, because there are some questions after that about the political stuff, and he does what he has to do as a CEO, in a really sketchy way. It's like, you know, we try and work with every administration, we try not to be partisan, or whatever. Sure, but that's not what the issue was. It wasn't a partisan issue, why all the platforms banned Trump on January 6th or 7th, 2021. You can talk about what the reasons were, but he doesn't wanna do that. He wants to talk about MrBeast's latest big challenge or whatever. And this is how CEOs should not handle these discussions; they have to be prepared in a much better way than he was. And this is not the first time: we talked about Neal Mohan in an interview before, and how ill-prepared he was for questions about trust and safety. Even though at one point in this interview they ask him about trust and safety and he's like, yeah, actually, that's the biggest thing every day. But as soon as they try to get into details, he cannot answer them. It's embarrassing.
Ben Whitelaw: Yeah, it is. If only he'd listened to the podcast last time, he might have avoided...
Mike Masnick: Yeah. Yeah.
Ben Whitelaw: ...this situation. And the questioning is very good, in fairness. There are questions about, were you ultimately responsible for the return of Donald Trump, in which he is kind of forced to say yes. That's the through line, I think, between the Zuckerberg story you just talked about and Mohan: there is very often a very senior member of these platforms who is responsible for, or at least notified of, major decisions around content moderation, which I think we almost forget in the melee of all of these stories every week. The journalist also asks about Candace Owens, the MAGA far-right commentator slash conspiracy theorist, and makes a case study of her: she's growing fast on the platform, but she's spouting a lot of misinformation. And he really gets put on the spot about that. So I really respect the level of questioning in this interview. It's definitely worth reading. We'll include it in the show notes as usual.
Mike Masnick: Or you can watch it on YouTube, as he himself notes.
Ben Whitelaw: I doubt he'll be watching that back; I'd be surprised. So yeah, we'll also link to the episode Mike referred to, in which we critiqued his last performance in the media last year, and we'll include a few other episodes that might be worth returning to if you're interested. One of the other stories worth talking about, Mike, is again a story we discussed last year, in our episode Rated R for Ridiculous, about Meta moving forward with a film-style rating system. That is no longer happening; they pulled the plug.
Mike Masnick: Well, they came to an agreement with the Motion Picture Association, and the agreement is basically that Meta and Instagram will no longer refer to their rating system as PG-13, which they had really already stopped doing, I think back in December; they kind of pulled the plug on it then. But the MPA was effectively threatening them with a trademark lawsuit over this, so they came to an agreement and put out a public statement through the MPA. Look, this whole thing was silly. It was silly when we talked about it last year. The rating system, again, is this thing the movie studios came up with, and there is one thing I do think is kind of funny: the whole reason Hollywood put this framework in place was because they were being threatened by government officials about the kinds of movies they were putting out, that the movies were harmful for children. So they put together this system of ratings to appease everyone and avoid regulation. Does that sound familiar? Right.
Ben Whitelaw: Do you think, and I know there's this trademark agreement this week, but do you think the MPA should have allowed the platforms to use the system? As part of the broader mission of, as you say, being clear about what content contains, keeping people safe, making sure that people understand what they're watching?
Mike Masnick: I don't know. Again, all of this was so stupid, because the rating system works for movies, and it has this whole parent council, which is notoriously arbitrary and extremely prudish. The reason Instagram used it in the first place is just that lots of people understand it conceptually: what a PG-13 movie is versus what it is not. So they were building off of that general knowledge, but in a very different context where it doesn't really apply in the same way. On the trademark side specifically, I don't think people were confused, which is a necessary part of trademark infringement for the most part; there are some exceptions, but I don't think anyone thought Meta was going to the MPA and having them rate kids posting videos of themselves doing stupid stunts the same way they rate the latest blockbuster film. So I don't think you had a legitimate issue of confusion, which is what trademark is supposed to deal with. But Instagram never should have done this in the first place, and the MPA shouldn't have freaked out about it on their side. So the fact that they settled, and basically they're just going back to the way things were before, it's fine. It's over. It was a silly spat.
Ben Whitelaw: Yeah. Would it differ if it was YouTube that decided to implement PG-13, since we've just been talking about them?
Mike Masnick: That is an interesting one. I could see a stronger complaint from the MPA if YouTube started using that same rating system, because the YouTube experience is somewhat closer to the movie-watching experience. And of course, YouTube now has lots of movies on their platform, licensed and everything, so I could see it being a clearer issue there. I also tend to think, and maybe this is unfair, that the lawyers at Google and YouTube would not have allowed that to happen in the first place, as opposed to Meta, which is...
Ben Whitelaw: What makes you say that?
Mike Masnick: Just a general statement on the quality of the legal teams at both companies. And I'm wondering if I'm gonna get emails about this.
Ben Whitelaw: If somebody has listened to the podcast long enough to get that little insight, you've done well, you've stuck around. I will look forward to hearing next week whether you get any emails. In fact, it would have to be in two weeks' time, because you're not going to be here next week. You're...
Mike Masnick: I'm back on the road again. I am constantly on the road, so we will have a special guest host with you next week, but I will be back in two weeks.
Ben Whitelaw: Great. Well, that brings us to an end this week, Mike. Thanks to all the outlets we've used this week in our coverage; we couldn't have done it without them: the New York Times, the famous Techdirt, Crikey in Australia. Go and read their stories via our show notes and support their work where you can. That's all from us this week. Thanks for listening, and we'll see you all soon.
Announcer: Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.