Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Writing Some Wrongs
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Grammarly turned me into an AI editor against my will and I hate it (Platformer)
- Grammarly has disabled its tool offering generative-AI feedback credited to real writers (Engadget)
- Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature (Wired)
- Who’s a Better Writer: A.I. or Humans? Take Our Quiz. (NY Times)
- Molly vs the Machines review – a powerful story of love, loss and the dangers of social media (Guardian)
- UK: New Molly Russell documentary provides further evidence that social media needs complete redesign (Amnesty International)
- WhatsApp is launching parent-linked accounts for pre-teens (TechCrunch)
- Meta urged to boost oversight of fake AI videos (BBC)
Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben WhitelawSo this week's prompt to start us on Ctrl-Alt-Speech, Mike, alludes to a story that we're gonna discuss in our podcast today, and that is that of Grammarly. Grammarly, the AI editing and spelling tool. I use it myself from time to time, and I know that you have your own little system up, up your sleeve, but the prompt on the app store that kind of publicizes Grammarly is a kind of perfect start for us on the podcast. It is: choose the right words, strike the right tone. So it's maybe less of a prompt, more of an instruction for today's podcast. Um, what, what is the tone you're gonna be striking today?
Mike MasnickHow, how do you answer that one? So, I mean, I will say, as I mentioned to you before we started, and as you may have noticed, my voice is not entirely here this week. I, I seemed, I woke up this morning, I'm fine, otherwise I don't feel sick in any other way, but my voice slightly disappeared. So I am going to try and choose words that will actually come out of my throat and strike the right tone in that it is this, uh, slightly, slightly off, tone.
Ben WhitelawAnything that is audible today is, is
Mike MasnickYes, yes. It would be a bonus. Uh, what about you? What, words are you choosing properly here?
Ben WhitelawWell, I mean, I'd like to think on Ctrl-Alt-Speech we take a fairly pragmatic and, and sometimes positive, tone. You know, we try to keep a measured response to all of the chaos ensuing in relation to the stories we discuss. But, you know, I'm feeling a bit despondent today, Mike. My tone is one of bleakness, to be honest. The stories that we cover are, are not for the faint-hearted, and, you know, they did leave me feeling a bit, a bit like, what have we got in store in the future? Where are we going? So maybe talking through it today will help.
Mike Masnickit's your therapy session.
Ben WhitelawSame thing. Yeah. Let's see how this pans out. And welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's March the 12th, 2026, and this week we're talking about an AI tool that's riled up writers, a new documentary about platform safety, and WhatsApp finally coming to the parental controls party. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm joined by a real-life Mike Masnick. He's back, everyone. How are you doing?
Mike MasnickI am, I'm doing okay. There was a podcast with both of us last week. It's just, we had recorded it way before, but this is the first time in, in a while that, uh, you and I have chatted. it's been a little while. I have been traveling. We've had a bunch of wonderful guest hosts. We had the banked episode that we did. Uh, this is the first sort of, we're not quite live, but live ish episode that we've done in a month, I think between the two of
Ben Whitelawyeah, yeah, for sure. I mean, we haven't done a kind of news roundup for, for a good few weeks. and I, I do like the kind of you know, the very mysterious nature of, of your introduction, there's like, I've been traveling, I've been, I've, I've been
Mike MasnickI've been places. Yeah. Yeah. I've seen things, Ben, I've seen things,
Ben WhitelawDon't ask me any more about it.
Mike Masnickuh,
Ben Whitelawbut it's, it's very good to have you back. Yeah, I've missed chatting to you about each week's stories and we've got a lot to get into. Let's crack on with today's episode because we've, we've got a lot to chat through. And before we do, your usual reminder to like the podcast, rate and review us wherever you get your podcasts. We haven't had a review in a few weeks now. Um, probably a bit longer than that actually. Um,
Mike Masnickbut.
Ben Whitelawyeah, it's been a barren period for Ctrl-Alt-Speech, but we are getting a few comments on Spotify, so that's been nice to have some, some engagement there. We will definitely get back in touch on Spotify if they leave a comment, and we're getting a few emails as well, which is great: podcast@ctrlaltspeech.com. We really enjoy hearing from our listeners. We actually had a, a really interesting email that I'll bring to you, Mike, which I've had a good discussion with, with the listener about, and that was some criticism about the way that I talked about teen councils a few weeks ago. And what's been really nice is to have a kind of really frank and open conversation with, with our listener about why we talked about the teen council announcement from Discord in the way we did, whether it was a fair assessment of the story, and, you know, that kind of interaction is, is really important for us on the podcast and I think important for our listeners as well. So, you know, we do read and reply to every email we get, and we're really grateful for people who've taken the time to listen and also get in touch as well. So, yeah, don't be shy.
Mike MasnickI'll also jump in and mention, as a reminder, we are always looking for sponsors for the podcast. come on the podcast, talk to us. Part of the sponsorship is that we will have a very fun and interesting interview with you or someone that you designate, and we are always looking for new sponsors.
Ben WhitelawYeah, for sure. I mean, this podcast doesn't pay the bills without you guys. I think that's fair to say. If you, if you enjoy the podcast, if you listen to it, if you find it valuable, yeah, do get in touch about becoming a sponsor. Um, we are also thinking about other ways to allow regular listeners to support the podcast in new ways. We'll be coming back to you with more information on that soon. But yeah, big shout out for our sponsors. Everyone who's been a previous sponsor, we salute you. We've also got our Ctrl-Alt-Speech Bingo card for 2026, which we lovingly crafted at the start of this year. We got some help from our listeners to come up with some bingo card squares. We haven't had anybody tell us that they've won yet, Mike, but it's, it's a matter of time. And, you know, there's a few stories today that I think hit a couple of the squares, particularly about AI missteps not least, and, and also some platform CEOs that crop up from time to time. So play along wherever you are listening: ctrlaltspeech.com/bingo, and let us know how you get on. So, let's crack on with today's stories, Mike. We've got a really interesting, really varied mix of stories this week, and we're gonna start off with that AI tool that has riled up writers. You are not one of those writers who have been riled up, are you?
Mike MasnickI mean, I'm, I'm riled up in some ways, but perhaps differently than, than most people who have been really riled up by this. Yeah, so this is from Grammarly, though I should note, though nobody is saying this, Grammarly changed their name. They're, they're no longer called Grammarly, they're now Superhuman. They bought another company that was called Superhuman and in the process took on their name, but everybody still calls them Grammarly. And Grammarly, you know, was this sort of simple plugin that gave you grammar advice as you wrote. I had used it many, many years ago, kind of when it first launched, and then I found it wasn't that helpful and I, I stopped, and now I have a million other tools that I use. So I've never gotten back on the, the Grammarly train, but I know it is very widely used. It's probably the most widely used grammar tool, but obviously in the last few years they've been launching a bunch of different AI related tools. I know that they have AI writing tools that have sort of mixed reviews. I know that they have an AI detector that also has very mixed reviews. But apparently back in August they launched a new feature which didn't get much notice at the time, which was what they called, like, expert reviews. So you could review your piece of writing, and it would sort of suggest famous people or experts in various topics, and you would get advice as if they were reviewing your piece. And so there was, Stephen King could review your horror novel that you were working on, or, you know, a, a bunch of other famous and, and in some cases not so famous people. And, and it was sort of a strange feature in that you couldn't ask Stephen King, the fake Stephen King, the Stephen King bot, to review your piece. But it would sort of say, like, we'll give you suggestions like what a Stephen King would suggest. And it was sort of, you know, it, it was sort of mixed in this weird way and would make sort of vague suggestions, not very interesting suggestions from what I've seen. And so somebody noticed it last week sometime and posted about it on social media, and was sort of saying, like, hey, this is weird. Like, I don't think, I was never asked permission for this, whatever. Then Casey Newton, our recent Ctrl-Alt-Speech guest host, wrote a thing about it, noting that he himself was one of the people, and he spoke to other people, including Kara Swisher, who seemed, you know, as Kara Swisher does, particularly pissed off about the, the use of her name without permission, and a bunch of other authors as well. And then more and more people started looking and everybody got very angry about it. Because Casey had reached out to Grammarly, they had told him, oh, you know, if someone is upset, they can email us at this, like, expert opt-out address, which was weird, right? So part of the reason why everybody's so angry is that most of the people involved were never asked. As far as I can tell, all of the people whose names were used were never asked. They were certainly never offered any compensation, but they weren't even asked if, if it was okay to use their name.
Ben WhitelawMm-hmm.
Mike MasnickAnd to then have it be opt out. Like, I have no idea if I'm on that list. I might be, I might not be. It wouldn't surprise me either way. But, like, if I didn't wanna do it, do I just have to guess and then send them an opt-out and say, hey, take my name off of this if you've used it? That seems not great either. And then, as the storm sort of kept growing about this, they finally announced yesterday, Wednesday, that they were shutting down the feature, and, you know, there was a, a weak apology posted to LinkedIn. The, the sort of usual corporate apology: not quite what we meant, you know, we, we should have taken more care, blah, blah, blah. Not very interesting. At about the exact same time that the CEO posted this apology saying we're taking it down, there was also a potential class action lawsuit filed by Julia Angwin, famous journalist, who was one of the names. She only found out that she was in it from Casey Newton writing about it, seeing her name mentioned in it, and she had posted on Bluesky. She was like, wait, what? I'm included in this? And then a day later there was a lawsuit filed, and the lawsuit is potentially actually interesting, because it's arguing over publicity rights. A lot of people saw this and, and thought there was like a defamation thing. It's not defamation. Defamation is something very different than this, but there are publicity rights laws, and they're a little bit of a mess, because in the, in the US there's no federal publicity rights law. It's state by state. Some states have them, most states have something, but a lot of them don't. But the publicity rights laws are generally meant to be about, like, false endorsement. You can't put an advertisement up saying Ben Whitelaw loves this book or whatever if you haven't actually endorsed it. And that was the point of it. Those laws have now been really kind of misused, I think, in many ways, over anything related to the use of someone's name, mentioning someone. There are a whole bunch of cases that I think have been really abusive and attempt to sort of censor speech in some form or another by abusing these laws. But this feels like exactly what these laws were supposed to be for. If, if a user of Grammarly sees, like, get advice from Stephen King, they're going to think that Stephen King said okay to that. So, you know, it feels like the lawsuit makes a, a fair bit of sense. And I'm one who feels that, you know, most of these publicity rights laws are, are kind of a mess and, and not well used. But in this case it kind of feels like what it, what it's for.
Ben WhitelawYeah, it's a bit of a mess, isn't it? If you kind of put aside the fact that this is a feature that has been in the public domain for six months and never got enough traction for people to notice, putting that aside for one second, um, you know, I think the kind of row back on this is symptomatic of a bunch of things. It's, it's symptomatic of how AI companies are doing product releases and, and updating their products or their platforms with AI features that are shitty in a lot of cases and don't have any thought in them, whether that's from a safety perspective or otherwise. And also, kind of this broader point, I guess, which is that everyone is being affected by AI systems in ways that they would never have thought to have been. You know, it's obviously a big story in some senses for the writers involved, but the kind of broader point here is that everyone's content is in these LLMs, everyone's information is being used to train them. The fact that Casey Newton, Julia Angwin, their names are being used in this kind of expert review guise is really just the tip of the iceberg, and it's only stories like this that I think will make people, the kind of broader public, realize, you know, what we're contending with here in terms of the, the kind of mass use of people's information, content, data to, to create these systems.
Mike MasnickYeah. And it's like, I mean, there are different elements to that. You know, there's a part of me that, you know, I, I don't think it's wrong or bad to train AI systems on content, and I know that there are lots of people who disagree with me on that, and I, I understand where they're coming from. I just think that if you think through the nature of how people learn and how people express themselves, it's often by reading works from other people and learning from that and building on it. And creativity is always people starting from, you know, trying to copy someone else's work and then adding their own voice into it and learning their own voice. And to have a computer system do that as well seems, to me, to be totally fair. This is why I've argued that the training of LLMs should be considered fair use. Again, I understand people have conceptual differences with that and feel differently. There are lawsuits about it. We'll see where they come out in the end. So, like, the idea of learning from others I think is a natural instinct, even if it's a computer system. But that is different than then going out and presenting it as, here, get help from Stephen King. Right? And, you know, I was thinking about it again, like, I have no idea if I'm included in this. If I was, it doesn't upset me as much as it upsets other people, but that is also like, I'm not particularly proprietary about anything that I do. Um,
Ben WhitelawWould you be more upset to be included or to be not
Mike Masnickyeah, that's, that's a good question. That's a good question. It, it might be the latter.
Ben WhitelawI appreciate your honesty.
Mike MasnickYeah, it might be the latter. Uh, but the thing that would upset me more, I think is if I was included and it was giving really bad advice, and that seems to be the case. Like I've seen the advice that it is giving, that these expert reviews are giving and it seems really generic and
Ben WhitelawYeah.
Mike MasnickLike not, particularly helpful at all. and I don't understand. There's a, there's a bunch of things about this that I don't understand. I don't understand how they release this product without reviewing it and thinking through, you know, we've discussed in the past on like, on safety features, right? having trust and safety teams in the process from an early stage, not going to them a week before you're going to release and saying, Hey, are there any trust and safety concerns about this? But like, the process of releasing a new feature or a new product should involve, multiple stakeholders within the company who can look at the possible risks and challenges and opportunities associated with the product.
Ben WhitelawYeah.
Mike MasnickAnd I'm wondering if this was kind of, you know, we're in the age of AI vibe coding stuff. I'm a fan of that, but, like, it feels like maybe this was just a vibe-coded feature. Some engineer was like, hey, I know what we can do, and then they just ran with it and never thought about the consequences of, of this, or never had a lawyer review it, or never had other people within the company who think about the trust side of trust and safety. Like, this is the type of feature that diminishes trust rather than increases it, and I don't understand how they got there. And, you know, it seems like the kind of thing where a company that size could easily have just asked people and said, hey, are you interested? Maybe throw them a little bit of money. Like, I don't even think it would be that much. And just say, like, hey, we'd like to, you know, we've trained something on you. And like I, I mentioned this to you without going into too much detail, I've played around with, you know, I have these tools that I use to edit my stuff. I've played around with a feature kind of like this that I built myself, just more to see if it worked: feed the tool a really famous author's work and just say, like, if this person were editing this piece, how would they edit it? And it's just sort of like a parlor trick kind of thing, where it's like, oh, that's kind of funny. Like, I can see why it would do this thing. It looks like the Grammarly thing isn't even doing that. It's just sort of pasting fairly generic advice under the name of someone famous, or, you know, famous within a certain area, for the exact reasons why we have publicity rights laws. Like, just to trade off of their famous name.
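For readers curious what the prompt-based "editor emulation" trick Mike describes might look like in practice, here is a minimal sketch using the OpenAI Python client. It is not Grammarly's implementation or Mike's actual tool; the model name, file paths, and prompt wording are illustrative assumptions.

```python
# A minimal sketch of "emulate a well-known editor": feed the model a sample
# of an author's published writing, then ask it to critique a draft with
# similar sensibilities. Hypothetical setup; model name and files are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def editor_style_review(style_sample: str, draft: str, editor_name: str) -> str:
    """Ask the model to critique `draft` the way `editor_name` might,
    grounded only in the supplied writing sample (not just the name)."""
    prompt = (
        f"Here is a sample of writing by {editor_name}:\n\n{style_sample}\n\n"
        "Based only on that sample, review the following draft as if you were "
        "an editor with similar sensibilities. Give three specific suggestions.\n\n"
        f"Draft:\n{draft}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage (hypothetical files):
# review = editor_style_review(open("sample.txt").read(),
#                              open("draft.txt").read(),
#                              "A Famous Editor")
```

The key design difference from what Grammarly apparently shipped: the critique is grounded in an actual writing sample rather than just a famous name pasted over generic advice, and any real product built this way would still need the named person's permission.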
Ben WhitelawYeah, and it does feel like a kind of missed opportunity from a product
Mike MasnickYeah. Oh,
Ben WhitelawYou know, Grammarly could have taken this as an opportunity to pay editors and writers for the way that they edit, to get a breakdown of how they think about editing or think about writing, which Casey Newton, you know, provided in his newsletter this week as he spoke about this story. And then they could have trained, you know, mini models on Casey's particular way of editing, and that would've been a super helpful feature. It would've paid some money back to Casey for honing his craft over years and years and years. And you could have done that at scale and, and had a fairly positive benefit to users, to the platform, and to the editors upon which that was trained. As it's turned out, they've missed that opportunity. And that kind of made me think about, I guess, how product safety and product design is being increasingly talked about in a lot of the work we do, Mike. You know, if you go back to the kind of EU investigations that have been opened recently against TikTok, the focus is on product design. It's on addictive design. It's on the lack of thinking that goes into releasing features that positively impact users, generally speaking. And I, and I, I wonder if kind of product folks are left out of these kinds of conversations. I wonder how many listeners that we have are, are focused on product, or product managers or engineers or people who are building some of these products, 'cause we know that we speak to and are listened to by lots of policy folks and regulators and academics and, and journalists. But almost like product folks don't tend to have the kind of sensibilities that we're hoping that they would have in order to roll out a feature like this. Again, I would love to talk to kind of product people as to how they think about these things. But yeah, that's where the, the gap is for me: product people having the trust and safety sensibilities, thinking ahead, thinking about how this might have secondary impacts on segments of users like editors, because that's really where this, this has fallen down.
Mike MasnickYeah, I mean, I think, you know, there are different stages to this, and I always look at the cybersecurity side, where for a long time it felt like security was kind of an afterthought for a lot of products that were released. And then there were so many huge breaches and problems that it became central to most companies. Like, oh, we have to build from the beginning, security by design, think about the security aspects from the very start. And, you know, most companies now are pretty thoughtful about that. But I'm wondering if we need more of that here. You know, when we think about trust and safety, we sort of lean heavily on the safety side, I think
Ben WhitelawMm
Mike Masnickunderstandable reasons. But the trust part of it is actually really important too. and I really wonder how much more, we need to make it core to everything that people build from the start. Think about the trust side of it. Like you wanna build a product that people like to use, that you have a trusting relationship between the company and the user. and something like this burns trust in all sorts of ways. Like in a really big and powerful way. Like, I don't trust that company anymore.
Ben WhitelawNo, and it, and it's obviously hard to get that trust back. That's something we've seen across platforms, but also media in the past, and other institutions: once you lose trust, it's very hard to kind of regain it. I think this feels like a continuation of the assault on verification as a product feature. We talked about this in relation to primarily Twitter slash X in the past, but verification is proving that somebody is who they say they are. And it's not exactly the same in Grammarly's case as it is for Twitter or for other platforms that previously gave users a blue check as a way of authenticating who they are, but it, it's, it's a similar thing. I would say it's, it's kind of on that continuum where people now are, are not necessarily who they say they are on these platforms. And that is a problem, that is a problem whether you are reading their posts, it's a problem whether you are getting editing advice from them. And again, it's a shame that Grammarly didn't take this as an opportunity to add back into that part of, you know, we're trying to authenticate people and use their credibility in ways that benefit us as a platform, but also the users who use our
Mike Masnickagain, like, you know, they could have found enough people, enough well-known people that they could have thrown a little bit of money at, and the product would've been wonderful and like, like a really exciting product I think that everybody was happy with and that they, opted into and to go this other way, I, I'm, I'm just like, my mind is blown, like the, they thought this was a good idea, that anyone thought this was a good idea.
Ben WhitelawI think somebody's probably gonna go and build.
Mike MasnickSure.
Ben Whitelawthat we're talking about. That might be you, it might be you, it might be that your vibe-coded editor gets a few upgrades and, you know, it sounds a bit like Casey Newton soon. But, um, I think one last thing I'd say on it before we move on is, isn't it ironic that there's this long-standing debate, I think, as to whether kind of AI-generated text is better than human-generated text, and here we have a situation where a tool is using the reputation of humans in order to validate AI-generated suggestions? Right? There's something kind of very perverse about that and very ironic. There's a New York Times quiz this week that our producer Lee shared, where you can choose between an AI-generated text and a human-generated text. In some ways it doesn't matter, because if, you know, if Grammarly's viewpoint is to be believed, people are always going to want suggestions and recommendations from people that they trust. And so there is gonna be a role for people who have honed their craft in whatever field it is, it's just that Grammarly have, have failed to see that, or recognize that that was quite the case
Mike MasnickYeah.
Ben WhitelawAnyway, interesting first, first story of the week. It's definitely been doing the rounds. A story that people may be less aware of, um, something I've been following, is the release of a new documentary called Molly vs the Machines. Now, this came out in the UK last week. I believe it's in the US on Apple TV. You haven't seen it,
Mike MasnickI have not seen It No.
Ben WhitelawNo. Okay. But I think it's worth talking about, for a number of reasons. The Molly Russell case that the documentary looks into and recaps took place in 2017. Molly Russell was a, a teenager who took her own life after consuming content on social media platforms. That was the verdict of the coroner's court after her death. And it was an important moment in the UK because it raised the issues of online safety to a much broader audience of parents who didn't realize that social platforms could be dangerous. And Molly's dad, Ian Russell, has become a campaigner since then, advocating for kind of thoughtful regulation that keeps children safe online, and we'll talk a bit about him in a second. That moment, Molly's death, and the, the work done by her dad and the Molly Rose Foundation that's been set up in her name subsequently, has also led, I think, to the Online Safety Act in the UK, and, and that has obviously spawned a number of other kind of regulatory efforts in other countries. And so it's important, I think, for us to kind of reflect on the documentary from that sense. I won't give too many spoilers 'cause I think people might go and watch it, but I think what I would just kind of say is that the documentary very skillfully weaves together the stages of Molly's early life, kind of her friends and her dad, bringing together family members who knew her in those early stages, and then lays out really what happens in the time before she, she took her life. It's kind of very clever visually, because it uses a couple of devices that I think are really interesting as a way of showing her journey towards taking her own life. And one of those is the fact that there's an AI chatbot, coincidentally, that the director essentially types into, to show particular elements of the story. And I was reading more about how the documentary was produced, and there were some elements that were actually AI generated. And so that device, that typing into a chatbot and the responses that come out, which are in an American accent, kind of really sets up the, the tension of the story, which is this teenager who is growing up in a world where she's spending a lot of time on Pinterest and Instagram, and these kind of large Silicon Valley companies who you almost can't stand up to, is, is the narrative. There's a strong element here about surveillance capitalism and the way that big tech companies have sucked up data from users and used that to predict what will engage consumers online. You know, Molly was consuming over 2,000 suicide posts in the months before she took her own life, and that was a big part of the coroner's inquest that took place afterwards. And so that's a really interesting thread that runs through the whole documentary. It runs slightly counter to the very personal stories from her dad and also the very robust interviews with experts who have worked in trust and safety previously. I'm not sure if that theme of surveillance capitalism quite worked, but the documentary was actually co-written by Shoshana Zuboff, who is a, a name that many people will know, who has worked and written about surveillance capitalism for a long time.
Mike MasnickShe coined the phrase, yeah.
Ben WhitelawYeah, so that's where that comes from. Broadly speaking, a really interesting documentary. It's going to, I think, shift perceptions, at least in the UK, of a lot of people's views on online safety and platforms, and will add fuel, I think, to calls to ban social media for under-sixteens and potentially regulate social media platforms in other ways. So, just wanted to bring that to our listeners, Mike, and to, to share a little bit more of, yeah, what that documentary entails without giving the game away too much.
Mike MasnickYeah. I mean, I'm, I'm interested. I'll, I'll watch it at some point, I'm sure. I'm a little skeptical of that sort of framing. You know, I sort of think back to The Social Dilemma documentary, which was a big deal on Netflix and definitely drove a lot of the conversation, and which I thought was incredibly misleading. You know, I, I can't comment directly on this one because I haven't seen it, but, you know, some of the framing, at least as you've described it, also sounds to me potentially misleading. And I think that doing a documentary looking back at someone who ended up taking their own life is emotionally colored by the ending that everybody knows. And you can look at things and say, oh, well, they did this, and then they, they ended up taking their life. But making that true causal connection, nobody can know what was the final straw, what made her make that unfortunate decision. And I worry about stories that try to automatically associate it with, oh, they spent a lot of time on Instagram and therefore Instagram must have been at fault, or any of these things. You can color it that way because we don't know. And there are also lots of stories of young people in trouble who found communities on social media that helped them. And so to jump from one to the other, that one is inherently bad or causal, or that a particular setup or structure is bad, strikes me as emotionally manipulative in ways that are not necessarily accurate. I sort of feel that way about surveillance capitalism as a term. I think it's terrible. I thought Shoshana's book is not particularly good. I don't necessarily need to go down that road. I think it's misleading and confusing and, and not an accurate portrayal of how companies actually make these decisions. There is obviously no doubt, like, capitalism exists and companies are built to try and make money and to extract profits. That's the nature of capitalism itself. But I think that Shoshana's book in particular was very, very inaccurate in terms of what the decision making is like and why those decisions are made and the nature of the nuances behind all of these decisions. And I fear that you end up in a similar spot with this. Again, I haven't seen the documentary. I'm sure it's a very good documentary, but I worry about the sort of emotional tug of this. You know, the number of people who still talk to me about The Social Dilemma as if it proved how Facebook was manipulating people into extremism, when that wasn't what it showed, it really, really bothers me. So I, I worry about cultural artifacts like this that tell a specific story from a specific position and will often miss the trade-offs and nuances. You know, the trade-off here being, there are lots of people for whom social media is really useful, and that includes young people, especially in cases of marginalized youth, where they're looking for a community, they're looking for meaning, they're looking to know that they're not the only one in the world. And social media has given that to many people. And that's not to say that it's inherently good or bad; it appears to be good for some people, bad for some people, and neutral for some people. And the proper approach to dealing with that is figuring out how do we identify the people who it's bad for and how do we deal with those situations? And that requires an astounding level of nuance that I fear gets really lost when you have a documentary that just presents the one angle of it.
Ben WhitelawI mean, I, I'll stand up for the documentary in the sense that it makes very clear that Molly had depressive episodes, was a teenager who, like many, was struggling with her mental health, and, I suppose, that her use of those platforms exacerbated that in ways that we've talked about on the podcast. You know, the, the most extreme cases of, kind of, mental health struggles, however old you are, will not necessarily be helped by using platforms or using any kinds of internet services at great length. Um, Ian Russell, her dad, is a very, very good narrator of the story, and, and this is not just in the documentary. He has been a very consistent and coherent, I think, explainer of the challenges and the nuances.
Mike MasnickYeah. I, I, I think he's always been, very interesting. I always find him interesting to listen to. I don't always agree with him, but I do think there are other campaigners who I think are, less willing to dive into the nuance and he, he, which is, you know, I, I wouldn't blame him if he had gone the other way, but I think he's willing to dive into the nuances and I appreciate that.
Ben WhitelawAnd, and, and the Molly Rose Foundation, as we've mentioned on previous episodes of the podcast, have actually been against a social media ban for under-sixteens, because they recognize that it's, it's a blunt instrument and won't cause the outcomes that many parents want. So in that sense, I think the documentary treads a fairly good line. I think probably what we're talking about here, Mike, is the fact that documentaries, like most forms of media, are viewed, like all journalism, like lots of books, because they have an, an underlying negative story to them. You know, this idea that certain stories will be kind of seized upon by audiences more, I think, leads directors to, to frame these topics in certain lights. And if you did a story about the benefits of social media, I don't think you'd have people watch it.
Mike MasnickWell, I don't think anyone would pay for that right now, right? I mean, I don't think anyone would pay to make that because it doesn't, it doesn't sell, right? Which is the argument that, that, that you're making. But I also think that there's an element to this that is amusing, not amusing, but sort of, uh, maybe amusing in, in concept, that all of these things are complaining about, oh, like, the manipulative power of social media, and yet these documentaries themselves are emotionally manipulative in some form or another. And emotionally manipulative culture happens and sometimes is good, right? Like, it drives people to realize that there are important stakes at issue and that people need to understand things. And I think that is the power of culture. But it's, it is a little bit weird slash ironic, I guess, that there's this idea that that power, that emotional power of culture, when it's on social media somehow needs to be treated differently than when it's a documentary. So, you know, the joke that I had made about The Social Dilemma years ago was that that movie did everything that it accused social media of doing. It was emotionally manipulative. It told you a story that wasn't quite true. It tried to manipulate you into believing something that wasn't accurate. And these are the exact same things that the movie is accusing social media of doing, and nobody points that out. You know, the fact that culture influences people is something that we've always accepted, but for some reason people can't handle that when it's social media.
Ben WhitelawWell, the difference I suppose, is the personalized
Mike MasnickSure. It
Ben Whitelawof, of social media platforms, right? If a director makes a decision to pull at somebody's heartstrings in order to kind of make a point, that's not necessarily going to affect people in the way that an engineer changing the weights on an algorithm will, with that kind of scale and impact
Mike Masnickthe the underlying argument still effectively goes back to the same thing, which is like, the reason that you make an emotionally manipulative documentary is because it will sell more and it will make the studio that produces it more money. The reason that you tweak an algorithm is that you will get more people to view more ads and it will make the company more money to some extent. It all comes back to that. And like, you can critique capitalism if you want, you know, if you wanna bring it all back to that, the root cause of all of this. and that's there, but that's a different debate than do we ban social media.
Ben WhitelawYeah. I see what you're saying. I think the scaled nature of social media platforms, you know, the scaled nature of that manipulation, let's call it, is what people,
Mike Masnickit's different. It's a different kind of challenge. I'm not saying it's the identical thing, but I'm saying, like, if you separate out the actual claims, the claims feel like the same sorts of claims, you know? And they're the claims that have shown up forever. I mean, we had this discussion about all sorts of, you know, every new kind of media content. There were complaints about how much the radio manipulated people. There were complaints about comic books, about Dungeons and Dragons, about video games. These things come up, and the reality is, historically, everybody says, well, this time is different, this time is different. That's true every single time, and every single time, for the most part, society figures out how to adapt. And, like, when there are problematic aspects to it, it is society that adapts, not through banning stuff, but through figuring out how do you educate people to use these things in a better way.
Ben WhitelawYeah. And, and that's really, I would say, the, the thrust of the documentary: that people need to figure this out for themselves. I think, again, I would love to hear listeners' thoughts on the documentary if they've watched it. We've probably got into enough detail today. I would love to have seen it focus a bit more on the safety policies of the platforms and the, and the lack of, I guess, rigor or enforcement of certain rules. You know, the leap to surveillance capitalism and the bringing in of lots of Silicon Valley talking heads somewhat diluted both the words of the family and of Molly's friends and also the experts that were brought in to talk about it. But hey, it all adds, I guess, to the pile of stories that we're talking about each week, Mike, and, and what eventually will be a, a great understanding of how platform safety works, which is, I think, what Ctrl-Alt-Speech is all about. Let me know what you think when you finally get around to watching it.
Mike Masnickabsolutely.
Ben WhitelawCool. Let's go on now to the other stories this week that we wanted to, to bring to you, listeners. Talking of child safety, our first story in our quick roundup is something that I thought was happening anyway, Mike: a platform in 2026 releasing parental control features. Who would've thought it?
Mike MasnickYeah. Uh, in this case it's WhatsApp, you know, which is part of the Meta empire. WhatsApp is always sort of set off in its own silo, I guess, but they, they launched these sort of fairly advanced parental features. I guess I had never really thought about it before, whether or not WhatsApp had parental features, but apparently now they do. And, you know, what struck me about them, beyond the fact that, like, oh, you know, it seems a bit late to be launching parental features, is these seem pretty well thought out to me, to be honest. Like, I, I looked at it, and it's, it's not age verification, which I have my problems with and concerns with. This is literally, you know, as a parent sets up an account for a child, it allows them to set it up together so the, the accounts are linked, and there are all sorts of features that are blocked, and the parent has some level of control without it being surveillance based. I thought it seemed like a nice way of doing parental controls, even if it's, you know, maybe a decade late to the game.
Ben WhitelawYeah. You would hope that the platform that was the, the latest to the party, the last to join the ranks of platforms that have parental controls, would do it best, frankly, right? You know, they've seen everyone else go before them. They've seen the critiques. But yeah, it does seem like there's a fair bit of granularity to it: you can get reports as a parent if somebody contacts your child and they're not part of your address book, that kind of stuff. If your child decides to block another user, you get notified of that. So you get a lot more kind of detail, I think, about your, your child's usage. I'm sure that that isn't necessarily gonna be that welcome for, for children, particularly, you know, those who are approaching the age of 16, but hey, there's benefits to that, I think. So I would love to see a comparison, actually, Mike, and I dunno if you've seen this already, of the way that different platforms do parental controls. Have you ever seen a kind of ranking or, or a leaderboard or anything
Mike MasnickYeah, I, I, I haven't. And I do feel like there is a concern that, that a lot of the parental controls that are out there on other apps are just not used. That a lot of times parents don't even know that they're there, or if they do, it's sort of difficult to set up. And oftentimes, you know, the reason that younger people especially get set up on a particular app is because their friends are using it. And often it's like, oh, you know, we gotta sign up for WhatsApp because the soccer club, sorry, UK football club, is, uh, uh, you know, everybody's using it, so I gotta sign up right now. And in that case, you just sign up really quickly 'cause parents don't have time to figure out what the parental controls are. So it would be nice if there was, like, a better way to indicate that to parents, or, like, after the fact to have them opt in. I know I've had this with, like, my kids, who signed up for certain platforms that later implemented parental controls, and we went and looked at it and it was like, oh, well, you know, when we signed you up, we kind of lied about your age because otherwise you wouldn't have been able to sign up. And now if I try and put in place parental controls, it won't let me, 'cause it says you're 28 years old. Uh, and we've sort of dealt with that in, in different ways over time. I wish there were better ways to sort of surface that in the first place and, and make people aware of it. But we'll see how that goes.
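For listeners on the product side, here is a purely illustrative sketch of the kind of rule-based notifications Ben describes above for a parent-linked account: alerting the parent when someone outside the child's address book makes contact, and when the child blocks another user. This is not WhatsApp's implementation; every name and structure here is hypothetical.

```python
# Illustrative parent-linked account notification rules (hypothetical design).
from dataclasses import dataclass, field


@dataclass
class LinkedChildAccount:
    child_id: str
    parent_id: str
    address_book: set = field(default_factory=set)


def on_incoming_message(account: LinkedChildAccount, sender_id: str, notify) -> None:
    # Report contact attempts from senders outside the child's address book.
    if sender_id not in account.address_book:
        notify(account.parent_id, f"{sender_id} (not in contacts) messaged your child")


def on_block(account: LinkedChildAccount, blocked_id: str, notify) -> None:
    # Let the linked parent know the child has blocked someone.
    notify(account.parent_id, f"Your child blocked {blocked_id}")


# Example: collect notifications in a list instead of sending them anywhere.
if __name__ == "__main__":
    sent = []
    acct = LinkedChildAccount("child-1", "parent-1", {"friend-1"})
    on_incoming_message(acct, "stranger-9", lambda pid, msg: sent.append((pid, msg)))
    on_block(acct, "stranger-9", lambda pid, msg: sent.append((pid, msg)))
    print(sent)
```

The design point Mike raises about controls going unused would sit outside this logic entirely: surfacing and setup flows matter at least as much as the rules themselves.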
Ben WhitelawYeah. It reminds me of, um, the kind of appeals that users can submit in Europe under the DSA. There's this, this idea that, you know, people don't know they exist, and even though these are helpful mechanisms for redress and for safety, actually the platforms don't do enough to kind of promote them, and your average reader or user doesn't actually read the press release like we do. Um, they're not as sad as we are.
Mike MasnickThe real, the real story here, Ben, is that we need more people listening to this podcast. If everybody listened to this podcast, they would know about these things.
Ben WhitelawYeah, exactly. And maybe we'll make that league table, league table of parental controls, 'cause that would be very interesting, not least because my small baby son will eventually have to be on these platforms too. Another story that is an interesting one that caught both of our eyes this week, Mike, was a new judgment from the Oversight Board, the independent, Supreme Court-style, although they don't really like us calling it that, body for content moderation decisions. This is the thing that made me saddest this week, Mike. I mean, I've read a lot of the Oversight Board decisions; none of them have made me as kind of blue as, as this one. And it's because I feel like it, it really kind of sums up where we are heading as an information ecosystem. I'll, I'll briefly describe what the decision is about, what the case was, and then we'll talk about why it's bleak. The case in question was a video that was posted in June last year, during the 12-day war between Israel and Iran, and the video showed a burning building that was badged as a live video. It was meant to be showing buildings in Haifa on fire, and your average person would think it was generated by AI, but that didn't stop it from getting 700,000 views and traveling fairly far on the web. It was copied in a bunch of different places; they think it might've come from TikTok and was posted elsewhere. So you had this AI-generated video doing the rounds, purporting to be something that it wasn't. A clear case of misinformation, potentially disinformation. And what the case outlines is, I guess, a series of failures by the platform, but also by, I think, the cross-platform efforts to stop videos like this from going viral. The first thing is that, actually, you know, it was reported a bunch of times by users for being AI generated, but, as the case explains, it wasn't reviewed by a moderator and it wasn't given an AI label by the content policy team. Subsequently, Meta said that it wouldn't be classified under its misinformation policy because there wasn't an imminent threat of harm, which, again, you could argue, and the Oversight Board did, was not the case. It also didn't have any kind of AI labels applied automatically. The nature of Meta's systems is that you have to either submit as a user that this is AI-generated content, which users are unlikely to do, or it has to be escalated to a particular team within Meta and then kind of applied after the fact, and none of those things happened. And we obviously are in the middle of a very similar war now, where there's been a lot of incidents of AI-generated video and content. It just feels like the future of, of the social platforms is to have content like this that is not only kind of not believable, but also isn't suitable for the systems that the platforms have in place to categorize it as, as misinformation or otherwise. And the recommendations from the Oversight Board, they're a kind of laundry list of things that they would like to see done, and none of them, to me, Mike, seem like they're going to be implemented anytime soon. The idea of having kind of specific AI-generated content policies, I dunno why Meta would want to do that necessarily. I dunno how they'd be incentivized to do so. The board recommends having content credentials, what's known as C2PA, so provenance for AI content, integrated more deeply into the platform. I don't see that happening anytime soon.
And so what I see is this kind of stretch ahead where we just have more and more of this type of content filling platforms, creating doubt among users, and then potentially causing real-life risk. So, yeah, not to bring, you know, the podcast down from, from our super jolly episode so far, but yeah, it was a tough read.
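To make the labelling flow Ben walks through a little more concrete, here is a sketch of the three routes to an "AI info" label as described in the decision: a C2PA-style provenance manifest, uploader self-disclosure, or escalation to a specialist team. All function and field names are hypothetical; this is an illustration of the described flow, not Meta's actual pipeline.

```python
# Sketch of the AI-label decision flow described in the Oversight Board case.
# Hypothetical names throughout; illustrates why the Haifa video fell through.
from enum import Enum
from typing import Optional


class AILabelSource(Enum):
    PROVENANCE_MANIFEST = "c2pa_manifest"
    USER_SELF_DISCLOSURE = "self_disclosure"
    ESCALATION_TEAM = "escalation"
    NONE = "none"


def decide_ai_label(
    c2pa_manifest: Optional[dict],       # parsed provenance metadata, if any survived re-uploads
    user_declared_ai: bool,              # did the uploader declare the content as AI-made?
    escalated_verdict: Optional[bool],   # specialist team's finding, if it was ever escalated
) -> AILabelSource:
    # Provenance metadata is the strongest signal, but it is often stripped
    # when a clip is screen-recorded or re-uploaded across platforms.
    if c2pa_manifest and c2pa_manifest.get("generator_declared_ai"):
        return AILabelSource.PROVENANCE_MANIFEST
    if user_declared_ai:
        return AILabelSource.USER_SELF_DISCLOSURE
    if escalated_verdict:
        return AILabelSource.ESCALATION_TEAM
    # The failure mode in this case: none of the three routes triggered.
    return AILabelSource.NONE


# Example: a re-uploaded clip with no manifest, no self-disclosure, never escalated.
print(decide_ai_label(None, False, None))  # AILabelSource.NONE
```

Under these assumptions, the sketch also shows why the board's C2PA recommendation only helps when the manifest survives cross-platform copying, which is exactly where this video slipped through.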
Mike Masnickit, it, it, it's, yeah, potentially bleak. I, I feel a little differently about this, perhaps not surprisingly. Um, like, I get that, I get that this kind of content is problematic. I get that it feels like Meta doesn't care that much. You know, I'm not so sure that they won't implement the C2PA stuff. I, I don't know that that will make that big a difference, because I think bad actors will figure out ways around that no matter what. But to me, again, this kind of goes back to: we lean so hard on the tech companies to solve these issues, when I'm not sure it should be their job.
Ben WhitelawHmm.
Mike MasnickOr that they would ever do a particularly good job of it. And I feel like, you know, when we have stories like this, like, yeah, sure, some of the suggestions from the Oversight Board are interesting. Like, it'd be nice if Meta actually did pay attention to this stuff and did some of this stuff. But again, at what cost, at what trade-off? Focusing on this particular video, it got a lot of views. You say, okay, well, they should have seen that, they should have done something about it, they should have escalated it to a team. But that also means that that team is not reviewing some other video or some other content that is equally problematic. And again, we sort of lose the aspect of scale. It's easy to look at one or, you know, a dozen or a hundred situations and not recognize that they're dealing with a million situations, and, and if you focus on one particular one at a time, you're going to be missing something else. To me, again, this kind of goes back to: these are societal-level issues, and I feel like we lose sight of how we solve them at a societal level, which is often education and teaching people. How do you determine what is real and what is not real? How do you view content skeptically and try and figure out what makes the most sense? Rather than just saying this has to be Meta's responsibility to solve and get right, because I don't trust Mark Zuckerberg to get this right, even if the law told him he had to get it right, even if the Oversight Board says he has to get it right. I, I feel like there's this sort of knee-jerk reaction to say, well, these companies in the middle have to solve all these problems for us, and that's not gonna happen. And if they do try and solve them, they're not gonna do a good job of it.
Ben WhitelawYeah, I get that. I suppose in this particular case, the incentives for users, and this goes more broadly than this one incident: there are certain incentives that platforms set up where it is beneficial and actually financially lucrative to pump out information like this. So, so this page and this video were monetizable, right?
Mike MasnickI'm not saying that Facebook has no responsibility
Ben Whitelawright, right, right. I'm just saying, if we're trying to turn the, my despondency into something practical, which I think is a good attempt, a good thing to try and do, you know, one thing is, platforms should not incentivize people to create slop, AI-generated slop, or anything that's engagement baiting, because they will do so, because they, people are,
Mike Masnickpeople
Ben Whitelawpeople are human
Mike MasnickYes.
Ben Whitelawand, you know, they will attempt to earn money in any way that they can. This user was not in Iran or in Israel. They were in the Philippines. They didn't have, like, reason to be, this wasn't even politically motivated. This was financially motivated. And the way that platforms have created a marketplace for mis- and disinformation is something that is on the platforms, and, and that plays into their business models, because, you know, they want people producing more content so they can select which posts and videos get seen by individuals, in the way that we've highlighted in the Molly Russell case. That is something they can practically do.
Mike MasnickYeah. But again, like to some extent, this goes back to the same point I was making before, which is that this is capitalism, right? And
Ben WhitelawDon't make me advocate for the downfall of capitalism, Mike.
Mike MasnickI, I I know, I, I won't either. But like, I think you have to recognize that, that is an element of this. And it's like, it's great to say, gee, wouldn't, wouldn't the world be wonderful if that didn't exist and if there weren't monetary incentives for stuff, but there are, right? And that's not Meta's fault necessarily.
Ben WhitelawNo, it's not. but that does feel like a great place to end tonight's podcast.
Mike MasnickNext week, tune in as we solve capitalism.
Ben WhitelawYeah, the alternatives, the list is not long. But the list of stories that we've talked about this week has been, and we appreciate all the outlets whose stories we've drawn upon. Do go and read and subscribe; go to the show notes and see them. Mike, it's great to have you back. Really appreciate chatting to you and, uh, we'll be back next week as usual. Take care, everyone.
AnnouncerThanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's CT RL alt speech.com.