Ctrl-Alt-Speech

We’ve Hit Grok Bottom

Mike Masnick & Ben Whitelaw Season 1 Episode 87
Ben Whitelaw:

So, Mike, you are old enough to remember Digg, right?

Mike Masnick:

What a way to start. Mike, you're old. Yes, Digg.

Ben Whitelaw:

Were you an active user? Are you a fond user of Digg?

Mike Masnick:

No, I mean, I used it a little bit. I have a crazy story about somebody who approached me and offered to get Techdirt highly rated on Digg. Uh, and I turned that guy down, because I don't wanna cheat. I wanna earn, earn my links, uh, respectably.

Ben Whitelaw:

You're the only one who

Mike Masnick:

I know. So this is the thing that bugs me is, I'm being careful with my words here, but that same person who offered to get me highly ranked on Digg, I saw him get another site, which shall remain nameless, highly ranked on Digg. And that site ended up selling for many millions of dollars. And to this day, I remember that. So that's how I always think of Digg. But that's a whole different issue.

Ben Whitelaw:

Sounds like if you'd said yes to him, we might not be here today. You might be off on your Caribbean

Mike Masnick:

Exactly. Exactly.

Ben Whitelaw:

Well, the reason I bring it up is that Digg is back. Digg, for those who don't know, is a Reddit-like platform that was popular back in the nineties and early two thousands.

Mike Masnick:

It's the two thousands. It wasn't started in the nineties, it was in the two thousands, but it predated Reddit. And, I'm just going off on tangents here, I'm not even letting you get to your prompt yet, but there was a very famous BusinessWeek cover story about Kevin Rose, who was the founder of Digg, that had this picture of him in, like, old-fashioned headphones, just looking, you know, young-ish, and saying, like, this kid made, I forget the exact number, but it was like, this kid made $68 million on Digg. And he hadn't actually made that. It was, like, how much Digg happened to be worth very, very briefly.

Ben Whitelaw:

Right.

Mike Masnick:

And it was such a bad story that just sort of started to play up, this was, you know, when all of a sudden, like, oh, you know, dot-coms are coming back. And, anyways, there was all this sort of hype around Digg, and then it failed.

Ben Whitelaw:

It, it disappeared, only to be salvaged by Kevin and by a bunch of other investors who bought it and brought it back to life. So

Mike Masnick:

by the way, again, I have to keep going on tangents, I apologize. But

Ben Whitelaw:

I didn't know you were such a big Digg

Mike Masnick:

There's all sorts of stories here, because the other main person, it's Kevin Rose who started Digg, but then the other is Alexis Ohanian, who started Reddit, which basically brought down Digg. Because what happened was Digg did a big redesign and really changed the entire site, and people got really mad, and all of them were like, ah, screw this, we're gonna go over to this other site called Reddit. And that was, like, the beginning of Reddit taking off, because Digg sort of lost their entire appeal because they changed their entire voting model. So the fact that it's Kevin and Alexis together rebooting Digg is actually kind of interesting.

Ben Whitelaw:

indeed. and the point of today is that their prompt,

Mike Masnick:

Yes.

Ben Whitelaw:

Their prompt is: what do you want to share with Digg? And so, apart from the many stories you have about Digg, Mike, what would you like to share with Digg?

Mike Masnick:

Well, Ben, I don't know if we've ever had to make you get out the bell on the prompt.

Ben Whitelaw:

Uh, I don't even have the bell, but go on.

Mike Masnick:

Let's just ding, ding, ding, ding, ding. I'm going to say Digg, the new Digg, looks very cool. It just launched. It looks really cool. They have lots of ideas and features that they're talking about. And the point that I want to make, the thing I would like to tell Digg, is that they should be doing this on the AT Protocol, which is the underlying protocol that powers Bluesky, because a whole bunch of the features that they're talking about already exist there, and they can jump in and already have 42 million users and go from there. So I would like to tell Digg, rather than reinventing the wheel, it looks like they're doing really cool stuff, they have a bunch of really cool ideas, there's no reason why the identity component of it can't be based on the AT Protocol.

Ben Whitelaw:

Start from scratch. Start again, guys. I would probably warn Digg users, particularly the younger ones, you know, right now there is no age verification on Digg. But, you know, if you're going to try and cultivate a user base in the UK, be quick, okay? Because things are changing rapidly here in the UK, and by the time they're out of beta, it might be that under-sixteens aren't allowed on social media at all. So a word of warning to Digg and to potential users: hurry up. Uh, we're gonna talk more about that today. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's January the 15th, 2026, and this week we're talking about why everyone is mad about Grok, UK social media U-turns, and yet another AI moderation fail. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm joined by Mike Masnick, a Digg fan, but also Techdirt founder and editor. How are you, Mike?

Mike Masnick:

I am doing well. I want to say you did a good job. I noticed in the intro document we actually have it as January 15th, 2025, but you properly said 2026, so good job

Ben Whitelaw:

You

Mike Masnick:

the fly.

Ben Whitelaw:

That also happened in last week's episode, and you told our listeners again about the fact that I'd somehow managed to figure out that it was 2026. And if nothing else, it just tells people that I'm unprofessional and underprepared

Mike Masnick:

No, no, you're a real pro, because you changed it on the fly, and so this was, you know, it was perfect.

Ben Whitelaw:

If I, if I do it again next week for the third episode in a row, please, somebody send

Mike Masnick:

that's, the rule is the first two weeks of January you're allowed to refer to it as the previous year. And then, then you gotta get it straight. So,

Ben Whitelaw:

Yeah. Okay. Well, I will ensure our notes are up to date. Um, how is your 2026 going? what's new? Have you, kept your resolutions?

Mike Masnick:

I don't do resolutions. I'm not a big fan of resolutions. but, you know, 2026, if, if you've read the news at all, uh, kind of sucks. you know, it's one of those things where I feel like for the past decade or so, we keep saying, well, man, that year sucked. Next year can't be any worse, and then next year proves us wrong. So we're in the, the early stages of 2026, proving that it can be worse than 2025

Ben Whitelaw:

Yeah.

Mike Masnick:

sorts of ways. And so what a mess the world

Ben Whitelaw:

Yeah. And these kind of internet issues seem to be coming up in other, bigger types of stories, right? You've got internet shutdowns in places like Iran. You've got questions about digital ID for work coming up in the UK. Some of the issues that we talk about each week are cropping up increasingly in politics, geopolitics, in these kind of broader stories. It just kinda shows how fundamental the internet is, and, I think, why some of the topics we talk about are more important than ever.

Mike Masnick:

yeah. I mean, it's, it is everything. I mean, everyone relies on it. It's such a central part of our lives. It's, it's not a, a separate thing. Like for so long there was this idea of like, the real world and the internet world, but no, like the internet is part of the real world, and it's deeply embedded in everything that we do, and all the stuff that we talk about impacts everything, you know? And so I, I do think, you can't separate them out anymore.

Ben Whitelaw:

Yeah. We're not gonna dive into the internet shutdown in Iran. It's something we thought we might talk about, but actually I think there's a lot of other sources out there that are covering it very well. It's mainly a politics story, and we try to keep to our lane as much as possible. But we have a whole host of stories that I think are interesting and our listeners will like hearing about this week. Before we jump in, Mike, the episode last week, where we created our bingo card, our 2026 Ctrl-Alt-Speech bingo card, was very well received. We got a lot of great feedback from listeners and a lot of suggestions for squares themselves, which we're really excited to include in the final edition, which we continue to work on. We got a really nice email as well from a chap called Mark, I won't give a second name, who also suggested that the center square, the much debated center square, is one of our own. So it's a Ctrl-Alt-Speech-themed square. I dunno how you feel about

Mike Masnick:

A few of those Ctrl-Alt-Speech-themed squares that we've talked about.

Ben Whitelaw:

Yeah. So there's a few in the running. He suggested that we make it about my recent ascent into fatherhood. And he suggested that we put on the center square that the littlest Whitelaw wakes Ben up in the middle of the night, which I can tell you has already happened a number of times in 2026.

Mike Masnick:

I was gonna say every night.

Ben Whitelaw:

Yeah, I mean, to be fair to him, he is improving. I want to give him credit. But there have been moments where, yeah, I have had to wake up bleary-eyed to shush him back to sleep. So, thanks to Mark for his suggestion, and to everyone else who's got in touch.

Mike Masnick:

I like it. I like it. I'm gonna keep referring to him as the littlest Whitelaw from now on.

Ben Whitelaw:

Yeah. Uh, it won't be long before he is as big as me though, so we'll have to review that.

Mike Masnick:

If you still have ideas for the bingo card, we are still accepting them, so please write us: podcast@ctrlaltspeech.com. We're hoping to get an official bingo card out there soon, but we're still open to ideas, so

Ben Whitelaw:

Yeah. We've got some really fun ideas about how you can play along, which I think is gonna be a great addition to having us coming in your ears whilst you do your chores, while you sit on the sofa, wherever you are listening to us. So, uh, look out for that. Let's get into it then, Mike, let's get into this week's stories. Um, Elon was almost gonna be our center square at one point, because

Mike Masnick:

Might still be,

Ben Whitelaw:

He's just everywhere. And actually, I was talking to a fairly well known journalist who covers trust and safety this week. And he asked me, why did we not cover Grok in our last episode? He was fully expecting us to go big on this story, which has been rumbling for the best part of two weeks now. Luckily for us, things have happened in the last seven days relating to that clusterfuck, let's be honest. Um, and so we've got plenty to go on. Where do you wanna start?

Mike Masnick:

Yeah, it's hard to figure out where to start, but we should probably give the quick summary in case some of you have lived under a rock and somehow not heard this story, which is that X has Grok, which is Elon Musk's AI from xAI, which is now the company that owns X, which used to be Twitter, in case you've really not been paying attention for the last few years. They have Grok, which is their competitor to ChatGPT. And there's one version of Grok that is built into X itself, where you can constantly ask Grok questions or to do things. And one of the things that, they've had versions of this, but they sort of ramped it up recently, is that you can ask Grok to modify images and videos that are posted. And so if you see a picture of someone, you can say, you know, put this person next to a snowman or whatever. And of course, because it's the internet, and because it's X, which is filled with people who are mentally 12-year-old boys, immediately a whole bunch of people started putting women in bikinis. That was the simplest version of it. And, to be clear, it started actually with a bunch of people from OnlyFans doing it to themselves and asking Grok to put themselves into bikinis. And it was part of their promotions for their OnlyFans. And so it

Ben Whitelaw:

I didn't realize that. Is that true? yeah.

Mike Masnick:

Yeah. That was where that trend started. So it wasn't so much doing it to other people, it was people doing it for themselves, which is one of these things where you're like, oh, okay, well, you know, if someone is choosing it for themselves, that's one thing. But what then happened was just all of these people, um, on X were doing it to everyone. I mean, it was kind of overwhelming. There were a few studies showing that, like, within a five minute period there were hundreds of people taking pictures of them and undressing them, and sometimes very young people, often underage. And in some cases then people were asking Grok to put them into more sexualized positions and things like that. It was gross in all sorts of ways and horrible in all sorts of ways. And because Elon Musk's conception of free speech is not what free speech actually is, but his conception of free speech is allowing whatever the mindset of a 12-year-old boy wants, he made sort of joking references to it and encouraged it, and it just sort of began to take over the site in all sorts of ways. And lots of people who are not 12-year-old boys, or who have grown past the mentality of 12-year-old boys, were rightfully horrified by this and saying, like, this is bad and you shouldn't be doing this. And then there was a question of, okay, but what do you then do about it? And that's where all sorts of stuff has happened since then. But does that give a quick summation of the nature of the issue itself?

Ben Whitelaw:

Yeah, totally. And I think you're right to call out both the fact that there were thousands of women who were affected, in that they had images of themselves posted online and then nudified on request, which is a horrendous experience. You know, some of the reporting that I've read talked to those women, and, you know, you imagine waking up and finding that kind of imagery of yourself online, posted by somebody and receiving huge, huge views, people then messaging you to say, this image is available online, do you know about it? You know, the panic that that must cause. And then there's also the fact that all these regular users of X slash Twitter, who didn't expect to have that content appear in their feeds, did. So I think it was basically all hell breaking

Mike Masnick:

And I saw some people say you can get some of the other LLM models to generate similar imagery, though they all seem to have much stronger safeguards, in which it is more difficult. In fact, in some of them you sort of have to trick them into it and convince them it isn't a real person or something of that nature. But there is a big difference with the X Grok integration, which is that it is all public, and it is in people's feeds, and it is recommended to them in some cases. And so that is very different. There are still problems, to be clear, right? There are still problems with people generating that kind of stuff. And there's all sorts of talk about nudify apps and all of these other things that are hugely problematic. They are. But there's a different level of having it occur on X, this platform with many, many people, where it appears in feeds, and they have an algorithm that promotes stuff, and people were joking about it. And, like, I saw one feed discussion where it sort of became a competition, where each person was trying to take the image further and further and make it worse and worse and more problematic and degrading. And it was really kind of horrifying. Which is different than, as bad as it is, someone just doing that in ChatGPT or something.

Ben Whitelaw:

Yeah. So that was the kind of situation, essentially, this time last week. Talk us through, I guess, what Elon did, which was not a lot initially, and then a bit more, and then also what role regulators and politicians have played in trying to address this as an issue.

Mike Masnick:

Yeah. So a bunch of things have happened, right? So Elon really wasn't addressing it directly. He made a few sort of side jokes about it, and posted a picture of himself in a bikini that Grok generated, and sort of laughed about some of the other, like, people then started putting other stuff in bikinis. Like, you know, uh, I can't even remember, but people were like, you know, oh, put this refrigerator in a bikini. It just became sort of a meme in and of itself. And he was sort of laughing along and then sort of wasn't doing anything else. A bunch of different countries and politicians began to call out that this was problematic in all sorts of ways and that something sort of needed to be done. And different investigations were sort of announced. One of the most prominent was in your neck of the woods, with Ofcom saying that they were specifically investigating Grok and the image generation that was going on. I think, was it Indonesia that banned Grok? Maybe Malaysia too. A few different countries said initially they were banning Grok. And lots of different investigations were started, and plans to sort of figure out what was going to happen. And so there were some changes that then occurred. I should note one other thing that did happen last week, which was, it was widely, incorrectly reported that X slash Grok had apologized for this and promised that new safeguards were being put in place. That was a very, very misleading situation, because what happened was a user had prompted Grok, like any user could, and said, can you write an apology for what you did, and can you explain why it was bad? And it wrote a heartfelt apology and said, you know, this probably broke the law in some cases. And so the media picked up on it, but they didn't pick up on the fact that it was just in response to a prompt.
And the machine is trained to just respond to prompts in a credible way, because then, like, right after it, somebody else said, now write a sneering non-apology and say you didn't mean any of it. And it wrote, like, a, hey, get over it, it's just pictures of people, you know, don't get so upset about it. And so there was, like, for a second last week, some people seemed to believe that, oh, you know, he realized there was a problem and he's doing something about it. That wasn't true until earlier this week, when their first attempt to do stuff was just to limit the ability to generate images of people to paid subscribers, as opposed to the free subscribers who previously could use Grok in all of its glory. Which is kind of a weird thing, because then you're sort of turning that into an upsell. You know, the fact that you can generate potentially very harmful, non-consensual intimate imagery shouldn't be an upsell.

Ben Whitelaw:

no, no, that, that is on a, that is on a sales

Mike Masnick:

Right. So that's, that was a problem. And then finally we had, yesterday, this sort of announcement that they were changing the system so that it would not be generating those kinds of imagery of real people. Now, I've heard mixed reports as to how well that's actually working, and some people have said they've been able to sort of jailbreak it and get it to do that anyways, but we'll see how well it works. But it does appear that the sort of worldwide response to this has caused something to change within X, where they're like, maybe we shouldn't be doing this. But there have been some other things, and there's sort of the geopolitical stuff that plays into this, including the fact that, when the UK announced its investigation into this and said, you know, Ofcom, we're just going to look into this, the US State Department flipped out. And Sarah Rogers, who's the deputy director there, who has been known to say some crazy things in the past, she is a Trumpian through and through, started threatening the UK, saying that if they do this censorship of conservative views, no option is off the table in terms of how we might retaliate. And that is nuts. Like, there's no way to look at that unless you are arguing that the sexualizing of children is a traditional conservative value. And maybe there are some people who would argue that.

Ben Whitelaw:

Careful Mike.

Mike Masnick:

But then, I don't think you can make that claim. This is not a case of, you know, whatever. I have complaints about Ofcom. I have complaints about Ofcom's views of internet regulation. I have talked about these at great length. I don't think an investigation into removing people's clothes digitally is really about political speech or censorship in any meaningful way whatsoever. Like, I don't know how you... it just felt like there's this sort of knee jerk reaction coming out of the State Department that any sort of European regulation that impacts Elon Musk in a way that he doesn't like, we're just going to frame it as censorship and everyone's gonna believe it. You know, and because they've done that, you know, they did that with the DSA situation last year, which we spoke about. I don't think that most people buy into it in this particular situation. I think the vast majority of people look at this situation and say, yeah, hey, you know, seems like maybe there's a problem there.

Ben Whitelaw:

Yeah, I mean, so do we see the State Department reaction as a sign that Elon's pals with Donald and Elon is back in the fold, right? Because over the last kind of three weeks, um, Trump has said that Elon is 80% genius but makes errors 20% of the time, which is like one of those kind of really bad personality tests.

Mike Masnick:

Yeah.

Ben Whitelaw:

But so it feels like there's a kind of coming together of those two. Again, they're back pals. Is that all this is? Is that Elon's texted his buddy and said, you know, stick up for me here?

Mike Masnick:

Yeah, I don't even think it went that far. I would guess that there was no direct communication, like, stick up for me. I've heard some indication to believe that maybe the State Department went a little rogue on this, and that the rest of the US government, maybe this wasn't, like, the official position. Overall, I think there's just this sort of knee jerk among the sort of Trumpian folks of, anytime Europe does anything that shines a negative light on X, or the internet industry, or AI in general, like, we just have to come out swinging. So they just did it without thinking through how it actually looked. I would guess that's the case. I wrote a thing that noted that, at the very same time that this was happening, the Senate was unanimously voting on a bill that would open up the potential for lawsuits for people who were victims of non-consensual intimate imagery. It's the, uh, I'm blanking on the name of it, Defiance Act, I think, something like that. It had passed out of the Senate a couple years ago as well. I don't think it's going to pass out of the House or get signed into law. The bill itself has some problems. But it struck me as weird that, at the very same time the Senate is like, this is a problem that needs to be dealt with, the State Department is threatening the UK with "no option is off the table" for trying to do something similar. It comes across as very weird. I would also like to note, and this was the focus of my article, though it's a smaller issue overall: I find it deeply ironic that the US, which on a bipartisan basis, in a case that went all the way up to the Supreme Court that we certainly talked about, banned TikTok in the US. Which, leaving aside the fact that we have then ignored that ban, which is still in place and should be in effect today, just leave that aside.
We don't have much of a moral high ground to stand on and say, how dare you threaten one of our apps, when we've said, oh, it's fine for us to completely ban an app that we decide is somehow problematic. When the UK is like, hey, we're gonna investigate this, to then say, oh my God, you can't do that, we're gonna punish you... I mean, yes, this administration is all hypocrisy all the time, but we've lost the moral high ground on that. Which was one of the things that I personally warned about with the TikTok ban, which is you are giving ammo to everyone else who's going to wanna ban American apps. In this case, maybe for valid reasons, but I'm sure it will also happen for very questionable reasons as well.

Ben Whitelaw:

Yeah. Yeah. There's a lot of "one rule for us, one rule for everyone else" there.

Mike Masnick:

that's the United States, so I apologize on behalf of.

Ben Whitelaw:

Yeah. Thank you. Thank you for apologizing. It's very welcome. Um, not that my country is doing much better, as we'll discuss. It's worth noting that all of today's articles, the stories we'll talk about, including Mike's piece on Techdirt that he published just before we started recording, are available via our show notes. So go and have a read of those, tell us what you think. What happens next, Mike? In the crystal ball of where Grok goes now, where do we end up?

Mike Masnick:

Who can tell, right? I mean, I assume the investigations will continue. I assume there may be some sort of fine issued, and then we'll probably go through all this again. It wouldn't surprise me if all of a sudden Elon has one of his moments and decides that this is all nonsense and turns Grok back into the nudifying app that it was, and threatens people or demands that they go even further with it, and does this sort of defiant, like, yes, I am perpetually the 12-year-old boy. I don't know. I mean, all sorts of things could happen. Elon lashing out is probably the most likely, just because it's like such a recurring pattern. But beyond that, I really don't know. I mean, the interesting thing is, we have seen for the first time, you know, one of the things about 2025: in 2024, lots of people left X and said, like, we can't abide by what Elon is doing. In 2025, that didn't really happen. The sort of grand exodus trickled

Ben Whitelaw:

you mean users rather than

Mike Masnick:

Yes. Yeah, sorry, users sort of went away. I also believe the advertisers bailing out, that started to go away. It might have had something to do with the fact that Elon started suing advertisers who left, which is crazy, and threatening others if they decide to leave. And then also, once Trump won the election with Elon's help, suddenly a bunch of companies felt that they had to stay. But with all of this and the, you know, nudifying stuff, I've started to see more people and companies start to say, we can't stay on this platform anymore. Being on this sort of, like, publicly CSAM-generating platform is problematic for us. The, uh, American Federation of Teachers announced yesterday that they were leaving X. I've seen a few others leaving X as well, and so we'll see how all of that plays out. But it's interesting to see, after a time where it felt like, oh, in the minds of some, X's general reputation had been rehabilitated, much to my, like, I don't understand that at all, but it felt like that was the general sense, and maybe this has pushed things over the edge again.

Ben Whitelaw:

Yeah. Interesting. I mean, I wonder if Elon will have to pay any kind of fine. I wonder if he'll get away with it, just because we've seen very little from the regulators in terms of enforcement of regulation, and the DSA last year, with the big fine that it leveled against Twitter, that was one of a handful, really, that we saw. Ofcom brought a fine against a series of small porn sites in December, I think a fine of a million pounds, and it was reported that that series of sites hadn't actually known that it had been fined. So the question of enforcement

Mike Masnick:

Yeah.

Ben Whitelaw:

is one that I think is an open question for a lot of regulators this year. I feel like last year was about having the kind of muscle to bring platforms to heel, but actually enforcing the regulation, bringing about the fines, is something that's gonna be a focus for a lot of regulators this year, and there hasn't necessarily been a great record so far. So, yeah, we can't read our crystal ball, but there are a few various options there that we will track as we go through the year.

Mike Masnick:

We'll have to add 'em to the bingo card.

Ben Whitelaw:

Indeed. Whilst Elon was feeding users with a diet of nudified images, Mike, my country was trying to ensure that no one got to see any kind of content like that ever again, particularly under-sixteens. So the story I wanna talk about this week is the sudden U-turn, I would say, of the UK government in favor of, or at least being open to, a social media ban for under-16 users. And so this is, I think, a fairly rapidly moving story that, again, blends kind of politics with policy and, you know, firmly puts the internet at the center of that. And it really revolves around Keir Starmer, the Prime Minister and leader of Labour, essentially changing his mind, or seeming to change his mind, on whether a social media ban is the way forward. In December, he gave an interview in which he said he was personally opposed to the idea of a social media ban, and he felt that we should control content rather than put in place a blanket ban, which suggested that he was in favor of using the Online Safety Act as a way of bringing tech platforms into line. Fast forward to just this week, less than six weeks later, and suddenly it's reported in the Guardian that he's open to the idea of an Australia-style social media ban. And he said that in express terms, you know, we are looking at Australia and seeing how it happens. So a fairly drastic change in viewpoint, to say the least, and I think it comes on the back of pressure from other political parties to consider this as a way forward. And I want to talk a bit about why that is. So what we had over the weekend was the leaders of the three other main political parties in the UK, Reform, the Lib Dems and the Conservatives, basically saying that we should be considering an Australia-style social media ban, and the Conservatives actually saying that they would go ahead with that if they were in power.
So suddenly, kind of, Labour and Starmer have this issue in which it looks like they're the ones being, I guess, soft on tech platforms and not caring for children in the way that some of these other parties are. You've got a kind of cross-party swell of opinion, and I think that has led to him being open to this idea. Now, I don't wanna criticize a politician for being open to an idea. I think we should all be open to ideas.

Mike Masnick:

Sure.

Ben Whitelaw:

And I do think we should be looking at the evidence coming out of Australia and seeing what the implications and kind of second order effects of that ban are. But I would still class it as a mini U-turn, and I think it's notable for that. What's more interesting is some of the reaction to that, and I wanna call out one particular organization who released a very strong and good statement about the way forward. The Molly Rose Foundation is an organization that many people will know. It carries the name of Molly Rose, a young teen who died by suicide as a result of looking at content on TikTok, and her family have been long-term advocates of online safety and have been working with the platforms to bring about changes to how young people see and access content online. So you might expect, Mike, that the Molly Rose Foundation is very pro a social media ban for under-16s. However, the statement they put out, just this morning in fact, says in express terms that there should be no social media ban and that it could risk doing more harm than good. Something we've talked a lot about. In a LinkedIn post, Andy Burrows, who's the CEO of the Molly Rose Foundation, said Keir Starmer should listen to experts, not email campaigns, which is, I believe, a reference to the groundswell of opinion coming from parents, particularly the Smartphone Free Childhood campaign, who have been suggesting that actually a ban is the way forward. So what we have here is political pressure leading to a teetering of opinion, a shift in view from the Prime Minister, but then an organization that I would believe is very, very deep in the weeds of online safety actually saying that a ban is not the way forward. Big week for UK online harms, I'd say, Mike. What'd you make of it?

Mike Masnick:

Yeah, I mean, I thought it was interesting. So, you know, you know the political situation in the UK much better than I do, obviously, but I sort of get the feeling that Keir Starmer right now is not well liked, uh, generally speaking.

Ben Whitelaw:

I think that's fair to say. I think that's fair to say. He's, he's under pressure

Mike Masnick:

he's under pressure. And so, you know, my take, and this is generally speaking: if you're under pressure, immediately folding to the fact that all of your opponents have taken a position, rather than standing up for what you said six weeks ago, explaining why, and actually leading, strikes me as an incredibly weak response to anything. Saying, like, okay, yeah, everybody else wants to do this, okay, sure, let's do it. It feels like a strong, good leader would say, look, we spent all this time putting in place the Online Safety Act, which is designed to deal with all of these issues, and we did this thoughtful approach to it. And this is not me speaking, 'cause I disagree with that; I don't really like the way the Online Safety Act works and I have all sorts of problems with it. But I'm saying, if I were in his shoes, I would say something to the effect of: we spent all these years, and through careful deliberation and speaking to experts we came up with this Online Safety Act. It has only just gotten into effect, and we are just beginning to see how well it works. Rather than doing a knee-jerk response to things, we are figuring out the best way forward on this. We have the tools, and Ofcom has the power to do stuff, and we're seeing that play out, rather than going to an extreme blunt instrument like this ban. You know, so that's what I would do. I'm not a politician, obviously, and certainly not the UK Prime Minister, so maybe I don't understand the politics there, but it just struck me as incredibly weak and kind of pathetic to immediately cave and change his tune on this.

Ben Whitelaw:

Yeah, I think you're totally right. And there's another worrisome detail, which I'm almost loath to bring up 'cause it feels like a red rag to a bull. But one of the UK government departments has invited Jonathan Haidt, author of the Anxious Generation book, which we've discussed, your good friend, to talk to staff about, I guess, the case for a social media ban. And I guess that is where, you know, it's great that politicians are open to a variety of policy and regulatory responses, but inviting an author with a very clear perspective, whose book is not based on particularly robust data and has been criticized by many, many academics and professors

Mike Masnick:

including many academics in the UK. Some of the leading experts on this subject are in the UK, at Oxford and elsewhere, who are focused on this, actually are experts in this field, and have pushed back on Haidt's book and his claims. And there was the big TES article last year that quoted all these experts saying that his conclusions are very, very questionable. And yes, Haidt is sort of a celebrity in this field, but he is not an expert. And so to drag him in is a very sort of politician move, but it's not a smart policy move. And so I did appreciate that you brought up the statement from the Molly Rose Foundation. There are things in there that I disagree with, but I thought it was a very thoughtful, nuanced statement saying, like, we can't just go with the sort of celebrity approaches to this, we have to bring in the actual experts. And it directly calls out how this could do real harm and can have unintended consequences. It calls out specifically LGBTQ or neurodiverse children, for whom being online can offer huge benefits around identity, self-esteem, and peer support. And that's the kind of message that keeps getting lost in this, and to have the Molly Rose Foundation call that out, I thought, was actually a really good thing. Bringing in Haidt just feels like it's circus time. We're gonna bring in the celebrity and we're gonna, you know, have him make the points that we wanna make, and we're not doing a thoughtful, nuanced approach to this.

Ben Whitelaw:

And you mentioned that this links to a piece of research that came out this week, actually, regarding Australian teenagers, which I haven't got around to reading yet, but

Mike Masnick:

This is fascinating, and really fascinating timing, because among researchers I don't think this finding is particularly surprising, since it matches some previous research. But among the public, the assumption is that social media is just inherently bad for kids and everything about it, so a ban makes total sense. So there's this new research, in JAMA Pediatrics, the Journal of the American Medical Association's pediatrics journal, so these are experts in the field of pediatrics. They did research on more than a hundred thousand children in Australia, so this is not a small study; it is a very large one. And they looked at kids of a variety of different ages and a variety of different levels of social media usage, and they found that no social media usage at all was also correlated with negative mental health outcomes. So there was a sort of U-shaped curve. There are some differences in the specifics, and boys and girls were somewhat different, but there was a general U-shaped curve where the people who had the worst mental health outcomes were the people who didn't use social media at all or used it a tremendous amount.

Ben Whitelaw:

Mm-hmm.

Mike Masnick:

People with moderate social media usage tended to be the most mentally healthy. So there appears to be more evidence that social media can be fine, or even beneficial, when used well and when kids understand how they're using it. This, to me, connects with the idea that we've heard from other experts like Candice Odgers, who's one of the leading experts in the US on this, who has suggested that the correlation people point to, of high social media usage with increased mental health problems in children, is indeed a correlation, but it may be a reverse causation: it is often children who are having mental health problems, but don't have an outlet and are not getting the help that they need, who turn to social media and then spend much more time on it. So the use of social media is not necessarily causing the mental health problems, but is an indication of the mental health issues and the help that they are not getting. This study, in some sense, potentially supports that, and suggests that for people with no social media usage there may also be other indications of problems; they're sort of cutting themselves off from the world in some sense or another. The study seems very, very interesting. There was a divergence: boys and girls had slightly different results, and at different ages. And so this is the kind of nuanced and thoughtful study that we should be looking at, to see if we can parse out where the real problems are, because nobody is denying that there are situations that are problematic. But we need to understand where they are, and why, and who it entails, and then how we solve and help those people without doing things that harm the useful aspects of social media, which, again, many research projects before this have called out as tremendously useful for a decent percentage of the population as well.
And so this seems to go against the idea of an outright ban, and effectively suggests it could be really harmful for some groups of people.

Ben Whitelaw:

Yeah. So, in effect, Keir, or Sir Keir as I should actually call him (I might get my citizenship revoked for that), um, he was, you know, probably right in the first case, where he was saying that a blanket ban wasn't the way forward, and that we should deal with it through what he called content control mechanisms that are already in place. And this paper actually probably backs that up in a really interesting way.

Mike Masnick:

But it will not get attention. Uh, and, you know, Jonathan Haidt has the bestselling book, so everyone's gonna pay attention to him. So

Ben Whitelaw:

yeah, it does seem like that. okay. Well, we've had a story which I think kind of undermines the credibility of your country. We've talked about a story that undermines the credibility of my country, Mike. Um,

Mike Masnick:

who else can we undermine?

Ben Whitelaw:

Let's wrap up today's Ctrl-Alt-Speech with a couple of interesting stories from elsewhere that we've looked at and found interesting. Yours is a perfect example of small-town American Facebook group moderators gone mad. This is a genre of story we cover on Ctrl-Alt-Speech all the time, but there's a kind of AI twist this time.

Mike Masnick:

Yeah, there are a few things. And I should note, this is about Arlington, Virginia, and I think people in Arlington, Virginia would bristle at your description of them as a small town.

Ben Whitelaw:

I haven't been, you're right. I'm gonna apologize to the people of Arlington because

Mike Masnick:

Arlington, Virginia, if you don't know, is right across the river from Washington, DC. It is either a suburb of Washington, DC, or, if you look at what the original map of Washington, DC was, it actually included Arlington. So there's an argument that this is really Washington, DC, though I think people in Arlington, Virginia might bristle at that description as well. So, you know, it is a suburb, but it is a fairly urban area.

Ben Whitelaw:

Fair. Consider me corrected.

Mike Masnick:

Okay. I just know if I don't say that, we will hear from some people. But it's an interesting story. There's a very large Facebook group, 25,000 members, about Arlington. The group is actually called "I grew up in Arlington," so it's about kids who grew up there, and they were apparently recently sharing old pictures of themselves as kids on sports teams. People were commenting on them and saying, like, oh, how cute, and all this kind of stuff. And the entire group and the people who organized the group all got banned from Facebook. They think, and nobody's sure, because with these things it's difficult to know for sure, that there was sort of an AI reaction to people posting pictures of children and saying, oh, how cute, look at this: things that could potentially be interpreted in problematic ways. Which I think gets to the standard impossibility theorem of content moderation at scale. If you want to have rules that ban grooming of children or anything related to child sexual abuse material, you're going to catch things like this, where here are a bunch of pictures of children, but it's adults posting pictures of themselves as children. You could see where that would set off some red flags. And it strikes me as interesting, especially since one year ago, almost exactly, Mark Zuckerberg was talking about how, you know, oh, we made too many mistakes with our old rules, and we have these new, wonderful AI systems that are not gonna make any mistakes anymore, and we're gonna default to leaving stuff up. And here's this group, 25,000 people strong, around for 15 years, well established, and it gets banned. The organizers of the group said they got banned too, so they can't even challenge this, or they can't figure out how to challenge it.
And other users who weren't banned said they can't report it, because they can't even point to the group; the group doesn't exist anymore. All of the traditional, standard problems that we see with social media moderation: this is just all of those wrapped up into one. And they're assuming that it was an AI-driven decision. Whether or not that's true, we don't know, and we might never know, but it did strike me as one of these interesting examples of the impossibility theorem at work.

Ben Whitelaw:

Yeah, totally. I mean, I've actually seen, just coincidentally, two or three other people online that I happen to know who've had groups or branded accounts or pages taken down in very similar ways, where suddenly they'd been alerted to the fact that they'd broken some sort of guidelines or rules, and then very soon after had their page taken offline. And in this case, you know, the fair people of Arlington didn't even get a warning. This is something that is gonna happen more and more, and it's something that obviously happened to me. I won't go into it again, but Everything in Moderation's LinkedIn page was essentially prevented from publishing because

Mike Masnick:

because you're a spammer.

Ben Whitelaw:

Because I'm a spammer. My URL was deemed to be malware. And so the effort and energy it took to bring it back online is something that more and more people are gonna have to come up against. And there are, increasingly, no ways for a user to get in touch with Facebook unless they know somebody at the company. That's gonna become an issue, an all-too-common one, I think, for users, like the one that you've highlighted here, Mike. I just hope that they get access back again as a result of going to their local newspaper, which will always be there for them.

Mike Masnick:

Yay for newspapers.

Ben Whitelaw:

I'm gonna wrap up today's Ctrl-Alt-Speech, Mike, by talking about a platform banning an entire category of content. This doesn't happen too often; you know, policies are often already in place, and changes to the way that community guidelines or terms of service work are very rare. But Bandcamp have decided that they are going to ban all AI content from the platform. They're also banning the use of AI tools to create content and post it on the platform. This follows a few other music and creator-led platforms doing something similar, but not quite as drastic as Bandcamp. So Spotify, another music platform, made some changes back in September to the way that their policies work in relation to AI content. But this is a real anvil that Bandcamp have thrown at AI content, and I think it's something we're gonna see much more of, Mike. I think we're gonna see platforms who respect users, and particularly creators, in this case musicians, basically say that AI content isn't accepted. I think AI content, or slop, whatever you wanna call it, is good for platforms: it's good for engagement time, it's good for advertisers, but it's often not very good for users. And at least, you know, I haven't seen any data that says people really, really want AI-generated content wholesale. A lot of the decisions platforms have made have been for their own benefit, not on the basis of user feedback. So it's great to see Bandcamp kind of making a stand here. It's obviously in their interest to do so, but I wonder if they won't be the last before we wrap up 2026.

Mike Masnick:

Yeah, I mean, I had a little bit of a different take on this. I think some of this has to do with the fact that Bandcamp has a sort of local, folksy feel to it; it's more about direct connection between the musicians and their fans, as opposed to, like, a Spotify, which is more just about the big thing. And so even though Bandcamp was bought a few years ago by a larger company, it still tries to give off that folksy, local record shop feel. And so I think they felt, because of that focus and because of that community they're targeting, that this was sort of a necessary statement, because there's a general rejection of AI within that community. They're also facing some pressure: some people have become somewhat upset with Bandcamp and feel that it is becoming too big and too corporate, and so maybe this is part of the pushback on that. In the last week alone, I saw two different attempts to create an artist-owned competitor to Bandcamp. So there are movements going around pushing back on Bandcamp as too big and too corporate, and I think some of this might be messaging in response to that. My bigger question, though, was: how do you actually enforce this? And what does it mean, no AI-generated content? If you use Auto-Tune, I mean, Auto-Tune is arguably a kind of early-generation AI, but that's what it is. Or, people would argue, synthesizers and electronic music. Where do you draw the line, exactly? And also, how do you tell? So it's one thing to say, like, no purely AI-generated content, so maybe you figure out a way to block the various services that will generate entire songs for you. But does this include using AI tools to improve your music, or to clean up a recording, or something like that? I find it a little bit like...
It's a nice signaling thing to say, but I don't know how they would actually enforce it in any meaningful way. And you could see bands come around and pretend that they had written and recorded all of their own music while actually using AI tools to do it, and put it on Bandcamp. As long as they just don't admit to how they actually created the music, nobody's gonna know the difference. And so, I don't know, I'm maybe taking a little bit more cynical of a take on this. But, you know, we'll see.

Ben Whitelaw:

Yeah, I think we're seeing platforms do that a lot more: signal their values to their user base. And I think that's part and parcel of what trust and safety is. Alice Hunsberger has written about this before for Everything in Moderation, about how trust and safety is how you put company values into action. You're totally right to call out that it's difficult to operationalize, but this feels like a company saying, this is what we believe. It might be difficult to do, and you can argue, why would you do it if it's difficult? But this is what we believe, and this is the direction we're going, which is one approach for a

Mike Masnick:

Sure. Yeah. I think it'll be an interesting thing to follow: how successful the various platforms that say "no AI allowed" are. There was also recently the launch of diVine, which is an attempt to bring back Vine, the old service. They claim no AI, and they admit that they use an AI tool to try to detect AI in order to block it. It'll be interesting to see how these things actually play out. So I think that maybe that goes on the bingo card too.

Ben Whitelaw:

Yeah, add it to the bingo card, we should add that to the bingo card. Thanks very much, Mike, as ever, for this week. Thanks to all the outlets that we covered this week: Techdirt, the Guardian, the famous ARLnow.com, which I won't forget. We couldn't have done today's episode without them. Go and read them, go and, uh, subscribe to them. Thanks very much for listening, everyone. Take care. Have a good week. We'll see you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.