
Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Once You Slop, You Can't Stop
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Twitter Inc. Official 'Bird Logo' Fascia Sign - An Iconic Fixture from the Company’s Market Square Headquarters in San Francisco (RR Auction)
- AI Slop Is a Brute Force Attack on the Algorithms That Control Reality (404 Media)
- Spain to impose massive fines for not labelling AI-generated content (Reuters)
- China Announces Generative AI Labeling to Cull Disinformation (Bloomberg)
- After Axing Fact-Checkers, Meta’s Community Notes Will Have Help From X (Adweek)
- UK to crack down on illegal content across social media (Financial Times)
- Lobsters and the Online Safety Act (Lobste.rs)
- We are sorry. The forum has closed down (The Hamster Forum)
- ‘Kids can bypass anything if they’re clever enough!’ How tech experts keep their children safe online (The Guardian)
- The Snapchat Move That Leaves Teen Girls Heartbroken (WSJ)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. If you’re in London on Thursday 27th March, join Ben, Mark Scott (Digital Politics) and Georgia Iacovou (Horrific/Terrific) for an evening of tech policy, discussion and drinks. Register your interest.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So Mike, I've been browsing the internet for some shelves, right? You know
Mike Masnick:As one does.
Ben Whitelaw:as one does, you know, this is the thing I do in my spare time now. My house renovation is almost coming to an end. I've talked about it a bunch on this podcast. I hope listeners aren't bored. But I'm basically at the shelf stage. So I was looking around for some shelves, and naturally I ended up on eBay. And I thought, we haven't used eBay on Ctrl-Alt-Speech. At least I don't think we have. So. The prompt on eBay is "Search for anything", and you can take that as a kind of, you know, literal search, or you can think of it as a kind of philosophical prompt.
Mike Masnick:Well, I was going to say there are a few different ways we could go. We could search for a different government for the US, but I think I would like to search for more time. I feel like I don't have nearly enough time for anything, as we were just discussing in terms of how much stuff is going on. I could use more time. So if I could buy more time on eBay, I'd go with that.
Ben Whitelaw:Yeah. Okay. Yeah. Maybe a, maybe a personal
Mike Masnick:Search for anything, Ben? Ha.
Ben Whitelaw:If I was to search for something, um, I would search for the kind of dying vestiges of a free speech platform. And that links to something that we'll talk about in today's podcast, so I'll explain more. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's March the 20th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. This week, we're talking about AI slop, the Online Safety Act coming into force, and Meta's attempt to build a better Community Notes than X's. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with a very time-poor Mike Masnick. How much would you pay, Mike, to have an extra hour every day?
Mike Masnick:Gosh, I don't know. That is a good question.
Ben Whitelaw:just one hour.
Mike Masnick:Yeah, one hour is not enough. I think, I think I need an extra day every week.
Ben Whitelaw:Yeah,
Mike Masnick:Where nothing happens, right? Everybody else can stick with the seven-day week, and I need that extra day where nothing is happening and it's just perfectly quiet and I can just catch up on stuff. I think that's what I'd buy. Yeah. Someone get me that. Someone get me that.
Ben Whitelaw:So you're paying for the whole world to just stay still, essentially.
Mike Masnick:No, I mean, you know, in their, in their lives, like they can just think that the world still has seven days, but I want, I want like this pause that I can still do stuff. Give me that extra day. I'm not greedy.
Ben Whitelaw:No, no.
Mike Masnick:don't, I don't need two days. Just, just one.
Ben Whitelaw:It reminds me of a children's TV show in the UK called Bernard's Watch. I don't know if you ever came across this. It's kind of unhinged, um, but I used to love it as a kid. Naturally, he had a kind of pocket watch, an old antique watch that he used to click and stop. And he went about his day to day life and everyone else was frozen.
Mike Masnick:Yeah, that's it.
Ben Whitelaw:You want, you want Bernard's watch, don't you?
Mike Masnick:I think, yeah, yeah. I think there was a movie, like a Hollywood movie, on that premise too, but I didn't see it. I remember the preview with that premise. But no, I just need that. And I wouldn't do anything nefarious; I would literally just sit and work. Right. I mean, everyone makes fun of me for that, like, all I do is work, but that would be, you know... it would give me back regular time.
Ben Whitelaw:yeah, yeah, yeah. Okay. Well, you know, I hope you don't think this is work. This is, this is, this is our time. This is your time to decompress Mike.
Mike Masnick:There we go.
Ben Whitelaw:um, I think that's a sensible thing to buy. My purchase would be something very not sensible, and I was referring to it in the opening today. I want to buy the Twitter logo that's up for sale again. I don't know if you saw this: the big bird that used to sit outside Twitter's office in San Francisco, back when it was called Twitter, has gone up for sale again, and it is the perfect analogy for what has happened to Twitter slash X since it was bought by Elon Musk. I don't know if you remember, in 2023, Musk sold it as part of a kind of fire sale, like a kind of garage sale. Is that what you call it in the US? Yeah. Yeah. Um, and it sold for a hundred thousand dollars. It's up for sale again now and, you just looked, right? It was at, what, $27,000?
Mike Masnick:$27,500.
Ben Whitelaw:Okay. So yeah, just as Twitter's, X's, value has plummeted over the last several years, although there's been some sort of correction this week, so has that of Larry the Bird, the giant blue logo that used to sit outside the office in San Francisco. Maybe I could use it as a shelf.
Mike Masnick:Yeah, there you go. You take two birds with one stone.
Ben Whitelaw:Oh, would you look at that? Yeah.
Mike Masnick:but the, uh, yeah, I mean, I guess it's just been sitting in storage. As the listing said, it was in a storage facility in San Francisco, so you'll have to pay for shipping to the UK if you get it. But I will note that it says there are about six hours left as we record this, so by the time this comes out, there may only be a few minutes left to bid on it and purchase it for Ben. In case any of our listeners want to make a donation to Ctrl-Alt-Speech, and to Ben's shelving fund in particular, uh, you can purchase the bird and send it off to London.
Ben Whitelaw:You would get regular photos, uh, if you contribute.
Mike Masnick:I wonder how your wife would feel about a giant bird showing up at your
Ben Whitelaw:Yeah.
Mike Masnick:newly renovated home.
Ben Whitelaw:Yeah. She probably wouldn't love it. Um, but you know, it's funny to see such an old vestige of what Twitter used to be come up for sale again in such circumstances. Uh, cool. So, purchasing power aside, we've dealt with our wishlists for today, but we have a lot of stories that we want to get through. We have some really interesting, notable stories that we're going to talk through and explain, just as we always do on the podcast. Before we do, a request for some help from our listeners. We haven't had a review of the podcast since, I think, September 2024, Mike. Isn't that tragic?
Mike Masnick:yeah. That's
Ben Whitelaw:If you don't hate listening to this podcast... let's mix it up a bit. If you don't hate listening to this podcast,
Mike Masnick:very low bar you're setting there.
Ben Whitelaw:I know, but it might prompt people to think: actually, this is me. You know, I'm somebody that doesn't hate this podcast. I mean, maybe I'll spend some time helping Mike and Ben in their quest to have their podcast discovered across all of the platforms on which it is disseminated every week. So if you don't hate it: Spotify, Apple Podcasts, wherever you get the podcast, leave a little review. It doesn't need to be very long.
Mike Masnick:Yeah, exactly. And you know, we look at the numbers and we've seen download numbers increase. So there are more of you; there are definitely some of you who weren't listening back in September of 2024. So someone out there must be listening to this and thinking, I have not written a review for Ctrl-Alt-Speech and I could write a review. Like, think positively. You can do it, if you're listening to this and you haven't written a review.
Ben Whitelaw:And we'll give you a shout-out on next week's podcast if you do. We won't shame you; there's no shame in submitting a review. But we will give you a shout-out. There is a risk, Mike, that some of our listeners are perhaps connected to our first story of this week. I'm hoping that every single download and listen is a real human person who works in the industry, but there's a small chance there are a few AI-generated bots. That's how the internet works, I'm sorry to say. Um, and the first story, from 404 Media, really goes deep on this idea of AI slop: content being created by AI-generated systems and platforms. It goes so deep into this. Explain what it is, why you picked it, and why it's important to our listeners.
Mike Masnick:Yeah. I mean, obviously 404 Media does such wonderful work, but this is such a wonderful story, kind of laying out... everybody's heard about, you know, the rise of AI and AI slop, and 404 in fact has done a lot of reporting in general on the fact that, you know, social media, and Meta properties in particular, are sort of filling up with this stuff. And there's been this confusion over why and how. Part of what's really interesting to me here is not just how much is getting filled up with just nonsense AI stuff, but the fact that there are these people, these content creators, who are basically saying: we're fighting the algorithm, we're trying to figure out the best way to get the most click-throughs, and the best way to do that is take as many shots on goal as we possibly can. And AI will just generate a ton of stuff; we don't have to put work into it. You look at the really successful content creators, like Mr. Beast or whatever, who spends millions of dollars on every video and then is able to pay that off because he gets tons and tons of views. But these guys are saying that space is too expensive, there's too much capital cost up front to get there, and it's too competitive. So the better thing to do is just use this bullshit-generator engine to generate as much bullshit as possible and put it all out there. If stuff doesn't work, it's no loss to us, because it doesn't cost us anything. And therefore, let's just get it out there until we find something that hits. So now you have a whole bunch of people doing that. And that's sort of the motivating engine behind a lot of this: this idea that, well, we need to get attention, and the best way to get that attention is just to try as many things as possible and see what actually clicks.
And then some of these guys are teaching other people how to use AI slop, so you just have this whole ecosystem of nonsense, which is all about trying to game the algorithm.
Ben Whitelaw:Yeah. The bit that I really like about this piece is the comparison to a brute force attack, which is obviously a cybersecurity term. You'll know more about it than me, but talk us through why AI slop is kind of like a brute force attack.
Mike Masnick:Right. I mean, it's just a question of trying to get through, right? The simplest version of a brute force attack is trying to guess a password or something. You just generate as many, not random, but, you know, passwords that might get into an account. Or if you're looking for some other way into a website, you'll scrape the entire thing and try to find every possible way in. It's just trying as many things as possible. It's not very sophisticated. It's, you know, brute, as it says on the tin. But this is that, for the algorithm. And the thing it kind of reminds me of, in a lot of ways, is when Google came along. There were other search engines in the 1990s with the rise of the web, and, you know, none of them were that good. And then Google sort of solved search by using this PageRank system. They said, you know, we have this system to actually make sure that the best stuff gets to the top. And the way it was originally done was based on how many links pointed to a page, with the idea that things would only get links if they were good. But pretty quickly people figured out: oh, here's an algorithm we can game to our advantage, and as more people go to Google to figure out where to go, we can just get more links. And then it became this whole crazy race and one-upping each other of search engine optimization, and often what's sometimes referred to as black hat methods, sort of sneaky methods to try and get around it. I mean, not a day goes by when I don't get emails from spammers asking to pay me to put links on Techdirt, because Techdirt has, you know, a strong reputation within Google, and they offer me money to put links because they want those links to game the system.
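The brute-force idea Mike describes here, mechanically trying every candidate until one gets through, can be sketched in a few lines. This is a generic illustration of the concept, not code from the episode; the `check` callback standing in for a login attempt is hypothetical.

```python
from itertools import product
import string

def brute_force(check, charset=string.ascii_lowercase, max_len=3):
    """Try every candidate string up to max_len until check() accepts one.

    No cleverness, no prioritisation -- just exhaustive enumeration,
    which is exactly what makes it "brute".
    """
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            guess = "".join(combo)
            if check(guess):
                return guess
    return None  # search space exhausted without a hit

# Hypothetical target: pretend the "password" is "cat".
secret = "cat"
print(brute_force(lambda s: s == secret))
```

The analogy in the episode is that slop creators run the same loop against a recommendation algorithm: generate candidates at near-zero cost, keep whatever "gets through", and discard the rest.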
Ben Whitelaw:but you know you could use that money to buy more time.
Mike Masnick:I thought you were going to say to buy the Twitter sign.
Ben Whitelaw:Oh yeah, yeah, exactly. Either one will do.
Mike Masnick:I will say, every once in a while, when I'm really in a cynical mood, I will respond to one of those guys and say, sure, it's a hundred million dollars. And usually they disappear after that. Occasionally I've had one come back and be like, no, what's the real number? And I'm like, a hundred million dollars. I'm not joking about this. If I'm going to destroy the reputation and value of my site, I might as well get paid for real, you know, not whatever they're offering. They're always offering like a hundred bucks or something.
Ben Whitelaw:Right, yeah,
Mike Masnick:you know, but every system gets gamed. It's just that in the past, gaming social media algorithms was tricky, right? I mean, you had to put in the work, which is what they're saying with the successful ones. Like, you know, there are some inexpensive YouTube channels or Instagram accounts that have been successful, but the really successful ones are a little bit more produced. And so what the AI is enabling them to do is to make it cheaper to game the system. So now everybody's doing that. So to me, it then becomes this arms race. You know, we saw this with Google: at some point in the early-to-mid 2000s, Google completely revamped their ranking algorithm, because they knew they had so much search spam, and they referred to it as search spam, that was clogging up the results and making the experience worse. So they completely redid the algorithm, some companies that had been built up around search spam got destroyed by it, and there was this big shakeout. I'm guessing we're going to have to see something similar happen in the social media space, and that there's gotta be some way that Meta in particular, and YouTube to some extent, figure out how to deal with this stuff. And whether or not people actually want this content, because I think, yeah, people are clicking on some of it, but then I think people are getting annoyed. Or people are clicking on it because of the novelty, and then it's like, why am I seeing this nonsense? You know?
Ben Whitelaw:Yeah. I mean, to kind of emphasize the point, what the piece does really well is just emphasize the insanity of some of this content.
Mike Masnick:yeah,
Ben Whitelaw:It's bonkers, isn't it? Like, there's crazy stuff, like giraffes eating people and spawning on subway platforms. There are grannies kissing llamas, and heads appearing out of sharks, and models sitting in the street of markets being fed food by market traders. There's a man with... I can't even explain how crazy it is. But the weird thing is, it's kind of watchable.
Mike Masnick:yes. Yes.
Ben Whitelaw:You kind of can't look away, which obviously is why that content gets recommended to other people, because people are engaging with it to some degree. You talked about the speed with which this has happened, Mike. It took years for people to figure out how to game Google and, to an extent, game other social platforms. The speed with which this has happened is, I think, what's shocking to me. And I wondered if this is about the shift from social platforms that are based on following other people to people being grouped into cohorts or segments and just being served content. Because that shift means that AI slop can proliferate, right? If this stuff had happened back in the day and you were following somebody on Twitter and they started sharing insane videos of a pizza dough man rolling out his own stomach, which is one of the ones in the 404 article, if somebody did that, you would unfollow them. You know, you would have the option to opt out of that. You don't have that option when you're on TikTok, or when you're on Reels on Instagram, when you're just served this ongoing firehose of posts.
Mike Masnick:Yeah. I mean,
Ben Whitelaw:that what's changed?
Mike Masnick:I think, yeah, it gets to this issue of how social media, for the most part, not entirely obviously, but for the most part, switched from this thing where it was the people that you followed, which then led to the rise of the algorithmic feed. And the algorithmic feed originally on social media was designed to highlight the content from the people that you followed that would be most relevant to you, because otherwise it was just an undifferentiated feed of people you follow, and you would miss stuff. So it was an attempt to do that. The big shift then was to go beyond that, to the sort of TikTok-style For You page, which went beyond the stuff that you were directly following and had the algorithm try to pull in other stuff. And you can understand where there's some element of value there, because maybe you don't know about some other people. But that has introduced this thing where now it's entirely the algorithm. And so the shift is that it went from you saying who you want to follow being the signal for the kind of content you wanted, to the algorithm taking on not just that as the signal, but also what things you are watching, and then what similar people might want to watch, right? And so you have this concept of cohorts and everything like that. Whereas before you could opt out of following certain people, you can't opt out of the cohort that they've built for you. And that makes for a very different experience. And so, I mentioned this to you before the podcast: just yesterday, as we're recording this, I interviewed Chris Hayes, the cable news journalist, really interesting guy, who just came out with this book called The Sirens' Call, which is all about how our world is now based on attention, fighting for attention, and business models built on attention. And I think all that plays into this. That interview will be on the Techdirt podcast soon.
Maybe next week; we've got to figure out which thing is rolling out when, but probably next week. We had a really interesting discussion about this very thing, and how algorithms have sort of taken over our lives. Um, but, you know, one of the things we really tried to explore in that conversation, which I thought went pretty deep, was this idea that a lot of people right now want to blame the algorithms. And so I think it's really easy to look at this 404 story and say, this is the algorithm's fault, right? And, I'm not saying you did that, but you're saying we've switched from this thing where it was the people you follow to the algorithms. And that leads some people to say, well, you know, the answer then has to be to outlaw the algorithms, or require you to have a following feed or whatever. But what we've seen is that users don't actually really like that. The pure following feed doesn't work as well as a good algorithmic feed. The issue to me, and this is a lot of what I explored with Chris, is who controls the algorithm and for what purpose, not so much the existence of the algorithms themselves. And as I said, the early point of the algorithms was good: it was to highlight the content that would be more valuable and more useful to you, in the same way that the success of Google early on was that it found stuff you were searching for that was better and more useful. The problem is that then people start to game it, and it becomes this ongoing back and forth. And that's where the problem comes in: whose incentives are at play? You're dealing with a bunch of different incentives. You have the company's incentives; they're trying to make money. You have the people who are trying to game the system; they're trying to make money.
And then you just have little old you, who is just trying to find interesting information, and you're kind of at the whims of all this other stuff. But what if you had more control over that algorithm? I mean, you were saying before, it was people you were following; that is still an algorithm, but it was one that you had control over, so you could remove someone. And so the thing I keep coming back to is that the real issue is who controls the algorithm and what their intent is. When it's a company that's just trying to keep you engaged, or trying to sell you stuff, or trying to sell your attention to advertisers, which is the way most of these companies work right now, then their incentive is misaligned with yours. That leads to the sort of enshittification. And that fits in neatly with a bunch of crazy people with access to these AI tools who can just pump out so much slop and throw it into the system. It doesn't matter to them if it doesn't work, because all the incentives are messed up for the users.
Ben Whitelaw:Yeah. So what you're saying is: algorithms, as a term, have been associated with very centralized platforms, the ones that we've used for over a decade now. But actually, if those algorithms are controlled by users and there are dials and levers and you can tweak them, then the algorithms themselves aren't the issue. That makes sense. The issue I was wondering about, which is something you touched on, is: when is the course correction going to happen? You know, when are platforms going to limit the visibility of this kind of AI content? Do you think they have incentives to do so? And where does labeling fit into this? Because I remember, last year around this time, Meta, which is where 404 Media focused much of its reporting in this piece, came out and said: we're going to label AI content, we're going to be able to detect it. Yeah, they said, we've got this covered; don't worry about the rise of AI content. We're going to detect the signals, we're going to make sure it's clearly labeled to users that this is AI-generated content, and we're going to work with other platforms and other partners and embed ourselves into some of those existing initiatives. There's one called C2PA, which has got a lot of the platforms signed up. And the suggestion was that they were going to make it very easy for us to opt out of that content, or for that content to be downgraded if it was lower quality. It doesn't feel like that's happening. Um, do you think it will? It goes back to incentives.
Mike Masnick:Yeah, it's a good question, right? I mean, Mark Zuckerberg seems to have this view that AI content is actually a good thing for the platform. Uh, he's making sort of a bet on that, and he's talked about that, right? They've talked about building their own AI-generated users on social media that could interact, and we've talked about that in the past. And so I think his view, and therefore the company's view, is: whatever keeps people coming back and using the platform is good. I do wonder how successful this could be long term, right? You know, if you're just getting all this garbage content, sure, it is entertaining and amusing the first few times, like, check this shit out, basically. But after a certain point, you have to be like, come on, why am I spending all my time looking at this when I could have more realistic engagement somewhere else? So I don't think this works long term. I sort of feel, if anything, the most likely scenario is that it follows the path Google followed at the time, where they suddenly decided, okay, we need to crack down on this. Then it becomes a big project, and they come up with some system to crack down on it. And that only lasts for so long. I mean, I think a lot of people would argue, probably correctly, that today Google is a bunch of garbage again, the search engine optimizers have really won, and Google has made a lot of really bad decisions that have really enshittified the results in some cases. You know, it's a constant struggle and a constant battle, and you hope they'll reach the point where they fix it. But I sort of feel that Meta in particular is going to have to realize that this kind of content is not a long-term sustainable success, especially as there are more and more alternatives that are trying to take a different approach. And, you know, one of the things...
This is another thing that came up in the conversation Chris and I had: more and more people seem to be moving away from social media platforms like this to group chats, where there is no algorithm and it's much more authentic and you're able to engage. And the feeling of fun that people had in the early MySpace, Facebook, Twitter years, people seem to be getting that feeling more from group chats than from global social media. It's because you have all this other stuff crowding its way into social media. And I think there's going to have to be a reckoning of some sort by Meta with that.
Ben Whitelaw:Yeah. So, you're right, it could be that users vote with their feet and say, actually, we want to spend our time elsewhere. But it could also be that regulation, and governments, dictate what content should be labeled as AI slop or otherwise, and you found a couple of stories that relate to that this week, right?
Mike Masnick:Yeah. And so this was interesting, where Spain announced that they are going to change the law so that there would be fines for platforms if they don't label AI-generated content. And then there's another story saying China is doing the same thing. And so there is this part of me that thinks, as I often do: if Western countries are doing the same thing China is doing, maybe think about that for a second. It's funny, the article on China is from Bloomberg, and it has the lede: China joined the EU and US in rolling out new regulations to control disinformation by requiring labeling of synthetic content online. And I was like, what are they talking about regarding the US? Because we don't have a regulation requiring that at all. So I think the reporting on this is a little bit confused. The Spain one, which was just from Reuters, is interesting, where they're saying they have this new bill that will impose massive fines on companies that use content generated by AI without properly labeling it as such. And you understand the thinking behind it. The Chinese approach, it's pretty clear that that law is designed to stifle certain kinds of speech and use AI labeling as an excuse. The Spanish law, I think, is meant in good faith, because they're scared about deepfakes. But I find these approaches also to be kind of silly and probably ineffective, in part because, where do you draw the line, right? I mean, everybody who's doing content creation these days uses AI tools in some form or another at some point. Okay, maybe not everyone, but a large percentage, right? So if you're writing and you're using, like, Grammarly to check your grammar, that is an AI tool. Do you need to label it because it told you you had a typo? Or maybe it suggested, you know, not ending a sentence in a preposition, right? Is that now AI-generated?
You know, I think most people would say no, but then that raises the question of where the line is. Same thing with images and videos: if you're touching up an image, if you're changing a video, if you're editing something, a lot of those tools are really AI. So where do you draw the line between what needs to be labeled and what doesn't? It feels like this is the kind of thing, and this is my complaint with lots of regulations, where it's like: okay, I see this thing, it is a problem, therefore let's just outlaw it, without thinking through, does that really make sense? Is that really targeting the thing, or are you wiping out a whole bunch of other useful stuff with it?
Ben Whitelaw:I think the line is pizza men rolling out their own stomach. I think that's for me.
Mike Masnick:Yeah. How? How do you write that into law though?
Ben Whitelaw:easy, easy. No, I, I
Mike Masnick:Just ask Ben. Ben is the decider.
Ben Whitelaw:yeah. Once you, once you see it, you know. Lemme tell you that I will, I will say that. Um, I've been thinking a lot about, like, the terms we use to refer to content on social media platforms in the age of AI slop. And I wonder if we can, I'm going to use this opportunity to pitch you, Mike, a change in terminology. You know, we've been using the word feed for as long as social media has been around. I think it's time to end the use of feed. Now that we are consuming all this AI slop, I'm thinking we use the word trough. I think, you know, we can say, I'm going to the trough to consume my news. I'm going to open my, you know, my app and go to my favorite trough. I think that's kind of fitting of where we're going with content on the internet. And we can all, you know, like little pigs around a trough, just consume all the crazy images of pizza dough men and insane human seals and whatever else AI generators have got for us.
Mike Masnick:Yeah. So, so Ben, just for you, I'm going to use an AI generator to generate a picture of pigs around a trough with, with social media content flowing through the trough. All right. So I think, I think that's going to work.
Ben Whitelaw:Yeah. Coming to a platform near you.
Mike Masnick:There you go.
Ben Whitelaw:Uh, okay. Well, we went a bit mad there, Mike, but I think that's a fascinating story. Do go and read that 404 Media piece. We'll include it in the show notes. Our next story is one that we've touched on a lot since the start of the year. It's the big old question of what Meta's doing now that it's not fact checking. And many of you listeners will have seen this week that the company has announced that it's rolling out its community notes function. It's decided that this is the way forward. It's starting to test it with around 200,000 people that have signed up. And there's some interesting details in there, Mike, in the press release this week, that I wanted to kind of talk a bit about and get your thoughts on. Basically the trial, the beta, it's in six languages. It's only focused in the US, but it's covering six different languages. And what's going to happen is that people are going to start to see the community notes underneath particular posts, and they're going to be able to contribute to them. So if you're one of the 200,000 that signed up, you're going to be able to give your thoughts. Every community note is going to be 500 characters, has to include a link, and the community notes will be kind of, I think, validated and rolled out over time. And then fact checking will be scaled back. That's what the blog post says. Really interestingly, at the very end of the press release, it says that the community notes are, it's just in the US right now, but there's a plan for it to be all over the world. There's a global domination aspect to the community notes. So it's just the US now, but there's clearly plans to do it beyond that. And I think there's a couple of really interesting points to this, Mike. For me, there's the missed opportunity. And then there is the fact that they've somehow bungled this, that it's even worse than X's community notes. Okay, so I didn't realize this.
did you know that when community notes are applied on X, on Twitter, that actually it doesn't affect the distribution of that post? Did you know about this properly?
Mike Masnick:think I had heard that. I think I kind of knew that, but I hadn't thought deeply about it.
Ben Whitelaw:Yeah. I mean, so Facebook, Meta, is copying X, Twitter, in a number of different ways. It's using the open source code Twitter X has been putting out into the ether. It's taking a similar broad approach, but it's also deciding not to downrank posts that have community notes that contrast with the post itself. So, and I'm just thinking that's a clear missed opportunity. What is the point of getting people to input on a post, and have the process that goes on behind the scenes on a community notes post, where you have people from different parts of the political spectrum inputting and potentially disagreeing, and then showing that community note, if you're not going to then take a kind of view on the distribution? That seems like a mistake to me, or, you know, a decision, at least, that Meta is taking in the way it's doing community notes. So I take umbrage with that. Then there's also the fact that, at least on X, there are community notes on ads. You know, some of the best community notes have been on random ads on the platform, where it's calling out gambling companies for crazy claims, or really low quality e-commerce sites for the bonkers stuff that they sell. And you get some kind of funny, interesting community notes. They're the ones I've seen the most, at least. Meta is deciding not to do that on its ads. And clearly there's a kind of commercial imperative or incentive not to do that, but I'm like, that's another big missed opportunity. Like, if you're going to take a lot of the broad approach of Twitter and X, at least do the kind of coverage of community notes that Twitter has. So Casey Newton on Platformer has written a bit about this as well this week. He says that it's a flawed approach. It's unclear how it's going to affect the experience of people on platforms like Instagram, WhatsApp and Facebook. But from the press release alone, you can see that there's not been as much thought put into this as there could have been.
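The submission rules Ben pulls from Meta's announcement, a 500-character cap and a mandatory link, amount to a simple validation step. Here is a minimal sketch of what such a pre-submission check might look like; the function and error strings are hypothetical illustrations, not Meta's actual implementation:

```python
import re

MAX_LENGTH = 500  # character cap per Meta's announced rules

def validate_note(text: str) -> list[str]:
    """Return a list of rule violations for a proposed community note.

    An empty list means the note passes the (sketched) checks.
    """
    errors = []
    if not text.strip():
        errors.append("note is empty")
    if len(text) > MAX_LENGTH:
        errors.append(f"note exceeds {MAX_LENGTH} characters")
    # The announcement requires a supporting link; a bare URL match
    # stands in for whatever richer check Meta actually runs.
    if not re.search(r"https?://\S+", text):
        errors.append("note must include a supporting link")
    return errors

# A note with a source link passes; a "trust me" note does not.
ok = validate_note("Misleading claim; see https://example.org/fact-check")
bad = validate_note("This is wrong, trust me.")
```

How notes are then rated and surfaced is a separate, much harder problem than this input check, which is part of Ben's point about the bungled rollout.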
Mike Masnick:Yeah, I mean, there's a question of how much thought went into all of this, right? I mean, the sort of the entire new approach, right? I mean, a lot of it really felt like Zuckerberg just sort of giving up on content moderation generally, and just being sick of it, just not wanting to deal with it, and sort of saying, well, you know, people seem to like community notes, so let's just go with that. And, in fact, you know, just the fact that they're using the same code, which itself was based off of other open source code, is kind of interesting.
Ben Whitelaw:Yeah. This is what happens when I engage with ideas in good faith, Mike. You're right. This whole U-turn was never done for good reasons or for better outcomes.
Mike Masnick:So, yeah, I don't know that necessarily. I like trying to parse out, like, why exactly are they doing it this way? I'm not sure that there's a good answer for that. You know, I think also, like, the fact that on X community notes does apply to ads may be a legacy issue as well, in that ads on X were based on just taking a tweet and promoting it. And therefore, you know, separating out that code, the community notes, originally Birdwatch, code, from those tweets was probably more difficult than it was worth. And so people were like, let's just leave it. And then Musk fired everybody anyway, so there's, like, nobody to actually do that. I think if it became a big enough issue, they would probably remove it too.
Ben Whitelaw:um,
Mike Masnick:whereas Meta is much more of an ads focused business. You know, that is their business. And so I'm sure they were much more conscious of how that would play. And also their ad system is a little bit different, and a little bit more evolved. And so I sort of understand probably the way that came down. But yeah, it does sort of give you a suggestion of, like, this is not really about, you know... the whole sort of embrace of community notes was Zuckerberg saying, like, here's a solution. It's, like, power to the people nonsense that allows him to say something that is not entirely, we're turning off all of our moderation,
Ben Whitelaw:yeah,
Mike Masnick:and then, you know, sort of just hoping for the best. So, you know, the fact that it's not flowing back to the recommendation algorithm is interesting. I mean, it sort of goes into our first story and the AI slop. Are there going to be community notes on the AI slop? That'll be interesting to see what happens there. And so I could see, there are arguments for having that play into the algorithm or not. You know, one of the arguments against it is, like, if it does impact the distribution algorithm, then you're giving more incentives for people to game the community notes. And community notes, again, is designed to be resistant to gaming. It's actually very clever in terms of the way it is set up and the sort of consensus mechanisms that are built into it. But that doesn't mean it's impossible to game. And so if you were to put it into the distribution part of the equation, that would create a larger attack surface that people would go after. And then you would see brute force attempts on the community notes feature.
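The consensus mechanism Mike refers to is X's open-source "bridging" ranking: a note only surfaces when raters who usually disagree both find it helpful. The real system infers viewpoints from rating history via matrix factorization; this sketch fakes that step with explicit camp labels, so treat it as an illustration of the gaming-resistance idea rather than the actual algorithm:

```python
from collections import defaultdict

def score_note(ratings: dict[str, bool], camp_of: dict[str, str]) -> bool:
    """Decide whether a note surfaces under a toy bridging rule.

    ratings maps rater id -> did they rate the note helpful?
    camp_of maps rater id -> a viewpoint-camp label. In the real
    Community Notes system these camps are not labels at all; they
    are latent factors learned from each rater's rating history.
    """
    helpful_by_camp = defaultdict(list)
    for rater, helpful in ratings.items():
        helpful_by_camp[camp_of[rater]].append(helpful)
    # Require raters from at least two camps, and a helpful majority
    # in every camp: one-sided enthusiasm is not enough, which is
    # what makes brute-force gaming by a single faction hard.
    if len(helpful_by_camp) < 2:
        return False
    return all(sum(votes) > len(votes) / 2 for votes in helpful_by_camp.values())
```

The attack-surface point follows directly: under a rule like this, gaming a note requires controlling accounts the system believes sit in opposing camps, not just a flood of accounts.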
Ben Whitelaw:that's a theme already of the podcast. I actually went to look at what the eligibility criteria is for users on X who want to sign up as contributors to community notes. Meta have said you need to have an account for six months before you can join the program. Really aptly, if you go to the page on x.com that says become part of the community of reviewers, you get this 404, cannot access. Basically it's broken, right?
Mike Masnick:Well, it does tell you the eligibility requirements, which are also the six months and a verified phone number, but then, wait a second. Oh, not anymore. So we checked this right before, you can check now, Ben, but you had me check right before we started recording and I got the 404 page and an error. And I just went to click to open it again, because I wanted to read the exact error message, and now it is showing me a button to join community notes.
Ben Whitelaw:Oh, okay. So somebody has been listening in to our pre-podcast preparation.
Mike Masnick:There was definitely an error before
Ben Whitelaw:Okay.
Mike Masnick:because you sent it to me and I clicked on it and I got the error message and now I'm like, okay, so here's what it is. if I go directly to the link that you sent me, I get the error message and it says, oops, something went wrong. Please try again later. if I follow the, flow through reading the rules and then click to go, it does give me the chance to join community notes. And I'm actually going to click this button. I have no idea what is going to happen.
Ben Whitelaw:do you know what it is, Mike? I think it's the twitter.com versus the x.com.
Mike Masnick:It could be.
Ben Whitelaw:So I think that there's a page on the community notes help center article that directs to an old URL. That's twitter.com forward slash flow forward slash join birdwatch. And that clearly is an old URL.
Mike Masnick:Well, I do not, I have now found out I do not qualify to join Community Notes.
Ben Whitelaw:Yeah, I thought that,
Mike Masnick:It does, it says, so there, there are five categories and I have checks on four of them, but I am rejected because my phone carrier is not trusted.
Ben Whitelaw:Oh,
Mike Masnick:So I have no recent rules violations. I joined at least six months ago. I have a verified phone number, which I probably should remove. Uh, And I'm not associated with any other community notes account. They do not trust my phone carrier.
Ben Whitelaw:interesting.
Mike Masnick:I am blocked from joining community notes.
Ben Whitelaw:Well, we are all worse off for that, because I'm sure you would have left some very good community notes. Um, sorry, sorry to say that. This is embarrassing for you to have recorded, isn't it? You know, a man of your standing being denied. It's a blow.
Mike Masnick:I would expect nothing less, I would expect nothing less.
Ben Whitelaw:Um, well, fingers
Mike Masnick:I will say Meta did pop up a thing inviting me to join their community notes.
Ben Whitelaw:Oh, are you in?
Mike Masnick:and I clicked and I said, okay, and then it said, we'll, we'll let you know more. And I haven't heard anything more.
Ben Whitelaw:probably won't hear back if,
Mike Masnick:Yeah, I might, I might not.
Ben Whitelaw:if Twitter's anything to go by.
Mike Masnick:I may get a notice that my, phone is not trustworthy.
Ben Whitelaw:yeah, indeed. Great. So there were our two kind of big stories that we selected. Obviously we go a bit deeper on the chunky, interesting stories that we feel merit most discussion this week, Mike, but there's also a third story this week that we both touched on. You know, happy Online Safety Act week. Uh,
Mike Masnick:To all who celebrate,
Ben Whitelaw:to all who celebrate. You know, other regulatory acts are available. So on Monday, the OSA in the UK came into force.
Mike Masnick:I'm almost surprised we're allowed to talk, Ben. You're in the UK, and I don't know whether our podcast recording doesn't violate the Online Safety Act.
Ben Whitelaw:I mean, if a lot of the coverage is anything to go by this week, then you would think so. You would think so. There's been a whole tranche of op-eds and reports about what the duties are of the OSA. You know, we've touched on it a bunch of times, but this is the UK's long in the making regulation about platforms and intermediaries having to remove illegal content and reduce the risk to UK users on the platform. If they don't, they're liable for an 18 million pound fine, or 10 percent of global revenue, whichever's greater, that old chestnut that we've seen from other regulations. And we've touched on the OSA, Mike, in recent weeks, because a number of smaller sites and forums and communities have started to talk about the fact that they're nervous about whether they're going to be caught up in the OSA. And we've seen a couple more this week. My favorite was, uh, a website called lemmy.zip. I don't know if you came across this. It's a Finnish news aggregator, kind of decentralized platform, quite kind of quirky, text only. And it's got a very long and quite scathing review of the OSA, in which it says the act does not meaningfully protect users, and Ofcom, the regulator of the act, has demonstrated a lack of technical understanding when drafting and enforcing these rules. Very, very kind of pointed in its criticism of the act. It continues a lot of what we've seen and read. But there's a really interesting counterpoint to that, Mike, which is a forum that you found that is taking a different tack. It's going a different way.
Mike Masnick:Yeah. And so, you know, there are a number of different forums that are sort of reacting in different ways. So one that I had seen was from lobste.rs. The lobsters part is, the dot rs is at the end. And it's sort of a techie focused forum. It's been around for a while. They had announced way back when that they were planning to block UK users. The founder of it just, you know, explained that it would be effectively impossible. And then this week, in the lead up to this, they announced that they had disabled the geo block. They were going to geo block UK users, but they had disabled it, and they wrote this very long and kind of interesting analysis of it, where, I mean, there are a few different interesting elements to it for me, where they said, basically, we're going to do this in part because, we don't think... they're based somewhere in the Pacific Northwest.
Ben Whitelaw:Mm hmm.
Mike Masnick:they're based in Oregon, and, you know, have no connection to the UK. And even though the Online Safety Act claims to cover every internet site that is accessible in the UK, part of what this guy is claiming is, they have no jurisdiction over me, and I doubt they're going to go after me because I'm a small forum, but if they do, I'm sort of willing to kind of fight the extraterritoriality aspect of this. But what's interesting is, like, he notes that he reached out to Ofcom people multiple times, trying to, like, have a discussion with them. And he goes through, like, all the reasons why it's effectively impossible to comply, how the fines are ridiculous and disconnected from reality for this, which is a hobby forum that he just, you know, runs on the side. He's not making any money from it. But it's basically, like, it's just not worth it. And there are too many questions and too many things to deal with. And so he just sort of gave up on the idea of geo blocking it, and just figures he's effectively going to take his chances. So there is this thing in here, which is that at one point, Ofcom replied and said, when we write to you to say that you're in breach, that is, you are in breach. Which, you know... and this struck me as interesting, because for years now, Ofcom has been running around, and I saw them say this directly at TrustCon, saying that they were not going to be this, like, iron fisted enforcer of the law, but that they wanted it to be this sort of back and forth. And if they thought you were not in compliance, they would reach out to you and have a conversation, and they would be the friendly regulator. And yet, and it's kind of interesting that it's in this post where he's saying he's just going to ignore the law, they say, when we write to you to say that you're in breach, you are in breach,
and basically say, don't wait until you get that breach letter; reach out to us, work with us. So they still have that a little bit, like, work with us. But it's like, they're not shying away from the fact that they could try and fine you for millions of dollars. And so I thought it was a really interesting analysis, at the same time as it's basically saying, look, we're not going to comply, and we're just going to hope that they don't go after us.
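The geo block lobste.rs considered and then disabled is conceptually tiny: resolve the visitor's country and refuse UK requests. A hedged sketch of the idea, assuming an upstream GeoIP layer has already resolved the country code; the lookup itself, and lobste.rs's actual setup, are not shown here, and the function name is hypothetical:

```python
# ISO 3166-1 alpha-2 code for the United Kingdom.
BLOCKED_COUNTRIES = {"GB"}

def geo_block_status(country_code: str) -> int:
    """Return the HTTP status a geo-blocking front end might send.

    451 Unavailable For Legal Reasons (RFC 7725) is the conventional
    status for content withheld in response to legal demands.
    """
    if country_code.upper() in BLOCKED_COUNTRIES:
        return 451
    return 200
```

Part of the "effectively impossible" complaint is everything this sketch elides: GeoIP databases are imprecise, VPNs route around them, and running the lookup at all is ongoing work for a hobby forum.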
Ben Whitelaw:Yeah. I mean, with a lot of the regulation that's come in, that's really the role that it plays. It's the kind of sword of Damocles that hangs over your head and that they can kind of dispatch at any moment. And the question will be, like, is Ofcom willing to let that sword go against a relatively small forum based in the Northwest of the States? Like, probably not. So we're seeing more and more reporting of the implications of the act, which is, I think, interesting, because right now it feels like people still don't understand the implications of it and how it affects their access to the internet and communities that they've probably long been part of, such as the cycling forum and such as the kind of developer website we talked about in previous episodes.
Mike Masnick:And, and, and the hamster forum, The Hamster Forum has shut down
Ben Whitelaw:oh man.
Mike Masnick:and it says, we are sorry, the forum has closed down. You can now find us at Instagram. And so if the idea was, you know, again, this is the point that I've raised a bunch of times: when you have these online regulations, they often give more power to the big companies that these regulations are supposedly designed to deal with. And so The Hamster Forum, a forum about hamsters, has shut down
Ben Whitelaw:uh,
Mike Masnick:and moved to Instagram
Ben Whitelaw:To listeners of Ctrl-Alt-Speech that are users of the forum and who are hearing about this now, I'm sorry. It's not great. But yeah, this is a theme that we are seeing. We won't figure out, uh, the Online Safety Act this week, Mike. We are going to talk about it again and again and again, I'm sure. I wanted to bring a story to you that is actually a kind of positive story, a story that I really liked. I almost sent it to you as soon as I read it. It's a Guardian article about how tech experts keep their children safe online. And I love it because it says so much kind of sensible stuff about kids' safety and children's use of smartphones and internet devices. And in particular, it says a couple of things that I think are just so worth reading if you're a parent. If you're not a parent, there's still so much good stuff in this article. The crux of it is, you know, and there's a few experts in there that all say the same thing, the crux of it is talking about the internet to your child early. And it's exactly what you've been saying in previous episodes of Ctrl-Alt-Speech: there is no simple way to protect kids on the internet. They're going to see inappropriate content from time to time, just like they fall over and graze their knee. And actually, the only way you can deal with it is by broaching it with them early, working with them almost as adults, and understanding that they know as much as you, or sometimes more than you. It's just such a great article. I've very rarely seen anything like it. So, what did you think about it? It was,
Mike Masnick:Yeah, it is a fantastic article. I'm definitely going to use this and send it to people as well. it makes the point that I've been trying to make, except coming from an actual expert as opposed to me, some doofus on the internet, talking about like. Part of it is modeling good behavior for your kids, but also just letting them know, like teaching them how to recognize that, they may come across things that are dangerous and learning how to do it and you can handhold with them. but the idea of like trying to lock down the internet and make sure that they never see anything bad is not only impossible, it is ineffective because it also doesn't teach kids how to use things appropriately. And so there's a lot of like, be open to conversation, let kids know that they might come across stuff if they have problems to come talk to the parents, to have that open line of communication, which I think is the most important thing and to just let them know that there are problematic things on the internet. And, teach them how to use it appropriately, maybe work with them to see how they're playing stuff or, you know, using different sites or different games or whatever it is, and make sure that you have that line of communication open is the most important thing. Because anything that you do that is like, try and cut them off or try and block them. Or it even says like, you know, kids are going to break the rules that you set for them. They're going to get around whatever blocks, but like, not freaking out about that and not coming down hard on them either, but like figuring out, like, what is it you're trying to do? Like, do you just want to. watch something a little bit more, like, let's talk about that. And, this is something that we've done in our family as well. Like we do have time limits set on things, but we have the ability. 
And this comes up relatively frequently, where my kids will be using computers and they'll run out of time, and they'll say, like, hey, can I get another half an hour? I was in the middle of this, or whatever. And we talk about it and figure out, yeah, you know, it's usually okay. And, like, that's what this article is giving, this sort of common sense approach, where the big thing is: don't lock down everything, that's not effective, but have those open lines of communication. And it feels unsatisfactory for some people, but it's super important.
Ben Whitelaw:Yeah. And I'm not saying at all that it's not hard, um, for parents to figure out how to set up particular platforms, or set up their home wifi in a way that screens out most of the risk. You talked there about how you've done it in your home, and I've got friends who are parents and they spend a lot of time thinking about this. But this article is great for acknowledging that, while also saying that there are alternatives to it too. So great, great piece. Really recommend it. Onto our last story, Mike, which is, is it a little bit linked? It's about how teens use social media, and this one scared the hell out of us. And I'm going to pose it to you directly. Okay. So I want your kind of honest answer. Are you a half swipe guy?
Mike Masnick:Well, I will say I did not discover the concept of half swiping until last night, uh, when I read this article, so I did not know this was a thing.
Ben Whitelaw:Got to explain it to us.
Mike Masnick:Snapchat, there is the ability to half swipe, because apparently when you're, you know, like, reading a thing that was sent to you on Snapchat, the person who sent it to you can see whether or not you've looked at the content that they sent you. So Snapchat had implemented this feature as an affordance within the app, where if you don't fully swipe in, but you half swipe, you could see the message that was being sent to you without indicating back to them that you had seen it.
Ben Whitelaw:so no read receipts.
Mike Masnick:no read receipts. It was a sort of a way of getting around the read receipt, is you could do that so people couldn't tell. But then they added this other, like, premium feature for some users. Like, if you paid, you could see if someone was effectively doing the half swipe, and then you would have the information to say, oh, they're not really committing to seeing my content. And so there's a lot of talk in here about how, for teenagers who are in different social relationships or, you know, interested in people romantically, how they're, like, freaking out about this. Like, are they half swiping? Are they not replying to me? Do they not really like me if they only half swipe? All of this stuff. That's just, like, all of this social dynamic stuff that is heightened by the affordances specifically of the app.
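The mechanic as described boils down to a read receipt that only fires on a full open, plus a premium tier that can also observe partial opens. A toy model of those states; the class and method names are hypothetical, not Snapchat's actual API:

```python
class Message:
    """Toy model of a message with half-swipe-aware read receipts."""

    def __init__(self, sender: str):
        self.sender = sender
        self.read_receipt_sent = False  # visible to every sender
        self.peeked = False             # visible only to premium senders

    def open_fully(self) -> None:
        """Full swipe: the sender gets a read receipt."""
        self.read_receipt_sent = True

    def half_swipe(self) -> None:
        """Partial swipe: the content is shown, no read receipt fires."""
        self.peeked = True

    def sender_view(self, premium: bool) -> str:
        """What the sender's client would display for this message."""
        if self.read_receipt_sent:
            return "opened"
        if premium and self.peeked:
            return "half-swiped"  # the paid tier reveals the peek
        return "delivered"
```

Seen this way, the premium tier didn't add information to the system; it sold visibility into a state the free tier deliberately hid, which is exactly the social dynamic the article describes.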
Ben Whitelaw:Yeah.
Mike Masnick:The developers of these apps, I'm sure, haven't really thought this through. I think about this in the context of, there's all these talks now, even with Snapchat, there've been a bunch of lawsuits about specific affordances within the app and how they're sort of negligent in some form or another,
Ben Whitelaw:Hmm.
Mike Masnick:you can see how these kinds of things develop. They're not developed to, like, make a teenage girl have her heart broken because someone half swipes instead of full swipes. Or, like, they have this example where one girl wanted to hang out with a boy and sent him a message, and he apparently half swiped, but she had the premium account, so she could see that he half swiped, and he didn't reply for a whole hour. And then he did say he wanted to hang out, and so they got to hang out, but she's still upset because he took an hour and he half swiped, and what does that mean? Maybe he doesn't really like me, and all this stuff. And I'm just like, oh my gosh. Like, this is teenagers, right? Like, this is always the way teenagers have been. And I talked about, I grew up in a time pre-internet, effectively, and, calling, we had to use the phone and call. And there was all this nervousness. And if their parents picked up, what do you say? And how do you address them, and all of this kind of stuff? I think some of this is just the nature of, like, being a teenager. But it's sort of turned into this, like, big deal because of the app itself, right? So to some extent, I think this is an exaggeration, but it's also kind of funny to see, like, how these things play out with the nature of a teenager who's sort of, like, going through the things that teenagers go through.
Ben Whitelaw:Yeah. It's so true. It's the importance of product design and how it contributes to interactions. It's not necessarily a safety thing, but you can see how that half swipe and that delay in somebody replying could lead to the kind of anxiety and stress that might lead to safety issues, to bullying, to people taking decisions that they don't want to take. So it's a lighthearted story that has a serious element to it. And it goes back to the story we touched on that was published in the Guardian, about the importance of conversations, and the importance of kind of talking to children, and anyone, frankly, about the kind of different affordances of different platforms and technologies. There's no way out of it otherwise. And this has been a conversation that has been, you know, helpful for me, Mike, and hopefully will be one for listeners as well. That's the point I think we should wrap up this week. We've really enjoyed talking about this week's news. Thanks again, Mike, for joining. It's been great to chat to everyone. We'll see you next week. Don't forget that review. We'll see you in seven days. Bye.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.