Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
New Blocks On The Kids
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- It’s Cool to Have No Followers Now (New Yorker)
- Introducing Safe for Work? — all about T&S jobs (Everything in Moderation*)
- Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do (Techdirt)
- Reddit and Kick added to child social media ban (ABC News)
- X boss explains why ‘horrific’ video viewed by Axel Rudakubana wasn’t removed (The Independent)
- Southport Inquiry (YouTube)
- How Elon Musk is Boosting The British Right (Sky News)
- arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers (404 Media)
- TikTok Investigated in France Over Content That Promotes Suicide (Bloomberg)
- France Moves to Block the Shein Website Over a Sex Doll Scandal (New York Times)
- EU leaders paper over splits on US tech reliance (Politico)
This episode is brought to you by our sponsor Safer by Thorn, a purpose-built CSAM and CSE solution. Powered by trusted data and Thorn’s issue expertise, Safer helps trust and safety teams proactively detect CSAM and child sexual exploitation messages.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
So this week, Mike, I've been flexing my creative muscles, um, which I do have, you'll be pleased to know. Um, I've been trying to design a logo for a new Everything in Moderation series about the trust and safety jobs industry. I'll include a link in the show notes, but there's lots of cool stuff that Alice Hunsberger and I have been coming up with. And I went to Canva to do this, the well-known design tool. And Canva now has an AI function that allows you to kinda write what you want, describe what you want to create, and it does it for you. And I thought the prompt was a great, great opening for us today on Ctrl-Alt-Speech. So I, I pose to you the Canva prompt, which is: describe your idea and I'll bring it to life.
Mike Masnick:Oh my goodness. Uh, what?
Ben Whitelaw:don't be held back. Let your imagination run wild.
Mike Masnick:Well, as you know, I've done a bit of vibe coding myself, and I'm, I'm working on a project right now which I'm not ready to talk about, so I'm not, not gonna do that. But, but I like this idea of like human vibe coding: I describe my idea and Ben will bring it to life. Okay, this one I didn't prepare ahead of time. Normally some of these prompts I do, so this one is just on the fly. I'm gonna say, I would like for you to make it so that when people like Elon Musk talk about content moderation and trust and safety and free speech, they actually have some clue as to what it is that they're talking about. Can, can you, can you bring that to life, Ben?
Ben Whitelaw:that's actually one step too far for me and for any AI tool, I think. Um, yeah, I mean, I
Mike Masnick:what a disappointment.
Ben Whitelaw:sorry, so soon in the episode as well. I would probably say, similar to you, that I would imagine, I would like, a world in which grifters and charlatans and assholes don't control some of the world's biggest platforms or have millions of followers that they can kind of manipulate to their will. I'm not asking a lot, Mike. I'm really not asking a lot.
Mike Masnick:Well, let's, let's bring it to life.
Ben Whitelaw:Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's November the sixth, 2025, and this week's episode is brought to you in partnership with Safer by Thorn, a purpose-built CSAM and child sexual exploitation solution. Powered by trusted data and Thorn's issue expertise, Safer helps trust and safety teams proactively detect CSAM and child sexual exploitation messages. This week we're talking about why you'll never stop kids from chatting to their friends, Twitter's involvement in last year's UK riots, and academic slop. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with Mike Masnick, who is not only experienced and knowledgeable, but has a lot of online followers, and is therefore not very cool.
Mike Masnick:I am. I am so uncool.
Ben Whitelaw:You're so uncool. I'm, of course, referring to this new New Yorker piece, Mike, that we both read this week, and which we'll include in the show notes, about how the number of followers you have is no longer a sign of your relevance and clout. And as somebody who has quite a lot of Bluesky followers, this is bad news for you.
Mike Masnick:Yeah, I'm, I'm done. I mean, this presumes that there was ever any point when I was cool. Okay.
Ben Whitelaw:In, in some corners of the internet,
Mike Masnick:Yeah, I don't know. I did like this piece. It was kind of fun, just talking about like, it's becoming, it's becoming cool to, you know, use social media in, in a more narrowly focused way. And I, I actually like that idea. Actually, on the Techdirt podcast this week, I had Corbin Barthold on as a guest, and we just kind of had this discussion about like how social media is changing. And part of it was that it's not all about just influencers and everything. And people are sort of looking for different kinds of meaning, different kinds of conversations and different, different parts of the internet. And we all shouldn't have the same experience. And I think this New Yorker piece was a little bit of that same thing. But the title is great, that it's cool to have no followers now. So
Ben Whitelaw:Which is great news for me, I would say, 'cause I, I
Mike Masnick:you have followers.
Ben Whitelaw:I have a modest five and a half thousand followers compared to, compared to your, let me just check the latest number, 195,800 followers
Mike Masnick:I know. I can't get over 200 K though.
Ben Whitelaw:As if you've reached a plateau.
Mike Masnick:I'm, I'm stuck at this. I've been right around, yeah, 190 to 195 for quite some time.
Ben Whitelaw:this is not something we were gonna talk about, but I, I think I might have the reason why
Mike Masnick:oh. Go for
Ben Whitelaw:When you, when you search Mike in the search bar of Bluesky, at least for me, you are not the first, not the second, not even the third Mike to come up
Mike Masnick:what,
Ben Whitelaw:in this search. You are the fourth Mike to come up. And that is, as somebody who, you know, I engage with a lot, who I have a podcast with, and, you know, obviously you're on the board of Bluesky, ding, ding, ding.
Mike Masnick:I, I
Ben Whitelaw:Could you sort that out?
Mike Masnick:I need to, to put my thumb on the scale and make sure when people are, are searching Mike, that I am at the top.
Ben Whitelaw:Exactly. That's what I'm suggesting.
Mike Masnick:that feels like a very Elon style move and that that is not my nature.
Ben Whitelaw:Yeah. It, it isn't. No, but it is, it is appropriate for today's episode, I would say. I'll leave you with that. Speak to, speak to the powers that be at Bluesky. Well, talking of kind of other wins for me this week, we actually got a, a really nice review, Mike,
Mike Masnick:On Bluesky.
Ben Whitelaw:on Bluesky, bringing together two worlds there, which I want to read out and just get your kind of reaction to. It is quite long. It's from an anonymous user, so I'll, I'll just read it out. It says: I source my podcasts anonymously, so I need to write my review on Bluesky. I love Ctrl-Alt-Speech. Ben is great. Mike's good too.
Mike Masnick:I love it.
Ben Whitelaw:It's kind of like, come for Ben, stay for the Mike, whom I once publicly addressed as Mick, apparently. That's bad news for you again.
Mike Masnick:I'm so memorable.
Ben Whitelaw:it's, it goes
Mike Masnick:If you search for Mick, I would come up earlier.
Ben Whitelaw:yeah, exactly. Maybe I'm spelling it wrong. Um, anyway, it goes on to say: we hear time and time again about the dangers of social media and the internet, and the very easy, I-know-it-when-I-see-it, obviously-a-no-brainer solutions that everyone has thought about, that take, you know, whole minutes to come up with. The thing is, there are others who have spent literally years researching and writing about these problems and who can debunk all of those easy solutions while asleep. Mike Masnick and Ben Whitelaw are just two of them, and their discussions on Ctrl-Alt-Speech are informative, thoughtful and nuanced. Always nuanced. Like the show could be called Ctrl-Alt-Nuance. Which I, I think is a good gag. Um, it goes on to kind of say, you know, subscribe wherever you get your podcasts. So first of all, thank you to this user for the review. Thank you for suggesting that I contribute more to this podcast than Mike Masnick. And we all know that that's not true.
Mike Masnick:It is a 50-50 partnership, Ben.
Ben Whitelaw:I'm, I'm here for it anyway. Um, and again, great to have reviews. Very happy to take them on Bluesky if you want to stay anonymous. But also, please do go to your podcast platform of choice, listeners, and leave a rating and review so we can be discovered and reach more, more listeners like yourself. We have a lot to get through today, not least because, Mike, we have a great bonus chat with Thorn's Pailes Halai about the positive application of AI to detect child sexual abuse material at scale, and also why safety by design principles matter more than ever. So we will go to that at the end of today's podcast. For now, we are gonna jet set to Australia for our first story today, where we have a bit of an update on a, a kind of big story that's approaching like a juggernaut, I would say, Mike. The December the 10th deadline for the teen social media ban is hurtling towards us. We've talked about it on the podcast before, and there's a bit of an update this week which you found interesting.
Mike Masnick:Yeah, I mean, as we get closer and closer to the social media ban for those under 16, they keep expanding what is included. And, you know, a couple months ago they added YouTube to the list, which I thought was kind of crazy because YouTube for many kids is effectively TV, and I couldn't imagine just like an out and out ban on TV for kids. Like you can have limits and things like that, but an outright ban? But now they've added both Reddit and Kick to the social media ban as well. And it's basically more and more places that kids go are gonna get blocked from anyone under 16 in Australia. And I find this to be somewhat silly. Kick I can somewhat understand, just because there, there have been some discussions about the lack of effective trust and safety efforts on that platform, you could say. Uh, and it is popular among children, and so I sort of understand where that is coming from. Reddit to me just seems like, again, this kind of overkill situation. There is lots of really, really useful information on Reddit, and the nature of Reddit is that you have all these subreddits and sub-communities, and some of them, I imagine, are really useful for many kids for a lot of reasons. And the idea of a blanket ban for all kids from Reddit just seems crazy and just seems like a complete overreaction. You know, there are tools within Reddit, like every subreddit has their own moderation tools, and you have things where they can put restrictions on it if there are concerns. Like, I mean, already, I believe, and I'm not a big Reddit user, I should say, but like, there are forums on Reddit that are not safe for work and that have restrictions, I think it's for over 18. I came across that when we, we were looking at, or I don't even remember if we talked about, the story about Grok generating not safe for work images. And there was a discussion on, on Reddit, and I had to show that I was over 18 just to go to that subreddit. You know, so there are tools and ways to do it, and it should be on a community by community basis rather than doing a blanket ban. And it just, it strikes me as just another of these things where, for years, people have complained about the fact that kids have fewer and fewer places, third spaces, places that they can go that are not school and are not home, where they can spend time with each other and hang out and interact, where they're not being watched by parents or teachers. And it used to be the mall or the park or whatever. But there's been like such a fear of like letting kids be kids that we sort of don't allow that as much anymore. And there have been lots of efforts to try and bring that back, but it just, it hasn't really been coming back. And, and that's why I think so many kids end up going to social media, because it is kind of a third space. And maybe it's not the best. Maybe it would be better if there were more physical spaces that people could go to. And so this just strikes me as another, you know, kids are going to these places because they want to communicate and they want to spend time with each other, and just completely blocking it just seems so silly. Which leads me to a story that I wrote this week, which I think sort of gets included with this, unless you're gonna interrupt here.
Ben Whitelaw:Well, I guess I wanted to kind of just point out the, the kind of difficulties of preventing under-16s' access to Reddit everywhere, because I was thinking about how Reddit is increasingly showing up in Google search results.
Mike Masnick:Yes.
Ben Whitelaw:it ranks really well because of the kind of authority of the information on Reddit. So you don't even have to go kind of halfway down the page on, on a lot of search engine result pages to see a Reddit link or a link to a subreddit. And I, you know, I wonder how Google is also going to manage this process for under-16 users, if this is to be blanket, as I'm sure the eSafety Commission would like.
Mike Masnick:Yeah. Well, I think it's, it's even crazier than that, because like a lot of people now will go to Google and specifically search Reddit because they know, and, and in fact, like, Google added, I was trying to find it really quickly, like they added a feature that basically searches through Reddit if you do the web search, which looks at places where you're likely to get more information, like Reddit, YouTube, Yelp. Whenever I'm in the middle of trying to fix some home appliances, the combination of Reddit and YouTube are basically like the answers to everything. So people now, like, go and add site:reddit.com to their Google searches. And Google actually added this web search thing, which specifically looks on Reddit. And then to block that entirely from kids just seems like such overkill.
Ben Whitelaw:Yeah, there might be like hundreds of thousands of kids who have been like searching for how to do household chores or, you know, fix their bike, that no longer will be able to do so in Australia because of this ban. There might be a kind of plague of rusty bikes,
Mike Masnick:yeah. I mean, I, I'm, I'm assuming it's more like how to build something in Minecraft or, like, things like that, that kids are looking up. But like, there's a lot of really useful information on Reddit, and honestly, like, I actually think Reddit is kind of a useful tool for, you know, something that I talk about a lot, which is sort of the media literacy thing, right? Like learning what is good and bad information. And I find going through Reddit threads to be really useful for that, because you have like a whole discussion on whatever topic you're looking at, and oftentimes it's very useful. Like, we're working on a project which is not announced yet, so I can't even talk about it yet, but I was looking for recommendations for like a vendor that could help us do something. And Reddit was incredible 'cause there were multiple discussions on this, and back and forth, and it's like why this particular vendor is good and why that one is not so good. And it got into like nuances of like why different people are good for different things. And it's incredible how much you can get into nuance here on Ctrl-Alt-Nuance, uh, from like a discussion on Reddit. And it just seems like such a, a weird thing to just say, we're gonna ban it entirely for everyone under 16. I don't understand it.
Ben Whitelaw:Yeah, I mean, this, this is what critics take umbrage with about the law generally, right? You know, which we had talked about this time last year, which has obviously had a year to come into play, and they keep adding platforms, and the justification, I'm sure there is some, but the justification feels a bit weak. You were gonna talk about the kind of workarounds, or, or how, how kids can be clever, uh, in doing this. Like, do you wanna talk about the post you wrote this week?
Mike Masnick:Yeah. So I, I think there are a few different things. So part of it's just like, people always joke about like, kids will find a way, and kids will find a way around almost anything. And so I had just randomly heard a podcast that revealed one example of it, and it reminded me of another one. So I wrote this piece that first talks about a story from 2019, so going way back, about how kids in schools were turning Google Docs into like their own private social media network, which I just
Ben Whitelaw:Nice.
Mike Masnick:Fascinating. And because a lot of schools use Google Docs as, you know, you do all your assignments there, and project announcements are there. And so what the kids were doing was they would take an official document from a teacher, like, here's your assignment, and they were making a duplicate of it, which is something you can do in Google Docs, and then using the comment feature. So if anyone like came and looked over their shoulder, it looked like they were looking at the assignment page or the scoring rubric or whatever it was, and they were just commenting on it, and you can comment on it, no problem. But like, they were using that to sort of hide their communication. And then if, if it ever got to the point where somebody was looking closely, you can just click resolve and it just goes away. And it's just like, oh, these kids are amazing, right? They're like figuring out ways. And so the story that I heard that was new this week, I heard it on an NPR podcast where they were interviewing someone who's like the social media manager for different NPR podcasts, including the TED Radio Hour, and she was saying one of her jobs is to monitor the comments on Spotify, and all of a sudden she was seeing like a rash of comments on a podcast from years ago. She talked about one from 2022. But there were a few of these, and there were all these comments that were like weirdly outta context. They certainly had nothing to do with the actual podcast itself. And there was clearly some other discussion that had occurred beforehand, because a lot of them, they were very kid-speak, and a lot of it was like, you're so pretty. Oh, don't listen to them. Don't believe that, you're so pretty. Like, that was the discussion.
Ben Whitelaw:Right.
Mike Masnick:So she was investigating, and she said that she was talking to some of the other social media managers and they were seeing a similar thing on a bunch of other NPR podcasts: that old podcasts would suddenly, for a day, be flooded with, you know, between 20 and 50 comments. I think there was one that was like 90 comments. And what they were assuming, and it was all sort of kids, it was clearly kids, and they were all clearly not referencing the podcast, but they were talking about something else. And what she believed was that it was probably a group of school kids or a group of friends somewhere who had put together a playlist of podcasts, because they could probably access Spotify for podcasts at school. And NPR podcasts are, you know, that's upstanding. There's nothing wrong with that. You're, you know, you're listening to the TED Radio Hour, you're getting educated and stuff. Nobody's gonna be checking the comments from a random episode from 2022 on leadership, which was like the one that I saw. And, and so they believe that they were putting together a playlist that was basically like, each day, this is where our conversation is. No adult is ever gonna find it. No one is ever going to realize. And it's just this recognition that kids wanna communicate. Like, the reason kids are using these services is because they do wanna communicate and they do wanna share, and they do wanna talk to people and they do wanna experience culture. And if you block them, you know, it's getting back to that discussion I was having about third spaces. They're gonna find the third spaces, and they're not gonna be as well prepared for it, or as well guarded. Or, you know, you don't have trust and safety watching the, the Spotify comments on TED Radio Hour. Like, maybe you have an NPR social media manager who's looking at it, but you're sort of driving them into these other spaces. They're going to find a way. And there's an element of like, that's cool, and it's kind of funny, and it's an interesting cultural moment. But like, wouldn't it be better if we just, like, took the spaces that we had and tried to make them better, rather than just saying, like, outright ban on kids?
Ben Whitelaw:Yeah, I mean, it's, it's really interesting. I mean, I, I love situations like this where a group of users are being innovative and determined and motivated. Can you even imagine what the teacher would've been thinking? Like, little Jimmy's getting genned up on leadership so he can go into the corporate world. Like, poor teacher. And, and so, I love this story. I suppose, like, critics might say, people might say, you know, well, there are apps designed for kids or for teenagers. We've featured some of them. We've talked about some of them. And they don't often do a good job of ensuring that people can communicate in the way that they want to. So it, it begs the question, like, is it actually achievable for young people to communicate in ways that aren't underneath NPR podcasts? It, you know, it is all well and good that this kind of, probably one class in the States, has managed to do this, but it doesn't seem to work at scale, does it?
Mike Masnick:Well, I mean, it depends. Like, this is the nature of community and sort of learning, to some extent. For a while, we had this period of time where it felt like the belief was that there should be one giant community, the one giant chat room that everybody's in. And more and more people realized, like, maybe that's not, not great. And maybe having smaller, targeted, narrower communities is actually better and beneficial. And it's why you have so many chat groups that are just like a small group of people, there are no outsiders in there. Or you have, you know, Discord groups or, or whatever. You have more and more of that showing up, and you have more things that are more community driven. I mean, even Reddit is an example of that, right? I mean, it's all based on building a whole bunch of different communities. And so I think that, that's an element of society reacting to this period of time where we thought everything was like one giant shared chat room, and then this realization, like, maybe, you know, maybe that's good for certain purposes, but we should also have more private, more condensed spaces where communities can grow. And I think that's just a natural thing that happens. People want community. You want to have culture, you want to share it with other people. And so that is going to happen. And this focus on banning, I just feel, is not a good way to do it. But like, I think once you get past the idea of just banning, then you can have a much better conversation about how do we build good, safe places for different communities, and not just for kids, but certainly for kids, but also for, for other people too. Right? I mean, you keep hearing all these stories of, like, the loneliness epidemic. What was the, the book, Bowling Alone, right? Like, there are all these things about loneliness and, and people not communicating with each other at a time when we have tools for all sorts of community and getting together and all this stuff. And yet, I think we've sort of overreacted to the, like, well, there are some bad things that happen, and therefore we have to, like, stomp out the badness, rather than focusing on, like, how do we make the good parts better?
Ben Whitelaw:Yeah. No, it's interesting. I think these kinds of stories bring to light the need for certainly kind of children to have these spaces, and, and how often they can be kind of left out of the conversation about what that looks like, which is probably not what we can get into today. But it is definitely important. I'm gonna move us on now to a platform that the eSafety Commission has listed as one of its nine platforms that are, are not allowed for under-16s as part of the ban in a month's time: X slash Twitter, which is how I always refer to it. I will never refer to it purely as X. Um, this is, I think, an interesting story. I don't think many folks would've heard about this, Mike. Some of our listeners are based in the UK, but you have to be pretty, pretty in the weeds of UK politics to, to be doing what I did today, which is listen to and watch three hours of testimony from X slash Twitter's global head of Global Affairs, a woman called Deanna Ramina Kanko, who I'd never heard of before today's testimony. At the moment there's a, a big inquiry happening in the UK called the Southport Inquiry, in response to a story we talked about on the podcast last year, the UK riots that happened in Southport after a young man, a 17-year-old called Axel Rudakubana, stabbed and killed three young children at a Taylor Swift-themed dance class in Southport, near Liverpool, in July last year. It led to a series of riots after people online kind of misidentified Rudakubana and believed he was somebody else. And that misinformation spiral caused days and days and days of violence and police aggression, and really became a big political story. Listeners from back last year will remember. This inquiry has kind of gone into the detail about why that happened, what were the causes of those riots. And as part of it, they called Kanko for testimony, and that happened earlier this week. For me, having watched it all today, the appearance is kind of like a really good, clear example of, of what free speech and online safety mean to X and Elon Musk. And we've talked about it obviously at length, and we talk about Musk's comments, but she is probably one of the most senior staff members responsible for policy in the company. And so I wanted to kind of unpack a little bit about what she said and get your thoughts. It's worth saying this is a long testimony, so if you get a chance, go and listen. I'm not able to cover all of it. And also to say that this event obviously happened last year, before the Online Safety Act came into force and the child safety duties came into play. So Twitter didn't actually have to do anything in terms of preventing 17-year-olds like Rudakubana from doing anything more than enter a birthday. And that's, that was a big topic of conversation. So a couple of things that stood out for me. First of all, it's the fact that the team at Twitter, we, we know there's been lots of trust and safety cuts. Do you wanna guess, Mike, how many people there are in the public policy team at Twitter right now, for its 600 million global users?
Mike Masnick:I, I'm afraid to guess, and I would, I would put the number probably under five, but.
Ben Whitelaw:I, I was a bit surprised as well. There's 14, 15 people. I was surprised that it wasn't, yeah, three people and their dog. Um, but then, also, there's probably 15 people working for Meta in public policy in London alone, or in Washington alone. So it gives you a sense of, you know, how those cuts have affected parts of the, the company more than others. And so she, she kinda makes that clear at the start. In some of her comments, she outlines X slash Twitter's mission when it comes to speech as well. She talks about the importance of truth seeking and the kind of uncomfortableness of speech sometimes. And she says that free speech is the only way that humanity moves forwards, which is, uh, straight, straight out of an Elon Musk podcast episode, I think. So, so there's, there's a lot of, I guess, posturing about that. She's asked about age verification. The, the kind of interesting point about Rudakubana is that he searched for a video of a bishop who was stabbed in Australia last year. Again, this is a story we talked about, so it brings together a few different threads that we've already discussed. He searched for the video on Twitter. It was a violent stabbing, in which Bishop Mar Mari Emmanuel was attacked. He wasn't killed, but the video remained up for reasons that we'll kind of go onto. And Rudakubana searched for it six minutes before ordering a taxi to go to the event where he stabbed these young children. So the inquiry today was talking about whether Twitter felt responsible, or what it could have done to avoid that happening. And you can imagine that she was kind of distancing herself from that, making it clear that there's no necessarily causal link. Just 'cause it happened doesn't mean that Twitter slash X is responsible. She also mentioned that there were only about 198,000 Twitter users, X users, under 18 on the platform, which again is a small number out of 600 million, and is also not very robust, because users can select any date of birth that they want. But the kind of points she made around age verification and, and the downsides of it, the privacy concerns that can emerge if you get everyone to upload their IDs, was, I thought, a kind of sensible part of the deposition. And what she said less so, was what she said about the Sydney attack, in my opinion. So you remember that Twitter slash X resisted the takedown of the content globally. So the eSafety Commission, Australia's regulator, took X slash Twitter to court about taking this content down globally. Twitter pushed back, in many people's eyes for good reason. The questions were about, you know, do you think this piece of content violates your policy on graphic content and gory content? And there were some lines read out from that policy. If you've seen the video, you'll know that it probably does, in many people's reading of the policy. But she was at great pains to say that actually, you know, it didn't do so: the knife wasn't visible, it wasn't glorifying, there were kind of newsworthy protections. And it goes to show the kind of nuances and the difficulties that come in making calls about whether a piece of content fits a policy. It's worth noting that Meta and Google and TikTok and others did take copies of the video down, obviously for reasons that they felt, 'cause of their policies maybe being slightly different to Twitter's. Anyway, the point is, some sensible things were said about age verification.
I would say some kind of, I would say, swerving of the responsibility was made around maybe why the Bishop Mar Mari Emmanuel attack video was not taken down. And it felt like a bit of a doubling down, really, on that decision, because so much was made at the time. But again, fascinating insight into several stories, Mike, that we've talked about on the podcast over the last year.
Mike Masnick:Yeah. And, and there are a few interesting things to talk about here. So one, you mentioned you watched the entire video and suggested other people could do that. I was unable to watch it, and as of right now, at this moment, I think no one is able to watch it, because you sent me the link before we were recording today, and I went and checked it out, and that video is gone. It is set to private on the account of the Southport Inquiry. We have no idea why. We could speculate, but the video that you got to watch, you got to watch something that is secret and no one else can see. We suspect maybe it'll come back in a redacted state. We're assuming that maybe there was something included in that testimony that shouldn't have been, or whatever. So I could not see it. I heard your report on it, and there is an article in the Independent, which seems to cover most of it, that I did think was very interesting. But I think you're right that this is interesting in that it actually does display the, the real complexities here. And the bishop attack in particular in Australia was a challenging situation, and I understand why many platforms took it down. They say it's a violent attack, and you have, you know, a lot of platforms, especially since the shooting in Christchurch years back, have decided that any of these kinds of attacks they're going to take down. And she made the case that she views that content differently, and in fact that the bishop has said that he wants the video to be out there, and that he forgave the person who stabbed him, and even, apparently, while he was being stabbed, said that he forgave him. And so she was saying that she sort of viewed it as a kind of miracle and inspiring in some way. And what that gets to is how much context really matters. And this is something that I think gets lost in so many of the content moderation debates, where for some people it's an inspiration to an attack, and with Rudakubana, it appears that that was the way that he viewed it, and for somebody else it may be viewed as inspirational in some ways. And we've had this discussion going back a decade and a half, where we talked about the demands to take down terrorist content on YouTube, where the exact same content that was being used and described as terrorist content was also being used by human rights groups to document war crimes and atrocities. And the context matters, but it can be the exact same video that is viewed in different ways. And then there's a question of how do you deal with that from a trust and safety standpoint. There is no easy answer. And so as much as I criticize X, and I think a lot of what they say is self-serving nonsense and not really connected to free speech, and they wrap this sort of cloak of free speech around it, which I think is obnoxious and often wrong, I, I actually do feel she's right about some of this, that this is complex. And it's easy for politicians and certainly the media to come in afterwards and say, like, well, clearly this guy, he watched this video and then he went and he killed kids. The video is the problem. Why didn't you take down the video? It's entirely your fault. And, you know, I, I don't buy that, right? You know, like, this person was obviously very disturbed, wanted a reason to go and do some horrific act. He was going to find a reason. The fact that this video was or was not available on this one platform, I don't think was going to change the situation. And I think it's
a little bit ridiculous to say like, oh, it's somehow this platform's fault because they didn't take down this one video from Australia.
Ben Whitelaw:Yeah, I mean, the only, the only point I'd make there, which I don't think was made very strongly in the testimony by the kind of government legal representative, was the fact that he searched for the video.
Mike Masnick:Yeah.
Ben Whitelaw:I think, I feel like there's something more causal about it. If you've searched for the video, it's not like he just watched it in his feed and then went out and did it.
Mike Masnick:I mean, to me, that almost goes the other way, right? Where it's like, he was trying to psych himself up to do something. So yes, he searched for this video, but like, if he didn't find it, do you think that would've stopped the horrific acts that he did? You know, would he have searched for it somewhere else? Would he have searched for a different video if he was just sort of trying to psych himself up? I mean, I think the argument, if he had just come across it and then that made him go do something, like, that feels a little different, like if that inspired him to do it. But the fact that he searched for it meant he knew about it. He was thinking about it already, and he was already just sort of in that mindset. So I'm not sure that that actually makes the argument that you're saying it makes.
Ben Whitelaw:Yeah. I mean, I think, I mean, it wasn't available anywhere else, is the point. So we don't, yeah, we,
Mike Masnick:really? I mean, I don't believe that. Like, maybe it may have not been available on any major platform, but I'm sure there were, there were other places that you could go to find it. This stuff doesn't disappear, right? You can still find that stuff, and maybe it's a little bit harder, and maybe that sort of friction limits the reach of it. Sure, I'll buy that, and maybe that's good. But, you know, if you have somebody who is determined, and in this case it feels like he was probably determined, he was gonna find that video.
Ben Whitelaw:Yeah. I mean, I, I, I definitely think the friction comes into it, and I dunno what the perfect amount of friction is. But a point that kind of, I guess, backs up what you're saying is actually something, again, that was mentioned very briefly in the conversation with the X staff member, which was: one of the other things that they believe led to the attack, or that Rudakubana accessed in advance of committing this atrocity, was an academic paper that published the full extent of the jihadist kind of creed, or, or, or like manual. And I don't imagine that academic, or wherever that paper is published, being kind of invited to talk at the testimony about why, why was that,
Mike Masnick:Yep.
Ben Whitelaw:why was that handbook included in that academic paper? And so there is this, you know... I appreciate, and I acknowledge, the kind of slight change, the slight difference in approach there, where, you know, we go firmly in on platforms, where we state that they have a greater responsibility maybe than others. I still think that she, she was a little bit unhelpful and didn't necessarily engage with the idea that this was a violent act in the eyes of the majority of people.
Mike Masnick:yeah, I mean, the fact is, like, X itself and Elon Musk and the team that works for him are disingenuous, so that is part of it. And I, I would actually bring that into the next story that we have here, which is that Sky News put out this amazing report where they were sort of curious about what sorts of things the algorithm on X feeds you. And so they set up a bunch of accounts, some of which they set up to follow left wing users, some of which they set up to follow right wing, some of which they set up to follow, you know, non-political users, and then just tracked for a long time, with a lot of data. And it's fairly incredible, and it's a really nicely done article with really great artwork and infographic kind of stuff. But just a really quick summary is that when they did this, if you had a left wing user that only followed left wing users, the algorithm fed them an equal amount of left wing and right wing content, even though they only followed left wing users. Okay. So you're like, okay, that's showing 50-50, that's cool. But then when they did the ones who were right wing and only followed right wing users, then they only saw 14% left wing content. So it was basically almost all right wing content. And then they had the neutral users, who followed neither left nor right wing political partisan users, and they saw, it was like twice as much right wing content. So like 67% right wing content and only 33% left wing content. And they said that barely any of the political content shown was nonpartisan at all. So basically it's really pushing a partisan message. And right now, Elon in particular is heavily involved in UK politics and trying to push certain messaging. So it's pretty incredible, and they're pretty disingenuous, and the whole idea that this is all supporting free speech is kind of nuts. And you also sent me, Elon Musk went on the All-In Podcast, and, and so I, against my better instincts, watched some of that, and watched the sections where he was talking about how he's brought free speech back and how any kind of algorithmic bias is a huge problem. You know, and the idiots on that podcast were cheering him on about how he saved the world from woke and all this stuff, without any inkling or any recognition that they have gone to the extreme in the other direction. And they're free to do so. As I've always said, like, the editorial discretion, they're free to do so. But the claim that they have somehow fixed things and gone neutral is just utter, utter nonsense. And so I think you have to keep that context in mind when you look at X's engagement in what's happening in the UK and this particular inquiry.
Ben Whitelaw:Yeah, I think, I think you're right. I think there's, um, there's a nuance to it, as always. Uh, what, what would, what would our instrument for nuance be, Mike?
Mike Masnick:Oh God, the, the, the accordion of nuance.
Ben Whitelaw:yeah. Probably make quite a kind of sad, sad soft sound. Um, yeah. To go along with the trade-off tuba. Um, yeah, I think there's definitely nuance to this. I, I, I felt that some parts of the testimony were really interesting, and, and I again think it's a, a great topic. We'll probably return to this in the future, as the inquiry wraps up and as we find out more about what Twitter slash X do now that they're being supervised by the Online Safety Act. We'll move on now, Mike, from that, to talk a bit about a couple of other stories in brief terms, as briefly as we can be. Let's start with a story that you picked up about arXiv, and about the kind of AI slop that's not only filling our workplaces now, but also academic journals of note.
Mike Masnick:Yeah, so arXiv, it's spelled a-r-X-i-v, which is impossible to say, but it's pronounced archive. It's put together by my alma mater, Cornell University, as a place for preprint journal articles to be posted. And preprint journal articles are actually really important to scientific research, because most of these things, once they've gone through peer review, go behind paywalls and they're very expensive, unless you're at an academic institution, or even if you're at an academic institution in some cases. So various preprint servers like arXiv have been really, really important. And they have, they claim, not changed their rules, but started enforcing the rules that they had not been enforcing before, because they were getting overwhelmed with AI slop. And so in particular they limited a specific type of article, a review paper, in the computer science field. They had rules against that originally, and they said, but there were like so few of them that had been submitted that they sort of let it through, but now it's just being flooded, and clearly with just absolute nonsense that's purely ChatGPT generated or whatever, and it was becoming too much for their moderation team to deal with. So they have now said, unless you can show that this paper is being presented at an academic conference or is being reviewed by an actual journal, you can't put it up there. So they put, you know, some additional rules. They said, we're just enforcing the rules that we had before. But the thing that struck me about it is, we're talking about, you know, the rise of AI, and that's changing and challenging content moderation, but everybody always thinks about, like, social media. And here we're having it in an academic journal hosting service, which just shows once again, like, how trust and safety and content moderation is everywhere and needs to be. And also, I will go back very briefly to the last story: in that All-In Podcast, Elon Musk totally mocks the idea of trust and safety, and like, the idea that anyone should ever have trust in their title is the dumbest thing ever, and it's all just censorship. And like, no, it's not, and this is a case where it's like, you have rules and you need somebody to enforce the rules. At some point, you know, you might let some stuff go through, but arXiv is not there to be a platform for just AI generated slop, right? They want to be a place where you can find good, useful academic research, preprints before they've gone through peer review, and they got overwhelmed, and it made a mess, and it made it hard to do things. So they started enforcing the rules, and that builds trust. And so yes, you need trust and safety people, like, that's part of the process. And yes, it's okay for them to have trust in their title, because that's what they're building. And so this was, like, you know, it's a smaller example, but I always love the non-major platform stories, because I think they're important to sort of get people to recognize, like, these things actually do matter, and they matter not just for the biggest platforms.
Ben Whitelaw:Yeah, indeed. Yeah. And so good news for your preprints about labor relations, Mike, that you will no doubt... I'm not gonna go into that. You have to listen to last week's podcast to know what that reference is about. Um, we kind of round up on, uh, a duo of stories, uh, Mike, that are about France and about how kind of French authorities are, are cracking down on a couple of different platforms.
Mike Masnick:everything.
Ben Whitelaw:Yeah. And, and what I think is a kind of sign about how the country is thinking about its relationship with big tech. There've been a couple of stories recently. Politico wrote one about how it's kind of differing from its bigger European neighbors, including Germany, about how it relates to these big tech platforms. You know, Germany's thinking more pragmatically about it, the kind of ongoing diplomatic tensions that we have talked about on the podcast, whereas France has been quite gung-ho in terms of kind of pushing back against some of how the platforms work and what they do. And one of the stories that I found, you, you got another, but one of the stories I, I found, was about an investigation that's being opened into TikTok, following a government committee report that suggested users were at risk of suicide as a result of its algorithm. So there is now this kind of broader investigation that's being opened about TikTok's algorithm and compliance with national laws, particularly around suicide. Probably more fascinating, and, and certainly more public, was the story you, you saw about Shein, uh, the big Chinese platform as well?
Mike Masnick:Yeah. And, and Shein, you know, it's like the fashion retailer. Very, very cheap, fast fashion copy stuff. And so they opened a store in Paris, like a, an actual physical store, which I think got a lot of attention. And then at the very same time, you know, the French authorities got really mad, because they claimed that there were, people found on Shein's marketplace, where other people can sell stuff, sex dolls, and apparently they were small and therefore potentially implied children, which is bad, like, just, you know, obviously bad. And so France is threatening to ban Shein entirely, and Shein, I think, said they were shutting down their marketplace feature in France in response to this. And again, it's another example of, like, an alternative platform. And, you know, people will do bad stuff and break the rules on these platforms, and managing that and handling the trust and safety and content moderation aspect of it is really difficult. I'm sure there's a ton of material on there, and the ability to catch something like this is probably difficult. I don't know exactly what their particular rules are and how this fits. Obviously, when you're selling stuff, the marketplace kind of thing is a different sort of challenge than just, like, social media content. All of these are different kinds of challenges, but there's always some effort involved in managing the marketplace and dealing with bad sellers, or problematic sellers, who violate the rules. And I find it a little bit convenient and a little bit, you know... Yes, like, a bad seller slipped through with a bad product. That happens all the time. And immediately jumping to we're-gonna-ban-you felt sort of performative, in a way that France can be performative about the internet. And, you know, to me, part of this is also just, like, France, you, you said, like, yeah, they're taking a different approach to Germany. They've always done this. Like, I remember the fights over copyright stuff, where France was always the worst on this stuff too. And there's some sort of, like, I don't know if it's because they had the Minitel before the internet became big, that they just have this, like, and this is a little bit of stereotyping, but there's a bunch of people in France who seem to really hate the internet and, and are really, really quick to attack the internet, and, and, you know, within the political class. It's just, it's amazing to me how often France is just doing something bad,
Ben Whitelaw:Yeah, interesting. I, I, I hadn't kind of known that backstory, that history. Um, but yeah, I mean, it's definitely a bad time to be a kind of Chinese-owned, uh, platform in France. You know, not only are Shein and TikTok being investigated, but Temu and AliExpress are also being investigated under the kind of marketplace investigation that you talked about. So yeah, we'll probably keep track of that as we go into next year, which will be fascinating. Mike, it's a bad time to be a Chinese platform in France. It's a good time to be a listener of Ctrl-Alt-Speech, 'cause we will now go to our bonus chat with Thorn's Pailes Halai, talking about how platforms are scaling their CSAM detection using Safer's new text classifier, which helps trust and safety teams find conversations that actually violate child sexual abuse policies. I found this a really fascinating chat. I didn't realize that it was possible to actually detect, kind of in words, the kinds of behaviors and interactions that might lead to a grooming or an exploitation event. So really glad to have Pailes on the podcast this week. So welcome, Pailes, to the podcast. It's great to have you here on Ctrl-Alt-Speech. Really nice to see you. Let's dive straight into what is a really fascinating topic. Many folks working in industry will know about Thorn, in part due to its Safer CSAM detection tool, which more than 90 platforms use day in, day out. Can you share a little bit about the nuts and bolts of how it helps platforms address the challenge of moderating CSAM at scale, and what sets it apart from other tools?
Pailes Halai:Yeah, yeah, of course. And thanks for having me on, Ben. Big fan of, of your show and, and the newsletter, so, so really glad to chat to you today. So, as you mentioned, Safer is a scalable solution for, you know, companies to use to identify and remove CSAM at scale. We have both large and small companies using it. So, you know, thinking of a two-person team running thousands of pieces of content a month, to like a hundred-person team running billions of pieces of content a month, it works just as well at either end of the spectrum. So it's a really, really good tool as a starting point, but also, as you know, a really robust solution for identifying and removing CSAM. I'd say the core of the product is really in its known CSAM detection, so that's gonna be via the hash matching service, and also our classifier models. These are machine learning models that are gonna help companies identify previously unknown or novel CSAM. That's really the core offering, right? You know, in our opinion, it's not enough to just look for one or the other. You really need to be looking for both to ensure you're running a, a safe platform, and to keep the viral spread of CSAM down to a minimum. Beyond that, I think in terms of what sets it apart: first of all, you know, at Thorn, this is all we do. We are narrowly, narrowly focused on the issue of child sexual exploitation and child sexual abuse. So that really sets us apart, I think, first and foremost, as an organization. As a tool, I think it's a very unique tool in that so much of what we build is informed by our in-house research team. You know, how many, how many companies can say that they have a world class research team in-house? So we kind of take the trends that they're seeing and, you know, marry them with some kind of broader industry trends that we're seeing in the space, and we really try and develop tools that meet the moment of how children are being abused today, and how they might be abused on different surfaces in the future as well. So, uh, an example of this in practice is, as part of our online grooming research, we've seen a, a pattern of perpetrators moving kids to private chats since as early as 2022, and, and we know probably before that. So we found in that research around two in three minors reported experiences of online contacts inviting them to move from a public chat to a private conversation on a different platform. And we know, for various reasons, you know, this is where the bulk of grooming happens, on these platforms where it's kind of a black box. So taking these insights from our research team, you know, that really helped inform the build of Safer's latest classifier, which is the text classifier. And this differs slightly from our image and video classifiers in that it's text, and it enables users to identify sexually exploitative conversations, whether it's in strings of text or in conversations. And I just got an update this morning, so this might be breaking news, uh, but we've just added, we've just added a, a grooming label to the text classifier. So again, this was entirely informed by user research, our in-house research team, trends that we're seeing. You know, in terms of growing harm types, grooming is top of mind for a lot of trust and safety teams. So the idea with this new label is that it helps identify grooming patterns in conversations. So I'm thinking planning or suggesting offline meetings, encouraging self-harm or emotional dependency, and things of that nature.
so yeah, I think really a combination of us at Thorn as an organization being so narrowly focused on this issue space, in combination with our in-house research team that's deeply, deeply embedded within our product roadmap, really makes Safer such a unique and powerful product, you know, in, in this space.
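To make the two-pronged detection pattern Pailes describes a little more concrete, here is a minimal, purely illustrative Python sketch of the general idea: hash matching against a list of known material, plus a classifier score for novel material. The function names, the hash list, and the threshold are hypothetical placeholders for this sketch and are not Thorn's actual Safer API; real deployments also typically rely on perceptual hashing rather than plain cryptographic hashes, so near-duplicates can still be caught.

```python
# Illustrative sketch only: not Thorn's Safer API. It shows the general two-pronged
# pattern of (1) hash matching against known material and (2) classifying novel content.
import hashlib

KNOWN_HASHES: set[str] = set()  # hypothetical: loaded from a vetted hash list of known material


def exact_hash(data: bytes) -> str:
    # SHA-256 only catches byte-identical re-uploads; production systems typically
    # add perceptual hashing so slightly altered copies still match.
    return hashlib.sha256(data).hexdigest()


def novel_risk_score(data: bytes) -> float:
    # Placeholder for a trained image/video/text classifier returning a 0-1 risk score.
    return 0.0


def review_upload(data: bytes, threshold: float = 0.8) -> str:
    if exact_hash(data) in KNOWN_HASHES:
        return "match-known"      # known material: remove and report
    if novel_risk_score(data) >= threshold:
        return "flag-for-review"  # novel but high-risk: route to human moderators
    return "allow"
```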
Ben Whitelaw:Yeah, really interesting, and, and amazing that there's, you know, such iterative development happening literally as we speak. Um, always good to have the, the latest breaking news on Ctrl-Alt-Speech. You mentioned about the kind of scale, Pailes, of child sexual abuse material, which can be kind of daunting for
Pailes Halai:Mm-hmm.
Ben Whitelaw:Not only users, but trust and safety teams as well. It doesn't take a lot of looking at media headlines to see cases every week of grooming and, and, you know, exploitation. Can you kind of share a bit about what maybe gives you hope for the future, and, and where you see the most meaningful progress happening with this harm?
Pailes Halai:Yeah. I get asked this question quite a lot, actually, especially, you know, working at an organization like Thorn. You know, you're kind of exposed to the worst of the worst, and, and you're right, every day there's a new article, a new harm type, and it just seems like you're swimming upstream all the time, right? So for me, I think the, the first thing that comes to mind is really the people. Whether it's people at Thorn who have dedicated their time to work solely on this issue of child sexual abuse and exploitation, or, you know, the companies that we work with. I'm fortunate to work with a lot of trust and safety teams in industry, and seeing the people that are dedicated to their jobs, and how they approach their day to day, gives you nothing but hope. But then you go to somewhere like TrustCon, you know, you guys are normally kind of the, the curtain call at TrustCon with, with the live version of, of this podcast. Every year this space seems to grow, right? Like, more and more people want to come and join this space, no matter how hard the issue is, and that's really helpful. And thinking beyond the people, I think just innovative solutions. You know, I mentioned the text classifier, I mentioned the grooming label. These innovative solutions really make the biggest dent. You know, you can have all the people working on the issue that you want, but without innovation, without leading edge technology like classifiers and, and machine learning models, you know, we really can't solve this problem. So I think, you know, a combination of great people in the space, inspiring people, and innovation really, really makes me feel hopeful. And even zooming out like a few years and thinking about where Thorn started in launching Safer, you know, since then we've launched, uh, a proprietary hashing algorithm, proprietary classifiers, as I mentioned, the grooming label. We've done a lot of work just in the last few years, and it's really amazing to see that actually making a difference in the ecosystem.
Ben Whitelaw:Yeah, definitely. I mean, it's definitely a topic that has got a lot of people's attention and focus right now, and it's obviously great that Thorn is one of those organizations. As well as developing the kinds of detection tools and technology that you've mentioned, Thorn also talks a lot about the concept of safety by design. Can you talk a bit about what that means to you at Thorn, and what the biggest opportunities are that you see in embedding a safety by design approach, particularly into emerging AI systems?
Pailes Halai:Yeah. Yeah. I think it's a very complex subject, uh, especially in this day and age where everyone's adding the words ethical or responsible to their development. But we at Thorn really just think that safety by design means baking safety into every stage of AI development, or really any product development for that matter. So in practice, that starts with inputs, so cleaning up your training data. We've heard many stories over the last couple of years where there's kind of tarnished training data that then enables the model to spit out even worse generated content. So it starts with clean training data, and Safer can be a really big help with this as well. We have companies that have used Safer, and are today using Safer, to clean up training data to ensure that they're using the cleanest training data sets. Beyond the inputs, you know, you can think about red teaming.
Ben Whitelaw:Mm-hmm.
Pailes Halai:Um, so that's another thing that companies should be looking at, and also something that Thorn offers as a service. And this can be, you know, us testing the model for harms and offering solutions to mitigate those harms. So it's really at every stage; it's not a set-it-and-forget-it type of initiative. You wanna really be aware of model drift, et cetera. So at every stage, you know, we want to be thinking about safety. And thinking of this from an operational perspective, it's just a lot easier to be preventative rather than reactive. We've seen companies trying to put the genie back in the bottle, you know, and it's really, really difficult and almost impossible. So if you focus on ensuring that the model can't produce those harms in the first place, you're gonna have a much easier time preventing the outputs from being harmful. So yeah, I think that's kind of how we think about safety by design, and it's something that we really believe in, but also practice at Thorn as well. In terms of opportunities, I think there's a real chance for companies to differentiate themselves, in my opinion. You go on to Google and type in AI model or chatbot, and there are dozens and dozens of options now. And I truly believe, and I really hope, that users will vote with their money, right? They'll go and choose the model that is safe for them, for their company, for their kids to use. So I think there's a real opportunity for companies to really lean into safety by design principles and build a really safe, responsible model that users are gonna want to keep using, and that users aren't gonna be put off by because it produces adverse material as outputs.
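As a rough illustration of the "clean your inputs first" idea described in this answer, here is a minimal Python sketch that filters a training corpus through a detection step before any model training happens. The `flag_csam` callable is a hypothetical stand-in for whatever detection tooling a team uses; it is not the Safer API.

```python
# Minimal sketch of cleaning a training data set before model training.
# `flag_csam` is a hypothetical detector callback, not a real API.

from typing import Callable, Iterable

def clean_training_set(samples: Iterable[bytes],
                       flag_csam: Callable[[bytes], bool]) -> list[bytes]:
    """Drop any sample the detector flags before it ever reaches training."""
    kept = []
    for sample in samples:
        if flag_csam(sample):
            # In practice flagged material is escalated and reported,
            # not silently dropped; omitted here for brevity.
            continue
        kept.append(sample)
    return kept

# Usage idea: filter the corpus once, up front, rather than trying to
# "put the genie back in the bottle" after a model has learned from it.
# cleaned = clean_training_set(raw_corpus, flag_csam=my_detector)
```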
Ben Whitelaw:Yeah, I certainly hope that too. I mean, more AI companies focusing on safety is not a bad thing, and it's great to see Thorn contributing to that space in the active way that it is. When you fast forward a few years, Pailes, and think about Thorn's impact in the future, in the context of AI and content moderation, what do you think that looks like in real terms? Do you ever envisage a world in which AI is used to defend, and not endanger, kids in the way that it often is nowadays?
Pailes Halai:Yeah, it's a great question. And, you know, I would say AI is being used, and has been used now for a long time, to mitigate these harms. So, for example, I mentioned Safer was launched around five years ago, coming on six years now, and we've had our image and video classifier for almost as long. And that's AI, right? AI takes on such a different meaning now with LLMs and chatbots. We are firm believers in embracing the fact that whenever there is a new technology and a new development, whether it's being able to stream and share high quality videos or AI-generated CSAM, these technologies are always gonna be used for bad, because the technology is agnostic, but there are always bad actors trying to exploit any angle. So we embrace that at Thorn, and we try to lean into that and build solutions using the same technology. And as I mentioned, the image and video classifiers that we have are machine learning models that are able to predict whether something is previously unknown CSAM or not. The latest product from Safer, the text classifier, is a language model that's able to detect sexually exploitative and abusive behavior, including grooming, within text. So it's really about, I think, understanding that tech is gonna be used for bad, and it's our job to try and stay ahead of that curve and address it in the best way possible. So I think the moral really is to embrace that it's always gonna be used for bad; that's never gonna cease to be the case. We are there to ensure that the technology is also there to be used for good and to defend kids. Yeah.
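To illustrate the kind of conversation-level text classification described here, below is a minimal Python sketch that scores each message against a small set of grooming-style labels and flags the conversation if any label crosses a threshold. The label names and the scoring stub are illustrative assumptions, not Thorn's actual model, labels, or API.

```python
# Minimal sketch of flagging grooming-style patterns in a conversation.
# Labels, thresholds, and the scoring stub are placeholders for illustration.

from dataclasses import dataclass

GROOMING_LABELS = {
    "suggests_offline_meeting",
    "moves_to_private_chat",
    "encourages_secrecy_or_dependency",
}

@dataclass
class Message:
    sender: str
    text: str

def score_message(text: str) -> dict[str, float]:
    # Placeholder for a language-model classifier returning a score per label.
    return {label: 0.0 for label in GROOMING_LABELS}

def flag_conversation(messages: list[Message], threshold: float = 0.8) -> bool:
    """Flag the whole conversation for review if any message trips a label."""
    for msg in messages:
        scores = score_message(msg.text)
        if any(score >= threshold for score in scores.values()):
            return True
    return False
```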
Ben Whitelaw:Brilliant. A good note to end on, I'd say. Thanks for taking the time to talk to us today about the work Thorn's doing, to give us a bit of context on your approach and a bit of a look under the hood as to how you're thinking about CSAM and preventing it at scale. So appreciate your time today.
Pailes Halai:Yeah, no worries, and thank you for the time.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.