 
  Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Chat Bot Your Tongue?
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Character.AI is banning minors from AI character chats (Financial Times)
- Strengthening ChatGPT’s responses in sensitive conversations (OpenAI)
- Senators propose banning teens from using AI chatbots (The Verge)
- EU accuses Meta, TikTok of breaching digital rules (Politico)
- Meta and TikTok are obstructing researchers’ access to data, European Commission rules (Science.org)
- Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve (Techdirt)
- China’s new law: only degree-holding influencers can discuss professional topics – netizens divided on its impact (IOL)
- Wizz is like ‘Tinder for kids,’ as teens use the app to hook up while adult predators lurk (NY Post)
This episode is brought to you by our sponsor WebPurify, an IntouchCX company. IntouchCX is a global leader in digital customer experience management, back office processing, trust and safety, and AI services.
WebPurify has just launched their very first podcast series, Trust Issues - Insights from the People Who Keep the Internet Safe, and Mike and Ben are fans. Listen to all three episodes on Spotify and watch on YouTube.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
So, Mike, last week, you surprised us all when you demonstrated a real aptitude for the youth lingo.
Mike Masnick:Oh, I'm, I'm, I'm an expert on this.
Ben Whitelaw:You, you're really down with the kids. Um, so last week I asked you to spill the tea. You understood what that meant. And so this week I'm using you as a gauge for another youth-targeted app called Wizz. Okay? It's in the headlines this week for reasons that we'll touch on briefly in the introduction. But the Wizz app gets users on board by saying you can pop into online profiles and chat Wizz new people.
Mike Masnick:Very clever.
Ben Whitelaw:That's not a typo: chat with new people. So, so let's start this week's episode by asking what would you pop into your online profile, and who would you chat Wizz?
Mike Masnick:Well, I'll say I, I attended a very nice event yesterday in San Francisco, where I was on a panel talking about AI and safety related things, which I think will be a big part of today's discussion. But I got to chat with a bunch of very interesting people at the event, and it was a really, really nice time talking about AI safety related issues. So that was my chat with new people. How about you? What, uh, what profile would you pop into, or who would you chat Wizz?
Ben Whitelaw:I would chat with some researchers in Europe who have gone to great lengths to solicit information from platforms under the DSA and have been squashed and kiboshed at every level, it seems. We're gonna hear a bit about that today, but yeah, sparing a thought for those European researchers trying to get valuable data out of platforms to do important research. I'm here for you. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's October the 30th, 2025, and this week's episode is brought to you by WebPurify, an IntouchCX company. IntouchCX is a global leader in digital customer experience management, back office processing, trust and safety and AI services. And in fact, WebPurify have just launched their very first podcast, Trust Issues, the brilliantly named Trust Issues, which brings you insights from the people who keep the internet safe. And here at Ctrl-Alt-Speech, we are fans. It focuses on digital trust and safety, content moderation, AI, customer experience, lots of the stuff that we talk about here on the podcast each week.
Mike Masnick:And it's a podcast,
Ben Whitelaw:and it's a podcast, who doesn't love podcasts, right? But the difference with, with this podcast, with Trust Issues, is that it delivers that interesting information in candid round table style discussions with industry innovators and leaders, hosted by a brilliant host, Adish Daley, who previously worked in trust and safety at TikTok, and a little known fact about her: she practised as a barrister at law. So she's, she's qualified in a number of different ways, and, uh, there are three episodes of Trust Issues out already. My favorite is with Dr. Tracy Elizabeth, who's the former head of Family Safety and Developmental Health at TikTok. She's also worked at Netflix previously, before that as well. And she, in the latest episode, talks really interestingly about a story that we talked about last week, Mike: Instagram's move to a PG-13 content classification, and the importance, which we didn't really talk about, of well-trained human moderators in making sure that that content is classified consistently over time, which was a completely new angle for me. Did you have a favorite episode
Mike Masnick:Yeah,
Ben Whitelaw:of the ones that have been published yet?
Mike Masnick:yeah. I mean, I, I listened to all three, and I think they're really good. The one that stood out to me, and I think will be somewhat for obvious reasons, was the second episode, which was with Abba Roy from Google, and it's because he helped create a game called Be Scam Ready. And this is a trust and safety style game, which is something that I am deeply familiar with, having created a few of my own. So I, I hadn't heard of it before, but it's sort of teaching people how to recognize potential scams and whether or not they might be, you know, targeted by a scam. And it's a really interesting tool because, you know, as Google has found and lots of other people have found, the more you educate people to look out for this stuff on their own, to be kind of media literate, the more likely they are to recognize the scam early and perhaps stop it. And so, uh, they talk about the thinking behind it and the game and how it works. And I thought it was absolutely fascinating for both the game designer element and the trust and safety element, and the combination of trust and safety and games. That is a very small community of people. So.
Ben Whitelaw:High praise. High praise indeed. The other episode that is out already is with a name many people would know, Cecilie Fjellhøy, I hope I pronounced that right, who is the kind of main character in the Netflix series The Tinder Swindler. There are some great guests coming up from Roblox and from Bluesky. So really, as our partner for today's episode, you know, if you're a trust and safety specialist, if you're a community experience leader, somebody thinking about AI or just broadly interested in tech, go and have a listen to Trust Issues, the new podcast from WebPurify, an IntouchCX company, available on Spotify and YouTube. Great to have them as part of today's show. This week we're talking about the pushback to AI chatbots, the EU going after two big platforms under the DSA, and whether Mike's undergraduate degree would get him enough credit to be a Chinese influencer.
Mike Masnick:What a lead in.
Ben Whitelaw:more on that soon. Um, I think we, we've got a lot to get through, Mike. This has been a, it's been a real grind to get through all of the stories so we could figure out what we're gonna talk about. I think we had maybe 25 stories to sift through today.
Mike Masnick:Yeah, it was a, it was a big one.
Ben Whitelaw:The most in a long while. So I think we'll dive straight in; there's no time like the present. We're gonna start by talking about a series of stories about the pushback, I'd say, or the course correction, to what has been a longstanding narrative that we've talked about on Ctrl-Alt-Speech, which is the concerns about user safety on generative AI tools like ChatGPT and other such tools. We know that there is a particular stream of concern around mental health, and in particular the mental health of young people. And we've talked on the podcast about the legal cases that have been brought against OpenAI, Google and Character.AI as some of the big companies, for deaths related to the usage of these tools by particularly young people, people in their teens. And so there's a series of stories this week that I think kind of bring together, I would say, the demand for the tools, particularly, you know, as AI companions in a world that is increasingly complex and difficult to navigate, and the supply, really, of that: the challenges that we're seeing on the supply side, and how companies and platforms and politicians are thinking about regulating that supply into what I guess is an increasingly large and lucrative market for these services. Where did you want to kinda start? What's the beginning of this thread that we're going to pull in today's episode?
Mike Masnick:I think there's a lot of things, but I think, I think maybe we start with, uh, Character.AI, and their decision. You know, they were sort of one of the first ones that people called out, as, you know, concerns about it, and there was the lawsuit that was early on. They got a lot of attention and concerns about the nature of companion chatbots and their potential impact on mental health. And the big announcement this week from Character.AI being that they were banning those under 18 from having longer conversations with their chatbots. And I, and I thought it was kind of interesting even in the way that they're implementing this, which is they're effectively weaning teenagers off, where it is basically, I think it's starting with a limit of two hours a day that you can chat with someone, and it'll get less and less. And then sometime, in about a month, they will block those chats entirely. I am not a user of Character.AI, so I don't fully understand some of the details here. They are saying that teenagers will still be able to use the app for, like, some element of generative AI, generating videos or something of that nature, but not having sort of ongoing chats with characters anymore. And there is this element of, you know, I've seen some people sort of completely celebrating this. And obviously with that, when you're banning teenagers, there's some element of age verification that's going on, because you have to determine who is a teenager and who is not; that's the other half of it. And it'll be interesting to see how this plays out. I feel that that company was probably effectively forced into making that decision because of just the constant legal threats against it and the ongoing lawsuit and all of this discussion. You know, how do you respond to that? And it probably got to the point where it was easiest for them to say, okay, we're just gonna cut off kids. Whether or not that makes a difference is a whole other question. And I still have this general problem with the idea that the answer to any of these issues is an outright ban. Do we think that if we ban people who are 16 and 17 from using a companion chatbot in any sort of form, that suddenly, you know, when they turn 18, suddenly they have the skills and the ability to use it appropriately? I find that hard to believe. And so I understand why Character.AI sort of feels pressured into this, but I find it less compelling as a solution.
Ben Whitelaw:Yeah. This is surprising, I think, because of the immediacy of this change, right? Very, very often you have a platform say, okay, we've taken on board research and the kind of media reaction to this, and we're gonna do something in a month or two months down the line. The change that you've explained there, the decrease to two hours, is immediate, and then the weaning takes place over the course of a pretty short period of time, of like a month or something. So something has happened here where, you're right, there's been some sort of advice, legal perhaps, or a change in kind of, you know, strategy; something internal has happened where they've thought this is the least worst option for us as a company. And that's, I think, what many people have been kind of calling for in some regards. You know, in the Verge story that we'll include in today's show notes, there's lots of, you know, nonprofits, people who've been asked for their comment on this story, and they're saying, this is what we've been asking for for a while. It backs up research that was done earlier this year by Common Sense Media, which tested a lot of the AI models and found that there were issues with the types of responses that under-18 users were getting. And so that research has been out there a while. It's taken something else, another force, another factor, to cause them to change that now.
Mike Masnick:Yeah, and I'm sure, I'm sure legal liability is a big part of it. And, and also sort of the constant media bashing of this. And it's, you know, there's a different story every week, and it's not always about this one particular company, but I think at some point lots of companies come to the point where they're just like, screw this. You know, like, we're just getting totally beat up on this. And I'm sure they have some sort of internal models in terms of how many of our users do we think are of that age, and if we're cutting them off from the service, how much of an impact is that on our bottom line, as opposed to getting past this media story and focusing on the larger audience of people over the age of 18.
Ben Whitelaw:Yeah, because there clearly, there clearly is an opportunity, which we'll come on to shortly. Before we do, just to talk a bit about that percentage of users that they believe is under 18: the CEO of Character.AI, Karandeep Anand, spoke to The Verge and said he thinks it's around 10% of the user base that are under 18, which, I was surprised by how small that was, frankly. But without kind of age assurance or age verification, as you say, it's impossible to know. We like to always kind of analyze the responses of platform CEOs when they're talking about safety, and I thought he, in that piece, gave a couple of quite smart, on the money comments about how the risk can never be zero, and also, you know, the fact that these are big numbers that they're talking about, and there's always gonna be an element of people who are suffering from some sort of, you know, mental health crisis when they come onto the platform, which is something that often comes up in these discussions about to what extent is this the responsibility of platforms to manage that usage. On the number of users, is 10% kind of around the number you thought? Where did you think a service like Character.AI, where you can build your own, uh, character, and a lot of people are using it to kind of conjure up characters from books and films and TV, would land? I would've expected it to be higher.
Mike Masnick:Yeah. I mean, if, if you would, again, like I don't have any direct insight into it, so I don't know, but if you had just asked off the top of my head, I would've guessed it would've been higher, but not, not ridiculously higher; I would've probably guessed more around the 20% range. You know, I do think that there are plenty of adults who are using these tools also, and I think the way it's talked about in the media is almost as if it's only kids who are using these kinds of tools. And I think that's clearly wrong. I think there's plenty of evidence that lots of adults find these tools useful as well. Which then gets me back to the point, and we'll talk about this in some other context as well, but like, if we're in this world where so many people do find these tools useful in some form or another, in some cases, and this is the part that gets people scared, it's like, are you using it for therapeutic reasons? Which I think a lot of people are, and that is beyond just the idea of like using it as a therapist, but being able to converse with someone, or in this case, I guess, something, is therapeutic for many people. Like if you don't have someone to talk to about issues, just being able to talk can be very helpful, in, you know, varying degrees, and obviously it depends on lots of context. And so to me there is this element of: if that is going to be a part of many people's lives, wouldn't we be better set up if we were teaching people how to use it wisely and use it safely and use it appropriately? And this rush to jump in and just say, well, you know, the only way to deal with this is to ban it entirely, again, like, you know, and we've talked about this in other contexts, but it's just like the prohibition, the 'just say no' approach to things. And that never actually seems to work. You know, Prohibition with alcohol didn't work, saying 'just say no' to drugs didn't work, abstinence education didn't work; none of those things actually seem to work. They make adults feel better because they're not taking on the real issues with children. It allows adults to be like, well, you know, I've protected the kids. And I don't think it does a really good job of actually getting kids to understand what is appropriate when, or how to do things safely. And so, you know, it has all the hallmarks of a typical moral panic, and I totally understand why Character.AI did this, and I don't fault them for making the decision that they did. But I think it's important to call out, like, we've seen this play out in other contexts before, and it seems a little silly to me to think that banning it is the only and best option.
Ben Whitelaw:yeah. I, I was talking about this with a friend and we were saying that what's needed in a way is, an equivalent of having a glass of wine at home with your parents.
Mike Masnick:right?
Ben Whitelaw:You know, there's this idea that, you know, before you turn 18 in the UK at least, I'm not sure if this kind of happens, if there's a social norm for this in the US, but certainly there's this idea in the UK and probably in Europe as well, more broadly, that if you have a glass of wine or a beer with dinner with your parents, you normalize alcohol in a certain setting, so that when you turn 18, you don't go out and drink three liters of vodka and fall under the table. And so what we're lacking is almost that for the internet. The means by which parents control the usage of the internet is kind of via a control panel that only they have access to, and it's not a socially mediated interaction, I guess. They have control, and they have to be very aware of the kind of harms of the internet to be able to talk to a young person about it in a way that is much like going for dinner and saying, here's what a glass of wine looks like and here's how you drink it with dinner. So do you think there's scope for that in other ways? And how, how do you think we get there?
Mike Masnick:Yeah. And I think, you know, and we have talked about this in other contexts and I've certainly written about this and talked about it in the past, in other contexts as well, but you know, with the internet, I think it is important for parents to have a role in terms of teaching kids appropriate use, which is more than just banning it, and talking about, like, which apps are you using and why, and how to think about it. And also importantly, to make sure that kids have someone they can talk to about these things that they feel comfortable talking to. And in some cases, in many cases, it's like making sure that kids are comfortable going to their parents and saying, I made a mistake, or something bad has happened. And if it's not a parent, then having someone else who is a trusted adult in their life that they feel comfortable with. And we've talked about this generally with some of the, uh, you know, extortion scams, the sextortion scams, where what they're really relying on are kids who don't have anyone to go to, who feel trapped and have no escape. And that's where the worst happens. And so I think there are ways to do this. You know, Candice Odgers, who's a researcher who we've mentioned before and I've interviewed before, has talked a lot about how her kids are on Instagram and they spend time just, like, going through Instagram together and saying, like, this is the kind of content that's here, here's how to think about this stuff. And not saying, like, you're always going to do it together, but to sort of walk through: this is the way this app works. And even saying, like, here are things that it might try to do to manipulate you or pull you in, or try and get you to purchase stuff, or whatever it might be. Just have that discussion and then allow kids to go off and spend time on their own. You don't want to be, you know, the helicopter parent, uh, just hovering over everything that they do. You know, especially as kids get older, there are different levels at different ages, but understanding how to do that and teaching them, like, you know, there are risks here, but we're giving you the tools on how to use it appropriately. And we expect that sometimes you will make mistakes, because everybody makes mistakes. And I think that is the sort of 'have a glass of wine with dinner with your parents' kind of situation, where you can give people examples, and you can give people examples of good behavior yourself. You know, when you are using these apps, don't use these apps all the time, right? Like, if you're a parent and you're at dinner and you're just scrolling through TikTok yourself, probably not a great example, you know. But sort of understanding how to have that conversation in a way that is introducing them not just to the apps and how to use them, but also the risks involved, with the explanation that, like, everyone makes mistakes. You may make mistakes, but trying to recognize it, trying to figure out how do you come back from those things, is really important. And I think that you can do that with the chat app or chatbot kind of structures as well. But, you know, it's something that we'll have to learn how to do appropriately.
Ben Whitelaw:Yeah, certainly. So this is an interesting story, I think, in a broader suite of stories around course correcting this narrative of chatbots being dangerous for mental health. And so Character.AI have taken the approach, as you say, of changing their policies for underage users. OpenAI have done something slightly different.
Mike Masnick:Just slightly.
Ben Whitelaw:Yeah. Just, you know, rather than restrict the usage for a particular group of, you know, potentially vulnerable users, they've actually gone to explain the work that they've done with a whole suite of mental health and wellbeing experts. And they've produced a very detailed, very kind of transparent piece about how they have improved their default model in a number of different ways to cater for these harms that we're talking about. And this is like a new approach, I would say, for OpenAI, in terms of kind of defining the narrative about how harmful these tools are, would you say?
Mike Masnick:Yeah. Yeah, I mean, I thought it was interesting. Obviously, when a big AI company is coming out with something like this, it's been gone over very, very carefully by lawyers and marketers and everything. But this struck me as, like, there are a lot of indications in here of a company that is just taking this seriously, and that is thinking through, like, we know that there are concerns about mental health and chatbots, and we want to explore those. You know, how can we be a good partner in this space? And that's not saying necessarily that OpenAI is a good partner, but they are taking steps that suggest that they are actually thinking through, like, how do we minimize the harm? And there is an admission here, and I think that is important, and you'd mentioned this a little bit before with the Character.AI stuff as well, which is, like, when you have a tool that is used by this many people, there are some people who are going to be using it who are having mental health issues that they're facing. And then there's a question of, well, one, how much is the tool exacerbating that, or is it helping, or is it doing nothing? Did it cause it? Is it making things go to another level? Like, there are all of these different questions involved, and the answers to them are going to be context specific and are going to be different per individual, per particular conversation. And this is where we get back to, like, the full ban on under-18s being such a blunt instrument for a situation that has so much nuance between the specific use case and the specific users in question. And how well all this actually works, I think, remains an open question. But you're always going to find some cases, and these are the cases that become huge New York Times stories of: this person had a mental health episode, they were using ChatGPT, and it said something bad to them. Those stories are going to exist. There's no way to make that go away. There's always going to be a case of that, and there's always going to be some situations where that sort of thing might occur. But I think given all that, and the fact that OpenAI has decided that they do still want ChatGPT to be able to have these kinds of conversations, they've taken a bunch of steps here to say, how can we maximize the better results and minimize the really problematic results? And so this struck me as a really, really interesting and very transparent and open approach to how do we deal with these things. And they mention here, you know, that the cases of suicidal ideation or things of that nature are very, very rare, which is good, in that, you know, you don't want to see a lot of that and you don't wanna see it going up. But that also means that they don't have as much data and history in terms of identifying it. A lot of these things, training these models, it depends on how much data you have, and if they don't have as many examples of it, and examples of the different ways in which it shows up, that makes it harder and harder to actually do the detection as well. But it sounds like they're really working on it.
Ben Whitelaw:Yeah, I think, I think you're right. And again, this probably most likely has been caused by the lawsuits that they have faced. But the kind of bringing together of, they say, 170 experts in mental health and wellbeing, and using them to kind of figure out what are the most common issues when it comes to psychosis, mania, self-harm, suicide, and then addressing those specifically within this model, they say has caused improvements in what they call compliance with desired behavior, so having the model react in the way that it is best thought to in kind of 65% to 80% of cases. So big jumps in the way that, you know, the model was reacting to the kinds of questions that, like you say, end up in the death of people and the reporting of that by a major paper, which obviously no one wants; in any case, that's when the thing has gone too far. The thing that stands out for me is the depth and the detail of this piece. It's clearly designed to show the workings of the work that they've been doing, to say: we are working on this, here are the numbers, we can't be accused, or you have to go some way to accuse us, of not caring about this or putting, uh, resources into this. Like you say, you know, gotta take that with a pinch of salt, but it's detailed and it's data driven, and it leaves you with a thought that this is a company that is thinking carefully about this.
Mike Masnick:Yeah. The thing I'll add to it is I am, in my head at least, somewhat comparing this to the situation that Meta faced a few years ago when Frances Haugen leaked these documents from internal discussions within Meta about, you know, specifically the one that keeps making the headlines is the discovery that in some cases Instagram was making young girls feel worse about themselves in relation to body image. And there were all of these headlines for months, and you still hear it, and you still hear it in discussions among policymakers: oh, Facebook or Instagram knew that they were making girls feel worse about themselves and they did nothing about it. And it's that latter part of that sentence that was the concern, because this was internal research and the response to it is not public. And if you actually look at the research, which was first leaked mainly by Frances Haugen, but then Meta itself revealed more of the details of that research, what you saw was that it was actually an example of the company taking those issues seriously, doing the research to sort of try and find out how are people reacting to Instagram in different situations. And they looked at a number of different categories and saw that, you know, in most of them, it was not doing harm. But in this one case it was, and they called it out in an internal presentation. They call it out because they're trying to respond to it. But because the narrative got set by the media rather than themselves, the story that is still the belief today is, like, oh, Facebook didn't care about this, Instagram didn't care about this. So I look at this as sort of, like, whoever put this together has thought through that. I still think that this will be used against OpenAI. I still think that no good deed goes unpunished in this kind of thing. People are gonna look at it and say, oh, you know, like, from this research, you know that there are mental health problems that people are having.
Ben Whitelaw:Well, and I've seen reporting of this kind of blog post framed as: there are millions of people who are using ChatGPT to ask about suicide. And that's because, obviously, there's a very low prevalence, as you mentioned, nought point nought one something, but because of the large user base. It's the same old thing that social media platforms face: because you have a large user base, that equates to a large number of people. So the criticism, potentially valid, is that there are millions of people who are coming to ChatGPT with this issue; how can we ensure that those people are getting the right responses and the right information and the right interventions? And I think there have been some improvements with the way that people have been directed to suicide prevention hotlines and other services that are available in their countries. That's as a result of, I guess, research and pressure and media coverage. And I think this is also a similar thing, where the media coverage, while not always spot on, is kind of forcing OpenAI to be more open and transparent without there being any regulation that forces them to do so. Uh, that's at least how I see it.
Mike Masnick:Yeah, I think that's valid, right? I mean, I think the underlying point that you're making is that the scale here matters, and the scale is the thing that is very, very difficult for people to wrap their heads around. Right? One case out of a billion users is still bad, but, you know, that's an exaggeration, right? It's, you know, more than that, but the scale actually matters. And scale is something that, you know, at the kind of scale that we're talking about, is impossible for humans to wrap our heads around. Like, you know, most people can't even comprehend the difference between a million and a billion, and it's a really big difference, right? And so all of that matters, and yes, even if the media is getting these stories wrong, and even if politicians are getting these stories wrong, within a certain limitation, because politicians can actually cause regulatory changes that could have real impact, I think all of that is pushing companies to think through these things. I think, you know, you probably wouldn't see this kind of explanation and detail and reaction from OpenAI without that kind of pressure. So I'm not against that, though, you know, I still worry when it presents an inaccurate picture and creates problems. And, like, to bring up the Instagram research, Frances Haugen stuff again, I certainly heard from companies after that came out that other companies in the space were scaling back their internal research programs, not because they didn't want to do that kind of work and determine what is healthy and what is not, but because they were afraid that it would be represented out of context. And if we do the research, then it can be presented as, oh, you knew that your app was a problem. Whereas if you don't do the research, then you have the plausible deniability and you can say, well, we didn't know. And that's not great either. And so it's interesting to me to see OpenAI sort of buck that a little bit and say, like, yeah, we are doing the research, and yes, maybe it's because of the lawsuits, maybe because we're getting pressure on all sides about this stuff, but we're trying to actually do something, and here are the results of what we've done, and sort of trying to, in some ways, get ahead of that story where they can.
Ben Whitelaw:Yeah, yeah, it's true. And, you know, there is no age authentication or age assurance on ChatGPT yet, I believe. I think there is a safe version; I'm not entirely sure how you get served that. But, you know, you mentioned lawmakers seeking to kind of weigh in on the discussion around chatbots, and this kind of brings us to the third potential course correction. We have Character.AI limiting the service to under-18s. We have OpenAI being more transparent about what they're doing and how they're trying to address the issue. And then you have senators who are seeking to bring their, you know, political power to bear, let's say.
Mike Masnick:Yeah, yeah. You know, I mean, there are so many different bills and so many different regulations being proposed, but this week we saw that Senators Josh Hawley and Richard Blumenthal have introduced something called the GUARD Act, which is an attempt to ban everyone under 18 from using any kind of chatbot. It has some other features too. My general rule of thumb is that you shouldn't trust any internet related bill from Josh Hawley or from Richard Blumenthal, and if the two of them together are on it, you should probably burn it with fire. Um.
Ben Whitelaw:You know, you know, you know, when I think of Hawley and Blumenthal, I think of Pinky and the Brain. That's what
Mike Masnick:Well, which one's the Brain here,
Ben Whitelaw:It's true, it's true. I, I think it's mainly just the forehead of Richard Blumenthal that I think about, whether it has a brain or not. Um,
Mike Masnick:trying to take over the world. Huh?
Ben Whitelaw:kind of, yeah, kind of,
Mike Masnick:Um, you know, this, this bill's bad. I mean, look, you can read it however you want, but it potentially bans search engines for people under 18 if read as broadly as it could be read. And that seems like just bad drafting. And when you're creating a law around important internet related things, and it could be read in a way that would ban search engines, that's probably bad. You know, these two are not very careful about their bill drafting in lots of ways, and they're both just, you know, they would be happy to sort of ban the internet, I think. And so they don't really care about the collateral damage that they are doing in this process. But I do think it is another reaction to the general reporting that, oh, these things are bad and kids are definitely being harmed by these tools. And I don't think it takes into account any of the nuance that we've been trying to discuss, in that, you know, I think actually for some kids these tools can be really helpful, and the idea that they're just inherently bad, I think, leads to problematic outcomes like a bill like this one.
Ben Whitelaw:Yeah. It's striking, and I guess that there is, in so many of these cases, very little input from children about how the services can be useful or what they want from these services. And we saw, probably too late, social media companies create youth councils and advisory bodies where, you know, under-18s and teens would come on board to, I guess, provide their opinion about how they were using Facebook groups or TikTok or Snapchat, and that input across the business probably has some sort of benefit. We're not entirely clear how they work, but to have them heard I think is the right thing. And I wonder if that's something that, you know, OpenAI and other companies will start to do more of
Mike Masnick:Yeah, and we, we've definitely seen that in the social media space. And also now, more recently, parent councils too, which I think are a really great idea, for understanding the challenges that parents have regarding how their kids are using these apps, because I think that is, like, a really interesting and underexplored area as well. And again, like, when you put together these kinds of councils, they're not necessarily going to be perfect and they're not going to be experts, but they can begin to express their own concerns, and that gives a picture for the company to think about: you know, have we considered how this will impact the kids that we speak to, or the parents that we speak to, to get a sense of it? I've seen some panels, discussions; like, there was a conference around kid safety on the internet last year that I attended that led off with two panels of, like, middle school and high school aged kids talking about their experiences, which was absolutely fascinating. Part of the recognition was that it was really useful to learn about the struggles that they had and where they found tools useful. Their policy recommendations struck me as fairly naive, but that's, you know, they're not policy experts. But understanding their lived experience with these tools I think was actually really valuable to then think through the actual policy implications of, you know, how do you deal with this? And so, yes, I think it would be good to see more of that kind of thing in the AI space as well.
Ben Whitelaw:Yeah. And all of these stories together, Mike, made me think a little bit about your content moderation learning curve that you wrote about, um, in regards to Elon when he took over X slash Twitter. It feels like there's kind of another learning curve that's being speedrun here, where Sam Altman and these AI companies are having to really get to grips very quickly with the different kinds of incentives and forces that come with creating policies and adhering to regulations. Are you seeing kinda similar characteristics between the one for social media that we've seen over the last 20 years and this one?
Mike Masnick:Yeah, absolutely. I mean, I think it's a direct parallel, as your colleague Alice Hunsberger has written about as well, so we'll give you, give you the plug there. Um, yeah, I mean, I think there's definite parallels, right? And obviously a lot of the safety people at the AI companies came from the social media world, and they lived that learning curve. And so it's not a surprise to see them doing the same thing in the AI space. Now, there are a few things that are different, but there's one that I think is important that relates to the story that we were just talking about with the Hawley-Blumenthal bill, which is the policymakers' involvement in all of this. In the social media space, as people went through the learning curve, while there was some discussion of policy stuff, it wasn't as prevalent and it wasn't as focused. So there is this element now with AI and AI safety where I think a lot of the policymakers and regulators feel that they made mistakes, that they missed out on the social media situation and that they messed that up. And so here they're overcorrecting, and they have decided that we're not going to make the same mistake again.
Ben Whitelaw:Hmm.
Mike Masnick:we screwed up, we didn't regulate social media properly, and therefore we have to be much more aggressive with AI tools and chatbots to make sure that no harm happens, as opposed to social media, which we now think was, like, hugely harmful, which is very much disputed, obviously. And so I think that element changes the learning curve a little bit, because the thing that was important to me when I wrote that original learning curve piece, the Elon Musk one, was how much of it was driven not by regulations, not by legal threats. So some of it is, but most of it is really driven by the market itself and the users and the partners and the advertisers, and the fact that, like, people don't like it when your platform fills up with abuse and harassment and spam and all this kind of stuff. There are these natural factors pushing you in this direction. And this comes up in all different contexts, but there is this belief out there among some, which is frustrating to me, that the only incentives are the legal incentives, and that if there is no law in place, no company will make their platform safer. And that's just not true. And part of the learning curve is the fact that, like, trust and safety did not come about because of legal mandates, right? Trust and safety came about because people at platforms were like, oh man, that's bad, we should be better. We shouldn't allow that kind of bad thing to happen on our platform. It's driving users away, it's driving advertisers away, it's driving partners away, all this kind of stuff. That's where much of the learning curve comes from. But in this context, in the AI context, the regulatory situation is much more pronounced, I think, than last time around.
Ben Whitelaw:Much stronger. Interesting. Okay. So we have platforms who are self-regulating, in the form of Character.AI; we have platforms such as OpenAI who are focusing on radical transparency as a way of mitigating this idea that they don't care about users; and then we'll probably have some platforms, Mike, who are going to lobby against, or for a better version of, the legislation that Hawley and Blumenthal and others around the world, I'm sure, will be cooking up as we speak. So, yeah, lots of different ways of course correcting against this AI harm narrative that we are seeing week in, week out. The topic of regulation is a good one to segue us into our next story. We saw last week, just after we stopped recording, an announcement from the European Commission about, uh, an investigation they've been running against TikTok and Meta. I dunno about you, Mike, but I think that the EU Commission have been waiting until we've recorded to announce their big investigations.
Mike Masnick:Sure, they're big listeners, and they're just like, they're probably spying on us, knowing exactly when we finish, and drop the announcement as we're talking about what we're gonna title the podcast this week and it's too late for us to go back. So, yeah.
Ben Whitelaw:hoping we don't cover it next week. Well, you know, this is an important story in a couple of different senses. The long and short of it is that these two investigations against Meta and TikTok, which have been going on a long time, have finally come to fruition. They focus primarily on researcher access to data, which is a part of the DSA that often gets kind of overlooked. It doesn't really get a lot of attention, but it's a big part of the DSA and what it's about. There are other parts of this announcement as well, related to content moderation appeals and notice and action mechanisms: so the way that users are able to report content, and how that is, you know, kind of spun out into data that then gets reported to regulators at the other end. So there are two kind of aspects to this. Fair to say that the researcher element is not a big surprise. You know, researchers, if you know of any who have been focusing on technology or big tech, have found it very difficult to get any data about platforms. And there's a really good piece that we will also share in the show notes, by science.com, I think it is, that talks about, brings to life, the difficulties that researchers have in requesting data from platforms, the delays that they have when asking for data, the kind of poor formatting of data that makes it very difficult to do any kind of meaningful research. So that's not a surprise in terms of this investigation. It is more of a surprise, I think, that the content moderation appeals and notice and action mechanisms are in there. We see a lot of people obviously report content, so to say that the reporting mechanism is somehow hard for users to access, or difficult to kind of manage or find, is a surprise to me. But, you know, wherever you sit on the decision, these are only the fourth and fifth platforms to be kind of hauled over the coals by the DSA, previous ones being X slash Twitter, AliExpress and Temu, I believe. So we don't see a lot of these investigation announcements. This one is notable, and I think, you know, the platforms have some time to react to the investigation and appeal. There could be big fines, but it's unlikely; we'll probably see some changes to how the researcher access is implemented to keep the EU happy. What did you think, Mike?
Mike Masnick:Yeah. And just a quick realtime correction, it's science.org is where it is, but you can check in the show notes to get the actual stories. You know, I'm going to play my usual role on this podcast of saying there are a lot of nuances here to consider. Right? And I think that researcher access is one that, you know, when you hear about it, you're like, yeah, that sounds good. Like, obviously researchers should have access to data, and we should want people to be able to research these things. But it has proven to be a lot more complicated than people make it out to be. For example, the somewhat famous Cambridge Analytica scandal started as a researcher access to data story. And people forget this, but everyone's like, oh, Cambridge Analytica was obviously evil. But it started as somebody who was presenting themselves as an academic researcher trying to get data on different connections between people, which then was used nefariously later by political campaigns; whether or not that was effective is a whole other story. And so I bring that up to point out that, like, for the companies, the lesson from Cambridge Analytica is: be really, really careful about which data you're sharing with which researchers. And there are elements in the DSA to try and account for that somewhat, but I also understand where the platforms are hesitant to just be, like, willy-nilly handing out data in an easy to use format, because they have these concerns. And so I think it's interesting, it's interesting to follow, and it becomes this kind of dance between the regulators and the platforms about how do we actually comply in a way that doesn't lead to other problems down the road. And sort of understanding those differences becomes really, really important.
Ben Whitelaw:Yeah. I mean, I agree with that. I would say, if I was a platform that was doing as much as, for example, OpenAI is in terms of mental health and teen safety, I feel like I would want that data to be as available as possible, and for people to kind of tell the story that obviously I'm trying to tell. And I think there's this element of, obviously there can't be unfettered access to data, and there should be credentials in place, and it should be a kind of governed process. And maybe there's more the DSA can do to kind of explain what kinds of access to data are available to what kinds of researchers. But it seems like this is a kind of area of the DSA where there are upsides for everyone, more so than maybe other parts of the DSA.
Mike Masnick:Yeah. In theory. It's just, again, I mean, you have different elements here too, which is that, you know, Meta has tried to be more open in the past, right? And they set up these different programs to try and allow researcher access to data, but you run into all sorts of problems. And the different articles, the Science article, the Politico article that we have about this, both of them also mention the political complication, which is the shit show that is the United States government right now, where you have those in power in Congress and in the administration claiming that the EU and the DSA is a censorship machine. Which, you know, I've certainly raised my concerns about how it leads to potential suppression of speech, but they're totally misrepresenting the nature of that. And to them, the things around researcher access to data are for the purpose of censorship; that is the way that they read it, right? And so they're freaking out about it. And so you have these companies that are sucking up to the current administration, and they don't want to be seen to be feeding the censorship complex or whatever. Like, you know, it's all nonsense, but there are all of these different competing factors. And you're right that in theory it should be a win-win. But there are a whole bunch of narratives going around that are important, that need to be understood, and there are a number of different competing incentives, often in direct conflict with each other, that the platforms are having to negotiate internally as well. And so I don't think it's quite as simple. In theory, yes, it should be a win-win-win, but the reality is, unfortunately, a lot more complex.
Ben Whitelaw:Yeah. Yeah, I think that's fair. And I think more than anything else, you know, this is a sign of just how difficult it is to bring these cases to bear. You know, these investigations have been running since April and May 2024, so these are taking a long time to build up; the kind of complex political environment behind it, as you say, is probably causing that. It doesn't help that, you know, TikTok is about to be taken over by, uh, the US government and have a, have a Trump on the board. Sorry to raise your blood pressure, but yeah, so there's, there's.
Mike Masnick:That is not confirmed yet. Even though, even though as we're recording, I believe, Trump just concluded a, a meeting in China. Uh, but, uh, the last I saw before we started recording, maybe it's changed, but before we started, is that there was no final deal on TikTok yet, even though I, we were told that it might be announced today, but
Ben Whitelaw:Yeah, we'll see. Let's finish off then, Mike, with, talking of kind of Chinese social media, a really interesting story that you've seen kind of do the rounds in a couple of outlets, none of them particularly large or reputable, we should say that upfront.
Mike Masnick:We have some concerns about the details here.
Ben Whitelaw:little bit. A little bit. But it's a good enough story that many people have picked it up and run with it. And this is kind of China's new law on ensuring that influencers have sufficient professional credentials, let's say, to talk about certain topics online. Is, is, is, is labor relations one of them? Are you,
Mike Masnick:I, I don't think so,
Ben Whitelaw:do we need to take down the podcast from a few weeks back?
Mike Masnick:That, for, for non-regular listeners or those who might not remember the podcast from a few weeks ago, that is a reference to my credentials and my degree in industrial and labor relations, which, uh, contrary to our discussion from a few weeks ago, does not really make me an expert in, in labor relations. You know, I certainly know some stuff, and maybe a little bit more than your average bear, but I don't think that the credentials themselves necessarily make an expert and,
Ben Whitelaw:but you would qualify in this,
Mike Masnick:and under this scenario, if it was one of the categories that they were regulating. And again, like, we don't have full access to the law, and the reporting on it is a little bit all over the map, so we're not entirely clear how this law actually works. But the way it is being widely interpreted is that on certain topics, and you can guess it's the more controversial topics, China is now basically requiring you to have some sort of credential, a degree or something, to prove that you are an expert in order to discuss them. Now, this is just blatantly a censorship law, right? Like, there's no two ways around it. You can't have it any other way. And the vagueness, and the way it's being reported on, to me is almost a feature, not a bug, of the law, which is that the important thing for most censorship laws, including this one, is that the impression is more important than the actual enforcement of the law. And if everybody believes that you can't talk about something unless you have a credential, they're not gonna talk about it, right? It's going to scare people away into self-censorship, which is the most effective way that most censorship laws work. And so the discussions that are going on across the internet are basically like, oh, well, if you're in China, there are certain topics you can't talk about unless you have a degree. And that means people are just going to avoid talking about them altogether. And, you know, China has lots of experience with this kind of thing. The original Great Firewall laws in China were never very clear in terms of, like, this content is banned, this content is not allowed. Rather, it just told the intermediaries, the different service providers, that if we determine that you have allowed bad content through, we will punish you. And what that leads to is over-censorship. The intermediaries say, well, we can't allow anything bad, and the fact that it's not defined well means we just ban and block a lot more of it just to be safe. And that's how these kinds of laws work in practice; that's the nature of that sort of thing. So I think this is that same sort of thing. There are topics that the Chinese government doesn't want people talking about online, and so they say the only people who can do that are those who are properly credentialed, and therefore a whole bunch of people, and obviously intermediaries that allow speech, will say, well, sorry, we can't allow this kind of talk because you're not credentialed. And it gives them another excuse, and it gives people themselves an excuse to self-censor and not talk about these things.
Ben Whitelaw:Yeah, I think that it's a really, you know, with the rise in creator led content, people going to creators for their news, for their information, people feeling that these individuals are more authoritative in some cases, definitely more authentic, I think it's a really interesting development. We do see elements of kind of Chinese laws proliferate to other parts of the world in strange ways. And there's the conversation that we had around the way that certain ranking algorithms kind of prioritize authenticity and expertise and experience in ways that are not government mandated, not based upon your degree certification or any kind of formal qualifications, but they are judging somewhat your experience and expertise in some form. So it's an interesting conversation that maybe we'll pick up another time; I think we could probably spend another whole hour talking about that, Mike. But for today I think we'll probably round up with the news that, yes, you would qualify to talk about, you know, labor relations on, I dunno, WeChat, uh, in the near future. That's, listeners, if you want to get Mike's take on those issues, you know where to go.
Mike Masnick:Oh, I was gonna say, the other thing is that we should, we should start a credentialing service where we'll, we'll give people credentials on their expertise on certain topics, and there's a big business opportunity there.
Ben Whitelaw:We can crack China. Finally,
Mike Masnick:There you go.
Ben Whitelaw:um, thanks everyone for listening. We've, uh, had a really good run through of some great stories, from the FT, from The Verge, from the New York Post, from Politico. Go and read and subscribe to those outlets wherever you get them. Come back and listen to us, uh, every week; we're here next, next week. And if you've got time, rate and review us wherever you get your podcasts; it really helps us get discovered and find more listeners like you. Thanks everyone. Take care. Have a good week.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's CT RL alt speech.com.