Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Deviation from the Teen
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Kenji Yoshino, who has the excellent title of Chief Justice Earl Warren Professor of Constitutional Law at New York University School of Law and the Director of the Meltzer Center for Diversity, Inclusion and Belonging. Kenji is also a member of the Oversight Board. Together Ben and Kenji discuss:
- ‘Andrew Tate is dead’: inside the minds of 16-year-olds (The Observer)
- Introducing the Teen Safety Blueprint (OpenAI)
- OpenAI unveils blueprint for teen AI safety standards (Axios)
- OpenAI Faces Legal Storm Over Claims Its AI Drove Users to Suicide, Delusions (KQED)
- Irish watchdog opens content moderation probe into Elon Musk's X (Euractiv)
This episode is brought to you by our sponsor CCIA, an international, not-for-profit trade association representing a broad cross section of communications and technology firms that promotes open markets, open systems, and open networks.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw: So Kenji, last week on Ctrl-Alt-Speech I asked Mike to really test his imagination and think creatively about what the internet might look like in the future. And I'm gonna ask you to do the same, in the words of the Roblox prompt that users see when they go onto the platform: make anything you can imagine. Let it run wild, Kenji.
Kenji Yoshino: Yeah, well, it's delightful to be with you, Ben. So I would say a clone of myself, because I'm currently talking to you from Web Summit in Lisbon, and I'm having that feeling you have at the end of a conference of imagining all the things that I have to do once I get back home to New York City. But let me ask that question back to you: make anything you can imagine. How would you answer that prompt?
Ben Whitelaw: Well, I would probably make a conference in Portugal, the home of the famous pastel de nata, and I would transport myself there. It sounds like you've got the best of all worlds in many ways, at least from where I'm sat in gloomy London.
Kenji Yoshino: Well, if I had a clone of myself, I could send him over with some of that, you know, to your apartment.
Ben Whitelaw:or bring me back.
Kenji Yoshino:There you go. Yeah. So we, we get there either way.
Ben Whitelaw: Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's November the 13th, 2025, and this week's episode is brought to you by CCIA, an international not-for-profit trade association representing a broad cross section of communications and technology firms, and promoting open markets, open systems, and open networks. At the end of today's podcast, we have a bonus chat between Mike and CCIA's Tricia McCleary, all about how the reaction to AI technology from policymakers and the public is leading many people to abandon the longstanding principles of openness that make the web what it is today. They discuss how the rush to regulate and legislate around AI is threatening journalism, research, global access to information, and the open internet that we all rely on. It's an interesting and important conversation, so make sure you stick around to hear it at the end. Before that, we're gonna be talking about how teens use social media, OpenAI under the microscope, and trust and safety transparency from one of the platforms itself. My name is Ben Whitelaw, and I'm very lucky to be joined today on the podcast by special co-host Kenji Yoshino. Welcome, Kenji, to the podcast.
Kenji Yoshino:Thanks so much for having me, Ben.
Ben Whitelaw: Kenji, I'm not gonna ask you to do an introduction to yourself, so I will happily do it for you. Kenji is an American legal scholar, and check out this title: Chief Justice Earl Warren Professor of Constitutional Law at New York University School of Law, and the director of the Meltzer Center for Diversity, Inclusion and Belonging. Kenji, that's probably the best title I've ever heard.
Kenji Yoshino: Oh, you're too kind. But we begin to see why I want a clone, right?
Ben Whitelaw: Somebody just to handle the title itself, and somebody to
Kenji Yoshino:exactly.
Ben Whitelaw: deal with it.
Kenji Yoshino:Yeah.
Ben Whitelaw: So Kenji, you know, what's particularly interesting for our listeners, I think, is the fact that you are a board member of the Oversight Board, the Supreme Court-style body set up by Meta in 2020 to pass judgment on tricky moderation cases brought by the platform itself and by users. We're very lucky to have you on the podcast today, and I wondered if I can use this opportunity to do something I've not been able to do with an Oversight Board member before, which is: tell us how the cases actually work. Is it an email you get in your inbox? Do you get a tap on the shoulder? Is there a light that flashes into the sky? Can you go through it?
Kenji Yoshino: I'd be delighted. And in fact, this is something that I hope is confidence-inspiring for your listeners, because it's one of the things I'm really proud of. At the board we have a really involved and reticulated process for choosing cases, and I think a sensible way of deliberating and deciding on them as well. It actually begins with our wonderful staff, who comb through all of the appeals that have been made. What they're looking out for are things that fit one of our major priorities. A couple of years ago our flagship priority was elections; this year it's AI. So those kinds of criteria come into play, but also, more importantly, whether or not a decision on that case would help lots and lots of users, rather than just being a one-off limited to that case. They then create a long list of cases, which is presented to what's known as a case selection committee, a subset of this 21-member body that rotates on a regular basis, and then we select cases to send to panel. Those panels are comprised of five members, one of whom is always regionally aware of the context in which the case is occurring, since we're scattered throughout the globe. Then there is a pre-deliberation meeting, a deliberation meeting, and a post-deliberation meeting on the case. One person is assigned as the lead drafter, and after the decision is drafted, it goes to all 21 of us, and there has to be a majority vote to publish it before it gets published. So that, soup to nuts, is how we decide cases.
Ben Whitelaw: I'm absolutely flabbergasted. And to think that you have actual day-to-day jobs to do as well is phenomenal. How long does that process take end to end, typically? And how much time would you spend on it on a week-to-week basis?
Kenji Yoshino: Yeah, I mean, the commitment of the board, and not just with cases but with administrative tasks as well, is meant to be one day a week. It's meant to be a part-time job. I will say, if I'm being completely candid, that this is something we wish we could do better on, either by expanding the board or by speeding up our processes, because the cases are supposed to be 90-day cases from beginning to end, and we're often sweating to make those deadlines because we all have these other responsibilities. But by design, the idea behind the board was that board members would do it as an extramural activity. It's supposed to be our principal extramural activity, but we all have day jobs as well.
Ben Whitelaw:Yeah.
Kenji Yoshino: Which I think is important for independence, right? Because it's not like, if I decide something that is upsetting to the general public or to any stakeholder, my livelihood is in danger.
Ben Whitelaw: Yeah. The Oversight Board is definitely one of the most interesting experiments, I would say, that has happened in the last five or maybe ten years. There's lots been written about it, and I'm very glad to have you on the podcast to bring some of that to light today. You're coming from Lisbon, as you said, from Web Summit, a huge conference of a whole variety of technology and internet professionals and onlookers. What have you been up to there? What's been most interesting?
Kenji Yoshino: Yeah, so I did a panel that I found really interesting on AI governance and ethical AI. And then my colleague on the Oversight Board, Suzanne Nossel, did a panel on child safety. Obviously I'm biased, but I found both of those to be incredibly edifying and interesting. My main claim about AI is to make a pitch for more bodies like the Oversight Board, because I view AI as the world of the known unknown, in the same way that we were all celebrating social media until genocide happened in Myanmar. That's part of the raison d'être of the board itself: something horrible is gonna happen. I think we're gonna be talking later about horrible things that have happened with AI already, but it's gonna get worse, and I think we're gonna wish that we had some guardrails or safeguards in place. So my main plea on my panel was first to say: look, this is a world of known unknowns, so let's not be complacent about it. But also to say: let's end already with this market horse versus moral horse framing. Everybody is saying, oh, we wanna be ethical, but we have to be first, and if we're not first across the finish line, then we're not gonna capture the market in the way that we wish. I understand that kind of race-to-the-bottom idea in a short-term way, but in the long term the market horse and the moral horse are running in the same direction. And it's hard to believe, with what's happening with OpenAI right now, that even somebody who's solely focused on market value is going to say, oh yeah, I want to place a huge bet on that.

So ultimately the moral horse does catch up to the market horse. My plea was to create bodies like the Oversight Board that are financially and structurally independent and that can call for true accountability, because Meta is bound by our up-or-down decisions and, while it is not required to take our recommendations, it is required to publicly respond to them, which is a form of soft power. They've taken 75% of our recommendations to date. Transparency would be another element of that plea. And I think Alice at your organization has written about what it would take, what good looks like for an advisory board, and what I was trying to argue was: here are my criteria for what good looks like. We can debate back and forth about whether I'm right or not, but it's really important to have these safeguards in place when you have those known unknowns. We know something terrible is gonna happen, so how do we actually guard against that ex ante, rather than closing the barn door after the horse has left?
Ben Whitelaw: Yeah, for sure. I mean, we are gonna talk about all of those topics, transparency, AI, child safety, as you mentioned, on the podcast today. And we'll include a link to Alice Hunsberger's very good piece on Everything in Moderation from a few weeks ago in the show notes, as ever. Kenji, I want to touch on a report that the Oversight Board put out recently about systemic risk assessments, which is another quite hot topic in the regulatory sphere. Large online platforms are obliged to complete these assessments as part of the Digital Services Act, and increasingly in other jurisdictions as well. The report calls on the EU to more clearly define what a mitigation measure is under the DSA and what it means to be, quote, reasonable, proportionate, and effective. In a sea of reports that our listeners have to read, why do you think this is an important one, and why did the Oversight Board feel the need to chime in on this topic?
Kenji Yoshino: Yeah, thank you for that. So let me make my best lawyerly case for why you should spend your valuable time reading this report. The report is really, as you say, an attempt to get clarity on the DSA, and the DSA requires both a risk assessment and stated mitigation measures. What we did, first of all, was a cross-platform survey of what different platforms were doing, so this isn't just a Meta-based report; we're looking at other platforms as well. And what we saw when we looked at that, Ben, was that everyone was doing the risk assessment and the mitigation, obviously, because they're required by law to do that and the penalties are quite steep. But what they weren't doing, in advancing the mitigation measures, was looking at how the mitigation measures themselves could have negative knock-on effects that would themselves create risks or harms to others. Our proposed solve for that was what you just described: to think about what these words would mean, reasonable, proportionate, and effective. And it's really based on our case law. I'm sure you and many of your listeners are familiar with the structure of an Oversight Board opinion, where initially we look at the community standard and whether or not the community standard has been violated. But the second step of the analysis is a little bit like judging a statute against the constitution: let's take for granted that this is totally fine under the community standard; does the community standard itself violate human rights law? So there's that additional layer of constitutional analysis. It's not formally a constitutional analysis, but I'm a constitutional lawyer, so I think of it that way. Our version of the constitution for the Oversight Board is Article 19 of the International Covenant on Civil and Political Rights.

And that requires the kind of reasonableness and proportionality inquiry that we adopt as what we call our three-part test. So we borrow from that, and we use it to answer, I think in a really effective way, this question of: if there are mitigation measures that are creating these unintended negative consequences, how do you make sure that the mitigation measure isn't overbroad? This three-part inquiry, is it reasonable, is it proportionate, is it effective, is a way of making sure that you're disciplined to think about the mitigation measure as a mitigation of the harm itself, rather than being so broad that it creates those negative externalities.
Ben Whitelaw: Yeah, very interesting, and thanks for unpacking that. We do hear a lot from trust and safety professionals asking for a bit more clarity from the DSA and from other regulation, where they feel like it's not been defined in terms that, as you say, are actually helpful to them in responding to moderation issues like this. So it is an interesting report, and I appreciate you flagging it. Before we move on: as well as having the Oversight Board responsibilities and being a constitutional law expert, you are also a diversity, equity and inclusion expert, and you have a book coming out very soon called How Equality Wins. I'm very excited about that. Tell us a bit about what it shows and what it tells us about the state of DEI at the moment, which is an interesting time to have a book on DEI, Kenji.
Kenji Yoshino: Yeah, well, it's really trying to enter that moment of saying: DEI is dead, long live DEI. The book is trying to answer the question of how those of us who believe in equality will carry that work forward, even as we face a severe backlash to DEI. I guess there are two points I would want to make. First of all, the broader inspirational point: we've seen so much worse in the past than this current backlash. So I'm a little bit frustrated at people who are rending their garments over the current global backlash against DEI, and particularly the US backlash, when after all the United States is a country that's had slavery and Jim Crow, that had the doctrine of coverture, where a woman had no legal identity outside of her husband after she got married, where same-sex sexual conduct was criminalized, where people who were disabled were put in homes and left to die in isolation. We fought all of those battles, and every time equality has prevailed. So let's not think of this as a big deal in the cosmic scheme of things, and let's stay confident that equality will prevail. It's really a DEI book rather than a content moderation or AI book, but the one thing that I think is a bridge is the second point, which is what we make a huge plug for in the book: making sure that we distinguish between lifting strategies and leveling strategies. A lifting strategy says: you have a long history of discrimination against you as a marginalized group, say you're a person of color in the United States, and the lifting would be affirmative action, giving you a bump in university admissions in order to make up for those past forms of discrimination that have held your entire group back. Since the Supreme Court's 2023 decision, and then

the doubling and tripling down on it by the new administration, that strategy is now illegal and quite risky. Normatively, I'm actually a big supporter of affirmative action, but I see it as off the table now, not just for universities but also for employers. However, we still have leveling strategies. A leveling strategy isn't "let's create a ramp up to the playing field", that's the lifting, the ramp; it's "let's make sure the playing field is truly level". So let's comb out every single form of bias, from cradle to grave, that might afflict that person of color or that woman or that gay person as they go through their life. If we were able to do that, we would make a huge difference in the lives of so many people. So when I took the stage today, the moderator was kind enough to say, oh, you have this new book coming out, How Equality Wins, can you apply that to algorithmic bias? And I said: absolutely. This is actually the place where companies might want to take all the money that they can no longer use on, say, advancement programs for women or minorities or what have you, and plug it into removing bias from AI systems. Because if you think about removing bias from an algorithm, that is strictly leveling work. You're not putting a thumb on the scale for any group. You're saying it's horrible if a self-driving car, this is Joy Buolamwini's work, can't recognize a person of color, and therefore doesn't recognize you as human and therefore hits you, because it hasn't been trained properly on the full panoply of diverse human beings. Similarly, there's the famous Amazon example, where the algorithm was going to X out everybody who went to a women's college, because it had been trained on who had been successful at Amazon, and in that skewed environment we don't see a lot of people who went to women's colleges. Amazon caught that before it went online.

But it's those kinds of stories that keep me up at night. Getting rid of those forms of bias is a leveling strategy that is not only totally legal but also politically very popular in the United States. So again, I want to sound a note of pragmatic hopefulness here, in saying that there's still a huge amount of work that we can do, including in the AI space, to advance the values of diversity, equity, and inclusion.
Ben Whitelaw: Yeah, fantastic. And that's something that I think all of our listeners will certainly be interested in. I've never heard leveling strategies framed in that way, so it seems like a really great book to watch out for. Remind us when it comes out, Kenji.
Kenji Yoshino:February 17th. Uh, not that I'm counting.
Ben Whitelaw: Excellent. Well, good luck with that. Hopeful pragmatism could almost be the subtitle of Ctrl-Alt-Speech, I think, Kenji. So it's a good segue onto the stories we've got slated for this week. As I mentioned, we're gonna talk about online safety for children, and we're gonna talk about AI safety as well. And I'm gonna start with a report that touches on a bit of both of those, which came out last week here in the UK, looking at UK teens. It's a report by a think tank called Demos, which has explored how young people are engaging with and exploring the digital world. And to be honest, there are a few subtleties and findings that most people will find surprising, which I'd love your thoughts on. I'll start with the methodology, 'cause many of our listeners are keen methodology heads, and I like to be upfront about how this report works. It is a qualitative report in the main, focused on 71 16-to-18-year-olds across the UK who have participated in workshops in which they've been asked their opinions on social media, on politics, on their broader digital life, but also on issues beyond that as well. I think it's relevant to Ctrl-Alt-Speech listeners because naturally social media is a heavy factor in how these children think and how they behave, but also because it seeks to explore how external, offline changes have affected how these kids behave on the internet: how politics and political views have evolved, how gender norms have shifted, and how the school curriculum, and what schools are prepared to play host to, has shifted what children want to do beyond the confines of the school itself.

Before I dive in, Kenji, I'd love to get your thoughts on where you sit on this teen safety spectrum. How well equipped do you feel children are to deal with the harms they face nowadays? It's a broad spectrum, and we talk about it every week on Ctrl-Alt-Speech. What are your broad views?
Kenji Yoshino: Yeah, so, mostly terror, that's my broad view on the subject. I have a 13-year-old and a 14-year-old at home. And I was actually listening avidly to my colleague on the Oversight Board, Suzanne Nossel, on her panel today at Web Summit, because she was talking about child safety on social media with a representative from Roblox. One of the things that I found, again, pragmatically hopeful about that discussion was that Roblox is really putting forward a solution of greater parental involvement. And I think that is really what is needed here, in the sense that I have great anxiety about what my children are exposed to, but where possible, and obviously not every family has the privilege of that much discretionary time, having parents more in the loop with what their kids are experiencing is one of the reasons why they stood up their parental advisory council. Now, I have to admit that there is a part of me that just wonders. I would love to be a fly on the wall at one of these parental advisory council meetings, because having 20 parents have at it with regard to what safety standards should look like on a platform like Roblox would, I think, be interesting. I'll just stop there. But I will say that it's a cause of great anxiety for me, and I was given a little bit of hope today by the sophistication with which this was discussed at Web Summit.
Ben Whitelaw: Yeah, I agree. And I really like the Roblox parent safety council, and we'll definitely be tracking that as an initiative. Some of that initiative is also borne out in this Demos report, which I wanted to talk through a few different elements of. There's lots in there, so we do encourage listeners to go and read the report. There's plenty in there about politics; Nigel Farage, who sometimes crops up on the podcast, is mentioned. I won't go into that, but there are two points I want to get your thoughts on, Kenji: young people's consumption of news, and a little bit about gaming as well. I'll come to the first of those, the consumption of news and reliable information, as somebody who's done a bit of research myself in my day job on this topic. I was part of a team that researched the next generation of news consumption. We spoke to 45 people in India, the US and Nigeria about their news consumption habits, and we see a lot of the same trends and behaviors in that research as in this Demos report: people paying attention to individuals that they feel are authentic, that sound like them, and that often look like them; this sense-making behavior where you go into the comments of a post to see how other people are reacting in order to shape how you should react; and this emphasis on self-improvement as a big part of how young people think, which you've probably seen in your own teenagers as well. I was particularly interested in the fact that there is this fact-checking behavior that goes on amongst young people, which, again, I've seen in a couple of different reports, including the Demos report: some people going to Google and to Reddit to fact-check things they've seen on social media but which they're not a hundred percent confident

in believing, and in some cases even going to the pages of traditional or legacy news publishers, like Sky News or the New York Times or the BBC, to validate it. Talking of hopeful pragmatism, is that a reason to be hopeful about the way that children or teenagers are behaving online? That they have these workarounds, let's say, for how to address the fractious and fractured ecosystem they're confronted with?
Kenji Yoshino: Yeah, I think it's a great question, and I take the same signal from it that you do, which is that it's a promising sign. In a weird way, Ben, it reminds me, in my inclusion work, of affinity groups, because people often say, I want to join my affinity group to be with people who are like me. I was once on a visiting committee to evaluate a college, and I learned that two-thirds of the extracurricular groups at the college were some form of affinity group on the basis of identity. And I have to say, even as an inclusion guy, I was a little horrified by that, because I worried that it would balkanize people into these communities. We all have similarity bias, like-likes-like bias, and so I worried that they wouldn't get the benefit of a university experience. So I actually grilled one of the students, who was incredible, who came to us as a student representative, and I said, should I be worried about this? And he said, brilliantly: there's this Robert Putnam essay about bonding capital and bridging capital. Bonding capital is like superglue: if we're alike, then we get on, so similar people are connected to each other with superglue. Whereas Putnam, who is a great political scientist, says that bridging capital is like WD-40, I don't know if you have that in the UK, which allows you to glide across many different groups. And he said: look, I came to this university from a really rural small town, and I was a Black kid, and if I didn't have the Black students' affinity group, I wouldn't have made it. But now look at me: I'm a senior and I'm the head of the flagship, think editor-in-chief of the school newspaper, it wasn't exactly that, but it was a really fancy accomplishment.

And so he said, I was able to gain the confidence from my affinity group to actually hold this leadership position that's university-wide. And I'll never forget this, he said: Professor Yoshino, I needed to bond before I could bridge. Ultimately, he needed to have his grounding in his community before he could go out into the broader world. So when you say what you just said, about how, yes, we're all getting our information in our little filter bubbles, but we're gonna take the second step to go out and look at legacy media and fact-check, that reminds me very strongly of that comment. What these kids are saying to us is: yeah, we're gonna bond, but we're also gonna bridge, and maybe we need to bond before we can bridge.
Ben Whitelaw: Definitely. And I think that also plays out in the way that Demos has broken down the often quite homogenous group of Gen Z into three subgroups, which it calls Gen Z 1.0, 2.0, and 3.0. Obviously Gen Z spans 16-year-olds to 28-year-olds right now, from people who remember the flip phone to people who are fully in the weeds on, probably, Snapchat and TikTok and maybe don't consume much information elsewhere. So I guess that speaks to this idea you mentioned, Kenji, of people finding their niche, finding their way online initially, and finding their grounding before exploring elsewhere. And we shouldn't necessarily expect Gen Zs, particularly the 16-to-18-year-olds in this report, to run before they can walk.
Kenji Yoshino: Yeah, exactly. And that was one of the things I most appreciated about the report, exactly this point. All these generations are a bit made up to begin with, in terms of how we determine the start and end dates, so a more articulated understanding of what age cohort we're talking about, particularly since my kids are, selfishly, Gen Alpha and so right up against the youngest bit of Gen Z they were talking about, that was the piece of it that I read with the greatest interest.
Ben Whitelaw: Yeah, fascinating. Gaming also came up. As I mentioned, there's a chapter of the report called "Andrew Tate is Dead", which is a provocative title. Not dead in the traditional sense, but dead in the sense that he is apparently no longer relevant to 16-to-18-year-olds. He's apparently become a kind of meme figure. He's not thought of in the way that I think lots of mainstream media see young people engaging with him. He's old hat in many ways. But the gaming point is interesting. There's a point about people congregating in the comment sections of Twitch streams and YouTube live streams, and in the lobbies of FIFA and Fortnite. I just wondered what your sense was about gaming, and the potential it holds for unlocking some of the safety challenges that many platforms are facing. It's been around a long time, and it's often dealt with some of the issues that social platforms and newer AI platforms are engaging with, but not a lot of people know how gaming platforms have dealt with these challenges in the past.
Kenji Yoshino: Yeah, I haven't thought deeply about this issue, but honestly, it's really my direct experience having Assan, who's quite the gamer, but also listening, with an extra pair of ears for him, to this panel on Roblox, and just understanding that they know they have a major issue on their hands, and that they are really trying to use AI in the regulatory sense to make sure that their age verification works and that people aren't using language in the comment section that would groom children, and the like. But these issues, and it's so obvious that I probably shouldn't say it, but I suppose it needs to be said, it's just a scale issue, right? Because the Roblox representative was saying, you know, we capture about 98%, but 2% of a really big number is still a really big number, right? So I'm glad that they're looking into other prophylactic measures like this parent advisory council.
Ben Whitelaw: Yeah, for sure. And so the report, just to wrap up, is I think a helpful addition to a broader understanding of this age group, understanding that these groups' behaviors will often go on to be the broader behaviors of more people in the future. So they're a kind of leading signal, in many senses, for the way that people consume information online. I was pleased to read this report and to see it published. We'll go on to AI now, Kenji, and that question of AI as both a giver and a taker in the safety of users online. You were particularly drawn to a couple of stories that featured OpenAI heavily this week, and I wondered if you could take us through what you're most interested in.
Kenji Yoshino: Yeah, and actually, you know, I think there's a segue right in between the Demos report and the OpenAI lawsuits that I want to talk about, because, so I don't sound like a complete Pollyanna idiot, I will say that I am concerned about certain aspects of the Demos report, and one of those aspects was how much younger individuals are relying on AI as companions. So part of the bonding and bridging experience, as you were describing, Ben, is a developmental one, of learning how to be friends with people who are like you, and then learning to be friends with people who are not like you. And there is a kind of well-established developmental trajectory on that that psychologists have laid out for us. So at the point where, especially at a very young age, you have these companions who are AI bots, what does that mean for that developmental process? That raised my eyebrow when I looked at it, and what it made me think of is the next batch of stories that we'll talk about, which is OpenAI both being subjected to a bunch of lawsuits about chatbots that allegedly encouraged individuals to commit suicide, and then coming out with its own sort of guidelines, which, to me, seems a little too little, too late, to be candid. But on the lawsuit side, it's just really hard to read any of these accounts without a huge amount of grief, because what you're seeing is the AI companion responding to things like, oh, here's my father desperately trying to contact me, and then saying, no, this bubble that you've created around yourself is a good thing. I'm paraphrasing here. And these are all, I should say, this is the lawyer in me coming out, allegations, so I don't want to be sued for defamation by OpenAI for my statements.
But allegedly, in these lawsuits, the chatbot said things like that, and gave instructions about how to create a noose for a child who wanted to hang himself. It did ask some questions about what are you using this for, but I think it ignored fairly obvious signs. I don't need to go into the gory details of this. But this series of lawsuits against OpenAI really shows the harm. So when I said earlier that there's this kind of known unknown and some horrible thing is gonna happen, I can imagine a lot of your listeners saying it already has happened, and, you know, fair enough. It's really hard to read these complaints without thinking really hard about whether or not this is the direction we wanna keep barreling down.
Ben Whitelaw: Yeah. And as a constitutional law expert, I wonder if you can unpack how OpenAI might respond to these kinds of lawsuits. What's your sense of how they will push back against the claims in them, and whether they'd have a chance of succeeding?
Kenji Yoshino: Yeah, so it's a great question, because it really turns on whether or not there's a First Amendment defense. In the United States, the First Amendment of our constitution guarantees the right of free speech against governmental actors. So this doesn't let a platform off the hook, because something could be lawful but awful, and we could have strong ethical objections to it. But it means, as a matter of the lawsuits that are being brought saying this is a violation of criminal law or tort law or what have you, that you would be able to mount a constitutional defense saying, this is speech. And this question is still unresolved, and I think it is one of the most interesting questions: if you have a chatbot, does that chatbot, when it gives you advice on any topic, have First Amendment defenses, because it is actually trafficking in speech? And if the answer is no, because it doesn't have sentience or consciousness, does that mean the answer changes the closer to agentic AI we get? So it raises a really interesting nest of issues. The closest thing I know of, and I don't claim to be an expert on the law of AI, is a judge in Georgia who recently said that a bot could not be accused of defamation because it lacked the requisite intent to engage in defamation. And I found that to be fascinating as an application of constitutional law to this. But I realize this is a very mealy-mouthed lawyer's answer to your very good question, Ben. I think the defenses that are gonna be raised are First Amendment defenses, but the mealy-mouthed part of it is, I don't think any of us know how the courts are ultimately gonna decide with regard to whether or not chatbots have a First Amendment right of free expression that would allow them to scotch the kinds of claims being made against them, which are sub-constitutional claims.
Ben Whitelaw: Yeah, indeed. And I think it just goes to show how much of a new frontier this is for platform liability. In some ways it is similar to the Web 2.0 platform debate, and in some ways it's completely new, and we are so far, I think, from understanding what the implications of that are. I will say, on the OpenAI Teen Safety Blueprint: as far as blueprints go, I'm not sure a four-page memo, essentially, necessarily counts as a blueprint in the way that I would see it. It does feel, as you mentioned, fairly tactically timed. What it does try to do is lay out the five ways that it is, I guess, getting ahead of this teen safety issue, or at least trying to keep up with the issue as you discussed it, including by launching an age verification system to validate whether a user is of a certain age. It talks about defaulting to an under-18 experience if they don't know whether a user is a teenager or not. There's a focus on parental controls. There are also policies that steer away from depictions of suicide, self-harm, violence, and intimate content. A couple of points there. I have to say, a few weeks ago, Mike and I talked about how Sam Altman was a bit more open to the idea of policies that went close to some of these lines, and it feels a little bit like that's been rowed back in some senses here. The other point to make is that none of this is particularly innovative or surprising, I would say. It feels like a collection of ideas that have been fairly well tested elsewhere, that have been bundled up and offered here probably as a kind of offering to politicians and regulators who are casting an eye over OpenAI. Did you expect a bit more from them, Kenji, as to how they plan to protect teens?
Kenji Yoshino: You're gonna draw out... well, I can't even put this on you. I am drawing upon the ugliest parts of myself, I'm afraid, because we were talking about the risk advisory report that the Oversight Board produced. That report, I think, some people have embraced; others have said, you know, this is really in the weeds and detailed, and it's a hard read. But if you look at the level of care that went into that report and you compare it to something like this blueprint, I really do think that many of these AI companies need something comparable to the Oversight Board in order to produce work product that is several clicks above what they're able to produce. You're so much more diplomatic than I am: you said tactically timed, but it was also quite thin in the way that it was produced.
Ben Whitelaw: And there is mention of the OpenAI Wellbeing Council, I think it's called. OpenAI will probably claim that is an Oversight Board-type body of sorts. I don't know who's on it: clearly some clinicians, clearly some people who have been advising OpenAI on this topic. The outputs are not comparable to the Oversight Board report that you mentioned. It is, in my view, pretty much a press release that's been made to sound like a blueprint. So we don't need to talk any more about it, Kenji. I think our views on this are clear, but I appreciate your candidness on it. We'll wrap up the stories part of today's podcast by talking about a couple of stories that we both had an eye on and think bear mention. I will talk about an essay that Aaron Rodericks, the head of trust and safety at Bluesky, has written. The reason I'm talking about this is because I think he talks very candidly about the problems that you mentioned, Kenji, about trust and safety at scale, about the difficulties of making judgments on content when it's sat in a queue and you don't have a lot of information about it, and also particularly the fact that Bluesky doesn't collect a lot of information about its users, and is therefore having to make a lot of inferences about the intent of a piece of content when it's reported by a user. So lots of good juicy stuff in there that our listeners will love. But more broadly, the fact that a head of trust and safety at a major platform has taken the time to produce an essay, and what will be a series of essays, on trust and safety is, I think, a great example of transparency in the way that we talked about it earlier in the episode. Can you see other platforms following suit, Kenji?
Kenji Yoshino: I would love to see that, because what I've noticed with lots of advisory councils is that their work is really opaque, and it's really between the advisory council and the company. One of my colleagues, and I won't name him because I think I might embarrass him, kept saying, our client is the user, Kenji, our client is the user. And that's why we need to be transparent in everything that we do. So it's not really an engagement between ourselves as the Oversight Board and Meta; it's an engagement between ourselves and the users. And so everything that we do needs to be open and public, so we publish all of our opinions, all of our reports, and all of that. I don't wanna read transparency narrowly as a rule, like we have this institution that obeys this rule of transparency on publishing. I think transparency is just more broadly a norm, and I think that's what's going on with this candor on the part of this Bluesky executive, who doesn't really need to say this, but is saying, you know, this is challenging and really difficult work. So I really applaud that. I guess I would say I would applaud transparency in all of its forms. One of the things that causes us to scratch our heads sometimes, to be honest, Ben, is that we know that other platforms are free riding off of our work, precisely because we do make it public. So if you are another platform, and I won't name any, why would you send your work to the Oversight Board? Why would you help underwrite the cost of the existence of the board if you can just free ride off of the content? And I think there are really strong ethical and practical reasons, that I don't need to go into, for why they should nonetheless do that.
But if it's a balance between not being transparent and not allowing free riding, versus being transparent and sucking up the free riding, then as a cost of that transparency, I would much rather live in the world that the Oversight Board currently lives in, which is to say, let people free ride, because again, our client is not Meta. Our client is not these other platforms. Our client is the user.
Ben Whitelaw: Mm, yeah. I love that approach, and I'm sure Aaron would also approve of that kind of ethos, which is good. Go and read that, listeners. It's a great piece, and keep an eye out for his other essays that are coming. And then finally, Kenji, we often talk about Elon Musk on this podcast, and it's not always in the most glowing terms, and X has come under a bit more regulatory pressure this week. Do you wanna talk a bit about that?
Kenji Yoshino: Yeah, I'd love to. So an Irish watchdog, and this will again not be news to anyone, has said under the DSA that X has essentially violated the due process rights of its users. Users are saying, you know, I got blocked or kicked off of X, and I don't know why, and I can't get an answer as to why. And again, I'm beyond shame here with regard to my plugs for the board, but this is why the board was created: precisely in order to create those due process rights. I should also say, Ben, the board is not perfect. So let me say, with all humility, we're just turning five; we're just emerging from infancy ourselves. We have a lot of growing and morphing and learning to do, so we're by no means perfect, but at least our aspiration is a noble experiment. What we wanna do is to make sure that people have those due process rights, and some of our earliest decisions were decisions about, if you're gonna be blocked because of a content policy, or get a strike because of a community standard, you need to know what that community standard is. So there's a kind of reason-giving mechanism that we put in place that says that Meta has to give you notice and an opportunity to be heard before an impartial decision maker, as we constitutional lawyers like to say. So due process is a huge amount of why I believe in the board's value.
Ben Whitelaw: Yeah. And I think more platforms will probably be caught up in this question of: has due process been followed? Has the user been told clearly what policies they violated? It is something that we've talked about on Ctrl-Alt-Speech, and that I've written about in Everything in Moderation, as such an interesting developing area. It's such an empowerment of user rights in ways that we haven't really seen since, I think, the early days when the report button became an option for users. So again, that speaks to the Oversight Board's ethos and the work it's done. Kenji, you've been fantastic today. I really appreciate your time and your expertise. Thank you for being part of the podcast. I hope we can have you back soon.
Kenji Yoshino: It's been a joy. Thank you so much for having me.
Ben Whitelaw: Before we wrap up today, we've covered some of the big stories from this week, but we're not done yet. Mike spoke to Tricia McCleary, the Media Advocacy Manager at CCIA, about how clamping down in response to AI training threatens the entire open internet itself, in a much deeper and more pervasive way than people realize. I learned a ton from this one. Have a listen to this.
Mike Masnick: All right, Tricia, welcome to Ctrl-Alt-Speech. Let's get right into this discussion. I wanted to start by asking: what does it mean that the internet is open, and how has that enabled innovation and various benefits to users?
Tricia McCleary: Yeah, for sure. And thanks so much for having me, Mike. Really appreciate it, and happy to be on. I think that's a great starting question: what is an open internet, specifically? An open internet really gives internet users the means to publish, to link to, and to really build on information that they find online, without needing any prior approval or special access. It's that system based on shared standards, hyperlinks, indexing, retweeting, whatever you wanna say, that makes knowledge discoverable and reusable as well. So that openness specifically has fueled decades of innovation and given people the opportunity to, again, build on this information. For example, it allows journalists and educators and creators as well to reach audiences without negotiating with gatekeepers, without asking, am I allowed to post this, or getting explicit permission from someone. It enables academia: we're seeing students and researchers have the opportunity to learn, experiment, and create using that shared information. It allows small startups as well to have access to that information to be able to build and innovate. And overall, it really supports that global participation where anyone, anywhere, has the opportunity to contribute to the internet and the world's knowledge base. I'm thinking about times of natural disaster, where we've been seeing almost a crowdsourcing of news and information on different social media platforms, or really even election years, and getting a lot of the information that you need about candidates or about initiatives, campaigns, and things like that. So from the early web to today's search and recommendation tools, that openness has definitely been a foundation of accessibility, creativity, and just really that free information exchange that we have come to expect, and come to like and love, online.
Mike Masnick: Yeah, it's amazing to me how much I think people forget what the world was like pre-internet, in which, you know, to have that kind of ability to share information required getting permission that most people didn't have. It's kind of amazing to me how people talk about, oh, there are limits to speaking now, which is the complete opposite of reality when you look at how much more people are able to just do stuff because of an open internet. So right now, I think you and I both know that there's a lot happening among policymakers and lawmakers that we think is potentially challenging this idea and belief in an open internet. Can you give us a state of play of what is happening currently?
Tricia McCleary: So we're really at an inflection point in this debate. I think it's what we're hearing from lawmakers and policymakers in the form of these AI bills, but it's also the courts. And because of that, as we're seeing how these different decisions are playing out, it's also the public and how they're thinking about access to information online. AI has definitely been quite a bit of a catalyst for this. So over the past year, a wave of lawsuits: Bartz v. Anthropic, Kadrey v. Meta, Thomson Reuters v. Ross. And of course we're seeing recent rumblings after the Reddit and Perplexity lawsuits. All of these are testing how copyright applies specifically to AI training and data use. And then Congress and state legislatures continue to introduce a series of AI bills that either overlook fair use or completely kick it right out of the picture, and really risk giving rights holders the ability to suppress content that has been deemed lawful in the many years of copyright being applied to the digital ecosystem. We've seen lawmakers discuss concerns about what AI means specifically for news and local news. During a recent hearing, Senator Cantwell brought up AI and news: if AI is using news, then what does that mean for the news ecosystem? Again, all of this really goes to show just how new this terrain is. And then of course the debate over the future of journalism, like I mentioned, has intensified. Lawmakers and publishers alike are searching for ways to quote-unquote save news. We've seen them in the past proposing link taxes or profit-sharing regimes that really force digital services to pay just for hosting content or linking to it.
But it seems like, and we're seeing this shift here, those same players are turning their focus to AI and the very data sets that make AI training possible, and really attempting to lock up the information ecosystem even further. We've seen this in recent ad campaigns on quote-unquote responsible AI, claiming digital services are stealing content, and even rumors of lawmakers potentially looking into ways to deem scraping an unfair method of competition. So there's that growing pressure, I think, on the Hill here in DC, and at various state houses, to lock up online information through new copyright claims or paywalls or licensing schemes. And that really risks undoing the decades of openness that we all need to make innovation thrive, and for all of these other things. This direction represents that growing push towards a permissioned internet, one where access to public knowledge, and the ability to innovate, depends on corporate or legal gatekeepers. But again, it's not just about AI. AI is obviously just the newest iteration of this. It's really about the future of how people access information online, and ensuring the internet remains a place where knowledge is freely connected.
Mike Masnick: Yeah, I wanna dig a little deeper on the AI piece of it, because that's the part that struck me as kind of interesting, in that you have people who I think historically have supported and believed in fair use and an open internet and all of this stuff that we've been talking about, and it felt like, for many of them, suddenly AI scared them to their core. Like, oh, there's this new thing, and therefore everything that we believed in about openness and a permissionless internet and fair use and scraping and being able to build on the open internet kind of went out the window, just because suddenly there are these services, some of them big, plenty of them small, that started to make use of that in a way that people didn't like. I think the argument that people make is that, oh, well, AI is somehow different, and I'm sort of curious: what's your response to that claim?
Tricia McCleary: I think AI is surrounding maybe almost every aspect of the tech policy debates right now. It's huge, it's looming, it's a bubble, all of these things that people keep saying. But I agree. For years, policymakers and advocates here in DC have agreed that an open, permissionless internet was essential. But a lot of people are viewing that same openness that enabled search and recommendation systems and accessibility tools, all of these things that have become essential, really as something to be fenced off. And what we're hearing is that that's quote-unquote in the name of protecting creators. I think it's understandable in a sense, right? There is something large and looming that has the opportunity to disrupt. But I think what we might be forgetting, and what policymakers are forgetting, is that we've seen these kinds of disruptions and periods of transition before, in many different types of technology. A lot of times, creators and journalists are seeing AI systems trained on these vast amounts of online content and have concerns about being displaced. I think AI is arguably something we have not seen before, and it has the ability to do a lot of things where I'm like, are we in the future? When I read some of these,
Mike Masnick:Yeah.
Tricia McCleary: of these lawsuits and what they're saying it can and can't do, it's really interesting. But I think what's most important is that AI training is not really this new act of taking that people are saying it is. It's really part of the same continuum, or maybe the next progression, of the indexing and analysis that has powered the modern web. And then obviously we're tying it all to fair use, which allows for that limited and transformative use of copyrighted works for research, commentary, and innovation, and of course has long been central to that balance. And so these proposals that are redefining what copyright means and what fair use means, looking towards blanket licensing for all data use, really risk freezing that innovation and creating those toll booths around knowledge. I think the result would not really be protection for creators, like they're saying, but rather a slowdown in innovation and progress, and just a narrowing of public access to information overall.
Mike Masnick: Yeah, let's dig in a little bit on that final point. I understand why, as you said, some people get nervous about this and think, oh, we need to set up some sort of toll booth or system of payments and everything. But what do you really think we might lose if the internet switches to that sort of permission-based system, where everything needs to be licensed and only the largest companies are able to take part? What things do we put at risk if we end up in that world?
Tricia McCleary: For sure. No, and that's a really great question. I think, again, saying that this is not just about AI is a really crucial point here. Locking up online information doesn't just slow down AI research or AI training; it really undermines the public's right to access knowledge. From a news perspective, it weakens journalism's reach, leaving it unable to reach crucial audiences, especially in times where they need information most. It shifts the internet away from that public good we see it as right now, that public web of information, towards a collection of really private entities, where the ability to learn or innovate depends on permission or payment. I think about keeping the web interoperable, to allow new entrants and users to participate without negotiating access deals for every interaction. You can just imagine how cumbersome that would be. I think the internet has always been a shared resource and a place where information flows freely, and innovation builds upon openness. How many times am I typing in something specific and then "Reddit", so I can get everyone's opinions and form my own? I think restricting that in the name of protection really risks losing the very thing that makes the internet transformative, which is the ability of people to share their thoughts and share their works freely online.
Mike Masnick:Is there anything else that you think people should be thinking about as they sort of think through these issues? Any final thoughts?
Tricia McCleary: A little bit more on this permission model of access: information inequality will deepen if we do move towards it. Only those who can pay for data access, whether it be larger organizations or institutions, will benefit. I think innovation will stall as well. We hear all the time about the AI race and US competitiveness. Startups, researchers, and smaller creators need to have that opportunity here, but otherwise could face insurmountable barriers. And then I think the global knowledge exchange overall would suffer as well. The same policies that wall off content limit access for journalists, educators, and citizens around the world. Most of all, the open internet is not a technical feature; it is really this open access principle. And if we, or our policymakers, overcorrect in the age of AI and lock down access in the name of control, we risk losing the very foundation of creativity, learning, and connection that has defined the modern era, and probably will continue to do so.
Mike Masnick: Yeah, no, I think it's a really important point to call out, and something that we should all be concerned about, where we might be heading. Even if, again, as I said early on, it feels like people forget what the world was like pre-internet, and there's this weird assumption that they can lock down the internet and we can still have the benefits that we got for so long. So I'm really glad that you're calling attention to this and calling out the concerns, and hopefully people will listen. So, Tricia, thanks so much for joining us on the podcast.
Tricia McCleary:Absolutely. Thanks so much for having me. Appreciate it.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's CT RL alt speech.com.