Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Honey, I Shrunk the Kids’ Internet
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Fadzai Madzingira, a digital policy expert with a decade of experience at Meta, Salesforce, Ofcom and currently Twitch, where she leads the policy, outreach and education teams. Together, they discuss:
- Exclusive: Meta has discussed ending funding to the Oversight Board (Platformer)
- Spotlight: Five Years of the Oversight Board, from Experiment to Essential Institution (Ctrl-Alt-Speech)
- What Teens Are Doing With Those Role-Playing Chatbots (The New York Times)
- Early Lessons from Australia's Teen Social Media Ban for the Rest of the World (Tech Policy Press)
- Stuck in the Middleware with Youth with Vaishnavi J (Ctrl-Alt-Speech)
- Greece to ban social media for under-15s from 2027, calls on EU action (Reuters)
- The Family Tech Cycle: Navigating Screens, Devices, and Social Media (Joan Ganz Cooney Center)
We’re still yet to find a Ctrl-Alt-Speech 2026 Bingo Card winner — could this week be your lucky day? Play along.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw: So, Fadzai, we were talking just before we started recording about how the prompts on Ctrl-Alt-Speech, which we use every single week to start the podcast, have got weirder and weirder. So I'm sorry to put you in this situation, because this is maybe one of the strangest. Janitor AI is an app we're gonna talk a bit about in today's episode. It's a kind of immersive, character-based, world-building type app, used a lot by teenagers, and when you go onto the site it says, in big bold letters, "be anyone, build anything". So, as your introduction to Ctrl-Alt-Speech: what would you be and what would you build?
Fadzai Madzingira: I love that you picked something that we are clearly the target audience of. Um, I think if I could build anything, it would be something that can track these safety stories as well as you and Mike seem to do, because in preparation for this I was like, how do you do it? Um,
Ben Whitelaw: Yeah. Okay. I mean, I would like that too actually, so if you build it, let me know. It would save us a lot of time, as you say. Um, I think if I was to build anything, apart from your magic tool to help you prepare for podcasts, I think I would be a janitor, actually. I spent,
Fadzai Madzingira: That's nice.
Ben Whitelaw: I spent a summer as a kind of university janitor. I was painting rooms and dorms and, you know, generally sprucing up the place in between cohorts of delinquent students. And I had a great summer. So maybe, you know, taking the janitorial route would actually be quite nice. Not that I'm not enjoying this.
Fadzai Madzingira: Yeah, I know. I love how your home renovations have not deterred you from the desire of, like, let's make things nicer. Um, so that is excellent.
Ben Whitelaw: My therapist would have a field day. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's April the 9th, 2026, and this week we're discussing the future of Meta's giant content moderation experiment, Greece jumping on the social media ban bandwagon, and how the kids might be all right, depending on who you talk to. My name's Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and as you'll have heard already, Mike is taking a well-earned break this week from hosting duties, and I'm very lucky to be here with Fadzai Madzingira, a policy expert with a decade of experience at Meta, Salesforce, Ofcom and Twitch. Fadzai, welcome to the podcast.
Fadzai Madzingira: Thank you. I am ecstatic to be here. Anyone who knows me knows I am nerding out like crazy right now. So thank you so much.
Ben Whitelaw: It is an absolute pleasure to have you. I mean, we should give the full disclaimer that Fadzai and I met several years ago at a conference to do with content moderation and internet regulation. We are both based in London, and I'd say we're mates now. I don't wanna, I don't wanna
Fadzai Madzingira: I think so. I think so. I try. I don't
Ben Whitelaw: Well, I was worried, because I was putting myself out there and, you know, sometimes that can be unrequited, but I'm glad you think the same. We're very lucky to have you on the podcast. As I say, your role at Twitch is very multifaceted. First of all, you're leading the policy, outreach and education teams at Twitch, which many of our listeners will know about. Can you tell us a bit about how that role plays out and what kind of structure the team has at Twitch?
Fadzai Madzingira: Yeah. So, like you said, I lead a global team, and it's policy, outreach and education, which are the three large pillars. What that means is my team is responsible for safety policy development. We provide input into product roadmaps, we engage with community insights, which helps shape our safety communications, and we also lead our safety education program. It's a very strong part of our core value around being community first, and I'll speak on that a little bit. But before I do, I don't wanna make any assumptions, so I might wanna start with just: what is Twitch? For those who may not be familiar, Twitch is a live-streaming platform. In any given month, about 5 million streamers create content for over a hundred million users. I think what might surprise you is that while Twitch is often known as being gaming-adjacent, and as such is often viewed as having a very young viewership, actually over 70% of our communities sit between the ages of 18 and 35. We have a really diverse community. It's not just gaming, which is what we're historically known for; we have people who are DJs, musicians, political commentators. It is a wonderfully long list of individuals who are on the platform. What I love most about Twitch, and how it impacts our safety work, is that Twitch is centered around community building. I'm a lurker on the channels of several Black women gaming streamers and a lot of DJs, and they do a lot of long-form content. Long-form content is a little bit different because it requires someone to really engage with those who are in chat and the people who engage with their content. And so what happens is you start to build community around very specific, niche and diverse issues, areas and interests. What that means is that the focus for us is: how do we make sure we're building safety products and policies that inspire that community expression?

My team is a reflection of that, in that I don't just lead the team that owns the community guidelines. It's also a team that's focused on: how do we talk about our safety policies and practices? How do we explain that? What does our education look like if the center of what we do is about building community, explaining why we're doing what we're doing, and then iterating and trying to learn how to educate better? So rehabilitation is also a strong part of how we do that, and that's reflected across all of our safety teams: engineering, tooling, operations. And I engage quite regularly with our executive teams as well because of that safe communities focus. It also aligns very closely with my research work, which you're aware of: I focus a lot on what it means to build user empowerment within communities. It's a very different way from how I've historically done trust and safety. The focus historically has often been: how do we take down harm? This is very much about: how do we build safe communities? Which is really cool.
Ben Whitelaw: Really interesting. And I think that has been the trend over the two years or so that Mike and I have been doing the podcast; that shift has been happening every week, right? There are stories about how users are being given tools to manage their little corner of the internet more closely to the values that they have. You know, as you say, creating policies with users that allow them to express themselves in the way that they wish. So it's really interesting to hear your career follow the same route. I wanna talk a bit about the education piece in particular, because I think that's a pretty unique part of what you do, in terms of the roles and the hats that you wear. What are the best ways that you've found to educate users about policies?
Fadzai Madzingira: That is a great question, and it's an ongoing question. I am so grateful that I have such an excellent education lead on my team, and her entire focus is not just the presentation of education, but what safety literacy means. I think the way we like to think about it is that it depends on the audience, where the creator is in their journey, and making sure that any information is accessible, easy to get to, and easily understandable. And then it also means iterating. I realize I'm answering your question like a lawyer, but what that means is that it depends on the thing we're educating on, right? Say we're launching a brand-new product that no one's ever been able to engage with. When we first launched AutoMod, which is a tool that helps our creators and their moderators moderate their chat and their channel, it wasn't enough to just do a singular blog or a singular tweet. You provide a lot more detailed guides as well, where people can go and find information on how to use the tool and how to engage with it, and then we continuously update and monitor based on feedback we're getting from the community. Whereas some things are a small enough change that we're sort of like, hey, we've just updated this one thing around how moderation prompts work, or how our chat prompts work, so heads up, and that might just go out on socials. To your question, it also means we've had to be quite creative about how we know if something is working. And it's required, for the first time in my career, which is what's so cool about Twitch, really sitting down and being like, well, what does working mean? How do we think about actor behavior, and what does it mean to build a toxicity-free space? And when we say we want to rehabilitate, does rehabilitate mean they never do anything wrong again, or that they never do this specific thing wrong again? That's been really cool, and it continues to be an area we work on a lot. There are a lot of smart people who work in the space that I get to just sit in a room with and ask questions and engage with.
Ben Whitelaw: Amazing. That suspension piece you mentioned there: one of the metrics, I guess, that most platforms will think about, and many people in your type of role will think about, is the suspension rate. And that suspension system, who you ban, what you ban them for, and when they're allowed to return, is obviously a very interesting part of the trust and safety armory. You know, if you're Donald Trump, you get to come back if the president changes, but that's not always the case for your average user. You've done some work on this over the last 12 months or so. What made you re-look at this within the Twitch trust and safety structure, and what's new about it?
Fadzai Madzingira: Yeah. So how we've been thinking about enforcement has actually been an evolving process over the last three years. I think historically we had, and you know this, industry-wide it was very all-or-nothing: you lose your account, you're gone forever, that's it, goodbye. But policies become more nuanced and societal norms change, and there's also our recognition that our platform is somewhere people are making strong communities and friendships, finding people who are like them and interested in the same things. Cutting them off in that way is an incredibly punitive measure. So we really need to think about the severity of the harm, but also: in what instances is this someone who just didn't know or understand our rules? Someone who, once they get a warning or a singular suspension, starts to realize, oh, okay, I think I understand now, I'm not gonna do that again. So it's very dependent on severity and context. We made that initial change a little over a year and a half ago, when we introduced escalated consequences and a more nuanced warning system. And then in February this year, as part of modernizing our efforts, we focused our enforcement restrictions specifically where the offense occurred. So rather than someone losing access to the entire platform, we are putting the suspension where the offense happens. If a user violates our community guidelines in streaming, they will receive a streaming suspension, but it still means they can chat and engage with some of their favorite creators and streamers. Again, it will depend on the severity of the harm and so on, and we still take our most severe violations quite seriously; in those cases you'll lose complete access.

But we are really honing in on what it means to educate and rehabilitate. And the statistics tell us quite firmly that the majority of our community have actually never had a suspension, and those who've had a suspension or a warning are less likely to repeat the same thing. We've seen similar evidence with our chat prompts, which are a bit like a nudge. I know there's tons of research, I think Roblox and Meta have done it over the years, indicating that if you tell someone, hey, it looks like what you're about to post might violate, people are more likely to update or change what they were about to say. So it's pretty clear that the majority of our users are good faith. What does it look like for us to build an enforcement system that acknowledges that? I like to frame it as: what does it look like to have a system that focuses on really helping community building, versus how do we get the bad guys? Because we've gotten really good at getting the bad guys, but we haven't done enough on the other side, and now, over the last three to five years, we've really focused on making sure we're actually building for the majority of our users.
Ben Whitelaw: Yeah, it's a really nice approach, actually. And something I talked a lot about when I used to work, in a very different situation and at a very different scale, with newsrooms and their commenting systems. If you're just about removing people, you incentivize the wrong behaviors, whereas, like you say, you want to create a system where people leave their best contributions and add to the environment that you're already making, which is really interesting. So what you're saying is that previously people would've been banned completely, but now they're getting banned from either the streaming or the chatting, maybe for 24 hours or a week, and then they're allowed back in.
Fadzai Madzingira: Yeah. And again, it depends on the severity or the nature of the violation. The point on severity is really important, mostly because for our most severe and egregious harms, you lose access. But take something where, and this happens a lot, someone's come from a different platform with very different rules, so they're not as familiar with ours. They might accidentally break a rule because they didn't realize it. They receive a warning from us, and it's a chat suspension, so they can't chat, but they can still stream. But now they've also learned: oh, okay, got it, I'm not supposed to say that in chat. And we found that that actually helps keep people on the platform and connected to their communities. So it was a very recent, really cool change, and I think it's focused on the right thing.
Ben Whitelaw: Yeah, definitely. I mean, it sounds like a very smart set of improvements. And, you know, it's great to hear from people like yourself working at the cutting edge of trust and safety, making changes under the hood. I think that's really what we wanna do more of here on Ctrl-Alt-Speech. We're gonna jump into today's stories, Fadzai. We've got a range of reads to get through. Before we do so, a reminder to our listeners to rate, review, like and subscribe wherever you get your podcasts. If you enjoy Ctrl-Alt-Speech on Spotify, you know what to do. If you're a YouTuber, you know what to do. If you're on Apple or any other podcast app, don't be the person who doesn't rate and review us or leave a comment. We need you. This is your call to arms. So yeah, thanks for that great intro; lovely to hear about your backstory. It was kind of inevitable, really, Fadzai, that we were going to start with a company that you previously worked at,
Fadzai Madzingira: Yeah.
Ben Whitelaw: it just depended which one. And we're gonna start with Meta, and the Oversight Board in particular. A story that came out right after we recorded last week, and which you were keen to talk about, is the news from Casey Newton's Platformer, he co-hosted the podcast a few weeks ago, that the Oversight Board might have their funding reduced when the next renewal comes round. Talk to us about this story and why you found it interesting.
Fadzai Madzingira: Yeah. So, I mean, it's Meta now, but when I was there it was Facebook. The reason I found the story so interesting is that I was part of the teams right at the beginning when we were ideating around the Oversight Board: what its role would be, what we were aiming to do. I view it as this great experiment of, well, what does oversight mean within the realm of trust and safety? I did a lot of the workshops, going with the Oversight Board team to different parts of the world to talk about: these are our guidelines, how would you do this? It was probably the most I'd ever engaged with people who weren't in tech or in the industry while talking through, very transparently, this is how we do this. And then engaging with the first set of members just before I left the company and thinking, wow, these are really smart people who are committing their time to do this. So I've watched the journey over the last couple of years, watching how they've shifted. And when I saw this story I thought, what an interesting point to be at. Because I think the Oversight Board has actually done such a great job of being able to talk about themselves as independent, despite all the concerns at the beginning around funding. And I'm wondering, well, what does it mean for the Oversight Board if their funding is ending, or being adjusted, rather, which I think is how the story puts it, and they're debating different options, what is available to them, what it means for them to work with other platforms. So I thought it was interesting because I wondered what it means for us to think about oversight
Ben Whitelaw: Mm-hmm.
Fadzai Madzingira: in the industry. And I was really interested in your thoughts, 'cause I know you and Mike talk about it a lot.
Ben Whitelaw: Yeah, I mean, it's definitely a story we've followed over time. We've actually had one of the Oversight Board members, Kenji Yoshino, co-host the podcast; we'll share the link to that in the show notes. Fascinating conversation. He was very open about the board's challenges and the work that it's done. And then we also had a spotlight episode with two other members of the Oversight Board, Paolo Carozza,
Fadzai Madzingira: Great episode.
Ben Whitelaw: Yeah, and Julie as well. So that is a really interesting arc, I think, as you say, over time. I'm not that surprised by the news, I'll be honest. I think there's something kind of inevitable about the time we're in, the political moment that we find ourselves in, and the work that we know goes on behind the scenes at large tech companies to ingratiate themselves with particular presidential regimes. So I wasn't particularly surprised by this story. I've joked in the past about, you know, when is it going to happen? There'll be less funding almost certainly, but will they have the funding removed completely? The surprising thing for me, I guess, Fadzai, is how early this story has been put out into the ether. The funding actually runs for another couple of years, right? I think 2028 is when the next round happens, so we're miles in advance of that. And that's normally a bad sign when we're talking about media stories: somebody is preparing the ground for something, would be my guess, and I don't know any more than that. So what is it that worries you about the shift? If the board disappears, what would you see happening as a result?
Fadzai Madzingira: Maybe not worries, so much as I think it might be an indicator, like you've said, of a shift. Looking at the time we're in right now, there has historically always been this desire for, we should have oversight from third parties. I suppose the question then is what it looks like to have oversight in different ways, or what consulting or engagement looks like. At Twitch, for example, we have a safety advisory collective, which is made up entirely of our creators. Those are individuals who consult on our policies and products, because we wanna make sure we're hitting the right note with our communities. That's not to say there isn't still room for expertise, but I wonder if maybe the shift is more towards proactive consulting versus oversight. I don't know. But I think it's a significant shift for the industry.
Ben Whitelaw: Yeah. And we've talked a bit on the podcast as well about the role of these advisory councils. You have expert councils: OpenAI has one for health experts. There are parent councils: we've mentioned the Roblox one, relatively new and working behind the scenes at the moment. And then there are teen councils; the most notable is TikTok's. Those teens are advising on the way that they use the platform, what they would like to see, and the way the policies are framed and communicated, and some of the members are actually put in front of other key stakeholders and regulators to talk about the way that they use the platform. So there are multiple types of council and multiple ways that they're used. My concern is that they're not necessarily as transparent or independent as perhaps the Oversight Board is, or has purported to be. You know, that's one of the big things that the members talk about: we are independently appointed to the board, we publish all of our judgments, everyone can go and read them. It doesn't matter what kind of platform you work for; you can take that information and apply it in your own work, even though you're not working at Meta. That isn't the case for these other councils, right?
Fadzai Madzingira: And I think that's absolutely right. I wonder if this is the point where we start talking about transparency in those consultation realms. Because I do think there's something about oversight that implies that this is the only place you really require transparency; it's why regulators have to be transparent, and why oversight bodies have to be transparent. It's always been weird to me, and I think you and I have talked about this at other conferences, that we are not as interested in, well, what was the process? From a consulting perspective, or from an input perspective: what did the experts tell you? All of us internally, my team included, are engaging with experts all the time, and I wonder what it would look like for us to be more transparent in those engagements, and whether those learnings would actually be more helpful for smaller companies, who could take a learning and use it. Whereas with companies like Meta, how do smaller companies even start to replicate some of the decisions or discussions that have come out of the Oversight Board? So yeah, that's a really good push. I would love more transparency around who you are talking to and what they tell you.
Ben Whitelaw: Yeah, exactly. And what is the relationship that these experts, or these parents, have with the platform? There's a really fascinating interview in the documentary Molly versus the Machines, which we talked a bit about recently. This is the documentary about Molly Russell, the teen who committed suicide in the UK, seven or eight years ago now. One of the interviews is with an expert on mental health who was on Meta's advisory council. And she mentions the way that the company used them as a sounding board but gave them very little notice about when changes would be made, and didn't really take their advice or recommendations on board. It was a fairly high-level kind of engagement. It's good that that's now in the public domain, but knowing it in real time would have been more helpful. You were involved in the early stages of the Oversight Board, so I'm gonna put you on the spot: do you think there's scope for an oversight board for the whole industry? Is there a structure that was built for Meta which could be applied to all major platforms? Can you see other platforms submitting decisions or appeals in the way that Meta does to the Oversight Board?
Fadzai Madzingira: This is gonna sound super facetious, but isn't that what regulation is? Um,
Ben Whitelaw: Yeah.
Fadzai Madzingira: And maybe, again, I'm not trying to be. But I think at some point the Oversight Board had really talked about, well, what does it look like for us to do this for the internet? And I'm like, wow, that is a broad scope. Maybe the focus, and the question we wanna ask, is: what is the problem we believe an oversight board would solve for, and what would that look like at scale? On our side we were very much like, what we wanna know is: do our communities feel safe? Safety is a very personal thing, so we need to be able to empower them, so we need to design with them, so we have to have a safety advisory collective. That was the working-backwards way we thought about it. I think that's the question: if we feel like oversight is needed, what is it solving for, and what's the problem we're trying to deal with? And then we can ask whether an oversight board is the right way to do it.
Ben Whitelaw: Yeah. And I think the oversight framing is maybe something that platforms in this political moment wouldn't necessarily want. But the kind of product-focused council and consulting and advisory work, if you take into account the recent cases in the US that have focused on product design and product safety, you can definitely see, I think, how that trend may continue. But yeah. Okay. So no giant oversight board of oversight boards,
Fadzai Madzingira: It sounds like a giant overlord, doesn't it? Um, yeah. I mean, I would love to see what that would look like. How do you pick cases? And does it matter if it's middleware or B2C? Oh my gosh, my head is spinning with the bureaucracy already.
Ben Whitelaw: I mean, it would be. One of the criticisms of the board is that they haven't taken on as many cases as many people expected, particularly given the cost of the board. All these experts are expensive. They are some of the best legal scholars, human rights experts, journalists and editors in their fields, in their respective countries. And then you have a case system behind that: a bunch of people who help sift through the cases that come in. So that's been one of the criticisms: the bang for buck hasn't been particularly good. But again, as you say, you have to really work out what you're judging the board on. And the report that they put out a few months ago suggests that, from their perspective, they've done as much as they possibly could in terms of making Meta change its processes.
Fadzai Madzingira: Right. And arguably, and it's not untrue, when they've been able to change how Meta has engaged with something or done something, it impacts how smaller companies or the wider industry might see that issue. From that perspective it's arguably incredibly impactful. I was really happy to hear that they're starting to think about actor-level and behavioral-level policies, 'cause I think as an industry we've been saying, this is where we need to focus. And you said something really important: they're telling us this now, and the funding runs until 2028. So that's a good point; it means there's still a lot of engagement and work and product that I assume will continue to come out of the board. All that to say, I think what they do, despite this news, is still gonna be something that the industry looks at, which I think is really interesting. But I wonder if we will start to look at it against the backdrop of, well, then what else do we want? Do we want more consultation-type bodies instead?
Ben WhitelawYeah. Maybe this story will force that kind of thinking amongst people working in platforms and in the wider industry. It's like, if the Oversight Board doesn't exist, what do we want the next version of that to be, if anything? Yeah, super interesting. Let's go on now to another story we both found very interesting this week. And this is, on the face of it, just another child safety story. It's another piece by The New York Times about the online life of a teenager, and we've talked about many of those over the last few months. This one is a little bit different. The piece is called What Teens Are Doing With Those Role-Playing Chatbots, and it looks a bit like one of those kind of sad stories of a teenager who has been affected, uh, in their mental health or worse by the use of a platform and over-engagement with it. And just to kind of give you a sense of what that looks like: the headline is pretty vague, there are these black and white pictures of a teenager, and, you know, you start reading the piece and, I dunno if you found this, Fadzai, but the sense is that something is going to go wrong,
Fadzai MadzingiraRight, exactly.
Ben WhitelawIt's kind of a bit disconcerting. Why I think it's worth talking about is actually what the piece is doing and how it was written and how it was researched. So I just wanna talk a bit about that. The piece focuses on a guy called Quentin, who is 15 now, but was 13 a couple of years ago when the reporter met him. And he talks about how he uses these character chatbots, these social companions that we've talked about a lot. Some of the big companies that do them include Character.AI, who face lawsuits over the way that they've kind of not kept teens safe online, but also smaller, certainly less regulated apps like Talkie. Uh, there's one called PolyBuzz, and Janitor AI as well, which we talked about at the top of the episode. And all of these very immersive chatbots are now apparently what teens are using to keep themselves busy, to fill the time in parts of their day. And this guy, Quentin, is using it for kind of an hour after school, uses it for four or five hours on weekends. And he's able to get through the kind of age verification of Character.AI, all the usual kind of japes that a teenager gets up to in using an online platform. His friends use it, and all of this is kind of laid out in the first half of the piece. Then there is this kind of surprise paragraph, like halfway through, where the reporter says, you know, I've spent the last year and a bit speaking to Quentin and his friends via Discord as a way of trying to find out how and why they use these social chatbots. That's really the most fascinating part about this piece for me, Fadzai. It's like we have a reporter who has actually spent time talking to end users and figuring out what role this technology plays in their lives. And yes, it's The New York Times; yes, they're very well funded; they have more time than the average outlet would be able to invest in that. But as a starting point,
I think that's an amazing thing. Before we get into kind of like, what was interesting about Quentin's experience, like, was that surprising to you, the fact that that happened?
Fadzai MadzingiraDo you know what? It isn't until you said it just now that I was like, oh yes, he did say that. But, so I think you're right, because I think we spend a lot of time in the space really focused on, what did the platform do, and what are the issues around the platform, and what are the harms, and so on. And I don't think we spend as much time thinking about, well, what did end users say? And I mentioned at the beginning that when you think about communities, the more diverse communities, safety is gonna look different, right? And so, like, really asking them: well, what tooling will help you here? What products will help you here? Like, how do we design with you? And so, like, actually you're right, it's actually quite novel to think, like, okay, someone actually sat down with someone, not just once, but over an incredibly long amount of time, to really understand, like, well, how are you engaging? And so on. And I love your point about the gray photos. It was very cinematic, but also very like, what's happening? And so we'll get into Quentin's experience, but I'm so glad I wasn't the only one who noticed the photos. But yeah, it's really cool to hear that community engagement, 'cause that's a lot of what my team does. It's good to think, oh, so reporters are doing this now too, which is really great.
Ben WhitelawDefinitely. And I've thought a lot about, and talked about with Mike, the way that media portray online safety issues, and the fact that the general public's understanding of safety on platforms is driven by media coverage. You could argue there's a direct line between the way that media has covered platforms and the almost never-ending number of social media bans that are cropping up every single week. We know that also platforms care a lot about what media says about them. You know, they respond very quickly when a reporter comes to them with an issue or having found a certain post. You know, you might have been on the receiving end of one of those things yourself. Um, no comment. But, you know, the way that these stories are told is important. I think that's what we try and kind of communicate on the podcast. In terms of Quentin's experience, and again, he is just one teenager, so we shouldn't kind of overgeneralize from this, but it feels like this piece is a kind of average experience of a teenager on these social chatbots, which I think is very rarely presented. Would you say the same? Yeah. And just to give you a sense of why I think that: you know, he uses the chatbots extensively, but over the course of a couple of years, as he kind of comes out of his shell, as he gets a girlfriend, as he finds other hobbies, as he actually does some therapy, coincidentally, his circumstances change, and he kind of removes himself from talking to this chatbot as much as he would normally do. And his life essentially gets better, and he stops using these chatbots in the same way. And I found that very, very interesting. Like, yes, they are addictive products. We know that there are teens who have committed suicide as a result of developing very, very intense social bonds with them.
And some of the people in the piece admit that they do the same, but it's not as bleak a picture as I think many other pieces or coverage make out.
Fadzai MadzingiraI would agree. I think, and so there's a couple of things. I think we're all aligned on the importance of how do we prioritize mental health in safety and in platform engagement, and, like, what does that look like for young people especially. I think what I found really interesting about Quentin's experience is that the reasons in the beginning that he was using the chatbot were very much like, hey, it's always available, and I'm bored and I'm by myself, so this is easy to engage with. But he and his friends also seemed to have quite a clear distinction on the role of the chatbot in their lives. And they called out, like, yeah, for some more vulnerable teens that might be really hard, but I know, like, this is just like for games. And then the cutest part, and not to spoil the ending for listeners, is then he gets his first girlfriend and then he starts using it less. And so, like, that movement into real-life relationship and engagement, and there's something to be argued. Oh gosh, my friends are gonna laugh at me 'cause I always bring up Esther Perel, but Esther Perel has argued about what is the role of chatbots in supplementing our understanding of how to engage with others, rather than considering them as replacements or avoiding them completely, right? And I think there's something really interesting about what does it mean to build a platform that prioritizes mental health, that prioritizes community engagement, and encourages someone to be like, this is a good place for you to test what you are gonna go do in the real world. And I think that's essentially what Quentin did: he used it as a sounding board. And it was fun for a little while, he got his first girlfriend, and suddenly realized, like, actually I'm on this for less than three hours in a week, and, oh, I barely log on anymore.
And I think that it says something about, if we were to co-design this with that in mind, what would that look like? And I think that's really cool, actually.
Ben WhitelawYeah. And I think it's a really good point. I think the reporter, Kashmir Hill, does a good job of that in the piece. But she also is in the comments underneath the article. And because I'm a former kind of comments editor.
Fadzai MadzingiraYou are.
Ben WhitelawI obviously read the comments, and she's in the comments talking to readers, and there's a really interesting exchange about how social chatbots might be the fast food of intimacy in the future, which I think kind of speaks to what you're saying, right? Like, you eat fast food from time to time, but, you know, it's not always good for you. It's cheap, it's quick, it's on demand. It is regulated to some degree, differently in different countries. But actually, you're not banning fast food completely. You can still go and get fried chicken or Maccy's, or other fast foods are available, whenever you want. So that was a kind of helpful framing for me, actually: this is fast food that does a very specific job. It's not gonna be the thing that you want people to eat every day, but, you know, there's a time and a place for it.
Fadzai MadzingiraYeah. Um, I mean, uh, fast food, that's quite a comparison and I think I need to sit with that. Um, but I do think there's something that we don't talk about a lot, and I think some of the other stories talk about this, which is, to your point about platforms and these places being a space for fast-tracking learning about intimacy and so on: like, well, if I'm learning about how to build and foster community and connection, this is actually helpful in how I engage out in the real world. We have a lot of Twitch creators who meet each other on the platform and then organize real-life meetups, um, and will come back to the company and tell us about, like, a creator meetup. I had a colleague who met her boyfriend on Twitch, which I remember asking, like, how? And it was like, they started following each other's channels, and they started talking, and then they met in real life, and then that's what happened. And so, like, there is something about acknowledging, like, hey, if platforms are prioritizing mental health and are prioritizing building connection and building community, and they're empowering people to build that community, it could actually be quite helpful. And it could be especially helpful if you are a young person who has the right resources and framing around them. And it could be really helpful for, I know you and Mike have talked about this, like, certain vulnerable young people who are either differently abled or of color or queer, um, where they find their communities online and are able to foster a sense of identity and expression because they've found these places. I think there's something to be said that this story seems to indicate, like, okay, I had a safe space, I also understood what this thing was in my life, I had access to other resources, I was in therapy separately. And so by the time I was connecting with people and building my relationships, this thing wasn't a thing that stopped me.
It was a thing where I practiced beforehand and now could do this for real. And so, like, yeah, when I got to the end, I was like, oh, it's a love story. Um, and it just took a long time to get there, but it was quite beautiful.
Ben WhitelawYeah. And you don't get many of those on Ctrl-Alt-Speech, I'll say that. But yeah, it does feel like a kind of average experience for probably more teens than most. And again, that's just my sense, having read a lot of these stories and pored over a lot of the coverage. Let's move on, talking of the average experience of teens. We're gonna talk through a number of stories that I think, again, speak to this idea that, you know, maybe teenagers aren't doing so badly. And we're gonna start with the piece from Tech Policy Press that you flagged, about something that Mike and I talked about last week: the Australia social media ban. Some smart analysis from people who are kind of in the weeds on tech regulation.
Fadzai MadzingiraYeah. So I thought it was really interesting because, I mean, I don't think it's anything new. I think it's very much like, oh, this is a lot harder than might have been thought at the beginning. There are no real early lessons right now. But what was interesting for me at least is the story was really like, okay, the rest of the world is watching really carefully what's happening. I mean, the UK is doing a consultation right now on a range of options. Indonesia is also thinking through an approach similar to Australia's. And then I think for me it was just how clearly the article said there's no such thing as one size fits all, and we really need to have a more nuanced approach. I think when you had Vaishnavi on the podcast a couple of months ago, she talked about it as Swiss cheese, which has never left me as, like, an approach. Like, there's no silver bullet; it's a little bit like Swiss cheese. And I think that's what we're bumping into: like, actually, doing this at scale is difficult. So we all need to think about what we're trying to achieve.
Ben WhitelawYeah, definitely. Uh, and Vaishnavi is a really, very smart, switched-on child safety expert. I had a chat with her about kind of all things child safety, youth safety, and age-appropriate design a couple of months ago now. We'll include that link in the show notes. I think you're right, Fadzai, it's a very timely reminder that that episode exists. Yeah, I think this kind of analysis is a great read if you've been catching up on the progress of the Australia social media ban. And I think it's, you know, a reminder, like you say, that we need to be clear as to what it is we want as the outcome. You know, as we've mentioned with the New York Times piece, there are gonna be detrimental effects to social media bans, and they're not always gonna work as we hope. You know, age verification is a tricky thing to be able to do. It's not consistent. It's technically very difficult. So, you know, where does that leave the children that we're, you know, trying to keep safe?
Fadzai MadzingiraYeah. Because some of the stories, and again, I liked how this one was written, 'cause often when we talk about age assurance at scale, it turns into very much a regulator-versus-platform thing. And I like to remind people, I'm like, I think we're all committed to wanting young people to be safe, and we're all committed to trying to do whatever it is we're aiming to do. Like, if that's our aim, we're all working towards the same goal. I do think it's really interesting, just some of the conversations and the framing. I think the story you flagged, like, literally right before, um, around Greece. When you flagged that, I was like, oh wow, Greece is now jumping on the bandwagon. But actually it was interesting, even hearing the government spokesperson, who was quite nuanced in the fact that, yes, they're looking at a social media ban for under-fifteens, but actually their concerns are around, like, addiction and mental health, and their concerns are more just like, uh, how do you do this across Europe? Like, it's not helpful for us to do this on our own here, or for the Netherlands to do this on its own there. Like, what does it mean for the EU to have, like, a coordinated effort? And I was quite appreciative to see that, because I do think there's something about how we are talking about these concerns that really requires just a reminder: like, we are all on the same page. So, like, what are we doing to fix our concerns?
Ben WhitelawYeah, I agree. I mean, this Greece social media ban story, which came out this week, is the latest in a long line of countries who are looking to ban social media. Slightly different in that they've gone for an under-fifteens age limit; most countries are going for under-sixteens. And they're going to, as of next January the first, force age verification on the major platforms. From the coverage I've read, it's not exactly clear what those major platforms are. One Greek outlet I read said that it was Facebook, Instagram, TikTok, and Snap. The New York Times reached out to YouTube, but that wasn't mentioned in the Greek report. So there are gonna be four platforms, I think, that a majority of Greek teens use, and they're going to have to implement age verification in Greece, which is a new thing. But yeah, there's a much wider push in some countries across the EU to consolidate that ban, which is a massive, massive kind of shift, I think, having a cross-territory ban approach. Very difficult to do, I think, in the EU, because of the differences in politics and social values. But maybe it's one for next year.
Fadzai MadzingiraI wonder if, and I think, just to go back to a coordinated approach, only because I don't think it should just be a coordinated approach on should there be a social media ban. I think there has to be a coordinated approach on what does our multi-pronged approach look like to keeping young people safe online. And I think part of what we are seeing with the Tech Policy Press piece and all of these other pieces, 'cause I really wanna make sure that it doesn't sound like it's like, aha, it didn't work. And I'm like, no, that's not it. I think it's, what are we trying to solve for, so that we have that multi-pronged approach? And so, like, what does education look like, and what does safety literacy look like? And I mentioned the family tech cycle report. But I just think that we've sometimes missed the fact that there really isn't a silver bullet, and I want us to have more of those conversations about how we need a coordinated, multi-pronged approach.
Ben WhitelawWe will come onto the report in a second, 'cause I think that it starts to offer some of the ways forward. The Greek thing is really interesting to me. I hadn't really been following their attempts over the last year to really crack down on screen time. You know, they're very, very forward-thinking, I guess, is one way of talking about it, in terms of children using screens. So in 2024 they banned phones in schools, so there was a much stricter enforcement of phones in classrooms. If you took your phone out and you were seen by a teacher, you were suspended for one day. So very, very strict. And then last year they expanded that to allow parents to really restrict screen time via an app called the Kids' Wallet, which is a kind of Greece-wide app that is linked to the kind of government tax ID, which means that you can essentially get your kid to download it, you can link it to your government app that you pay your taxes and do your kind of admin through, and at that point you can then allow and manage your child's screen usage and app usage in a way that, I guess, takes the responsibility away from some of the platforms. So the new ban that's coming in next year is the kind of third stage of that policy, that strategy, which again is surprising for a relatively small country within the much wider EU.
Fadzai MadzingiraYeah. I think maybe those are the experiments to watch, 'cause you shared that one about the Irish, um, was it an Irish village that was working on being phone-free? And I wonder, so I have such a bias towards communitarian programs and goals, but I think there's something so beautiful about what the village was doing, because their focus was, instead of just focusing on removing phones, we actually need to build a shared reality of what we're solving for, and then we need to educate parents, guardians, and children as to why we're doing what we're doing. And so they ran workshops, they had podcasts that they shared with parents, they talked through how to keep kids safe, and then they started instituting, like, phone-free zones and times and so on and so forth. And then they found that to actually be far more successful from a wellbeing perspective. And it was because it was a community effort, and they were like, it takes a village, so we have to do it as a village. And so maybe it would be interesting to think about these experiments in smaller countries and smaller communities, and what does that look like? And I think it also pushes the, this is why it's a multi-pronged approach. Like, if even these smaller communities with shared values are like, actually, it still needs to be more multi-pronged how we do this, then a ban isn't always just the way to go.
Ben WhitelawYeah, let's talk about the kind of report that you discovered this week and that you found very interesting, 'cause I think that speaks to that ethos, right?
Fadzai MadzingiraYeah. So it's, um, the family tech cycle report, and what I really liked about it is it gives really practical advice around what it means to design with children and parents in mind. And so they think about where parents feel overwhelmed and exhausted, and then how do you design in a way that brings children along the way. I think what I really liked about it was they have really practical callouts, like, actually, parents want more specificity when it comes to what the kid-friendly settings are. But also, something I thought was really fun is that they talked about, well, can we create materials that are fun for digital literacy and can be delivered in a multimodal way? So not just articles, but can we have videos, can we have things that we can share with our kids and do together? And I thought that that was really cool, because it talks about what it means to foster safety literacy in a family, and not just with parents. And then some of the other great suggestions that I quickly wrote down were, for platform designers specifically, they talk about what it means to have dedicated onboarding for parents and for people who are under 18, and what it means to teach children how to self-regulate. They make a point about different developmental stages. So if someone opens an account when they're 14, what does it mean that when they turn 16 and you decide actually they should have access to these features that they might not have had before, how do you show them, like, this is a good thing, you've changed, you've developed to a certain point, so now you can use this thing? And, like, what does it mean to teach them how to regulate in that way? And that empowers a child in how they engage on the platform. Last thing was that they just were like, safety literacy is ongoing. It doesn't stop even when someone turns 18.
And that we have to acknowledge the societal challenges around mental wellbeing and loneliness and anxiety, and how do you talk about that out loud with your child? And I just thought it's so well placed. And I've had a couple of conversations with other colleagues who think about that wellbeing-by-design or safety-by-design framing for children, who have said the same thing. And I just loved the article.
Ben WhitelawYeah. So are there things that you might see Twitch taking from this report and building off the back of it?
Fadzai MadzingiraYeah, I definitely, I mean, there's things similar that we do. So we have creator camps, where we think a lot about safety education. So when someone's becoming a creator on the platform, what safety education do we need to give them? We also have hubs for parents and guardians as well that allow them to engage, especially recognizing that a parent might not be as familiar with Twitch. So it even goes into terminology, like, this is what this means, and so when you see this, you should make sure to switch this on, or engage, or have this conversation with your child. Um, and I think what I love most about our hub for parents is we have a section where it says, like, regardless of everything above, here are conversations you should have with your child: about privacy, about scams, about, like, you know. And it just gives wording and framing and vocabulary to a parent who might just not have known or thought to have that conversation with their child prior to being on the platform. I think that it has to be something that continues. I'm always concerned when safety literacy is like, well, as long as we've taken care of everyone who's under 18, we're done, where I'm like, actually, it's for everyone who's on the internet. Um, so my education lead thinks a lot about where's a creator in their journey, and how do you write education depending on that journey. And so that's something we think about as well, depending on the community and the creator, and why user empowerment tools matter so much.
Ben WhitelawYeah, great. That's really helpful. Um, and also the line that stood out for me really was the line that said that families are absorbing cognitive and emotional labor that product design could meaningfully reduce. And after all that's kind of gone on over the last few weeks, the big Meta trials in New Mexico and California, you know, the big focus in the EU on product design and addictive design, there is a lot to be done within the product teams in particular, I think, to help parents help their children, which this report is a very helpful reminder of. And again, a nice kind of upbeat end to Ctrl-Alt-Speech. You've kind of introduced a hopeful tone to Ctrl-Alt-Speech, which I'm not saying that Mike doesn't do, but it has been noted. It has been noted, and I wonder if our listeners will think the same.
Fadzai MadzingiraOh, wonderful. I am always here to bring hope. I think we might have to do an episode where it's Mike and I.
Ben WhitelawYeah. Then we'll see who the real, kind of like, who the real, uh, you know, stickler is.
Fadzai MadzingiraMike.
Ben WhitelawYeah. He's a good guy. He's actually back in the hot seat next week, so he will be joining me again. And yeah, Fadzai, just to say thank you so much for your time this week. I really appreciate you joining us here on Ctrl-Alt-Speech. It's been wonderful to have you.
Fadzai MadzingiraThank you so much for asking me to do this.
Ben WhitelawGood, glad it was fun. Um, and listeners, yeah, enjoy this week's episode. Go and read and subscribe to all the outlets we've spoken about this week: The New York Times, Tech Policy Press, Platformer, Reuters, and others. We can't do the episode without them. And yeah, share the episode, go and like, review, and subscribe to Ctrl-Alt-Speech wherever you get your podcasts, and we'll see you next week.
SpeakerThanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.