Ctrl-Alt-Speech

Panic! At The Discord

Ben Whitelaw & Blake Hallinan Season 1 Episode 91

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Dr Blake Hallinan, Professor of Platform Studies in the Department of Media & Journalism Studies at Aarhus University. Together, they discuss:

Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Blake, with all the TikTok hoo-ha over in the US, we've seen, and you would've been familiar with, the rise of some of these TikTok upstart apps that kind of look and feel like the great Chinese original. And one of them that's been advertised to me incessantly on various platforms is Deepstash, which badges itself as an intellectual TikTok, if ever there's such a thing. It gives you quotes from books and, you know, tidbits that let you pretend you've actually read the original source. And I downloaded it and started signing up, and one of the prompts on the onboarding journey, as I entered my details, was "What's your main focus here?" And so I felt that was a great way for you to introduce yourself to Ctrl-Alt-Speech listeners today. What's your main focus here?

Blake Hallinan:

My focus is on creator governance: basically, how content creators, influencers, live streamers and their communities of supporters and haters shape the experiences that we have on digital platforms, and also the platforms themselves. What kind of role can they play in making our information environment better, worse, or something we haven't figured out yet?

Ben Whitelaw:

Amazing. I mean, that's almost entirely why you are here today. I'm very, very glad to have you on the podcast. My main focus today, as the person in the chair of the podcast, is to just play Mike. I'm gonna play Mike: I'm gonna talk about Section 230, I'm gonna flag up great articles on Techdirt, I'm gonna make sure that he comes back and feels like he's been represented on the podcast in his absence. So,

Blake Hallinan:

I feel like you should, uh, insert the meme sound of an eagle crying out its freedom.

Ben Whitelaw:

Exactly that. He'll be very pleased with that. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's February the 12th, 2026, and this week we're talking about platform product changes to get ahead of regulation, regulation that calls for platform product changes, and the holes in both Claude's constitution and Meta's political ad policy. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and this week Mike is away, I'm not entirely sure where, but I'm very lucky to be joined by Dr Blake Hallinan, Professor of Platform Studies in the Department of Media and Journalism Studies at Aarhus University in Denmark. Welcome, Blake. How are you doing?

Blake Hallinan:

I am doing great. I'm very happy to be here.

Ben Whitelaw:

Excellent. You sound like you're, you know, not entirely sure you are.

Blake Hallinan:

No, no, I think I just have the same kind of stereotypical response that I hear a lot of other guests on podcasts go, like, longtime listener, first time caller, harkening back to the talk radio days. And I'm feeling that right now, because I am a longtime listener and first-time joiner.

Ben Whitelaw:

Well, very, very glad to hear both of those things. I've read your work at length over the last few years since I've been writing Everything in Moderation. Your research has cropped up in the work that I've done, and I've always wanted to have the chance to talk to you a bit about it, so having you on the podcast today is a fantastic thing for me as well. Before we dive into today's stories, I want to talk a bit about that work, 'cause it's incredibly varied and rich and super interesting. You've looked at a whole range of topics, including typologies of social media rituals and the nature of authenticity on Instagram. And a piece of research I really enjoyed was on the values inherent in social media platform policies, looking at what you can discern from how platforms write their platform policies. And I wanted to ask you a bit about that last piece of research. Like, how did you come to look at what are ostensibly very boring, quite dry documents, and what did you learn about them?

Blake Hallinan:

So there's a subfield of academic research concerned with infrastructure, infrastructure studies, and the lore that I hear about it is that it came out of a group of people who described themselves as the Society for the Boring. And, you know, I wasn't part of that society, but spiritually I feel like I belong there. But in terms of platform governance research, I think things like policy documents become really attractive sites to study because they're one of the few things that's available. And they have been this kind of public display, to users, to regulators who might want to step in, to advertisers, to perform what is happening on the platform, what its commitments are, what those look like in practice. And so, like lots of political scholarship concerned with constitutions, laws, these institutional documents, platform governance research also kind of gravitates towards them.

Ben Whitelaw:

And you found some commonalities between a lot of the platform policy documents, right? You know, you looked at a handful of them and you found some shared kind of messaging.

Blake Hallinan:

Yeah, yeah. So it was definitely part of this comparative investigation, and I want to make sure I shout out my co-authors on this one, Rebecca Scharlach, who led it, and Limor Shifman. But we looked at Facebook, Instagram, YouTube, Twitter, and TikTok, and we focused on their main policies: privacy, terms of service, community guidelines. And we looked not just at the details of what they were doing, but at how they were justifying any particular suggestion, encouragement or ban that they had. And so we talked about these as values, and we were surprised by how similar they were across platforms. So things like promoting expression, building community, protecting safety, providing choice, and constantly improving the experience of all of those things were these really common refrains. And I think we see those values and terms really dominate public conversation and concern with social media now. Not necessarily buying into the fact that platforms are accurately promoting them by any means, but that's kind of what's at stake here. It defines the conversation.

Ben Whitelaw:

And did you figure out, or get to a sense of, why there were such commonalities between them, or where they originated? Was there any sense of that?

Blake Hallinan:

Yeah, so this particular study didn't look at the kind of historical orientation, but some of the work that we've done since then has approached this as part of the professionalization of trust and safety, where it moves from the one person at the company, who, you know, was coming out of a law focus, to becoming a broader field that talks to each other, that circulates documents, that has online material to create a shared professional identity to help navigate all of these circumstances. And I think we see this professionalization reflected in all of the commonalities: there's some consensus among social media platforms about what the kinds of things you need to adopt are, how you might go about addressing them, and the kind of common concerns there.

Ben Whitelaw:

Yeah, and how they've changed over time as well. One thing that I was particularly drawn to was your work on content creators. Part of my day job is looking at how the media is shifting as a result of this rise in content creators, who obviously have a lot of sway online. They have a lot of followers, they have a lot of power; people are drawn to them for the way that they act and share experiences. And you've looked at how content creators engage with content moderation decisions, which I think is a fascinating topic. We've talked a bit about Dylan Page, the TikTok "News Daddy", as his alias goes on TikTok, and how he has been engaging with this idea of being shadow-banned slash censored in recent weeks. What ways have you seen creators engage in platform governance, and in understanding what it means to be moderated, or to be at the behest of these platform policies, and how are you seeing that change over time?

Blake Hallinan:

Yeah, so the work that I've done has found two main approaches for creators to participate in platform governance or content moderation. One strategy is horizontal, where creators are speaking to other creators, as peers, as colleagues, as people operating in the same space, and creating norms about how they should or should not behave. So you see this particularly lately, it's been a hot topic in Denmark, but maybe in the US as well, around gambling and financial advertisements and whether influencers should take them on. And there's disagreement on it, but you get these efforts designed to shape industry consensus, similar to the professionalization of trust and safety: you have creators taking on that role, calling out what each other are doing, and trying to rally their audiences to encourage or promote particular types of behavior. The other main strategy is vertical, where you get creators trying to reach upward towards the platform and shape what its policies are, how it's enforcing things, what features it should make available. And that's usually done by speaking out, by using the influence that they have and use for advertisements, for building community, and directing that to pressure platforms, in the way that, you know, journalism might do as well. And sometimes it's successful. It's easier to get particular decisions overturned; so, you know, you can still get quite a lot of pressure on X if you rally your audience, keep tagging @YouTube, @Twitch, @TikTok, and maybe they'll reverse the ban or the strike or the particular decision. It's much harder to get structural changes, but people try. And it's also difficult to see whether those are effective, because you do get all of these policy changes over time that are shaped by many different factors, including creators. So it's likely that there is some influence there, but it's much more diffuse or indirect.

Ben Whitelaw:

Yeah, yeah, that makes sense. You know, the other route, seemingly, on Twitter is just to tag Elon Musk as often as you can, and, irrespective of what you've been banned for, you may well be freed again one day. Super interesting. I think there are amazing learnings for our audience in the research you've done, and I'm really glad to have you on the podcast today to bring that to bear. So thanks for sharing a bit of that, and hopefully we'll dive into it as we go through the stories today. Before we go into the stories that we've selected, Blake, I wanted to do a bit of admin. As we talked about at the top of the episode, I feel like if Mike was here, he would remind everyone that this week was the 30th anniversary of Section 230. And he has written, and some of his colleagues at Techdirt have written, some great pieces on why this is a pivotal moment for the internet, why Section 230 still matters, and looking at some of the arguments around its repeal and why it shouldn't be repealed, which I'm gonna direct you towards. They'll be in the show notes even though he's not here; that's what a loyal co-host does. A couple of other things to note: we've got our Ctrl-Alt-Speech bingo up and running on our website. So if you are about to get settled and listen, wherever you are, go to ctrlaltspeech.com/bingo. You can refresh your bingo card, get a host of regularly updated slots, and play bingo as you go today. I already know that there are some things that Blake and I will be talking about which are on the bingo card, which is exciting. Please do not play whilst you are driving, or if you are doing anything else that is dangerous; I'd hate for that to happen. Finally, we won't have Ctrl-Alt-Speech next week; both Mike and I are away. But that doesn't mean you can't like, subscribe and follow us on all the major platforms. If you're a regular listener, please do take the time to review us and rate us wherever you get your podcasts. It's massively helpful for helping us get discovered, combat the ever-growing tranche of podcasts that are released, and kind of stand out from the crowd. So thank you to listeners who have done that already and continue to do so. Right then, Blake, let's get started. Not only was it the birthday anniversary of Section 230 this week, it was also Safer Internet Day, which feels a bit like a kind of Black Friday for platform safety initiatives: everyone's got a sale or is selling something. And you were very interested in a couple of different platform responses this week.

Blake Hallinan:

Really a special day for press releases dealing with trust and safety. And, you know, I think the one that has captured the most attention by far is Discord, who announced a new policy that'll be coming into place in March, what they're talking about as teen-by-default settings. So this is only new for some of Discord's users; it's already been in place in the UK and Australia, where there have been restrictions on teen use of social media, and they're kind of rolling that out to the rest of their user base worldwide. So things that previously would've been restricted or associated with, like, not-safe-for-work content, that might be a particular channel in a server, it might be a whole server dedicated towards it, it might be other kinds of features: if you want to access those now, they'll need to consider your account an adult one.

Ben Whitelaw:

Interesting. So this is a kind of extension of age assurance technology, essentially, across a whole new set of markets. What do you think the implications of this will be in terms of the user base, or what reaction has there been so far?

Blake Hallinan:

Yeah, so there has definitely been protest, callouts, complaints on X, on Bluesky. I definitely heard about this via its repercussions on social media before I'd come across the press release or journalism coverage myself. And a little bit of effort to mobilize people to boycott financially, so especially Discord users who had Nitro, their paid premium service, calling on users to cancel it. And a lot of the concerns here are not specific to Discord, but apply to any of these policies around age verification: privacy, what is required for you to do so, how might that be misused?

Ben Whitelaw:

And so this is almost another example of a rather botched, I would say, announcement around age verification and age assurance. As you say, there had to be a kind of clarifying statement from Discord, which clarified that actually not everyone would have to input their age details or upload an ID in the way that was initially thought; it's only for certain people who are accessing particular servers, particular channels. Where do you sit, Blake, on the kind of age assurance spectrum? It's a topic that's very controversial; Mike and I have talked about it at length. Just so listeners understand, when we talk about it as a way of keeping users safe, what comes to mind?

Blake Hallinan:

My main association with age verification is that it creates a hurdle that clever teens learn to get around quickly. I might be projecting back onto my own childhood, and that of many other peers that I've talked with here. I think, you know, my parents probably had AOL's kid features back in the nineties. But, you know, there are questions around their efficacy, how well they work to serve their purpose, and the spillover effects they have for other people, I think in terms of privacy loss and information control. And there are promises here, like, this doesn't affect the majority of people, but we don't have data about how many people are accessing NSFW content on Discord already, or how well their model works at inferring existing users' ages, or even what service they're using, because the clarification had all of these promises about keeping your photo or your video selfie on your device and deleting things quickly. It seemed like they were using the k-ID service, but users are reporting, at least on X, I think there was a viral tweet, that if we look at what's going on in the UK version of it right now, the vendor they're using is Persona, which is associated with funding from Peter Thiel. So that made people significantly less optimistic about whether they'd keep these promises around protecting your data.

Ben Whitelaw:

Yeah, I actually didn't know that there was a link between Persona and Peter Thiel. That's very interesting. Yeah, it's funny, isn't it? There was a host of stories as well that showed creative individuals who'd created 3D models that allowed Discord users to essentially bypass the age verification system. There was a host of individuals who were boasting, I think probably fairly enough, that they'd used photos of their relatives, pets, whatever it might be, to get through the age assurance technology. And so again, you know, we're in a familiar cycle of, I think, platforms trying to respond to regulation in Europe and the UK, trying to standardize their approach, alienating a bunch of users in the process, and then actually potentially not yielding the impact that they want. Do you think, Blake, that we're approaching a kind of age-tiered internet now? I mean, this idea of having age assurance as part of the kind of safety stack increasingly means that we're creating versions of the internet based upon how old we are, or how old platforms think users are. What do you think the downsides or problems associated with that are?

Blake Hallinan:

It's definitely easier for some people to verify than others. And, you know, the two common ways that Discord promised are pretty industry standard: one, you submit identification that gets recognized, and two, you give access to your biometric data in the form of video, and that gets interpreted in particular ways. And some people's ages are easier to read than others'; some people are more willing to share, or have access to, identity documents. And I think there are definitely global inequalities in what documents get recognized and how they're issued. But I also think that there is this risk of determining what information people should have access to, how they should be able to express themselves. The concerns around the teen age restrictions in the UK and Australia, and being talked about much more broadly, are, you know, that there are populations for whom access to these communities is transformational. It's really important to them: people who are alienated from the communities around them for political, ethnic, sexual-orientation reasons, all of these factors that could lead people to reach out and try to find people like them. So, you know, I worry about losing the promise of the internet even as we are burdened with all of its current problems.

Ben Whitelaw:

Yeah, and, you know, losing some of the kind of trial by error, I guess, that you talked about when you were a young AOL user all those years ago. That idea of, I guess, predetermining what certain age groups of internet users see kind of links to the second part of the Discord announcement, which was its unveiling of a new teen council that it's going to be putting together over the coming months. It will be 10 to 12 teens who use Discord and who will shape the platform's policies, I guess give kind of color to the idea of their involvement in the platform: what they need, how they build connections on the platform, what it is they're looking for in the future. I have reservations about this teen council approach, and some of the council structures that platforms build more generally, Blake. They feel a bit like icing sugar; they feel fairly decorative in terms of their work. You know, we see a lot of the same people on different councils, in fact. And it feels like something that platforms, as you say, are able to do a press release about and then somewhat move on from. Do you see these councils as a kind of response to some of the challenges we face nowadays?

Blake Hallinan:

Yeah, so, I mean, I think they are this gesture that platforms make towards more distributed, more representative, more fair decision-making. It's not just us, not just our bottom line; instead, we're listening, we hear you, we care. And, you know, none does it better than Meta, but most platforms have some version of this. And you're absolutely right that the same people or NGOs, especially as they get bigger, appear on the same consultative organizations, and a lot of them come out afterwards reporting quite critically on their experience: speaking into the void, not having a meaningful voice. And, you know, Discord has a history of creating these other initiatives for teens. They used to have a program to help train teen moderators that was really interesting and impactful for the people involved, but also short-lived. There's no promise for how long it lasts, or what the outcomes are, or how that might change when public attention focuses on something different.

Ben Whitelaw:

Yeah, really interesting. I would love to be able to do a bit of analysis on that, 'cause I think those council structures often get referred to by platforms but then kind of evaporate in ways that we don't always understand. Another part of the Safer Internet Day announcements, Blake, was one from the UK government, which I thought we could talk about a little before we shift gears. And that was a piece of research, and actually a new advert, that the Department for Science, Innovation and Technology have put out. The research is fascinating because it talks about the lack of conversations that parents are having with their children, which I guess somewhat leads to age assurance and age verification being prescribed as the answer. But they did a survey with 1,100 parents of children between the ages of 14 and 18, and interestingly, over 50% of them had never had a conversation with their children about online experiences, and those who had had conversations reported that they were one-off or infrequent. So, Mike and I often talk on the podcast about how it's important for parents to engage in these conversations; there is a responsibility to do so. He has experience of some of those conversations with his children; I do not, I will say. But, you know, that was a number that was higher than I thought. The advert that they have put out in tandem with that encourages parents to have conversations, which I think is really interesting, because there is this regulatory push that is obviously directed towards platforms, and here you have an initiative that is essentially government approved, government funded, that is actually targeting parents to, I guess, have conversations that they don't already have. What did you make of that small, not very representative piece of research, I would say, but still interesting?

Blake Hallinan:

Yeah, I mean, it felt surprising to me. I'm not as much into the parent and youth research angle, so I'm sure some of my wiser colleagues would be like, yes, obviously, we anticipated this. But I think for the public, given that there's so much media attention, and parents expressing concern to the press and to their legislators, really calling for and endorsing these bans, that there's all of this outward-directed talk about social media and seemingly very little in-person talk, in-family talk. It seems like a real disconnect.

Ben Whitelaw:

Yeah, and it goes back to the very popular Adolescence series, which I'm not sure if you've seen; it's on Netflix here in the UK, caused a lot of discussion towards the back end of last year, and essentially portrayed a young teenage boy who went to his bedroom, we weren't shown what it was he did, but was never talking to his parents about what he was seeing online, and then ended up committing a murder of a school friend. And so I do kind of welcome this idea that parents are responsible and should be having these conversations. You know, there is no single silver bullet on these topics, and parents have a responsibility to play alongside platforms. So, very interesting to see the UK government using Safer Internet Day as a way of announcing and launching that. Let's move on now, Blake. We've talked a bit about Discord preempting regulatory measures; our next story is one in which a platform is potentially a little bit too late in that process. It's an announcement that came just after we launched last week's podcast, an announcement from the European Commission related to TikTok. And I want to take you back to February 2024, almost exactly two years ago, before this podcast even existed, when it was a mere twinkle in the eye of Mike and I. We hadn't even released an episode, but the EU announced a kind of multi-pronged investigation into TikTok, including about its addictive design patterns. And this week it found the Chinese platform in breach on that aspect of the investigation; other aspects are still ongoing. And it's interesting 'cause it's the first time that there's been a kind of legal standard set about addictive design anywhere in the world. All the other platform regulation that we've been talking about on the podcast has never really touched on addictive design in this way. And, you know, I want to explain why that makes me feel a bit weird, to be honest, but I want to give you a bit more detail first. So, first of all, the Commission found that TikTok didn't adequately assess how features on its platform could impact user behavior, and particularly users' "physical or mental wellbeing", which is a direct quote from the announcement. So that's a really interesting point to note, physical and mental wellbeing; we'll come back to how they might have figured that out. And it pointed to a couple of different aspects of TikTok's design, primarily the auto-scroll, the famous auto-scroll that, if you are a TikTok user yourself, or if you look at anybody using TikTok, is the surefire sign that they are on that app: the quick-fire, almost RSI-inducing scrolling that goes on. And the Commission says that this, and these are quite big words, leads to "compulsive behavior" and, quote, "reduced user control", which again, I would love to know what you think about, Blake, and how the Commission got to this. Secondly, it also came to the conclusion that screen time and parental controls were ineffective and easy to dismiss, which again, we've talked about on the podcast, the effectiveness of parental controls, and so it's very interesting, potentially industry-defining, that some of the parental controls that TikTok has are deemed not up to scratch.
And all in all, this led to the recommendation from Henna Virkkunen, who's the EU tech chief, that TikTok changes its design in a number of ways over the coming years: that it disables the infinite scroll that it's so famous for, that it builds in screen breaks, and that it adapts its recommender system, which was a kind of small part of the announcement, and I want to zoom in on that. So yeah, you know, people might think, yeah, that makes sense to me; I'm addicted to TikTok, or I know people who are. The EU spokesperson actually mentioned that 7% of 12-to-15-year-olds spend four to five hours a day on TikTok, which is a shocking number. And in that context, and with everything we know that's happening in the world about the push to protect children online, it does make sense to go after TikTok for this. However, I have mixed feelings. I think it should be the case that product design is targeted by regulators; safety is an important kind of design choice. You know, there are ways that you can build safety in, and design matters, in that, you know, if you create streaks on a platform app, if you offer rewards, they shape behaviors and they create incentives, right? We all know that at this point. And so the idea that platforms should be responsible for this, I think, is right. And I welcomed the Age Appropriate Design Code from a few years ago in the UK, which led to the California one; I think that was a good direction we went in. However, I don't love the idea that we label all semi-engaging features on a platform as quote-unquote addictive. The idea of infinite scroll or autoplay or push notifications or any kind of recommender algorithm being addictive, I think, fails to bring any kind of nuance to the discussion. So if we do regulate product, which I think we should, we need to do so in a way that differentiates between what is manipulative and what is helpful and engaging. And so, Blake, I'm kind of stuck. I'm stuck with wanting my cake and eating it here. Help me figure out where my argument goes wrong. What do you think?

Blake Hallinan:

Yeah, I mean, I think I am with you on the mixed feelings as well, and part of the challenge is translating these values that people care about, safety in particular here, into design. There is not consensus around what that means, from academics to policymakers to different platforms' ideas. And so you get really common-sense ideas like, we should care about this, we are invested in safety, there are problems here, but the design implications of that are not clear. And I think that lack of clarity also comes up in the internet reaction to this announcement, where a lot of internet commentators note that these features that TikTok is being criticized for are really ubiquitous. They're not TikTok exclusives. Even though people sometimes talk as if TikTok is the only algorithm, the only platform with an algorithm in the world, the infinite feeds and the recommendation systems are a common part of web apps and the internet, kind of hard to escape anywhere.

Ben Whitelaw:

Yeah, and I was actually thinking that, you know, news websites, which I'm most familiar with, have all of the different features that the Commission has listed as potentially contributing to addictive design. You know, many homepages have infinite scroll; many autoplay videos. There are push notifications in every single app, to the point of distraction. Recommender systems are increasingly built into news apps to help people find the content they want. So that suggests that, in and of themselves, those aren't features that are addictive, which is where I think the recommender systems come back in, or the algorithmic element to it. TikTok was kind of famous for moving from a social graph to a topic graph, right? You know, not really caring who you were connected to, but just profiling you in such a way that you could be served other content. And there's been some really great digging into its algorithm to figure out what those profiles, those personas, those segments are and look a bit like. The Commission's judgment doesn't say anything about any of that, Blake. The thing that might be the smoking gun, or, you know, you could argue is the big differentiator when it comes to TikTok versus the other big platforms, isn't discussed at all. And, as you say, there's no mention of YouTube Shorts or Instagram Reels or features on other platforms that fall into the same bucket as infinite scroll. So in that context, do we think this is a targeted announcement for a particular moment in time, when we know what's going on in the world around TikTok? Is there a kind of potential political element to it?

Blake Hallinan:

Yeah, I mean, I think that makes more sense as an explanation than the inherent design features. It's hard to know the data that they're drawing these conclusions from, because we will have to wait to have access to any of that; TikTok now can get access to it to appeal the process. But, I guess maybe I'm too much in media history land, but I think even beyond social media, these concerns with autoplay remind me of flow TV, broadcast television, always-on channels that you could scroll through, always there for you, and the concerns that people had around that as addictive. The way that got algorithmically reinterpreted with Netflix and binge-watching has a lot of the same kinds of addictive concerns and the same technologies behind it, even if you're on long-form versus short-form video. And so it's just really hard to square these concerns with the rest of not only the social media environment, but broader digital media, apps, platforms.

Ben Whitelaw:

Yeah, I think you're right. It's a potentially very, very wide-ranging judgment. Obviously TikTok are going to appeal this. They've said that the claims made by the EU Commission are categorically false, and, you know, they will obviously appeal. I don't think we'll get to the point where TikTok has to pay a fine of 6% of its global revenue; I don't think that's likely at all. But it is interesting for the fact that this is a kind of line in the sand when it comes to product design, in a way that we really haven't seen at all before. And yeah, I really don't want Netflix's recommender system to go away, Blake, 'cause it's already very hard to find stuff that I want to watch, so I have skin in the game in that sense as well. I will flag, and this is a small step away from this topic, a really interesting discussion on the Ezra Klein Show, which we'll link in the show notes as well, between Ezra Klein, who's a journalist at the New York Times, and two very interesting thinkers when it comes to the internet, Tim Wu and Cory Doctorow, both authors; Tim Wu is a legal scholar who worked in the Biden administration. And, you know, they talk a lot in this podcast about the product design choices that are made, and the fact that these cause a kind of lock-in and behaviors, actually, that kind of allude to what the EU Commissioner, I think, has tried to do here but not got round to. So the European Commission might want to go to Cory and Tim for a bit of a better way to explain some of what I think they're trying to do, 'cause that is a great podcast worth listening to. Blake, let's do our best to go through the rest of the stories we picked today in a fairly timely manner. We'll start by talking about a story that you somehow plucked from Hungarian media, in what has to be one of the best finds for a story on Ctrl-Alt-Speech I've ever seen. Tell me how you came across this story about the holes in Meta's political advertising ban, and what led you to be reading Hungarian media?

Blake Hallinan:

So I feel a bit rude, 'cause I know that I encountered this through my own social networks, probably someone on my LinkedIn, but I don't remember who it was that posted it, to give them proper credit. And then when I went to read more about the story, I found there really was only one English-language article on it. But I thought it was a really interesting topic. So right now Hungary is in the lead-up to their parliamentary elections; those will take place on the 12th of April. And the Political Capital Policy Research and Consulting Institute did an investigation into the use of political advertisements on Meta, and that's a contentious topic because Meta announced that they would not be allowing political advertisements within the European Union from October 2025. Hungary has long been one of the number one users of Meta political advertisements. Viktor Orbán has been a major spender here, and the country, in the three months leading up to the ban, had spent more than Germany on ads on Meta, despite Germany having eight times the population.

Ben Whitelaw:

Good Lord.

Blake Hallinan:

Yeah, and I found that, you know, I guess the spend is down, but it's not non-existent. Fourteen candidates have ads in Meta's ad repository. A few of the ads had been flagged as political and restricted, but over 150 were allowed and still active. And so it raises concerns about the enforcement of this policy, and also about what is going on in terms of elections in Hungary.

Ben Whitelaw:

Indeed. I mean, I'd be surprised if this wasn't happening in other countries in Europe as well, where obviously the ban is in place, but enforcement probably isn't as strong as we'd expect it to be. This is probably, as you say, an enforcement issue. It's probably not Meta seeking to profit from this; I don't think there are huge reasons to be allowing 150 ads through, and I don't think the spend on those will be so strong. So people may claim that something's not quite right here, but it's probably just the fact that this is a fairly underrepresented European language and a country that probably doesn't get a lot of attention within Meta. But really, really interesting. I think the way that ad policies are enforced is a fascinating topic, particularly on these massive platforms. I remember, I think it was the back end of last year, when Reuters did a really fascinating piece of research about fraudulent advertising on Meta and found that so much of the spend, millions and millions of dollars' worth, was essentially fraudulent ads that violated their policies. And so we can't underestimate the scale at which this is happening, but when it has the potential to shift elections and get people out to vote, or stop them from voting, it does take on a new importance, doesn't it?

Blake Hallinan:

And I think it also shows the importance of the little bits of data access we do have to platforms. So shout out to Meta's ad repository, and to the journalists and civil society organizations that engage in these accountability efforts.

Ben Whitelaw:

Yeah, definitely. I'm always worried about the ad repository being taken away in a kind of fit of pique; that would be a bad thing, wouldn't it? Thanks so much for that, Blake. A story that I wanted to talk about, which we've touched on a bit on Ctrl-Alt-Speech, is Claude's constitution, the very long, 20,000-plus-word, almost love letter to Claude. And the constitution is kind of interesting because it's a new way of trying to implement safety rules within the Claude platform. The old way of doing things that worked for the major platforms, having very strict rules and very clear policies, is clearly gonna come up against some problems in a world where anybody can ask LLMs anything. And so the Claude constitution is a new way of doing that. It's been talked about as a kind of soul for Claude, and we talked a bit about the originator of that document explaining how it worked on a New York Times podcast a few weeks ago. The piece I want to talk about is an op-ed by an Oversight Board member who's given a bit of a critical view of the constitution. Suzanne Nossel is the former CEO of PEN America, and she has called for the constitution to have a bill of rights alongside the constitution itself, in which users are given a clear sense of what the remedies are and what reviews look like. Very interestingly, she calls out the fact that there is no mention of what happens if this constitution goes wrong or takes a turn. And it's a very clear-eyed assessment of the constitution at a time when I think lots of people were quite excited by it, potentially a bit misty-eyed about it, maybe thinking it's the new way forward; there's always a sense of that, I think, when something new and sparkly happens. But Suzanne is very eloquent in talking about how it feels not dissimilar to the early days of Silicon Valley, when platforms talked about connecting the world and providing kind of human knowledge to all corners of the globe, but were then criticized for the fact that their judgments were made by a very small pocket of individuals in Silicon Valley. And she kind of equates the two, says that this is an interesting approach, but there are some improvements that can be made. Blake, what did you think about this piece, and what have you made of the constitution discussion more generally?

Blake Hallinan:

I mean, I think the warring metaphors here, between the Facebook Oversight Board and the Anthropic constitution, are quite fun. And also really different. Like, the constitution and the Oversight Board have a lot of affiliation, but the way that they're implemented, they're really different ideas about governance and ethics. So Anthropic is trying to do this deontological, or no, I'm getting my philosophy wrong, it's been a while, a virtue ethics approach: how do you cultivate the good bot baby? Whereas Meta's Oversight Board is much more the kind of classic liberal tradition: it is rights, it is oversight, it is process, and the op-ed is very much in line with that. And so I think the criticism of this as a constitution, and she's not the only one making it, is a good one. It doesn't function the way that we have thought constitutions do, or if it does, it's a constitution addressing Claude itself as an entity, not the world using and interacting with it. It's not for the users in the same way.

Ben Whitelaw:

Yeah, that's such an interesting point. I hadn't thought about the people behind both the constitution and the Oversight Board almost being, you know, in different camps in many ways. You know, the person who originated the constitution is a philosopher by training and therefore approaches some of these ideas and these challenges differently. Everyone on the Oversight Board is, you know, an academic of law or a professor of something, or a very highly regarded journalist or human rights practitioner. And so, yeah, you're quite right that there's some kind of clash in values there, which you've kind of read between the lines of in a way that I didn't, which is really interesting. Out of interest, do you use Claude? Like, what is your preferred AI system? How do you decide where you spend your time, or what choice of system do you use?

Blake Hallinan:

I'm so uninteresting on this topic, because I use very little AI. I use Microsoft Copilot, which we have licensed through the university, to do translation work. That's it. Mostly because I'm learning Danish, I'm doing my best, but I'm not literate yet, and Copilot does help me navigate that. But that's about the extent. How about you?

Ben Whitelaw:

Amazing. I mean, that's probably where I'd want to be in terms of AI usage, to be honest. But I use a couple of different systems. I use ChatGPT, but I'm increasingly drawn to Anthropic's approach when it comes to safety, and the way it's set itself up as, I guess, the frontier model that cares most about safety. And so I'm very interested in these kind of meta discussions about whether it is doing what it says it is, whether it has the values that it claims it does. And, you know, they've invested a lot of money in marketing recently, which many people have seen around the Super Bowl, and they aren't going down the route of advertising the way that OpenAI and ChatGPT are. So, you know, those tensions will always kind of come up against each other, I think. Very interesting. Let's finish off today, Blake, with a research paper, very apt seeing as you are in the research space. You brought to Ctrl-Alt-Speech a paper about the debanking of adult industry workers in Canada, which is a fascinating topic.

Blake Hallinan:

Yeah, so this is a new report done by two researchers associated with ECP, or Ethical Capital Partners. And that's the kind of financial group that owns Aylo, which people probably know better through Pornhub, which it operates. So, researchers who have independence from the organization, but it's an organization with a particular agenda and interest. And the report deals with the experience of adult content creators and other people working in adult industries: their access to financial services, and the policies around that access. The report looks at the phenomenon of debanking, so what happens when someone is refused banking procedures or services, or has their accounts closed. Sometimes this directly involves digital platforms, so things like, what do you use for money? It's MobilePay in Denmark, I'm trying to think of other ones, but any of your electronic wallets are things that could be closed on you in an instant. And we see these concerns around access to finance, whether for individual creators who are speaking and making content, or for entire platforms. So it made me think of the uproar last year around the influence that credit card companies were having over the policies of Steam, or what kinds of games we can make or circulate, and how people can rise up or push back against that, calling for and pressuring to create more protections here for particular kinds of speech. Sometimes it's politically extreme speech, sometimes it's adult speech, but I think the report draws attention to the role of finance and banking as part of our platform governance and content moderation conversation.

Ben Whitelaw:

Yeah, it's a really good callout. It's not something we cover enough, I think, on Ctrl-Alt-Speech, so I'm really glad you brought this here. It reminds me of the FT podcast Hot Money. I'm not sure if you've listened to that, but it's a really interesting series about the porn industry and the way that some of these financial institutions are involved in the process of allowing certain porn sites to exist on the internet, and therefore allowing creators to make a living. And when those financial institutions suddenly change tack, which they can do, it's, you know, their right to do so, that causes a massive kind of downstream effect. And so, even listening to that podcast, I was kind of shocked as to the centrality of those financial institutions, and I think this research speaks to that as well. So yeah, a great flag for listeners, a really interesting read over the next week. Not sure we had a porn bingo card slot, but, you know, we might have to add one for future episodes. Blake, that brings us to the end of today's episode. I'm very, very grateful for you today. Thank you for bringing all those amazing stories to us. I hope you enjoyed it.

Blake Hallinan:

Absolutely did. Thanks for having me.

Ben Whitelaw:

Brilliant stuff. Yeah, I promise Blake wasn't coerced into it at all; she volunteered to be on the podcast, I promise. If you'd like to be on the podcast, if you want to put yourself forward, we'd love to have people like Blake who are working in the industry, doing amazing research, doing amazing work, co-host the episodes from time to time. Get in touch with us at podcast@ctrlaltspeech.com; we're always looking for interesting co-hosts. And, like I said, please do like, review, and rate the podcast wherever you can; it's massively helpful for us. And with that, I bid you farewell. Thank you, Blake, and we'll catch everyone next week. Take care.

Blake Hallinan:

You too.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.