
Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
With Great Platforms Come Great Responsibility
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Thomas Hughes, CEO of Appeals Centre Europe and former Director at the Oversight Board. Together they discuss:
- Appeals Centre Europe Transparency Report (ACE)
- Most people want platforms (not governments) to be responsible for moderating content (Reuters Institute)
- Happy Birthday, Digital Services Act! – Time for a Reality Check (Algorithm Watch)
- Proof-of-age ID leaked in Discord data breach (The Guardian)
- Update on a Security Incident Involving Third-Party Customer Service (Discord)
- Another Day, Another Age Verification Data Breach: Discord’s Third-Party Partner Leaked Government IDs (Techdirt)
- Exclusive: Apple Quietly Made ICE Agents a Protected Class (Migrant Insider)
- My Email to Tim Cook (Wiley Hodges — Substack)
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So Thomas, I'm not sure how you curate your music online, but in the attempt to start this podcast with ever more obscure user prompts from weird and wonderful apps across the internet, I found one that might be suitable for you. So there's an app called Dissonant, which sends you physical music, so CDs and I think vinyl as well, based upon your tastes, with a very nice handwritten note from one of its personal curators. It's a very sweet, kind of retro music service that I might well subscribe to now. And on one of its kind of screens, it has a call to action that says, if you don't like it, send it back, 'cause it has a free mailing policy. So I'm asking you, as you debut on Ctrl-Alt-Speech, what would you like to send back?
Thomas Hughes:Ben, thank you. And thank you for the invitation to join you. I think what I would send back is the initial drafts of the data sharing agreements that we were given by the platforms as we set up our out of court dispute settlement service. Although, Ben, I will not tell you what my handwritten notes said on top of those drafts.
Ben Whitelaw:Yeah, not, not suitable for a podcast such as Ctrl-Alt-Speech. Um, um,
Thomas Hughes:no slow.
Ben Whitelaw:In terms of the thing I'd like to send back, well, I've had a few moderation decisions in my time that I've been subject to, which I would like to return and have marked again. Um, and I think that's a very nice segue into you and what you do, and today's podcast. And welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's October the ninth, 2025, and this week we're talking about public confidence in big platforms, Discord data leaks, and the creative use of anti-hate speech policies. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm joined this week not by Mike Masnick, who is traveling as we speak, but by Thomas Hughes. Thomas, welcome to the podcast today.
Thomas Hughes:Ben, thank you. Pleasure to be with you.
Ben Whitelaw:I'll give you a bit of an intro to Thomas. Thomas is the CEO of Appeals Centre Europe, an out of court dispute settlement body. And he said that if I mention the, uh, URL of the Appeals Centre, his comms team will be very, very happy. So I'm pleased to say that appealscentre.eu is where you go to find out more. We'll learn about Thomas's work, but I really wanted him on the podcast this week 'cause he's got a lot of experience working at the Oversight Board, the Supreme Court-style body set up by Meta to advise on some of its content moderation decisions, and also at Article 19 before that, so he brings a wealth of experience. Before we get into it, Thomas, how do you curate your music playlists online? I'm presuming you're not quite using Dissonant yet.
Thomas Hughes:I wasn't expecting that question, Ben. Um, I, I, I just trial and error. I think. Trial and error. I'll speak. Yeah.
Ben Whitelaw:Yeah, uh, me too, me too. I don't have a, a real set policy of doing it either. Look, it is great to have you here either way. I think the out of court dispute settlement bodies are fascinating. Your appearance on Ctrl-Alt-Speech is very timely as well, because you've got a report out recently that talks a bit about this fascinating area. I'll leave you to explain what ACE does specifically, but as background for listeners, you'll know about the Digital Services Act, the EU's kind of big, bold attempt to regulate platforms to create a safer internet. It includes an article called Article 21, which allowed for the creation of organizations called ODS bodies, out of court dispute settlement bodies or online dispute settlement bodies, that help users appeal moderation decisions. Now I felt like, when this came out, when this was announced, it was one of the most interesting things to have happened since the advent of the report button many, many years ago on the earliest platforms. It gives users a real accountability and transparency that they haven't really had for a long time. And these ODS bodies, they review cases from users, as Thomas will explain. They solicit information from platforms and they deliver a judgment to those users. And this can be anything from where users have been taken off the platform themselves, had their accounts blocked, or on particular posts. Whilst these decisions are non-binding, they obviously offer a resolution which otherwise would be very difficult for users to get. And so this is why I wanted Thomas to come on and talk a bit about the work that he's been doing. Thomas, there are a bunch of ODS bodies; the Appeals Centre Europe is one of them. How did it come to be, and how did you come to be working at one of them?
Thomas Hughes:Well, Ben, thank you. Um, I guess the genesis for the Appeals Centre Europe goes back, for me personally, to Article 19, where we actually looked at and tried to think through what regulatory models might look like to pursue greater transparency and accountability for the very large online platforms. But in more recent years, really, a group of us who were with the Oversight Board, we kind of realized, at that time, and paid close attention to emerging regulation coming up around the world, but realized at that time that a statutory underpinning, in a co-regulatory structure, would give us the strongest basis for being able to really pursue, you know, that greater transparency and accountability through an oversight over the content moderation decisions that platforms take. And obviously those decisions impact people, everyday individuals, but also communities and societies in a very, very real and meaningful way. And I think that's extremely well understood and documented now. So this for me is one of the most important elements of the Digital Services Act, uh, along with some of the other components that I think we'll get to later in the podcast. So what we did is we tracked this emerging regulation and really kept a close eye on the European Union, of course, the kind of well established assumption underpinning the thinking being that what the European Union does, you know, has a ripple effect around the world. It's not to say that all jurisdictions and countries will follow suit precisely, but it can help create a certain pattern to regulation. And if that regulation is good, right, then of course that could be a welcome ripple effect. And within the EU legislation, obviously the Digital Services Act, you know, seemed to be, and is, the most relevant piece of legislation that we paid close attention to. And at the same time, within that, Article 21 on out of court dispute settlement, and also Articles 34 and 35 on systemic risks, and again, we can get back to that later, seemed to be the parts that spoke most closely, um, to the type of work that certainly the Oversight Board was doing at that time. So, essentially we took the opportunity to utilize resources from the Oversight Board Trust, to have a one-time irrevocable grant to get this new body set up, to seek certification in Ireland from Coimisiún na Meán. That was quite a lengthy and in-depth process that we went through in 2024 to establish our independence and our impartiality and our expertise. And then we launched an entirely separate new institution at the very tail end of last year, and that institution is the Appeals Centre Europe. Ben, as you said, there's a half dozen out of court dispute settlement bodies now in existence in the EU. We have a network actually where we meet very regularly amongst one another. We're all different in genesis and different in background, and we have sort of different approaches, but that diversity is very welcome.
Ben Whitelaw:Yeah, really helpful overview. Thomas listeners might be interested to know a bit about who's reviewing these cases and kind of what their expertise is. I think that's, there's maybe, uh, an interesting discussion we've had there, but just give a, can you give a sense of how the process works and who's receiving these cases from users, and then what the kind of steps are before they get, some feedback about whether the platform's policies were followed or not.
Thomas Hughes:Sure. So, one of the exciting things about out of court dispute settlement is that of course it creates for users, uh, whether that be an individual or an organization, really for the first time, an actionable, realistic, timely, free path to be able to raise a dispute about a content moderation action on one of the large platforms. And Ben, as you mentioned at the start, that's either to dispute the removal of their content or the suspension of their account, or to dispute a piece of harmful content that they see remaining online on the platform that they think is policy violating and should be removed. And essentially a user can come to, uh, Appeals Centre Europe after having reported it to the platform, and then raise a dispute with us. We will then receive that dispute and, depending on the nature of the dispute, after having requested the data from the platform that's connected to that dispute, we'll then review it. It'll go through various pathways of escalation, depending on context and language and so on and so forth. And as an outcome of that, we will then issue a decision back to both user and platform. Those decisions, as you mentioned, are non-binding. But I think the really interesting component here is that there is a tieback in the legislation between Article 21 and Articles 34 and 35, so between out of court dispute settlement and the decisions themselves, and systemic risks. And the individual decision, of course, in giving redress to an individual user about a piece of content that really affects them personally in their life, or maybe an organization that is trying to campaign against different types of harmful content that they see online, like getting that content either removed or restored, that is extremely important. That is an individual's right. But there's a secondary element to this, which is that decisions, particularly those that are not implemented by the platforms, they, I think, will create over time a bit of a heat map. So what I mean by that is, you'll be able to take those decisions and say, well, why is this platform not removing all of this hate speech targeting this community, in this language, in this city, in this country, or whatever it may be. That is a systemic risk, right? And when I read the legislation and look at what forms a potential systemic risk, fundamental rights are clearly, clearly defined, right? They are in there. And that to me is a systemic risk. So I would expect then, under Articles 34 and then 35, the auditor who looks at those systemic risks, and the regulator by extension, to be saying to the platform, well, you failed to mitigate this systemic risk, which that ODS body has repeatedly told you is violating content.
Ben Whitelaw:Interesting. So essentially we'll start to see types of harm, and also maybe countries, regions, geographies within the kind of European system, where platforms are failing to mitigate those risks. Is that what you're saying? That we'll start to spot them, in a way that is probably difficult to do now, beyond big news events that maybe Mike and I talk about and that crop up from time to time. We'll be able to spot those more easily.
Thomas Hughes:Yeah, I believe so. And I also believe that the data that we produce can also help other actors, like vetted researchers, for instance. Obviously the type of data that we will produce, and we have the aspiration to be releasing data in as much depth and breadth and frequency as we can. The transparency report we put out last week, that's just the first one. It's obviously a kind of a veneer of the type of data that we hope to be able to release in future, but certainly we think that creates a bit of a heat map that then researchers can take to platforms and say, well, we can see there might be a problem over here, or there seems to be an issue emerging over there from out of court dispute settlement, like we want to go deep. Obviously the data we have will never have the same breadth and relevance as the platforms' own data. That has to be the primary source, the real source. But certainly we can help point vetted researchers, regulators, media, others to where issues may exist. So this is the way in which the DSA kind of locks together.
Ben Whitelaw:Got it. And the report you mentioned is fascinating. Alice Hunsberger, who writes Trust and Safety Insider, reflected on the report in her newsletter this week. Essentially, you've shared a breakdown of the 10,000 or so disputes that have been submitted across various platforms over the last 10 months. You've kinda acted on 3,300 of those; they are the eligible ones. Um, a lot of people are clearly submitting cases that aren't eligible under your scope. There's some really interesting trends in there. I wonder if you could give me a couple of highlights that you were, you know, particularly surprised by.
Thomas Hughes:Certainly. Yeah. So I think, from my perspective, I think that this initial transparency report really demonstrates, one, that people, users, organizations, want this, and it's very relevant to them, and they're starting to make use of it. And as you can see from the report, month on month there's a clear increase. And I also think it demonstrates that this sector can work, right? It can function and it can do what it set out to do, what I think the original drafters of the legislation wanted it to do. Now, that's not to say there aren't teething problems. There certainly are, right? And as you will see from the data and the reporting, in many of the cases we've had to issue a default decision because either the platform was technically unable to find the content, even though we had established that there was an eligible dispute, or they disputed our scope and said that they wouldn't give us the content. So, you know, this is a journey where there needs to be improvement. But the sort of glass half full interpretation of that, of course, is that in half the cases they did find the content and we did issue a decision, right? And as you'll see from the transparency report, and I would encourage all listeners to go and read the transparency report in its depth, I think some of the interesting trends are things like, in three quarters of the cases we were overturning platform decisions. Now I have to say that that statistic is somewhat masked by the default decisions, because where the platforms don't engage, we issue a, you know, a default, but there's a more granular feel in there. Again, it's a veneer, but a more granular feel for where we are upholding platform decisions and where we're overturning them. I think another big thing to call out is, as everyone will be aware, the DSA has been accused of being a sort of a censorship tool, right? Of somehow being contrary to freedom of expression. And that is not the intent, obviously, of the legislation. It's to promote and protect freedom of expression. And you can see that actually, where we are overturning a platform decision, three quarters of the time we are restoring content to the platform. So the idea that out of court dispute settlement as a component of the DSA is sort of censoring en masse and taking down content is just false. It's just wrong.
Ben Whitelaw:Mm. Yeah. Interesting. And the report is great, and, as you say, it's the first one. Alice made the point that this is a very small set of data points, but I think it's an important starting point. And, you know, nothing good ever came from starting with a big bang. So I really recommend people read it. I'm really glad to have you on the podcast to talk about some of those issues that you've spoken about already, Thomas. And, um, I think that's probably a good point to segue into our first story today, and for this week. If that ACE report, Thomas, is showing how platforms are being moderated, there is a kind of broader question about how the public thinks they should be moderated. And that's really the topic of a recent report from the Reuters Institute; as with all of our stories today, we'll share the links in the show notes as we always do. This is a piece of research based on a survey that asks users whether they want platforms to be the kind of leaders of content policy or whether they want governments to have a bigger say in what content is allowed on platforms. The survey looks at eight countries. And it looks both at who should control content policy, but also whether government should play a bigger role in deciding whether content is false or misleading, and how that should come down. And in many ways, I thought this was a bit of a false kind of framing at the start. Looking at the questions and the way it was framed, I was somewhat thinking that this wouldn't give us any interesting data, but the results are actually fascinating, and I'll give listeners a bit of a flavor. The kind of TL;DR, the thing that really stands out across all of the responses, is that between half and two thirds of people in all eight countries think the platforms should be responsible for content policy. And that is both, I think, surprising in some ways and not surprising in others. I think it's surprising how close all of those countries are in saying that platforms should take the lead on content policy. There isn't a huge variation, as the research makes clear, between Japan, Korea, the UK, Brazil, the US; everyone is fairly aligned in terms of the platforms taking the lead on it. There are some regional variations though. We see the UK and Germany being slightly more pro-government, only by a few percentage points, but wanting the government to play a slightly more impactful role in deciding that content policy regime. Another piece that's surprising is the US: 28% of people wanted the government to take the lead in taking greater responsibility for policy and guidance about what's allowed online. So the research is somewhat counterintuitive, but it also falls relatively in line with what we might expect users to think in a kind of information environment where platforms are so dominant. And the timing I think is obviously fascinating, with, as you say, the regulatory kind of regime across the world shifting so much, and we'll touch on that in other stories. Basically, platforms still retain the confidence of users. And I wondered, as somebody who's been looking at this in their report recently and in their work, what you made of that.
Thomas Hughes:Yeah, Ben, it is a fascinating piece, and again, I would, like you, encourage everyone to read it and look at the data, mainly because of the uniformity of the data across all demographics and countries. It's an interesting point. I guess my first self-reflection is, those of us who are sort of more steeped in the regulatory trust and safety ecosystem, you know, we live and breathe the problems of platforms on a daily basis, right? And that is always our attention and focus, and where there are failings. And I think part of what the research does is just gently remind us that for many people, social media is a positive, right? And that they find good online, and that they enjoy their engagement on social media and think it has value in society. So I think it's always worth having that small reminder to ourselves that, whilst we maybe feel we're obviously battling these giants in some ways, um, some people are actually enjoying being on those systems and using those tools. But when I read it, as interesting as the research is, it's also binary, right? Here I'm, of course, going to come back to co-regulatory structures and third parties in those systems. And I don't read the data, and I don't think you do either, Ben, but I don't read the data as a confidence vote in the platforms themselves. Rather I read it as two things. One is I think there's a healthy skepticism of direct governmental control over speech. And I think many of the respondents, well, I assume, I would like to think many of the respondents are aware that if you directly empower government to control speech, and particularly what constitutes harmful rather than illegal speech, then that's a slippery slope. Then we have real problems with, you know, civic discourse, you know, democracy and so on and so forth in society. And if you look actually at the data, as it sort of speaks to the misinformation piece, the other component of it, I think you see that coming out a little bit, right? I mean, the platforms are not getting a free pass here; the users are not looking at 'em and saying, oh, they're sort of not culpable or responsible for any of the problems that are playing out on their platforms or any of the speech that is being put up on the platforms. But I think the other piece that's kind of missing, and I would love to see the Reuters Institute, and the Knight Foundation of course, put this in next time around, but the other piece that I think is missing is how people would respond to the idea of out of court dispute settlement bodies, you know, these third parties. They're not government, they're not platforms; obviously it's co-regulatory. They're certified by the regulator, but hopefully done so through a clear, fair, transparent process, and they're accountable for their certification, and they're given certification for a period of time. You know, and then, certainly for out of court dispute settlement, given a business model, a funding model, through the legislation itself. And I would hope that people would say, well, great, actually, as long as you've got confidence in those bodies and that system works well, that actually is the best outcome here, that's the best solution.
Neither party is solely unto themselves going to be taking responsibility and issuing decisions without any kind of accountability overseeing them.
Ben Whitelaw:Yeah. And I think this is where the ODS bodies are so interesting, right? Because they have so much potential, and I'm sure you think that as well, but there aren't many people that seem to know about them. You know, your report talks about this as well, and this is why I think these two stories work well together. The platforms are not necessarily promoting ODS bodies as a mechanism for users to seek redress for decisions they make, even though they're meant to. And obviously, you know, reaching a kind of critical mass of people online is difficult and costly and not something that the individual ODS bodies can do. How do you see that barrier being broken through so that, as you say, the users in this Reuters report can have the best of both?
Thomas Hughes:Yeah, I mean, obviously building up user awareness is a crucial component in this. And we are obviously trying hard, together with other ODS bodies, to get the message out there, as it were, to users, and we do see it coming, right? I mean, users are becoming more aware. But a crucial component in this, and as you say this is called out in our report, is what we've deemed signposting, so on-platform information about out of court dispute settlement and how a user, if they disagree with the decision that the platform has taken, can raise that dispute with an ODS body, uh, in this case with the Appeals Centre Europe. And the platforms, you know, in our appreciation of it, simply have to do better, right? Um, they've all put up some form of information, right? They're all minimally compliant. Some platforms are certainly doing better than others. And I would encourage, again, everyone to read the report. We've got quite a granular description of how platforms are performing, not only on this scale actually, but also on the data sharing and sort of technology engagement piece as well. And it varies by platform. But I would say the ones we are currently working with have all sort of got themselves over that minimal hurdle. But if that signposting were improved, if it were better, I think it's pretty clear that that would then result in higher numbers of disputes coming from users, right? Greater awareness and then greater use of the systems.
Ben Whitelaw:Yeah, I think so too. We touched a bit on the kind of US element, and, you know, at the Oversight Board you're probably familiar with this; you've worked in the US, you've got an understanding of how the US kind of sees speech and understands speech. Were you surprised by the high number of survey responses that thought government should play a larger role in how policy and guidance should be conceived? That was a bit of a shock to me, that it was, you know, higher than other countries like Brazil, where we know there are big pushbacks against big tech companies. What did you make of that?
Thomas Hughes:Yeah, it'd be interesting to see this data sort of correlated against, you know, trust in government, trust in institutions, stability of government, rule of law. I think these things certainly have an impact on whether people around the world want direct government control over speech or not, right? And I would assume that where countries are probably smaller, where they have maybe more liberal systems, where the rule of law is stronger, people will be more comfortable with direct governmental control or oversight, and it pushes in the other direction elsewhere. But your point on the US is a really interesting one, because I would not have expected that, right? 'Cause obviously First Amendment rights are so ingrained, and so strongly held, that you would've thought there would be great skepticism about any kind of governmental engagement. But I can only assume that maybe that is a reflection of a sense of frustration amongst people that, at least at a federal level, things have not progressed, right? Maybe in relation to children and minors and so on, but really across every other front, I think it's widely understood to have stagnated. So maybe there is frustration that government has failed to act, and that now we are seeing this playing out through state level legislation and obviously through the courts.
Ben Whitelaw:I think that's right. And unfortunately we don't have a political breakdown in the US; the report, at least the version we see, doesn't have that. But there is an interesting discrepancy between survey respondents who class themselves as kind of politically left and politically right. And there is basically between a nine and 13 percentage point difference between those who are happy to have government intervention, those people on the left, as you say, and those who are on the right and who probably shy away from that. Do you think that's something that is likely to increase over time as well? It's fairly well pronounced as this data presents it, but based upon what we hear in other forums, particularly in the US but in other countries as well, do you think that is likely to grow over time?
Thomas Hughes:I mean, it could well do. But I think again, that sort of political spectrum breakdown is probably a result of the fact that on both sides of the line, on left and right, there is a feeling that, you know, the online, certainly social media, information ecosystems disfavor them, right? Are weighted against them. And you can make the argument both ways, and there are counter arguments and counter research for both as well. But here, it's based purely not on the reality but on the perception, right? The perception is these systems don't work in my favor. Something has to change, someone needs to hold them to account. And there are only really kind of two, I guess, three major groups that can hold social media platforms to account. One is their users en masse, unlikely to happen, 'cause, uh, that would require an enormous amount of social unrest and coordination. Or maybe not unrest, but coordination amongst people. The second one is advertisers. And we've seen moments in the past where they've sort of stood up, but then they sort of sit down again, as best I can tell. And the other one is governments or regulators, right? And those are the three, and maybe there's a fourth, which is other tech, right? Uh, both in competition and in cooperation. So the obvious one, I think, for the person on the street, assuming that's who was actually surveyed for these results, is the government, right? I mean, how are you gonna hold big tech to account? The government. The government should do something about it.
Ben Whitelaw:Yeah. Yeah. Uh, I think that's the only real discrepancy in the breakdown of the data, and one of the standout pieces there. And there's a thread there that I think would be interesting to see pulled on. I think there are plans for Reuters to do this again in the future, so we'll have this as a benchmark to move on from. Really interesting piece of research. As I said, I was surprised that there were so many interesting elements of it, despite what I thought was a fairly binary framing of the question. So glad to be proved wrong, as ever. Thomas, um, we've talked a bit about kind of regulation there in the context of what users want. One of the largest, best-known pieces of speech regulation, the DSA, actually turned three this week. It was passed in 2022 and came into force in February 2023. So actually the kind of three-year anniversary is a bit like when you become exclusive with somebody but you haven't officially started going out. It's the kind of fake anniversary that comes before you commit to somebody, I would say. Um, you still have to remember it though, 'cause you know, you always do. But it's a kind of milestone in many ways for speech regulation, and Algorithm Watch, a nonprofit organization that is really well known and I think does some great work, has written a piece about what's been learned in those three years: what the DSA has managed to do, what it's failed to do, what's down the track. And I know this is something that you were reading with great interest, Thomas.
Thomas Hughes:Absolutely. So I think this piece that Algorithm Watch put out is certainly really interesting. They quite rightly call out this anniversary as, I think, in a way, a sort of momentous but maybe slightly flat occasion in terms of progress. And, you know, they do some analysis around obviously the DSA, and particularly dive into two areas that we've already mentioned. But for me, in terms of the Digital Services Act, you know, I still think actually it's the best. It's a complex piece of legislation, right? And I would actually describe it, in my own mind, as having a series of beautiful sort of interlocking cogs that, if they turn together, I think are gonna make a really, really powerful system of regulation that is arm's length and provides the independence and the guarantees that we want. But nevertheless, it's obviously gonna take time for that to happen. And I think one of the real challenges is that, you know, in my mind I conceptualize it in a certain way, but everyone else seems to be conceptualizing it in their own way as well. So how this actually pans out in the end is gonna be obviously really interesting to see. But for me, obviously you've got those kind of internal pieces, so you've got the compliance requirements on the platforms themselves, like points of contact and transparency reporting and notice and action, internal appeals, and access to data like we've talked about. All those kind of parts that I think there is probably broad agreement that platforms must be doing, should be doing. And then it's got what I think of as the kind of external entities and functions that have been created somewhere between obviously the platforms themselves and then the statutory regulator, and then governments by extension. And those are things in my mind like, you know, trusted flaggers, obviously looking for illegal content, the vetted researchers, the auditors looking at the systemic risks, and then of course the out of court dispute settlement bodies as well. And each of these is an independent, separate body from the regulator, right? That maybe needs to be certified or approved in some way. And they need to have their work, obviously, to some degree overseen in an arm's length manner, but is nevertheless not the regulator, not appointed by the government and so on, or not appointed directly by the European Commission or the digital services coordinators at the national level. And that is the ecosystem that you want to create, right? That is the co-regulatory structure that we've seen work, I think relatively well, I mean, it's open to debate, but relatively well, in the media sector as a comparator, that you really want to see replicated for social media. And I, you know, I think as the article points out, we've got to give it a bit more time, right? And also it requires, I think, lots of different actors to lean in and engage. And I think that the DSA is a little maligned at the moment, because there's a kind of assumption of, like, why has nothing happened and why aren't things changing? But if you take us as an example: as soon as February 2024 passed, we got our application in, you know, within a few weeks. Then we went through the certification process.
Then, you know, after the summer of last year, we got approval. Then we, you know, ran hell for leather to get set up and build our systems and recruit all of our team and staff and so on and so forth. And then we launched at the end of last year. And now we've had just over, you know, half a year of active work, of which a lot of that is building up. So it's very easy to break those three years down into little chunks and say, actually, things are moving quickly. Three years sounds a long time, but things are moving fast. And if you look at the number of, at least, RFIs that have gone out from the European Commission and the number of investigations that they've opened, you know that it's a pretty considerable number, right? I think it's almost up to a hundred. So they're not sitting idle, right? I don't think anyone could accuse, you know, the officials in Brussels or elsewhere dotted around the EU of sort of not wanting and not being willing to pursue and take action, and potentially enforcement action as well, but we're still in these early days, right? And also, very importantly, one of the areas that the article calls out, of course, is Article 40 on vetted researchers. Like, we've only just seen the delegated act on data access coming out. And now I assume that the DSCs, and particularly Coimisiún na Meán in Ireland, they're gonna get a deluge of requests to get access to that data and for researchers to start utilizing it. And I know that there's still a lot to do in relation to really understanding how those access requests are granted, and what breadth and depth they have, and what the platforms will provide in response. But again, it's taken time, but we start to see this connecting. And as I said earlier, I see a direct connection between out of court dispute settlement and the disputes, and what vetted researchers are gonna do and what data they might seek. And then on the back of the data, I can imagine a world in which they put out policy advice, and we look at that policy advice as we issue decisions. And then some vetted researcher looks at that policy and goes to get more data. And same with trusted flaggers. You can see how all these things start to lock together and actually create a really vibrant and strong ecosystem which the platforms should benefit from, right? If they lean in, if they look at this in a constructive way, it would be useful for them as well.
Ben Whitelaw:Yeah. It was really interesting, this idea of a kind of ecosystem, and almost like a flywheel, perhaps, taking place, right? If all the parts work in the way that you say. I mean, as an onlooker, I've certainly been struck by the kind of speed at which things have happened. It does feel slow. I know that that's probably in a world where everything happens immediately, but, you know, the trusted flaggers, the digital services coordinators who are kind of put in post in various countries to approve things like the trusted flaggers and the ODS bodies, of which ACE is one, they have taken a while to be embedded and, you know, often to be set up at all. I think even a few months ago there were countries that didn't have a DSC in place. We're talking, you know, three years on now. So are things happening fast enough, really, particularly in a world where the platforms will always work at a faster speed than regulators? And what can be done to help address some of those speed issues that it feels like regulators are falling behind on?
Thomas Hughes:Yeah. Um, I think comparatively, if you compare this to, sort of, AVMSD implementation or even ADR implementation from an EU perspective, this is moving faster than those did. The responsibility to push the member states falls to the European Commission, and I believe that they are pushing. I don't have an in-depth oversight over all of the member states, but I think the majority have moved and have DSCs in place. Uh, we certainly send information to an awful lot of DSCs and they respond to us, right? We, for instance, shared our transparency report quite broadly, and many of them wrote back to us and thanked us. So they're getting up and running and paying attention. I think it's a resourcing question. There's lots of elements within this, and, you know, trying to resource and be engaged across all of the different aspects is very challenging. But some of the key DSCs, and here I would call out Coimisiún na Meán in Ireland, you know, they have moved fast and they have built up their resources quite broadly, and they have a lot of expertise in house in a way that their predecessors, not just in Ireland but elsewhere, maybe didn't have in terms of social media. So it's a very different looking regulatory landscape now. It's got complexity, 'cause they have to coordinate amongst each other and they have to try and harmonize to a degree across the different standards that they're applying, and hold a coherent line with the platforms and so on and so forth. So yes, there is lots of coordination that needs to take place, but my sense is that they are trying to move and they're not sort of dragging their heels. But, like you, um, you know, we have areas where, and it's called out in our report, for instance, where we don't think platforms are doing enough. Uh, in fact, we very directly called out YouTube for really being very slow on getting up and running and sharing any data. Actually, YouTube, unlike the other platforms that we work with, has shared no data with us, and we don't have data sharing agreements and arrangements in place. And I'm gonna, I'm gonna very defensively say that after we released our report, they said that we didn't have the right, uh, security and privacy protections in place. That is very disingenuous as a response, because we have put the requirements in place with other platforms in a much, much shorter timeframe. And actually we have very robust data protection standards and security standards in place at the Appeals Centre
Ben Whitelaw:mm.
Thomas Hughes:Europe. The problem with our engagement with YouTube to date is that it's been punctuated by very long periods of silence.
Ben Whitelaw:Mm-hmm.
Thomas Hughes:Um, and we would love to complete that process as fast as possible. Sorry, that was a bit of a, that was my frustration. That was a bit of a divergence. Um, and there are problems with the other platforms as well, right? We have issues across the board, some small, some large, so on and so forth. So I think pressure on platforms to comply is another crucial component of this, right? In order to get this up and running.
Ben Whitelaw:And I think, you know, you mentioned ADR, alternative dispute resolution, which is a kind of EU mechanism for consumer disputes, right, which I'm only vaguely familiar with. But I think the point there is, you know, the expectations are that the EU is able to set up a regime like this fairly quickly. And actually, from what you're saying, the way that it's been done so far, the first three years, has been pretty speedy, has been pretty effective in comparison to other, perhaps non-technology-related regimes. It's a case of expectations, I think, from what you're saying. I wanna come back to the point around, we've talked about platform compliance, particularly around the ODS bodies; there's an element of that here in this Algorithm Watch piece. What do you think the kind of cause of this enforcement lag is? You know, we see platforms not responding in the way that you'd expect to bodies like your own, giving data, acting in good faith as the legislation lays out, kind of engaging in a way that, you know, is specified in the Digital Services Act. Do we think we'll start to see that being enforced more heavily? Or, as Mike and I always talk about, is the point of regulation that the threat is the thing?
Thomas Hughes:Yeah, no, of course. So it's not a binary black and white scenario of complying or not complying, right? If you were to ask me the question, are the platforms engaged, right? Do you have a constructive relationship with them? Are they complying at a sort of basic level? I would say yes. Yes, they are, right. They set up teams. We built systems. We send them disputes. They send us data back, we issue decisions. They implement those decisions. So that system works. Is it fully compliant? Is it as broad as it needs to be? Is it as swift as it needs to be, and as timely as it needs to be? No. No, it's not. But it's only been operational for now, well, more than half, three quarters of a year, right? Realistically, it's only been operational for what is, in reality, a short period of time. Now, don't get me wrong, I do not wanna sit here and sound like I'm apologizing for platforms, 'cause I want 'em to move much faster, right? This is not fast enough. It's not good enough. They need to do better. But in the bigger picture of things, it is working, right? It's starting to show its potential. And, you know, when you are bridging a gap between two bodies and how to interpret a piece of legislation, the easy part is the initial part. It's like, so what does minimal compliance on your side look like, and what are our expectations on our side to get up and running? Okay, you get that, you bridge that quite quickly. That, YouTube aside, I would characterize as the first quarter of this year, the first four months. The next period was like, okay, so where are we actually disagreeing? Where won't you give us this content? Why aren't you giving us this content? Like, why aren't you able to find this content? Right? And I think some of the platforms, to be really honest, and obviously I can't speak for them, they haven't said this directly to me, but I think they've been like, oh my God, why can't we find this content? Like, it's an eligible dispute. We don't dispute that it's an eligible dispute. But our systems somehow are not constructed such that we can find this type of content in this environment for this reason. So they're figuring out how they build those systems so that, in the backend, in their content moderation, they can find a piece of content. You know, sometimes we have the URL, right? We can see the content on the platform, but they can't find it in the system. So, you know, they've got to close this gap, right? And that's a technical issue. And then there's the legal issue, right? So we're also going toe to toe with, you know, legal teams, and they're saying, well, that's not in scope. We're like, no, no, that is in scope, right? It's a pretty broad piece of legislation. If you wanna dispute things being in scope, we're pretty confident we're gonna see this one through. But no, that is in scope. And so there's that. So it is this bit, right, this is the hard part, to close the gap. We're only now seeing, I think, a bit more of a discussion, sort of, now we've come to these kind of loggerheads, a bit more of a serious discussion where I think we might make some progress. Now, in the shorter term, well, it's gonna take a very long time to completely, entirely close that gap. But if we can get that gap from, let's say, well, 50% of the decisions are being issued, right?
So let's say we're 50% of the way there, right? Uh, I think we can probably, through this next period, get to 70%, and then I think we're gonna have a really hard time closing off the last 30%. That would be my expectation.
Ben Whitelaw:Okay. Interesting. So a fairly long way to go, but as far as the DSA is concerned, a better-than-terrible start, if not maybe as dire as Algorithm Watch painted it in this piece, but a helpful kind of analysis in there, and context from you there, Thomas. I'm gonna flip us to talking about a few smaller stories that help round up today's podcast. Mike and I tend to do this as speedily as possible, but we never succeed, so there's really no pressure for you to do the same. But, um, I wanted to highlight a story that has been doing the rounds. Um, I've been sent it by a few people this week, which is a Discord data breach. Now, the kind of video chat platform suffered a breach a couple of weeks ago, on the 20th of September, and this week they announced that it had happened. And it's a fairly kind of standard data breach in the sense that it involves the email, username, IP address of a segment of its users. I think what's notable, and what makes it relevant to Ctrl-Alt-Speech, is that this was as a result of a kind of breach of a third-party customer service platform that, it seems, the trust and safety team used. And the reason why that matters is because they have announced that a small number of personal IDs, so driving licenses, passports, have been stolen as a result of that. And these are personal details that have been provided to Discord as part of disputes about age verification. So listeners will have heard, over the last kind of probably six months or so, in various countries, age verification has become mandatory under regulation in those countries, notably the UK and Australia. So platforms have decided to roll out age verification systems on their platforms for accessing various content. Those personal pieces of data, passport, driving license, are not kept as standard. They're not held by platforms, and Discord makes that very clear when you provide that data. However, if you appeal that age verification, if you fall foul of it, if you are deemed to be too young, which I would not be, but if you are, if you're deemed to be too young, then you can appeal, and as such, some of that data has been stored, and that has been part of the breach. So it's interesting in a number of regards. The people who are skeptical of age verification, and I have qualms about it in some of these regards as well, are saying, here we go, this is the issue with age verification. You know, you're gonna create more issues for more people because the privacy and security issues that come with collecting people's data are such that you try and protect a small segment of people and you end up jeopardizing everyone else. I think it's a bit more nuanced than that here, because it is a quite obscure set of users who have appealed. However, it is something that Discord should have covered off, and it's data they shouldn't have been collecting and storing. I dunno how much you've been keeping track of the age verification debate, Thomas. Has it crossed your path? What do you make of this?
Thomas Hughes:It has not crossed my path in the Appeals Centre Europe world, but it's certainly something you can't have missed in the broader debate, and also in the UK of late, you know, as the Online Safety Act rules about age verification came into force through the summer. Look, I've always thought that doing the verification at the device level seemed to be the logical way to go. I think the point in this article also is that there shouldn't be dependency on just one form, and that if you do use a certain form, you need fallback options that are readily available and quick and easy to use for individuals. So I'm gonna go for the get-out phrase of saying this is a knotty issue and it's gonna take some time to resolve.
Ben Whitelaw:Yeah, I think this is, um, unfortunate for the platform and a kind of perfectly bow-tied present for people who want to be skeptical of age verification, for some good reasons. But a notable story that has been covered in a lot of places, uh, including The Guardian, where we read it. We'll round up, Thomas, on a story that I'm gonna classify as a 'jeez, America, you've done it again' story. Um, I think this is, again, a broader political story that has a platform element. And, uh, much like YouTube, which you talked about earlier, and which is not a platform that regularly gets a lot of heat, I would say, for its speech policy application, this one concerns Apple. You wanna talk a bit about that?
Thomas Hughes:Yeah, sure. So this is obviously the removal of DeICER, a civic reporting app, from the App Store. And it's an interesting one, because I would love to sit down with the lawyers in Apple who decided, if that's what they have decided, of course, that being a police officer is a protected characteristic under any kind of, um, human rights framework. Because I think it's pretty clearly established, actually, that being a police officer is not a protected characteristic; professions generally are not. You know, we would normally think of things like race, ethnicity, disability, religion, these types of things, as being protected characteristics. But looking at the Apple policies, I think this was taken down under the first one, policy 1.1. And I do wonder to some degree whether they've decided to apply a kind of quasi hate speech type of policy area to this, 'cause it just gives them a lower threshold to apply. Because, essentially, you would only need to sort of demonstrate, and I've not been on the DeICER app, so I don't know what kind of content's there, but you would only need to demonstrate that there's kind of harm or accusations of harm, or criminality, or dehumanizing content, or maybe, you know, slurs and so on and so forth. Things that you might imagine might be available if content can be shared on this app, 'cause I'm assuming people using the app are not enamored with the actions of ICE. Whereas under the Apple policy that deals with encouraging violence, the kind of violence and incitement policy that you would normally expect a tech platform to have, there's something that might actually apply to police, and potential harm to law enforcement officers. But it's almost certainly gonna have a very high threshold. It's almost certainly gonna have, like, imminent harm as a threshold, and maybe Apple, and here I'm just wildly guessing, of course, but maybe Apple felt that they would not be able to establish the DeICER app was presenting imminent harm to ICE agents and police officers. But maybe they thought there was some unpleasant commentary going on about those ICE agents and police officers. But in general, with my own background in human rights promotion and defense, I would actually argue, obviously, that police officers are state actors, right? And often are the perpetrators of rights abuses. I'm not suggesting that's necessarily the case here, but in general, right? So actually accountability and transparency over their actions, and oversight over their actions, and empowering individuals to be able to hold those actors to account and defend their rights is actually really important. I mean, it's like a crucial underpinning of democratic society. So, yeah, I don't think we can award any merit stars to Apple for this decision.
Ben Whitelaw:No, definitely not. I mean, the idea that a kind of app designed for civic reporting, made by one developer, that was on the App Store, has been taken down in this way, amid the sea of other apps that presumably are live and available for potentially similar things, does raise alarm bells. Particularly when you think about, you know, the US administration and its connections to big tech platforms, including Apple and including Tim Cook. There's basically no tech platform that hasn't seemingly kowtowed to the US president in recent months. But I think that's what the concern here is, that there has been a concerted effort to take down an app for reasons that are perhaps a little bit unclear, at least. And this is as a result of some great reporting by Migrant Insider, which is a kind of independent publication. There's also a really interesting letter from a former Apple employee, which we'll include in the show notes, that calls on Apple to explain its decisions. Um, he's an Apple shareholder. He obviously thinks that Apple were one of the good guys, as many people do, and this kind of behavior doesn't chime with that, I would say. So, yeah, not a great look for Apple, not a great look for the US at this point in time. And we can say that, Thomas, because Mike's not here. And, uh, he was slating Brits on the podcast last week, so I feel like we can give a bit back. Fair game, after all. Um, Thomas, that brings us to the end of today's podcast. We've gone deep on two really interesting stories, from the Reuters Institute and from Algorithm Watch. We've talked a bit about the Discord data breach and also the Apple ICE agent story. Some amazing reporting and news media organizations have allowed us to understand what's going on in the trust and safety world; go and subscribe to them, go and read them, it allows us to talk a bit about it on the podcast. Thank you for your time, Thomas. Really appreciate it. That URL, again: appealscentre.eu, Centre spelled C-E-N-T-R-E for our US listeners. And, uh, yeah, thanks very much for taking the time today.
Thomas Hughes:Ben, thank you very much. I hope I've been sufficiently Mike adjacent, and it's been a real pleasure talking to you.
Ben Whitelaw:Excellent. Well, thanks for your time. Um, look forward to having you on the podcast again. Listeners, if you enjoyed today's podcast, rate and review us wherever you get your podcasts, wherever you listen. We look forward to having you back next week, when Mike will be in the chair once again. Thanks everyone. Take care.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.