Ctrl-Alt-Speech

The Global Internet - Or Is It?

Mike Masnick & Ben Whitelaw Season 1 Episode 1

In this week's round-up of news about online speech, content moderation and internet regulation, Mike and Ben cover: 

  • The US TikTok ban and what it could mean for the future of the internet (Techdirt)
  • The EU prepares to regulate Chinese marketplaces (Reuters)
  • Telegram's CEO gives a rare interview - and what that says about online speech (Financial Times)
  • Generative AI is already messing with elections (Al Jazeera)
  • Bluesky open sources its moderation tooling software (Bluesky)
  • Trust & Safety software market is set to double by 2028, according to a new report (Duco)

The episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our launch sponsor Modulate, the prosocial voice technology company making online spaces safer and more inclusive. In our Bonus Chat at the end of the episode, Modulate CEO Mike Pappas joins us to talk about how safety lessons from the gaming world can be applied to the broader T&S industry and how advances in AI are helping make voice moderation more accurate. 

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So, Mike, in the eternal words of the Facebook status bar, what is on your mind this week?

Mike Masnick:

I'm thinking that we seem to still be fighting the same fights about internet speech that I thought we had solved decades ago, and it is frustrating me. What is on your mind?

Ben Whitelaw:

Well, I'm sitting across the Atlantic wondering just whether America is going to lose its collective mind over TikTok, and when that process will be complete.

Mike Masnick:

I think it has already started.

Ben Whitelaw:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation and internet regulation. This week's episode is brought to you with the financial support of the Future of Online Trust and Safety Fund and by our launch sponsor Modulate, the prosocial voice technology company making online spaces safer and more inclusive. We'll be speaking to Modulate CEO Mike Pappas in our bonus chat later on in today's episode about how safety learnings from gaming can apply to the broader industry and how advances in AI are helping to make voice moderation more accurate. He's got a really interesting playground analogy which is worth hanging around for. My name is Ben Whitelaw, I'm the editor and founder of Everything in Moderation, and joining me is a man who knows the internet like few others, Mike Masnick of Techdirt. Hey Mike.

Mike Masnick:

Hey Ben, welcome to our first episode.

Ben Whitelaw:

I know, I know, I didn't think it was ever going to be here. Um, but it has arrived.

Mike Masnick:

Yes, it is, it is very, very exciting. Uh, just to explain to the folks who are listening: hopefully you have a general sense of why you're here, but if not, Ben and I, who have been writing about the world of online speech and trust and safety and a bunch of other stuff, feel like we always have stuff to talk about. And rather than just writing about it, we are going to be talking about it, with each other and with a bunch of guests that we will have as we go along. So each week we are intending to discuss the latest in the world of online speech, trust and safety, content moderation, new technologies around that, different laws, court cases, all that kind of stuff around the world. The plan is that we'll do a few deep dives on the major stories, which will just be a discussion, occasionally me ranting and, I think, Ben trying to pull me back into line, uh, and then hopefully we'll do a quick roundup of a few other stories each week as well before leading into our bonus chats.

Ben Whitelaw:

Yeah, between Mike in the US and myself based in the UK, looking very much towards the EU and the rest of the world, I think we can do a really good job of summarizing and digesting all of this week's content moderation news for you as listeners. So without further ado, let's get started, Mike. This week we've got a whole range of stories on our roster to talk about, everything from gaming to open source tooling. But where else to start other than the TikTok ban bill, which everyone has spoken about,

Mike Masnick:

Oh no!

Ben Whitelaw:

and was, uh, the kind of reason for my intro today about the collective madness that seems to have taken over the US.

Mike Masnick:

Do we really have to talk about this?

Ben Whitelaw:

Sorry, it's the only way, even if it doesn't, you know, reflect too well on the US political system. But if our listeners haven't paid attention this week, if they're coming to this episode of Ctrl-Alt-Speech fresh, talk us through what happened this week, if you can, and what the outcomes are.

Mike Masnick:

Yeah, so, you know, there's been talk in the U.S. for years now, going back to the Trump administration, of potentially banning TikTok. It initially came from a bunch of TikTokers effectively pranking Donald Trump: reserving a bunch of tickets to a rally that he expected was going to be overflowing and then not showing up, so the rally was mostly empty, which was sort of embarrassing. And suddenly he wanted to ban TikTok. He tried, and there were a couple of different executive orders, and then there were some court cases, and for the most part the courts rejected the theories behind it. Ever since then there's been this sort of back and forth, and there was a pressure campaign which, well, originally it looked like it was going to be a forced sale, but instead it was this forced partnership with Oracle in the US to host TikTok's US data here in the US. They set up this whole thing called Project Texas, where Oracle would host the data but also would effectively audit it and make sure it was being kept secure. For some reason a lot of that is now ignored, and everyone sort of forgets that it existed. And there's just been all this talk about how TikTok is a national security threat in the U.S., though nobody ever presents any actual evidence to support that. There's a lot of speculation and some fair level of, uh, xenophobia, I think, the most obvious example being the Senate hearing where Senator Tom Cotton from Arkansas quizzed the CEO of TikTok on whether or not he was from China (he's not, he's from Singapore) and whether or not he was a member of the Chinese Communist Party (again, he's Singaporean). It was, you know, sort of very embarrassing as a U.S. citizen to see a senator being that openly bigoted. But anyway, there's been this ongoing discussion. Finally, really very, very quickly, about a week ago, the House came up with this bill, which was immediately voted out of committee, and now the entire House floor has voted on it and pretty overwhelmingly approved the bill. There are questions about whether the Senate is going to approve it; they have a different bill, which has mostly been ignored, and there are questions about whether they'll present a different one, and where that goes. However, the Biden administration has been very vocal that if this kind of bill gets to President Biden, he will sign it. And apparently he was even working with House Republicans on their bill, which is not something that happens very often. So, you know, there's a real chance that this actually becomes law. And the specifics of the law are that it requires that ByteDance, the Chinese company, divest its ownership of TikTok, and then, if that doesn't happen, it requires app stores in the U.S. and others to effectively block access to TikTok in the U.S. And so there are questions, you know. TikTok definitely refers to it as a bill banning TikTok. And you know,
people who support it say it's not a ban, it's just requiring divestiture. And the thing that really gets me about it, and sort of why I opened with "we're fighting the same fights that I thought we'd solved decades ago," is that one of the very first semi-controversial things that I wrote about on Techdirt, you know, 25, 26 years ago, was all about the question of jurisdiction. The internet is this global mechanism for speech; it allows anyone to communicate with anyone around the globe, for the most part. But there were all these questions about, well, who then makes the laws, and how do they apply in a global, interconnected world? And so now, suddenly, here we are again. Obviously there have been cracks in that dam for a long time: you have things like the Great Firewall of China, you have other countries putting in place different laws, and now we have all sorts of questions coming up in other contexts, you know, in the EU with the DSA and the DMA, and for years now we've had the GDPR and the question of how that impacts the internet in other places as well. So there are some questions around that. But now we're having this discussion around TikTok in the US, and this idea of the US effectively banning a global application because of some vague, not clearly articulated concerns, some of which might be, uh, xenophobic, and some of which might have legitimate purposes, but we don't know which ones those are. And it scares me that we're reaching this level of really fragmenting the open internet in this manner, and that many, many people who I would think are otherwise level-headed are immediately jumping on board to support this. You know, I wrote something about this saying I don't think this bill is good, I don't think it's constitutional, and I don't think it's effective, which is kind of an important part of it. The concerns change depending on who's talking and what they're talking about, but if the concern is data access and privacy, pass a privacy bill. Like, we can do that, and have it apply to everyone, rather than this bill, which calls out TikTok explicitly. And if the concern is national security, and, like, uh, propaganda, around, oh, they're going to influence us with algorithms: the algorithms are not that powerful, first of all. Yes, there can be influence on the margins and stuff, but people are talking about it like it's mind control, like TikTok is going to magically turn everyone in America into a mindless robot of the Chinese Communist Party. There's very little evidence of that happening. And I had found these two different surveys that looked at American attitudes towards China, and those attitudes have been going steadily down over the last year, to the point that they're at record lows; people distrust China way more than ever in the past. And so, like, if China has been using TikTok to influence Americans

Ben Whitelaw:

They're not doing a very good job.

Mike Masnick:

They're doing, they're doing a really, really poor job of it. Um, but, you know, I am concerned about where this leads and what precedent it sets, not just in the U.S. but globally. There are obviously other countries that have banned TikTok, you know, India banned TikTok, and other countries have banned other apps and done other things, but historically we've sort of viewed that as, you know, an illiberal, anti-freedom approach to dealing with these problems, not a serious, thoughtful approach. And then the U.S. comes, you know, busting through the walls, like, we're going to do the same thing.

Ben Whitelaw:

It does feel like a bit of a moment, doesn't it? The idea of the splinternet is something that has been discussed as a kind of somewhat abstract theory, and this week was the first time I thought, wow, this has the potential to really happen. And, you know, you mentioned before we started recording another story that we saw this week, about a college in the US also banning a bunch of apps that students are using to spread information about other students, students who are being accused of bullying one another. Basically, this idea of kind of constitutional speech feels like it's eroding, at least to me, looking across the Atlantic, and it's really interesting to see it happen so quickly. Can you give listeners a sense of why it's happening now, and why we're seeing the speed at which it's happening?

Mike Masnick:

I mean, I think there are a number of different reasons why it could be, and everyone's got their own excuses, but it really does feel like, over the last few years, it's partly just the general techlash concept: like, oh, you know, we loved these apps, and they were, you know, the Arab Spring and freedom and all this wonderful stuff. And then we saw some bad stuff happening too, and people are commenting on it, and some people are urging on bad behavior, and suddenly people who, you know, said, like, we're for freedom are going, jeez, these people are a little too free, and sometimes they're saying bad stuff. It's led to this sort of larger backlash, and so there's now a very, very strong appetite in the U.S., and this is on both sides of the political aisle, to say, well, there's stuff online that shouldn't be online and we need to step in and stop it. And, you know, I'm worried about where that leads. The TikTok ban is just sort of one example of that. And, you know, you mentioned UNC is now banning these anonymous chat apps, which have problems, absolutely have problems. But the fact that the first step, the immediate step, is "well, we need to ban these apps," I think is really problematic, and I think leads us to a bad place overall, if we think that there are real benefits to having a global, interconnected internet where anyone can communicate with anyone else. And I worry about where the world is heading.

Ben Whitelaw:

Yeah, yeah, me too. I mean, we could talk about this for the whole episode, but I think it's an issue we're definitely going to come back to in future editions.

Mike Masnick:

Well, I've been writing about it for 25 years, so apparently it's not going away anytime soon. But yes, let's move on.

Ben Whitelaw:

If a podcast lasts half as long as that, then we've done well, I will say that. I mean, there's an interesting segue there between the TikTok ban bill and the story that I've been really interested in this week, which is in Europe, where a Chinese company, not dissimilar, with a large user base, is facing some tough rules imposed by the European Commission. So Shein, which is a website that sells mostly clothes, is based out of China and ships them across the world very cheaply, and it's been reported this week that it's going to be designated as a VLOP, a very large online platform, under the Digital Services Act. There are, I think, 18 or 19 VLOPs already, and Shein is looking like it might be the newest one. It's really interesting because it's essentially just a big shopping website; it doesn't look or feel like the kinds of platforms, or intermediaries in the parlance, that the DSA was designed to cover. But here we have a giant retailer marketplace, not dissimilar to Amazon and AliExpress, which are also covered by the DSA as VLOPs, being brought under the remit of the DSA. The timelines are not clear from the reports that I read; it's not obvious when that will take place. But if it does go ahead, Shein will have four months to adhere to the DSA and all of the rules and regulations that come with being a VLOP: being transparent about how many moderators you have, having stricter reporting processes in place, et cetera, et cetera, which obviously the major platforms elsewhere have spent a lot of time over the last six months setting themselves up to do. So what's interesting here for me is that it's a reminder that the DSA, and regulation elsewhere as well, is not just about social media platforms, despite what's happening in the U.S. and the kind of rhetoric that's coming out from European politicians, and also politicians in the UK. Social media is just one part of the puzzle, and you have these giant intermediaries who have the potential to sell fake products and clothes, and the DSA is a piece of regulation that's designed to mitigate that. So it's really interesting, alongside the TikTok story you mentioned, Mike, that this idea of China and regulation is cropping up again in a different context.

Mike Masnick:

Yeah, and there are a bunch of things about it that are really interesting. One is just how quickly it's grown to such a massive size. I mean, there's Shein and there's Temu, and these companies have become really big really fast and have an outsized impact. So that part of it is really interesting. There was another article this week, in the Wall Street Journal I think, that was talking about how American social media companies are making billions of dollars off of these Chinese retailers, and the fact that their advertising path, their path to getting customers, comes through social media ads and search ads, and how all of this stuff becomes interconnected. To me, that's fascinating. And then, on top of that, you're exactly right: thinking of Shein in the context of the DSA, where I think most people think of the DSA as being about social media and don't realize that it can apply to shopping and other things as well. And, you know, what are the moderation questions around e-commerce websites? That's something that's not as well considered, but it's become super, super important as these new companies become so prevalent.

Ben Whitelaw:

Yeah, it's almost as if the harms that come about as a result of these e-commerce platforms are seen as somewhat less than the speech ones that people are obviously more likely to encounter on social media platforms. And, you know, you can have a debate about whether that's true. If you're being scammed on a regular basis, and you're losing money as a result of paying for services or products that you never get, that's pretty upsetting, and it's obviously what the DSA is trying to address. But from a political perspective, there's very little to gain by going after the marketplaces, because the general public don't seem to care as much, and, you're right, over the last decade or so the marketplaces haven't had the kind of issues around speech and broader democracy that we've seen with social media. So that's what's interesting to me: the framing of this story as somehow less important.

Mike Masnick:

Yeah, well, there are a few things there. One is, well, there is also the DMA side of it, right? The Digital Markets Act, which is supposed to be about marketplaces. So there's a part of me that's just like, wait, shouldn't this discussion be happening within that context? But I recognize that a lot of both of those laws applies to different kinds of companies across the board. It also makes me wonder, you know, how do these companies impact speech in other ways as well? Because there is this instinct to just say, oh, this is just about commerce, there's no impact on speech, but realistically that's not true. There are a bunch of things in terms of, you know, if they're marketplaces that third parties can sell on, what does that mean? What does that enable? And are there ways in which bad actors will game them? I mean, you talked about not getting products, but there are also questions about faulty or dangerous products, or counterfeit products, and how do those fit into the debate? There are real concerns there, but there are also concerns about overreacting, right? Which is the same thing that comes up in lots of other contexts as well. Are the fears about this overblown, and what does that mean? Are we shutting down an interesting or useful marketplace because of maybe a few bad actors, and how do you deal with that? I remember a friend of mine, this is, again, decades ago (I think we've established in this podcast that I'm old, by the way), had a job early on going through vendors on the CNET marketplace and trying to deal with bad actors. Trust and safety and content moderation of retailers and e-commerce go back a long way; you had scammers. And in fact, I was just having this discussion: if I remember correctly, and you might know this and I might not remember correctly, I think the whole term "trust and safety" actually started at eBay. They were the first ones to adopt that phrase as a concept, and that is marketplace shopping. You know, these things all tie together, and there are lessons from one that apply to others. And I feel like it's weird how much the general public has ignored that side of it, but it is a really important part.

Ben Whitelaw:

Yeah, agreed. And there is an interesting point here as well about the DSA itself, which has had a bit of a rocky start in some respects. You know, I've written in Everything in Moderation about the difficulties they've had appointing Digital Services Coordinators in each of the European countries: the kind of point person who would react to issues in each country related to intermediaries.

Mike Masnick:

And it's not as if they've had a year to prepare for this, but...

Ben Whitelaw:

Right. I mean, there's a massive talent issue, I think, across the EU, and even despite all of the layoffs at the platforms, it's been really difficult, it seems, to find point people in the appointed regulators in each country. But this is almost, in some sense, an indication that the DSA is working in its current structure. You have a big company in Shein, and you have, as you say, another marketplace, Temu, who are being called to produce more information by the appointed Digital Services Coordinators for particular countries, and who are then going to have to be bound by providing information and have to justify their status as an intermediary. And if they're upgraded to a VLOP, then they're going to have to undergo a whole bunch of changes to ensure that they're compliant. So in some ways it's proof that the DSA maybe is working a little bit better than we thought. Again, time will tell, but I think if you're working on the DSA, this is a good sign. And, you know, I guess if you're somebody in the States looking across the water, maybe this is a good example of how it could work when faced with a giant Chinese company that you're worried about sucking up data.

Mike Masnick:

Wait, wait, are you suggesting that having an actual larger framework that applies to everyone first, rather than reactively jumping in with a bill that was written in a couple of days, might be a better concept? Ah,

Ben Whitelaw:

I'm not...

Mike Masnick:

sacrilege. How dare you?

Ben Whitelaw:

So, yeah, those are the two big stories this week, I think, and they definitely speak to a broader trend which we'll come back to in future weeks. We've promised our listeners, Mike, a bit of a quickfire roundup of the rest of this week's news.

Mike Masnick:

Yes. And we should warn people that, since Ben and I both like to talk, we're not sure how good we are at the quick roundup, but we're about to find out.

Ben Whitelaw:

Yeah, an admission before we've even started. Um, show us how it's done, Mike. Tell us about the kind of interesting quickfire stories you've seen.

Mike Masnick:

Yeah. So I thought there was a really interesting interview in the Financial Times with the Telegram CEO, who does not give interviews; I think this was his first interview in seven or eight years. Um, and it struck me how incredible it is that Telegram has sort of been under the radar for a lot of these discussions on online speech and trust and safety and regulations and all this stuff, and yet it is absolutely massive, with a very small number of employees. They revealed in the piece that they only have 50 employees, and they do have contract moderators, but it didn't sound like a very large number of contract moderators; I think it was a hundred, 150 or so. Um, it's just a fascinating story about what they're doing and how they view it, which is very, very, very hands-off, and that creates some of the problems, and how they've sort of just been able to skate without that much attention despite there being an awful lot of really terrible stuff happening on Telegram.

Ben Whitelaw:

Why do you think that is?

Mike Masnick:

Why do I think... which part of it? Why do I think they've escaped? I mean, a few different reasons. One is, you know, some of the nature of Telegram, right? Some of it is these public chat rooms, and a lot of it is more private, and they claim encrypted; there are questions about how encrypted it really is. Um, and the fact that most of it was not American, I think, has really, you know, helped it avoid a lot of the press. You know, it was started by a Russian guy who no longer lives there because he had to get out of Russia, and I guess it's based in Dubai now. And so it's been this sort of more global phenomenon, and for whatever reason it gets less attention in the U.S. And when you don't get attention from the U.S. press, it sometimes feels like it doesn't exist, which is problematic for a variety of reasons. But yeah, it's just a really interesting story and worth checking out.

Ben Whitelaw:

Yeah. Okay. And we'll share links to all the stories we talk about in today's episode in the show notes. Um, so you can find those for yourself. Anything else that you saw from this week?

Mike Masnick:

Yeah, there were a couple of other interesting things. I mean, there have been lots of questions for a while about artificial intelligence, generative AI, and elections; it's a big election year all around the globe, and we're starting to see stories come out. You know, a few weeks ago in the US there was all this stuff about the deepfake Joe Biden. But there are some other really interesting stories, and the thing that strikes me as really interesting is this: for years we've been hearing all these concerns about how deepfakes were going to show a fake so-and-so doing something and it was going to cause alarm. In almost every example so far, everyone knows almost immediately that it's a deepfake, so that hasn't been the big impact. What has been really interesting to me is that, instead, the bigger concern is that when evidence of actual wrongdoing is presented, politicians are very quickly jumping to "oh, that was a deepfake, that wasn't really me." And we've seen that around the globe. There have been a couple of reports over the last few months, but the latest one is Donald Trump: there was a video that the Democrats put together showing, you know, Trump being Trump, saying a bunch of silly, ridiculous stuff and sounding completely incoherent, and his response was that it was a deepfake, when it wasn't; these were all accurate videos. So it's really interesting to me to see that come out. But then, at the same time, there was also a really interesting story about AI being used in the Indian election, because, again, it's not just an American thing, and how the various political campaigns in India are, like, really gleefully embracing deepfake images to present their opponents in a negative light. There's a very interesting story in Al Jazeera about that.

Ben Whitelaw:

It's the creation of doubt, isn't it? It sows the seed of whether something

Mike Masnick:

Yeah, I mean...

Ben Whitelaw:

is real or not. And politicians love doubt.

Mike Masnick:

Well, I mean, the question to me, and I think this is going to be a really interesting area of research going forward, is how effective is that? And I don't think we know. I think a lot of people are making a lot of assumptions, and it would be really interesting to find out what the reality is.

Ben Whitelaw:

Yeah.

Mike Masnick:

What about you? What's a story that caught your eye?

Ben Whitelaw:

Yeah, a couple of things caught my eye. I mean, I was really interested to see Bluesky open-source a web interface that they built for moderating on the platform. This is all part of their approach to moderation, which they're calling stackable, composable moderation: basically understanding that you cannot set rules that will govern all 5 million users that it has on the platform already. It's essentially creating tools and filters and toggles, in a sense, that allow people to create their own experience, which, again, is something that you've talked a lot about and written a lot about. And this is a really interesting move, in that people will now be able to create their own moderation services for the platform, and essentially will be able to build almost-like moderation teams using this tool. I can foresee, in maybe a year or a couple of years' time, people almost having moderation businesses that they operate themselves, perhaps funded or supported by other consumers on the platform. So it's kind of taking that Reddit model, to an extent, but almost making it a bit more sustainable. I'm really interested to see where this goes. You know, you mentioned before we started recording, Mike, that you've had a chance to look at this, and you were pretty, pretty impressed.

Mike Masnick:

Yeah, it's sort of fascinating. I mean, the thing to me that's most fascinating about the Bluesky setup is that they're really abstracting out the different layers. We've just come to believe that all of these things need to be bundled together, and obviously there are some advantages to bundling a bunch of this stuff together, but it's different, and it'll be interesting to see how it works when you can abstract out the different layers and allow different service providers to do different aspects of it. So the stuff that they released is really interesting. Some of it is just open source tooling, and it'll be interesting to see if other systems adopt it; I think there is actually some interest from the ActivityPub, Mastodon world and the Nostr world to potentially see what they can use of these open source tools. But it's, you know, creating tooling that somebody else could come along with and set up an operation that could handle moderation for Bluesky or other AT Protocol services, and users could choose who they want. And, again, they call it stackable because you can combine services, and that'll lead to some interesting dynamics and maybe interesting conflicts in how it works; I think it's set up right now so that the most strict application wins. And there's another element in all this, which was also released, which is the labeler part of it: you can have different systems go in and apply different labels to different content, and you can have people report things, and it can go to different labelers. So all of these layers being separated out can lead to really, really interesting dynamics and interactions. It's going to be fascinating to see how it works. And I think especially at this moment, where AI systems are becoming so much more powerful, that combination is going to be really interesting, where it's not just going to be about a large group of content moderators doing this, but the interaction between maybe smaller groups of people combined with AI tools, being able to offer up interesting moderation services. It's going to be fascinating. So I'm intrigued by it. And, you know, I did see a demo, and it's some pretty cool tooling that I think will be really interesting to see how people use.
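
To make the "stackable" idea concrete, here is a minimal sketch, in TypeScript, of how verdicts from several independent labeler services might be combined, with the strictest one winning, as Mike describes. The field names are simplified illustrations loosely inspired by the AT Protocol label concept, not Bluesky's actual schema or code:

    // A minimal sketch of "stackable" moderation: several independent
    // labeler services each emit a verdict for a post, and the client
    // applies the strictest one. Field names are simplified illustrations,
    // not the real AT Protocol schema.
    type Verdict = "show" | "warn" | "hide";

    interface Label {
      src: string;  // which labeler service produced this label
      uri: string;  // the post being labeled
      val: Verdict; // that labeler's verdict
    }

    // Order verdicts from most permissive to most restrictive.
    const severity: Record<Verdict, number> = { show: 0, warn: 1, hide: 2 };

    // Combine labels from every service the user has subscribed to;
    // the most restrictive verdict wins ("the most strict application wins").
    function combine(labels: Label[]): Verdict {
      return labels.reduce<Verdict>(
        (worst, l) => (severity[l.val] > severity[worst] ? l.val : worst),
        "show",
      );
    }

    // Example: two hypothetical labelers disagree about the same post.
    const labels: Label[] = [
      { src: "labeler.example-a", uri: "at://post/123", val: "warn" },
      { src: "labeler.example-b", uri: "at://post/123", val: "hide" },
    ];
    console.log(combine(labels)); // "hide"

The design point is that because labelers are just independent sources of labels, a user can subscribe to several at once and the client resolves conflicts with a simple rule, which is what makes third-party moderation services composable.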

Ben Whitelaw:

Yeah. I mean, I was reading up on it and almost felt like I wanted to be...

Mike Masnick:

Yeah,

Ben Whitelaw:

you know, I wanted to have a kind of use case to see how it worked, and to have a community that needed active moderation, because it looks really smart. As you say, they've thought a lot about the different roles that people can play within that. And having done a lot of moderation, and run teams within newsrooms that have led moderation, and had some really bad tools to do that, I was kind of like, this is definitely a step forward, and it'll be interesting to see how it works. It's also really interesting in the context of another story that I just want to quickly talk about, which is a new report from Duco, um, which is a kind of AI tooling company. They produced the Trust and Safety Market Research Report, which essentially forecasts massive growth in trust and safety software and tooling over the next five years. It's predicting that the market is going to double between now and 2028, and that's as a result of a number of key things: mainly, actually, the rise in artificial intelligence, which is going to make some of the things that the tooling does much cheaper, much more effective, much more accurate, but also the ongoing attention from media, the very nature of elections, which are going to attract a lot of attention, and, interestingly, the layoffs that are happening within platforms, which are going to create a bunch of entrepreneurs, it predicts, who can start companies and create tooling and software that plugs the gap in this addressable market that's appearing before us as a result of regulation. So what you have here is, you know, probably some of the startups that we've heard about already starting to fill the gaps that we're seeing in this market. But we might also see, Mike, some of these kind of open source tools and individual actors and groups and collectives also start to fill those gaps too, right? It's not just going to be, I would hope, your typical Silicon Valley startups that build out these tools quickly enough; we might see a whole bunch of different models and ways of doing things evolving to address this gap that the report foresees. So it's a really interesting report, really kind of rigorous. I haven't gone into the methodology of how it predicts that growth, but it's fascinating to see, and to think about who might provide the kinds of trust and safety software and tooling over the next five to ten years, because there's going to be a whole bunch of players that haven't even been founded yet.
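
As a rough back-of-the-envelope check on what "doubling by 2028" implies, here is a quick calculation; the 2024 baseline year is an assumption, and the report may define its window or baseline differently:

    // Back-of-the-envelope: a market that doubles between 2024 and 2028
    // implies a compound annual growth rate (CAGR) of 2^(1/4) - 1.
    // The 2024 baseline is an assumption, not taken from the report.
    const years = 2028 - 2024;
    const growthMultiple = 2;
    const cagr = Math.pow(growthMultiple, 1 / years) - 1;
    console.log(`Implied CAGR: ${(cagr * 100).toFixed(1)}% per year`); // ~18.9%

In other words, doubling over four years works out to roughly 19 percent compound growth per year, which gives a sense of how aggressive the forecast is.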

Mike Masnick:

Yeah, it's almost as if there's a lot going on and the market for trust and safety and online speech issues is changing rapidly. And that's why it's useful to have a podcast where we'll discuss it every week.

Ben Whitelaw:

I would agree with that, but I am biased. Amazing. Thanks, Mike, that's a really comprehensive roundup; thanks as ever for your thoughts. Um, next we have our bonus chat, which happens at the end of every episode, in which we speak to an expert guest with experience in online speech and trust and safety and go much deeper into a topic that we've recently covered, either on a previous podcast or on this one. This week's bonus chat is brought to you by our launch sponsor Modulate, again, the prosocial voice tech company that's working to create safer, more inclusive spaces online, and has worked with a whole host of studios and games to do that. Mike and I spoke to Modulate's CEO, Mike Pappas, about why the immersive nature of gaming makes moderation difficult, and how its own tooling, Modulate's ToxMod product, balances speech and privacy and tries to help remove harm before it happens. And, very nicely for me, he gives an inadvertent shout-out to Everything in Moderation at the end. So have a listen to that, and I hope you enjoy. So, Mike, the big question I have for you, where we'll start today, is: what is it about online gaming that makes it so susceptible to toxicity in the first place? You know, we were having a conversation in advance, and you were saying 68 to 80 percent of players get exposed to some form of hate and, you know, toxicity. Why is that?

Mike Pappas:

Yeah, well, I appreciate, Ben, that you asked what makes it so susceptible and not what makes it so toxic, because that's a slip of the tongue that people fall into a lot, and it is a really important distinction. Gaming isn't toxic. Gaming is actually a really rich, powerful social vehicle, and that's what makes it, unfortunately, susceptible to this exploitation as well. People had this experience over COVID in particular: games are a really powerful place to build friendships. You have really rich opportunities to go on an adventure together with someone, to explore new parts of your identity, to take on challenges. It's a great opportunity to build social connections with each other. But again, because it is such a great opportunity to build social connection, that also leaves room for: what happens if someone intentionally violates that social connection? What happens if someone intentionally exploits that? It's unfortunately two sides of the same coin, but it is really important to make sure that if we're ever talking about the one, we're also recognizing all of the good that gaming brings to the world as well.

Ben Whitelaw:

Yeah, no, that makes total sense. Um, I should declare at this point, I'm not a massive gamer, but I have played games in the past and I do see the benefits, and, you know, that really resonates. So talk a little bit, then, about where Modulate sits in that kind of dynamic. What is it that it's trying to do, and who does it work with at the moment?

Mike Pappas:

Yeah, so Modulate is specifically focused on voice moderation. Um, if you think about playing online games, many games offer some kind of text chat or pre-scripted little sound bites that your avatars can play, but that stuff tends to be pretty narrow. Text chat, obviously, you can say whatever you want, but your hands are on the keyboard, they're not controlling your character, and you can't convey emotion or nuance. So if you really want something that immerses you in the world and in that social interaction we were just talking about, you need voice. Again, unfortunately, the flip side is that because voice is that much more powerful socially, it's also that much scarier when it's being misused for toxicity and hate and harassment. And until very recently, it was basically impossible to meaningfully moderate voice chat. Um, there are a lot of reasons for that, and I won't go too far down the rabbit hole, but maybe the simplest way to think about it is just cost. Most online transcription tools might cost over a dollar per hour of audio to process. If you multiply that by hundreds of millions of hours per month for a large game studio, you're talking billions of dollars per year. There's no one who can afford that. And that's just for transcription, which loses all the emotion, the nuance, all of the really good stuff that helps you understand what's truly going on. So, without going too deep into the technical weeds, what Modulate and our product ToxMod do is use some really cool machine learning techniques to help iteratively zero in on which conversations are the scary ones: starting from a very broad view, and then looking a little closer and a little closer as we see warning signs, until we're ultimately able to highlight to a studio, here's the stuff you really need to pay attention to, without having compromised player privacy by investigating everything, and without having broken the bank.
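
To make the scale of that cost argument concrete, here is a quick illustrative calculation using the ballpark figures Pappas cites; the exact per-hour rate and monthly volume below are assumptions, not Modulate's actual numbers:

    // Rough illustration of the brute-force transcription cost argument.
    // The rate and volume are the ballpark figures from the conversation.
    const costPerHourUSD = 1.0;        // "over a dollar per hour of audio"
    const hoursPerMonth = 300_000_000; // "hundreds of millions of hours per month"
    const annualCost = costPerHourUSD * hoursPerMonth * 12;
    console.log(`~$${(annualCost / 1e9).toFixed(1)}B per year for transcription alone`); // ~$3.6B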

Ben Whitelaw:

Let's talk a bit more about that. I'm really interested in this kind of proactive detection that ToxMod prides itself on, and the balance with privacy. You know, folks listening might think, okay, but how do you actually do that while maintaining players' privacy? That's the big question.

Mike Pappas:

And I think that question comes up so often because we're kind of used to thinking about privacy in black and white terms: are they seeing any of my data or are they not? But, and we really like to use analogies to the physical world here, that's not really how we think about privacy in the physical world. I tend to talk about taking your kid to the playground, if you're a parent. When you drop the kid off at the playground, you don't go for the all-in-on-privacy solution: that would be to abandon them at the playground, go home, and wish them luck. Most parents don't feel good about that. You also don't go all in on safety: you're not looming behind them the entire time, watching every single little thing that they do, listening to every single thing they say to their friends. Most parents end up kind of congregated over at the sidewalk, watching out of the corner of their eye. That's the balance we've found that feels healthy for us as a society, and so that's the balance we try to imitate with ToxMod. So, um, forgive the sort of mixed analogy, but we're listening out of the corner of our eye. That sounds a little odd, but, um, you know, we're starting out looking for things like: hey, is there a sudden burst of emotion? Did a new person join the conversation, say one sentence, and everyone went eerily quiet? Are we hearing, you know, particular shouted slurs or something like that? Anything that really highlights that there's a greater risk in this conversation, even, in some very carefully controlled ways, looking at specific demographic interactions: so if there's a very young child speaking one-on-one to an adult that they have no relationship with, that might be a larger risk factor. All that kind of stuff comes together and helps us decide where we should maybe step one step closer and get a little bit more of a sense of what's going on. And again, it's that iteration. So by the time we're done analyzing a harmful conversation, yes, we have collected a deeper recording, we've transcribed it, we've done a deeper contextual analysis, but we're only doing that level of analysis when something has gotten through those early filters. For most innocuous conversations, we're going to take one quick glance and say, everything's fine here, carry on, and never dig into it in the first place.
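
As an illustration of the tiered triage pattern Pappas is describing, cheap signals first and deeper analysis only for the small fraction of conversations that keep tripping filters, here is a minimal sketch. The stages, thresholds, and function names are all hypothetical; this is not Modulate's actual pipeline:

    // A minimal sketch of tiered triage: cheap, privacy-preserving checks
    // run on everything; expensive analysis runs only on what escalates.
    // All names and thresholds are hypothetical illustrations.

    interface Clip { id: string; audio: ArrayBuffer; }

    // Stage 1: cheap surface signals (emotion bursts, sudden silence, etc.).
    // Placeholder scoring stands in for a real lightweight model.
    function cheapRiskSignals(clip: Clip): number {
      return Math.random(); // placeholder risk score in [0, 1]
    }

    // Stage 2: a more expensive contextual model, run on far fewer clips.
    function deeperAnalysis(clip: Clip): number {
      return Math.random(); // placeholder risk score in [0, 1]
    }

    // Stage 3: full transcription plus human review, run on very few clips.
    function escalateToReview(clip: Clip): void {
      console.log(`Escalating ${clip.id} for transcription and human review`);
    }

    function triage(clip: Clip): void {
      if (cheapRiskSignals(clip) < 0.8) return; // most clips stop here
      if (deeperAnalysis(clip) < 0.9) return;   // most of the rest stop here
      escalateToReview(clip);                   // only the riskiest remain
    }

The privacy property comes from the funnel shape: the detailed, invasive analysis is only ever applied to conversations that have already tripped the cheaper, broader filters.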

Ben Whitelaw:

Okay.

Mike Masnick:

Can you talk a little bit about the impact? You know, when you discover that there is some sort of toxicity and you step in, what is the impact of that?

Mike Pappas:

Yeah, um, thank you for asking. It's something we're really excited to talk about, because, you know, I haven't mentioned who we're working with yet, but we've had the opportunity at this point to work with some really large brands in gaming, including Riot Games, Activision Blizzard through Call of Duty in particular, and some of the largest games in the VR space like Rec Room and Among Us VR. Um, and we've done a lot of studies with these studios to understand, when we turn on ToxMod, how much toxicity is there? How much does ToxMod help resolve it? What does that turn into in terms of changes in player behavior? So Activision, for instance, actually just released a report a couple weeks ago sharing some of the takeaways that they've seen from implementing ToxMod, which included some really exciting stats, including a 50 percent reduction in exposure to toxicity across their player base only a couple months after deploying ToxMod, which is something we're really excited about. And what we're also really excited about is seeing a demonstrable change in player behavior. One of the things you might worry about is: all right, maybe at best you're muting these people, but then they just come right back and do the same thing again and again. What we've seen through Call of Duty and some of our other partners is that there's actually a slow but steady evolution of behavior, where these offenders who are getting punished in the game are starting to learn: okay, I'm not allowed to do this. Whether it's because I want to stick around or because I actually want to be more prosocial, it a little bit doesn't matter; they're learning not to behave that way in that setting. And so we're seeing anywhere from, you know, five to ten percent of repeat offenders, month over month, stopping offending from there, and as each month goes on, more and more of those folks are coming to realize this just isn't something that's going to be tolerated, and, you know, the world becomes a bit of a happier place for it.

Mike Masnick:

Yeah, that's really interesting. One sort of related question: do you think that there are lessons from what you've discovered with ToxMod, as it's implemented in the voice world, that apply more broadly to other areas of trust and safety as well?

Mike Pappas:

Absolutely. I think one of the really powerful studies we did was looking at the impact of jumping in and muting someone proactively, right when something harmful starts to happen, versus waiting until the end of a match or something like that. And I want to be clear: ToxMod never does live bleeping; we don't do automatic intervention of that type. But we can flag an offense to a studio within a matter of seconds of when it's happening in the game. So if the studio is able to staff a team to really be on top of that, they have the opportunity to jump in and intervene right there in the midst of the conversation and prevent it from escalating. And what we found is that you get this really healthy, typically sort of 30 to 50 percent reduction in exposure to toxicity just from introducing ToxMod, but you're actually going to see something like another 30 percent reduction in toxicity if the way you use ToxMod is really visible to your users. So again, you know, that muting happening in the middle of the conversation, where everyone else sees, oh, this person was misbehaving and actually got punished for it. And I think the learning, even outside of things that are quite as real-time as voice, is that players are not going to read the legalese code of conduct and say, okay, so that's how I'm supposed to behave. They're going to play, and they're going to feel out: what am I supposed to do? What are the norms? What are the expectations here? And so the best platforms that are thinking about this really holistically, whether it's in terms of writing a clear code of conduct or how they do voice moderation or anything else, they're thinking about: how can I demonstrate to my players, in a way that they will feel, our commitment around what safety is and what kind of an experience we want them to have? And the studios that are successful at showing that in a really visible and visceral way for their players, those are the ones that create the really uniquely safe and inclusive and fun experiences.

Ben Whitelaw:

Awesome. That's really interesting, Mike. I mean, we've probably come to the end of our interview today, but where can listeners of Ctrl-Alt-Speech find out more about Modulate and sign up to some of the research that you mentioned?

Mike Pappas:

I would first of all encourage people to just check out our website. We have a lot of writing, um, I might write too much, but we have a lot of content out there in terms of how we think about these really delicate balances, how we think about the intersection of safety and privacy, regulation and how that fits in. We've also been putting together our own newsletter called Everything in Moderation, which you can subscribe to on the website. We really encourage you to check it out. Um, and Ben, I just used your, uh, newsletter name instead of our newsletter name. I apologize. Ours, yeah, ours is Trust and Safety Lately. You should check out Everything in Moderation too. Wow, um, you know, I'm just getting that all mixed up. But no, both newsletters are wonderful. Ben, I've always appreciated the breadth of yours. For us, we're focused in the gaming space; we try to really bring in content from outside of it, but especially for folks interested in gaming, I highly encourage them to check out Trust and Safety Lately too.

Ben Whitelaw:

Yeah, and I'm not just saying this because you've repped my newsletter, but I also read Trust and Safety Lately, and it's really good for gaming and trust and safety, so I do recommend listeners check that out. Um, Mike, thanks so much for joining us today. Really great to speak to you, and thanks for being a sponsor of Ctrl-Alt-Speech.

Mike Pappas:

Yeah. Thanks to both of you for having me. This is great.

Ben Whitelaw:

Take care, all the best.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
