Ctrl-Alt-Speech

Regulate, Rinse, Repeat

Mike Masnick & Ben Whitelaw Season 1 Episode 33

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our launch sponsor Modulate, which builds prosocial voice technology that combats online toxicity and elevates the health and safety of online communities. Mike Pappas joins us for our bonus chat, talking to Mike about the ever-important choice between building your own trust & safety tools and buying them from vendors.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

Mike, I'm in the midst of some house renovation, right? And I'm a complete novice at DIY, so I've been enlisting the help of some of those apps like TaskRabbit. And I've gone to TaskRabbit and it asked me this question, right? So I'm going to pose it to you.

Mike Masnick:

All right.

Ben Whitelaw:

The app prompts me with: "I need help with..."

Mike Masnick:

Oh, that's a long, that's a long list, Ben. That is a big list. Uh,

Ben Whitelaw:

Give me one.

Mike Masnick:

I think what I will say is that, um, right before we started recording, we did some prep and then we took a little break and I started my laundry and I think we're going to be discussing laundry in a few different contexts today. So, uh, I will say I will need help with folding my laundry after this podcast is done.

Ben Whitelaw:

Nice.

Mike Masnick:

How about you? How about you? What do you need help with?

Ben Whitelaw:

Well, likewise, apart from, you know, understanding how to paint and to tile and all the rest of it, I need help with understanding when a block's not a block, and when Elon gets to decide. That's what I need help with. And welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund and sponsored by Modulate, the prosocial voice technology company making online spaces safer and more inclusive. We've got a really exciting chat with Mike Pappas, Modulate's CEO, later on today. He's going to be talking about the interesting question of whether to build or buy trust and safety tools, which is a growing topic of conversation in the space. But right now I'm here with Mike Masnick, the other Mike, the original Mike. And

Mike Masnick:

old Mike.

Ben Whitelaw:

I'm not making any, uh, jokes about your age. I know how that goes down. Um, how are you doing? How's your week been?

Mike Masnick:

Oh, it's been good. Uh, it's been super busy, lots of stuff going on as always. And, uh, yeah, everything's been kind of great. Crazy and nonstop.

Ben Whitelaw:

Yeah, we were kind of saying before we started recording that we could have recorded two podcasts with the number of stories we had this week. There's an awful lot, even more than usual, I'd say, and lots of really quality reporting and interesting stories to talk about.

Mike Masnick:

Yeah. Yeah. So we were saying at some point, you know, we're going to have to add the midweek special extra episode. But for now, I do think we have plenty of good stuff to cover in the meantime. Oh, and we should mention, by the way, I think it's important to note: next week we are going to be off. Yeah. We have been able to do a podcast every single week, even the week we had planned to take off in July for July 4th, because the Supreme Court forced us, or forced me, into a motel room to record a quick special episode. But next week we'll be off, and we'll be back the week after.

Ben Whitelaw:

Yeah, right. And this is confirmed: we won't be doing a midweek podcast. I'm worried that listeners will think they've got to catch up on even more content moderation news, which isn't the case, but there's a lot going on. Um, we've also had some really nice reviews again this week, Mike, and we've had a bunch of people get in touch in response to my request for info about where they listen to Ctrl-Alt-Speech and what they're doing while they're listening,

Mike Masnick:

laundry comes up.

Ben Whitelaw:

It keeps coming up, right? It keeps coming up. And I think it's probably because podcasts generally are an accompaniment to boring tasks like laundry. I don't think it's just Ctrl-Alt-Speech. But there were a few other ones as well. Um, we got some nice responses from folks saying they went on long walks while listening to the podcast. And my favorite one was a listener who said that they were deboning a turkey carcass in preparation for Thanksgiving; they were making a soup. So I thought it was a nice selection of responses.

Mike Masnick:

Yeah. Can you imagine listening to us while deboning a turkey?

Ben Whitelaw:

Do you think the person imagines us as the turkey?

Mike Masnick:

I don't know if, I don't know if I want to go there.

Ben Whitelaw:

I also got some really nice responses from folks about the pedestrian flags. I'd never seen these pedestrian flags, and we had a few listeners get in touch. Thanks, John, for sending in photos of some pedestrian flags from Seattle. They actually exist. You weren't pulling my leg.

Mike Masnick:

I was being entirely honest with you. I have seen them in multiple places and, yeah, they exist. And now you've seen pictures of them at least, but at some point I, I have to imagine that they must exist somewhere in the UK also. Um,

Ben Whitelaw:

I don't know. I will make a pledge: if there are pedestrian flags in the UK, I will find them and I will, I

Mike Masnick:

We'll get a picture of you standing waving a flag. I think that's where we need to go.

Ben Whitelaw:

Okay. Okay. Great stuff. So yeah, if people want to share where and how they listen to Ctrl-Alt-Speech, get in touch with us. We've had some messages on Bluesky, and we also got some emails: podcast@ctrlaltspeech.com. We will get back to you, and we thank you for your inputs, as ever. Right, Mike. So we are jumping straight into the many stories that we have this week. We both like a good news story, you know, a kind of interesting, relevant news lead, but we also like stories that are a bit more interactive, a bit more interesting. I don't know if you remember the days of Snow Fall, the New York Times's very interactive piece from 2012. Your first one is not dissimilar to that, I'd say. It's a great tale and it's told really nicely online. Talk us through what that is.

Mike Masnick:

Yeah, so this is from NBC News and it's by Brandy Zadrozny, who, if you don't know, hopefully listeners of this podcast know Brandy. She's one of the best reporters out there on misinformation and has done a ton of amazing work on that over the years. So this is a brand new article that just came out this week on how Russian propaganda reaches and influences the U.S. And it's a very interactive, well done piece in that Snow Fall style, the modern version of Snow Fall. And it's just talking about how Russian disinformation campaigns start, and basically how they are creating all sorts of different bits of nonsense and sending it out into the world through this process: creating fake videos, sometimes AI generated, sometimes, it appears, maybe hiring actors to play the roles of people confessing things, and just making up complete nonsense stories. And then it follows through as to where those stories go and how they filter onto fake news sites that Russian folks have set up, how they find various trolls on the site formerly known as Twitter, and then often how it launders its way up through the chain until eventually someone like a JD Vance or a Donald Trump might repeat them. And so it actually follows one particular story, that was completely made up, regarding Ukrainian officials close to Zelensky buying very expensive yachts, sort of implying that U.S. money that is going into Ukraine is being laundered through various people to purchase expensive yachts. That then filtered its way through, and again laundered its way up the chain, to the point that at one point JD Vance mentioned it on some podcast he was on: why are we sending more money to Ukraine for them to buy expensive yachts? Which was a thing that did not happen. But because it started from Russian disinformation and made its way along through nonsense-peddling American trolls on Twitter, eventually JD Vance picks it up, because that's kind of the world that he lives in. And then suddenly there's this seemingly legitimate question of, oh wait, are Ukrainian officials buying yachts with American funding? It's a really well done story, and the interactive presentation is really fantastic. They show how they're doing this with a whole bunch of stories, most of which aren't catching on at all, but a few of them are actually breaking through.

Ben Whitelaw:

This is what's so wild about this piece: the scale at which this group is doing this work, and the systematic nature of it, right? It's huge in terms of scale, and, like you say, not many stories make it through, but when they do, they really hit. There's the story that's mentioned as well about a Ukrainian man, an actor apparently, who confesses to being paid $4,000 to try to assassinate Tucker Carlson while Carlson was in Moscow. Absolutely wild, and it comes with a whole array of fake images, burner phones and pictures of Tucker Carlson in Moscow, all packaged up in a really, if you don't know any different, believable way. What do we know about the group that's behind this, and how concerning is it that so many stories are getting through into the mainstream, do you think?

Mike Masnick:

Yeah. So, you know, there have been a few different groups doing these things that people know about, like the Internet Research Agency, which was the one that became famous eight years ago. This one in particular is a group called Storm-1516. And they're creating a bunch of these things, and they're somehow connected to Russian officials, but it seems like there's not that much information directly on the group. Still, they're able to produce these really impressive things. And then, the example you gave of the Tucker Carlson story, that got picked up by a bunch of American sources. And, interestingly, there was a story about a month ago, which I don't think we actually covered on the podcast, about U.S. officials charging people associated with RT, Russia Today, the Russian propaganda news outlet. They had been funding these American YouTube influencers like Tim Pool and Benny Johnson and folks like that, to ridiculous amounts. Maybe we did mention it

Ben Whitelaw:

Yeah, I think we did, briefly.

Mike Masnick:

We did. We briefly mentioned it. And, notably in this story, those are the same people who picked up on this story about Tucker Carlson supposedly being targeted for assassination, and they all ran with it. And so there's this whole ecosystem that has been created to launder this Russian propaganda. And there's a little bit that I hesitate on when talking about this story, because we heard similar arguments made over the last eight years, really nine years, about Russian propaganda and who it influences. And there's evidence that it doesn't actually do much to influence people in some ways. But the thing that it seems clearly to do, and I think the real takeaway from this, is that the campaigns are becoming more sophisticated, and it really shows how much of the disinformation world is about confirmation bias. It is so much about not necessarily convincing people, but taking people who already think this must be happening, or these bad things must be going on, and giving them something to feed off of, so they can say: yes, well, this story confirms my priors, and therefore it feels true, and therefore it must be true. And I think people have made this point before, but it's one worth reminding people of and thinking about. Last week we were talking about risks versus harms, but a related issue at the heart of so many of these discussions about online speech and dis- and misinformation is that it's really all about confirmation bias, and people looking for content to support what they want to believe. And what we're creating in this ecosystem right now are tools to support anyone's bias, and to give them content that allows them to accept it. And we're all guilty of it. Lots of people would say, oh, well, I'm better than that, but it's not true. I think everybody at some point falls for some nonsense because they want it to be true and it fits with their prior beliefs on something. But here, when the campaigns get this big and this advanced, it gets easier for people who want to believe these things to take them and to expand them further. And because they're so sophisticated, and because they're creating these videos with all the supporting documentation, even though it's all 100 percent fake and 100 percent made up, it's easier for these things to catch on and become stories in some form or another. And then the one other interesting thing coming out of this is that none of these stories, even the ones that have broken through, went huge. A lot of these stories, they note, just didn't catch on at all, but even the few that did catch on weren't these big disinformation campaigns; they just sort of creep in. Like the example I mentioned of JD Vance and the Ukrainian yachts: he just mentions it in passing on a podcast. It doesn't become a major story, but he's making these arguments about the funding for Ukraine right now and saying, oh, do we want that funding to go towards more Ukrainian yachts? That's not the major point of the story, but it just sort of filters into the general narrative.

Ben Whitelaw:

Apart from people being more immune to narratives like this and being aware of how stories get laundered through this system, which is really clear from this piece, are there other ways that we could slow the spread of information through this chain, do you think? Are there ways of spotting it, dots that we can trace in the system? Because it's really tough for anybody to be so on top of all the different narratives out there that they're not captured by one of the many, many narratives being perpetuated by these Russian organizations. How can we slow this down a bit more, or stymie some of the spread?

Mike Masnick:

I mean, I think there's a few different things. So one, and this is the thing that I always go back to, and it always feels like sort of a cheap shot, but media literacy is incredibly important. And again, I just said everybody falls prey to confirmation bias at some point. I've done it. I'm sure you've done it. There are stories that you just want to believe, and so you fall for them. But having general media literacy, recognizing: is this coming from a trustworthy source? What are the details? Who is reporting it? That really matters, and learning when to be skeptical, especially of things that confirm your priors, is a really important skill. It's not one that anyone will ever be perfect at, but training people on that, I think, is important. The second thing is just recognizing media ecosystems. And this is something that is really, really important in the sort of reality-based world. I

Ben Whitelaw:

Sounds fun.

Mike Masnick:

I'm trying to choose words diplomatically here, but in the reality-based world, mistakes are made. Reporters make mistakes, news organizations make mistakes, we recognize that. But when those mistakes are discovered, they tend to recognize that, admit it, talk about it, make corrections, and do things like that. In the sort of pure nonsense-peddling world, which now exists, they will not do that, right? I mean, they will spin stories, they will take fake stories, they will take real stories. A lot of these stories come with a grain of truth behind them, because that allows them to obfuscate, to present these things in a way that paints a darker picture than the reality. We've seen that with things like the Twitter Files and the Hunter Biden laptop, which have been totally misrepresented over time, but which start with a grain of truth. But the ecosystems that are pushing those messages, you'll notice that they don't issue corrections. They don't hold each other to account. They don't admit when they have these kinds of failings; they try to cover it up. They don't mention it again, or they dismiss it, like, "Oh, well, we had just heard that, we were just reporting what somebody else said," rather than admitting that a mistake was made. So I think there's an element of watching the media ecosystem that you are engaged with and seeing how they handle it when mistakes are made, or when false or misleading things are reported. That is a key indicator for me.

Ben Whitelaw:

Yeah. I mean, there's a couple of great quotes in this piece from the kind of right-wing grifters who not only have made money by being paid directly by Russia through the fake company that you mentioned, but are also making money, obviously, through the monetization of videos about rumors like this; they're benefiting twice. But there are some great quotes where they say, "we actually reported it as being a hoax, and we noted at the time that it wasn't possible to verify the claim." If anything, that would have been a scintilla of a second in which they mentioned it, and then they would have moved on and never gone back to it.

Mike Masnick:

Yeah, I mean, one of the quotes, someone said at one point, was like: we reported on it, and then at the end we were just like, we have no real way of verifying this, but we thought it was important. That is not really a good way to handle this.

Ben Whitelaw:

No, and there's a bigger conversation, which we should come back to, that I'm really interested in from the work I do in my day job, around what kind of ethics and transparency content creators and "influencers", inverted commas, should have, and how the platforms should rank or distribute them according to those kinds of journalistic principles. That's something I think a lot of platforms are thinking about. We'll share that story in the show notes. It's definitely worth going and having a read of; it's really interesting to scroll through and see the different elements. But we'll move now onto our second story, about laundry and about laundering. A piece that came out this week as well, a really fantastic piece by Daphne Keller from Stanford, who stood in for me a couple of weeks ago on the podcast. She's written for Lawfare about the rise of the compliant speech platform, and it's a long read. It's definitely worth spending time with; there are lots of great elements to it. Essentially, Daphne spoke to around a hundred trust and safety experts, and she's pulled together all the insights from that to make the argument that content moderation is becoming a compliance function, much like banks have compliance functions and factory floors have to be compliant to keep workers safe. Trust and safety is now moving in that direction, and her argument is that the regulation introduced over the last few years, the Digital Services Act, the Online Safety Act, and the Code of Practice for Online Safety in Singapore, has created conditions where platforms essentially have to be able to report on everything they do, all the decisions that they make. They're having to standardize how they work. They're having to make all of their decisions trackable; a big part of the DSA is that takedown decisions are recorded in a database. And that's creating a whole ecosystem around auditing those decisions and tracking whether those processes have been followed correctly. She makes the case that this is essentially compliance, and she goes on to say that there are a bunch of downsides to this, ones that we talk about a lot on the podcast, which are definitely worth noting and somewhat concerning: government having undue influence over speech rules on platforms, and an undue focus on metrics and following the right process, even if the outcome isn't the best for users or for society as a whole. So it's a really great, original piece of reporting, I'd say, which we both read and we both thought, yeah, this is great, and we have to talk about it on the podcast. So here we are.
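To make the "trackable decisions" idea concrete, here is a minimal sketch of the kind of per-decision record a platform might log for DSA-style transparency and auditing. The field names are illustrative assumptions, not the actual DSA statement-of-reasons schema.

```python
# Illustrative sketch only: one audit-log record per moderation decision.
# Field names are hypothetical, not the real DSA transparency schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ModerationDecision:
    content_id: str
    action: str                   # e.g. "removal", "demotion", "age_restriction"
    legal_ground: Optional[str]   # statute relied on, if any
    policy_ground: Optional[str]  # terms-of-service clause relied on
    automated_detection: bool     # was the content flagged by automation?
    automated_decision: bool      # was the action itself taken by automation?
    explanation: str              # human-readable statement of reasons
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Every takedown appends a record; the resulting log is what auditors and
# regulators can review, and what transparency reports aggregate.
audit_log: List[ModerationDecision] = []
audit_log.append(ModerationDecision(
    content_id="post-123",
    action="removal",
    legal_ground=None,
    policy_ground="hate-speech-policy-4.2",
    automated_detection=True,
    automated_decision=False,
    explanation="Slur directed at a protected group; confirmed on human review.",
))
```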

Mike Masnick:

Yeah, yeah. I think it is a really valuable contribution to the way of thinking about and understanding all of these things. And I, like Daphne, have some concerns about it. There's an argument, I can't remember if I've made it directly on this podcast before, that there's an important role in having companies, and top executives at companies, think of trust and safety specifically as a really important component, sort of a marketing function that is central to the company. A lot of companies think of trust and safety as a cost center, and we see that: there are layoffs all the time, you have to hire a whole bunch of people, and it's a pain that is frustrating to many company executives. That comes through when you see them testify or talk about these things, or in Mark Zuckerberg's recent comments on this stuff. I think he sees trust and safety as just a nuisance and a cost center for Meta. And it reminds me of the way that companies 20, 30 years ago viewed customer service. It was a cost center: you have to spend all this money on people who answer calls from angry customers, and you want to get them off the phone as quickly as possible and not have to deal with their complaints. And then some companies began to come around to the idea that, wait a second, this is one of the major touch points we have between our company and our customers. This is where they are interacting with us, where they're communicating with us. Customer service itself is actually a marketing function. It is a way for us to present a good side of what we do to our customers, and therefore we should invest in it, because that will come back in the future in support. And the argument I keep making is that we need to move trust and safety to that kind of role, where it is seen as: we are making this platform better, and in the long run that means more users, more advertisers, more whatever, because we are making the platform better. That's what a trust and safety role should be. It should be seen as an investment, not as a cost sink. And my worry is that as trust and safety moves more and more into being a compliance function, it goes further and further away from that. It is, again, seen as: okay, these are boxes we need to check to make the regulators happy, not: this is what is actually best for our users, this is what is going to create a better internet, a better service, a better whatever, overall. Whereas if we could get the companies to recognize that this is in their best interest, not because the auditors say so, not because the regulators say so, but because it is actually better for the long-term health of the wider internet and the services themselves, that would be better. So my reaction is: I think it's a great contribution, and I think it's accurate and really important, but I worry that this capitulation to the idea of it being a compliance function actually takes us further from the world that we want.

Ben Whitelaw:

Yeah. I mean, I do like the idea of trust and safety being like customer experience, and it going in that direction. But what's the downside, do you think, of it being more like compliance, if it doesn't go that way?

Mike Masnick:

Yeah. I mean, some of this comes out, and it's kind of funny as you read Daphne's piece, in that everybody is tiptoeing around the idea of "we're not setting the rules, we're not telling you what to leave up or take down, but we have all these things that you need to do in order to make it work." And so what happens when you have that sort of situation, with auditors involved, is that what everyone is trying to do is basically just cover their ass, so that if there's a lawsuit, they can say: well, we did everything by the book. We did the things that the auditors told us to do, that the regulators indicated were okay, or, in some cases where there have been previous court cases or previous fines, we followed the rules that everybody else laid out. And so then it's just an issue of checking boxes, as opposed to actually doing what's right. It's "what is going to decrease the liability on us, what is going to decrease the legal risk?" That's what you're talking about when you have auditors involved, not what is actually going to be best for the internet and for the users of the internet and for their speech. And there is the potential that those things align: that what the government wants, what the auditors suggest, and what is best for the users and for speech all line up. But historically, we haven't really seen that play out.

Ben Whitelaw:

hmm

Mike Masnick:

And so that's, that's where my fear is.

Ben Whitelaw:

Yeah, I mean, Daphne mentions banking in her piece, and I wonder if compliance is actually that bad, and if banking hasn't shown us that compliance isn't that bad. So I slightly disagree with where you're coming from, and I just want to explain why. Banks pre-2008 were not dissimilar to tech platforms in some sense, right? They had a lot of users. They were processing a lot of personal data. They were going through millions of transactions every day. They had a lot of wealth and money involved, and they were very central to people's lives. Not hugely dissimilar to platforms, in some sense. Then 2008 happens, and trust in banks completely disappears. There's, of course, regulation, and naturally there's pushback from the banks, and so they start to say things that we've actually heard quite recently: that regulation would affect the banks' ability to lend to consumers and would affect the growth of economies and businesses; that it would make it difficult to sustain their own operations because of the administrative burden; that it would make it easier for larger banks to sustain a place in the market, because smaller banks wouldn't be able to comply in the same way. And then all of these regulations got introduced, right? You had frameworks introduced for systemic risks, a bunch of transparency required around certain markets, consumer protection for certain products. And now we all know and understand that banks are regulated. And I wonder if that is a not dissimilar place to where we are now, which is: compliance is a cost of doing business. If you want to get into the speech space, you have to do all of these things. And actually, it's only because there is a big pushback against it that we think differently about speech, and about the state of online speech right now. Your thoughts.

Mike Masnick:

Oh boy. All right. So I think there are a lot of differences. One: the banking space was very heavily regulated before that. Yes, the regulations have changed and they have certainly advanced, but it was not a regulation-free Wild West; the banking industry has long been a heavily regulated industry. Also, many of the concerns that were raised about some of those banking regulations have actually proven true. I mean, we are now a year and a half on from Silicon Valley Bank, which is the bank my company did business with. All their accounts were frozen, and all my company's money was frozen.

Ben Whitelaw:

Oh shit. I didn't realise that.

Mike Masnick:

It was quite an experience. Actually, the funny story is that the day it happened was a day we had an offsite to work on the Trust & Safety Tycoon game, which came out last year. And so we had

Ben Whitelaw:

Great game. Go and play it.

Mike Masnick:

gone up north, uh, to this very nice house that a friend of ours had rented, with this amazing view. And in the morning, before I left to drive up, I saw the news about Silicon Valley Bank basically being shut down. I freaked out, and that was not a comforting offsite. We did not get as much work done, because I spent a whole bunch of time in this beautiful house on the phone, trying to figure out if our bank accounts were available to us.

Ben Whitelaw:

Yeah,

Mike Masnick:

So, you know, since then, of course, there have been a lot of looks into what's happening with the U.S. banking system and how poorly targeted some of those regulations were. The end result was that Silicon Valley Bank got taken over by a larger bank, and there's been a lot of consolidation among other larger banks too. It has actually slowed down certain things, and it has created some problems. And on top of that, the even more important thing is: even though in some cases, and I know this is one that sets people off, money can be equated with speech and expression, that's a whole other area that we are not about to go down into, in general, money and speech are not the same thing. Regulating money, and how people hold money, is a wholly different thing from regulating how people can speak and how they can communicate. And so I think the analogy breaks down after a certain point. As soon as you're talking about compliance and regulations around speech, the end result, and this is a point that Daphne definitely makes, is that you are going to get suppression of certain kinds of speech. That is always at the core of the concerns I have about these regulations, because as soon as you get into that area, what almost always happens is that the speech that is most heavily regulated and suppressed is speech from marginalized groups, marginalized individuals, people with less power. The more powerful folks are fine, but people who have more controversial opinions, but important ones, are kept out of it. We've talked about this before: there are attempts to regulate kinds of disinformation, and what happens? It leads to pulling down information on LGBTQ content. It leads to pulling down information on health care, things around abortion. All of these kinds of topics get targeted by these speech regulations. And from the company standpoint, if the idea is just to comply, the easiest thing to do is just say: we're not going to deal with that. And we're seeing that to some extent. This is not getting all the way there, but there is a story we had talked about doing, which we didn't do, though now I'm bringing it up, about how Meta, Instagram, and Threads are handling political content. They're downranking it, trying to not have it show up as much within the conversations. And that is a direct result of all the shit that Meta has gotten for how it moderates political content. They've sort of thrown their hands up and said: it's easier for us not to deal with this content at all, and therefore we're going to push it down the algorithm. We're not going to block it, but we're not going to have it promoted as widely. That's a direct response to people complaining about which content they did promote in the past, and then just saying: we don't want to deal with this anymore. The further you get towards creating regulations and compliance burdens for speech, the more it's going to lead to this kind of speech being suppressed, because it's just not worth it.

Ben Whitelaw:

Yeah. And I do buy the idea that there is going to be unnecessary blocking and downranking of speech, and that concerns me. But I imagine banks probably said similar things in 2008. And my question is: 10 or 15 years on, however far we are from those regulations, is the context that users are banking in better than then? And can you see a situation where this compliance agenda actually takes hold, and it looks like it will, because we're seeing more and more government regulation about speech all the time, where people adjust to the context and, in 10 or 15 years' time, have processes to better understand and mitigate, in a way that isn't just about downranking or blocking speech, as we're worried about?

Mike Masnick:

Yeah, I mean, you know, we'll see. People always adjust to the reality they have around them, and you don't know what you don't have. So whatever world we live in, people will say: yes, this is normal, and this is okay. But the idea that all the regulations around finance, and let me be clear, because it was about to sound like I was saying, oh, get rid of banking regulations. I am not saying that. Not saying that.

Ben Whitelaw:

Do you keep all of your money under the floorboards now because of what happened?

Mike Masnick:

Yeah, but we are still living in a world today where financial scams are massive and all over the place. And you can say that maybe some of the regulations on banking led to the rise in crypto scams, and now we're talking about pig butchering, which sort of relies on crypto scamming. There are a bunch of different things like this; the stuff oozes out somewhere else in the system. The idea that we can perfectly regulate these things is not true, because humanity is messy, and society is messy, and the way that people interact is messy. And again, I'm not making the argument for no regulations. I'm not saying get rid of them. But I'm saying: recognize that there are consequences to these regulations, and the unintended consequences mean the problems ooze out somewhere else. We should be very careful about where and how those problems ooze out, and not assume that just because the world itself doesn't collapse, these approaches are necessarily the right way to do it.

Ben Whitelaw:

Yeah. Okay. As you can see from the discussion we've had, it's a great piece. I definitely recommend listeners go and have a read. Daphne's done a great job of adding something new, and we could go on and talk about it for a lot longer, but we have other stories to get to. We've gone, Mike, from a very thoughtful, well-argued piece by somebody who's really trying to add to the debate to somebody who is the antithesis of that. We're back talking about Elon Musk. What's he done this week? What can we be angry about?

Mike Masnick:

Well, I was about to say, you know, I wrote an article about this, and I think we're going to link to my article in the show notes. So I thought you were about to say we went from a thoughtful piece by Daphne to a terrible, stupid piece by you, Mike.

Ben Whitelaw:

Never. I'd never say that. Don't do it,

Mike Masnick:

But Elon has continued his belief in changing what used to be called Twitter to his liking, without any thought as to whether or not it actually makes sense. And he did two big things this week. One is changing the terms of service, which will go into effect in mid-November, in ways that have, for the time being I'll just say, raised some eyebrows. But the bigger one, the more noticeable one, is that, while he's talked about this in the past, they've now announced officially that they are changing how the block feature works on X. Formerly, when you blocked someone, they couldn't see what you're doing and they couldn't interact with you; that interaction was all blocked. He's never liked that. And so now the way it is going to work is that if someone blocks you, you can still see their content. You just won't be able to interact with it. You won't be able to reply, you won't be able to retweet it, but you can still see it. And, to give him a very, very, very tiny amount of

Ben Whitelaw:

Mike. Don't do it. No need.

Mike Masnick:

There is a non-crazy way to think about this, and that is basically: when people are posting to a public system, that content is public, and anyone can see it, right? If I post something to Techdirt, I can't ban someone from seeing what I've written. If you write something on Everything in Moderation, you can't ban someone from seeing it. It is public, and therefore there's just a general sense that anyone can see it. And this is the point that everyone always raises with the way that Twitter did blocks, which is that if somebody blocks you, you can just go into incognito mode and see that content, because it's public. So block is sort of a weird function on social media. The theory is: well, this changes that, because what's public remains public. But the reality is very different from the theory. The reality is that the friction the block function adds makes a real difference in slowing down abuse and harassment of a variety of different kinds.

Ben Whitelaw:

Yeah, it's a first line of defense in a lot of cases, isn't it, for people being targeted in some way.

Mike Masnick:

Exactly. And it's not perfect by any means, but it's not meant to be. It is just an attempt to add some friction. And often, not always, often that amount of friction is enough to make a difference in very important ways. And so, it works for some element of what it is trying to deal with, and not everything. And what Musk is doing is sort of removing it for a large segment of the population for which it actually does work.
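To pin down the change being discussed, here is a minimal sketch of the two block semantics, old Twitter versus new X. None of this is X's actual code; the function, the model names, and the accounts are illustrative.

```python
# Illustrative sketch of two block semantics; not X's real implementation.

def allowed(action, author, viewer, blocked_pairs, model):
    """May `viewer` perform `action` on a post by `author`?"""
    is_blocked = (author, viewer) in blocked_pairs
    if model == "old_block":
        # Blocked accounts can neither see nor interact.
        return not is_blocked
    if model == "new_x_block":
        # Blocked accounts may view, but replies/reposts are still refused.
        return True if action == "view" else not is_blocked
    raise ValueError(f"unknown model: {model}")

blocked = {("alice", "troll")}  # alice has blocked troll
assert allowed("view", "alice", "troll", blocked, "old_block") is False
assert allowed("view", "alice", "troll", blocked, "new_x_block") is True   # the change
assert allowed("reply", "alice", "troll", blocked, "new_x_block") is False
```

The friction Mike describes lives entirely in that one changed "view" branch: the blocked account regains visibility even though the interaction checks are unchanged.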

Ben Whitelaw:

Mmm.

Mike Masnick:

And so the history here is also important, which is that 11 years ago, Dick Costolo, who was CEO of Twitter at the time, had the exact same idea, the exact same thought that block is stupid, and he did the same thing. He changed it so that you couldn't interact with people who blocked you, but you could still see them. It lasted for, I think, two hours before the revolt was so loud and so angry and so public that they rolled it back and never spoke about it again. As far as I know, there was never any postmortem; there was no discussion of it. There was a "we're rolling it back, we're sorry, we're going to rethink this," and then it was never mentioned again.

Ben Whitelaw:

Right. Elon doesn't do postmortems.

Mike Masnick:

No, no, no. And, I mean, Elon at times will roll stuff back, so it'll be interesting to see if that happens. I am pretty sure that Elon has no idea that Twitter tried this before and that it failed miserably and was a complete disaster and an embarrassment for Twitter at the time. And we are starting to see pushback; I have even seen people who are normally big supporters of Elon screaming about what a stupid idea this is. And we're seeing a big exodus from Twitter to other sites. People are recognizing that this is a problem, and they don't like the way the site works.

Ben Whitelaw:

Yeah, you were saying before we started recording that Bluesky's had this massive, uh, surge in users, which kind of tallies with the changes to block, right? And this is where I ring the, uh, you know, Mike

Mike Masnick:

Yes, yes, yes, yes. Disclaimer, disclaimer, disclaimer. I am on the board of blue sky. I am associated with blue sky. Consider me biased. Do not take what I'm saying as impartial for the next section here.

Ben Whitelaw:

I know, he can't block people on the platform for you, or unban you. Don't get in touch with him.

Mike Masnick:

Yes. Yes. Uh, yeah, please don't. But yes, Bluesky has seen a massive influx of users. Even bigger, you know, than when Bluesky saw a huge influx of users after Brazil banned Twitter; we jumped up about 4 million users over the course of a couple of weeks from that ban. This is larger than that, as of right now. I don't know if it will last as long, but right before we started recording, I looked at the U.S. App Store iPhone downloads, and Bluesky was the number four free app being downloaded, which is pretty impressive. We were ahead of TikTok, ahead of Instagram, ahead of WhatsApp, ahead of Gmail. The only ones we were behind were Threads, which I believe lots of people are probably also going to, ChatGPT, and Google. So that's Bluesky, this very small player in the space, being almost the fourth most downloaded app. And it was reported, this is not me revealing anything internal that I'm aware of, it was reported publicly, that within Bluesky they refer to EMEs, which are Elon Musk Events, that suddenly lead to a massive influx of users on the platform. And so it's interesting to see how this traces. And I'll note again, bias, all the caveats: Bluesky implements block in a very different way, and in fact goes further than Twitter did before. It's referred to as the "nuclear block" on Bluesky: when you block someone, it obliterates every interaction that you have with that person. Whereas with blocks on Twitter, the way they have worked for a very long time, even if you block someone, other people can still track the conversation that you had; they can sort of see what happened. On Bluesky, it basically disappears, and it is effectively impossible to track down, and it stops harassment campaigns cold. It is a very, very impactful way of doing it. There are some ways around some of it, but they throw a lot more friction into the gears, and it has been really powerful. And it's sort of clued me in to how these very small changes in how these things are implemented have a huge impact on how harassment campaigns can and cannot work on different platforms.
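A companion sketch of the third-party visibility difference described here: under Twitter-style blocks, bystanders can still read an old exchange between the two accounts, while a Bluesky-style "nuclear block" hides those interactions from everyone. Again, this is an illustration under assumed data shapes, not either platform's real code.

```python
# Illustrative sketch: who can still see replies between a blocked pair.

def visible_replies(thread, blocked_pairs, viewer, model):
    """Each reply is (author, in_reply_to, text); returns texts `viewer` can see."""
    shown = []
    for author, target, text in thread:
        pair_blocked = ((author, target) in blocked_pairs
                        or (target, author) in blocked_pairs)
        if model == "twitter_block" and pair_blocked and viewer in (author, target):
            continue  # hidden only from the two parties themselves
        if model == "nuclear_block" and pair_blocked:
            continue  # hidden from every viewer, erasing the exchange outright
        shown.append(text)
    return shown

thread = [("troll", "alice", "harassing reply")]
blocked = {("alice", "troll")}
print(visible_replies(thread, blocked, "bystander", "twitter_block"))  # ['harassing reply']
print(visible_replies(thread, blocked, "bystander", "nuclear_block"))  # []
```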

Ben Whitelaw:

Yeah, and are we taking the influx of users to Bluesky to mean that people are interested in how platforms think about safety? Is it something we think is a conscious decision for users, or do you think people are just shopping around for alternatives?

Mike Masnick:

I don't know the answer, so this would be purely speculation. But I think there's a sense that, yes, some of the safety features matter. I mean, we saw this early on with Bluesky: when Bluesky first launched as a beta, it was a private, invite-only beta, and the first controversy was that they hadn't yet implemented block at all. They were just testing out the system, and there were like 5,000 people on the platform, and people were so angry: how could you launch this without block? Block is now considered a fundamental feature of social media, which is kind of surprising, because it was not there originally; it was a later addition to all of this. And, as we see, even Twitter was rethinking how it works. But in our minds now, people have learned to think of blocking as a fundamental, key safety feature that social media needs, and then how it's implemented really matters. Whereas Bluesky has gone for this nuclear block, Twitter, or X, is now moving in the opposite direction, to the loosest level of block that you could possibly imagine, something much more akin to the concept of muting than blocking. So it's really interesting, and I hope that we'll find out more of the reasons why people are switching, but that seems to be the clear reason why we're seeing this massive, massive influx to Bluesky, and I am assuming that Threads and also Mastodon are probably seeing a similar influx of people. And, you know, I think it's great. Exploring all these different systems, and recognizing that you now have choices in terms of which systems you think are going to do the most to keep you safe and make you feel comfortable on a platform, I think that's actually a really good thing.

Ben Whitelaw:

Yeah, it's great that there are alternatives out there that people feel they can even try. Obviously, last week we had a story about Threads and Instagram moderation being below par, there being a lot of oddities in the moderation process as a result of presumed moves to AI. And actually, this is a nice segue into what will probably be the final story for this week, Mike: a bit of an update to the explanation for those technical glitches.

Mike Masnick:

Yes. So they came out and said... well, we did our whole discussion on it and talked about how it was obviously AI, because it was so stupidly done, like blocking everyone who used the word "cracker" without understanding any of the context, and how Cracker Jacks are a snack food and not a racial insult. So we naturally assumed it was an AI-driven thing, because that was the only thing that made sense. But Instagram's Adam Mosseri came out and said that it was a human reviewer problem, that they had not properly instructed human reviewers on how to handle certain kinds of context, and things like that. And I'm not sure I believe them.

Ben Whitelaw:

Well, I mean, there are a few parts to this. So you don't believe him, right? You think he's putting the emphasis on humans because it's in his interest to disown the human part of this moderation process. I think actually there's something here about the kind of tooling the humans were enabled with; he mentioned that the tools didn't give the humans context. Humans aren't going to be able to moderate effectively if they don't have context about something. So whether or not you're right, it's a tooling aspect; it's not the fact that there are people involved in the process, right?

Mike Masnick:

Yeah, that is true. And we know the tools do matter. But again, kind of what we said last week was that it is insane that Meta, the most experienced and largest-resourced player in trust and safety, would have tools that were so bad. We were talking about that on the AI front, but apparently it's also true of the tools for the moderators, that they don't give them the context. I understand tools for context are a difficult thing, and it's something I've actually spent a lot of time thinking about. It's difficult because context can come in all different ways: it could be external to the content that you're looking at, it could be external to the platform itself. But not knowing that Cracker Jack is a snack, that it's not a slur...

Ben Whitelaw:

Yeah. The tool has to have been designed very badly in the first place for that to happen.
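For a concrete sense of that failure mode, here is a toy sketch of why bare substring matching flags "Cracker Jack" while even a small amount of phrase-level context avoids it. The term lists are made up for illustration; this is not Meta's tooling.

```python
# Toy illustration of context-free vs. context-aware term matching.
import re

SLUR_TERMS = {"cracker"}  # illustrative single-term list
BENIGN_PHRASES = {"cracker jack", "cracker jacks", "firecracker", "nutcracker"}

def naive_flag(text: str) -> bool:
    """Substring match with no context: flags 'Cracker Jacks'."""
    return any(term in text.lower() for term in SLUR_TERMS)

def context_aware_flag(text: str) -> bool:
    """Strip known benign phrases first, then match whole words only."""
    lowered = text.lower()
    for phrase in BENIGN_PHRASES:
        lowered = lowered.replace(phrase, " ")
    return any(re.search(rf"\b{re.escape(t)}\b", lowered) for t in SLUR_TERMS)

print(naive_flag("I love Cracker Jacks at the ballpark"))          # True (false positive)
print(context_aware_flag("I love Cracker Jacks at the ballpark"))  # False
```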

Mike Masnick:

Yes, yes. That is my take on it: that seems like a huge mistake. For a company that big... like, I could see a very small player pulling an off-the-shelf tool from somewhere and not really implementing it right, and having that issue. I have that issue a little bit now with moderating comments on Techdirt. Our tools are not perfect, and there are a couple of things where context would be really useful that I don't have, and I have to take a few extra steps to look at, oh, who's this person replying to? Because that actually matters,

Ben Whitelaw:

Yeah.

Mike Masnick:

but, you know, we're tiny, right? And Meta is not. I am amazed that they would not have tools in place that could handle basic context.

Ben Whitelaw:

Yeah, agreed. And I'm not a big fan of the headline of this article, which we'll include in the show notes, because it suggests that he doesn't row back on that initial claim, and he does. So, anyway, it's a complex issue, as always with content moderation and trust and safety. I don't know if we've solved any of the issues today, Mike, in the time we've been talking, but it's been

Mike Masnick:

I will say, this is actually a really good lead-in to the bonus chat that I have with Mike Pappas, where this question of buy versus build comes up: there are some issues about how you choose which things, what features you want, and how customized they need to be. And that's some of what Mike and I talk about in this bonus chat, because these are not easy questions, but they are really, really important ones. We didn't plan it this way, but it leads in really nicely to the bonus chat between myself and Mike Pappas from Modulate. So Mike, welcome back to the show. Always good to have you here. Always good to catch up. We wanted to talk today because you recently wrote this blog post on the Modulate site, on the issue of building versus buying technology for trust and safety purposes, and in particular around voice moderation. I think it's an issue that really comes up a lot in all sorts of contexts around trust and safety right now; it's a topic that I heard a lot about at TrustCon, and I hear about it in all sorts of other contexts as well when thinking about this stuff. And I was just curious: what made you want to write about that topic in the first place?

Mike Pappas:

Thanks for having me back, Mike. Always enjoy having these conversations. The answer to this is in some sense pretty mundane. We are a trust and safety tools vendor, so there's an inherent sort of existential question for us: does it make sense for people to buy externally? We're a very mission-driven, ethics-driven company, and we have a lot of deep internal conversations about how we want to think about marketing and sales strategy. We don't want to trick someone into buying our product if it's not actually a good fit for them. So I think what's more interesting here is really looking at it from that lens of: can we add a kind of value, by being an external organization, that you cannot get from spinning up an internal team to build the same tool? And I think what's interesting is that we do feel like there's a couple of areas where we can provide value that way.

Mike Masnick:

Yeah, I mean, I think one of the things, and this comes up in all sorts of contexts, not just within the trust and safety field, is that everyone is always having that build-versus-buy conversation. And one of the things that comes up a lot is this idea of how central the thing is to your business, the general sense being that if something is more central to your business, you tend to lean more towards building versus buying. But I'm not always sure that's true, right? I mean, I think there are times where, even if something is central, if there is an expert who has the ability to do more because of scale or because of expertise or something else, it actually makes more sense to partner with that vendor. So I'm sort of curious: how do you feel about that? How do you think through those things yourself?

Mike Pappas:

Yeah, I mean, electricity is vital to my business. That doesn't mean I build it myself, right? To your point, there's a lot of things that are essential components, and because of that essentialness, you cannot afford to get them wrong. And I think, as we've seen over the last several years, trust and safety has long been seen as this cost center, this "must we do it, and we'll try to do it as little as possible." And as regulation, consumer sentiment, and, frankly, just platforms that are being built around that perspective of caring are all looking at this and saying, no, actually, we really should treat this as essential, I see that coinciding with more of them starting to perk their ears up a little bit and say: let's take a closer look at what else is out there, and think about whether it's worth getting that really premium offering, as opposed to just trying to slap something together to check a box.

Mike Masnick:

Yeah, it's interesting. I've been thinking about this in a slightly different context, and I didn't realize I was going to go down this path when we started this conversation, but you bringing up the idea of people thinking of it as a cost center, this is something that I've actually thought a lot about lately as well. Years ago, I remember there was this whole debate over call centers and customer support, with a lot of companies viewing that as a cost center. And then somebody had the bright idea: wait, this is the main touch point that many of our customers have with our company or our service or products. Maybe we should realize that customer support is actually a marketing function, rather than a cost center that is just causing headaches for us and that we try to keep as cheap as possible. And in that situation too, right, a lot of people still choose to outsource it, because they know that, even though it is a really important feature of their business, it still makes sense to outsource it, because you can do that at a much better rate. Do you think of it as similar to that?

Mike Pappas:

Yeah, I think that's right. And I think that's partly why we really enjoy working with the kinds of businesses that we do: they have that conception that this is not just a cost center. I mean, games are often built by visionaries trying to create a particular experience; it's really emotionally important to them to realize the experience they wanted to create, and this is part of it. If, as soon as you step into that wondrous world, someone's calling you the N-word, it's not so wondrous anymore, right? And I think even with some of the enterprise platforms that we've been working with more recently, we see similarly. It comes from the culture and mentality of the business: if you're the kind of organization that says we want to win by treating people well, and by allowing that to elevate them and help them create something even more wonderful, then you're going to start treating trust and safety more as a core component of your business.

Mike Masnick:

Yeah. And, you know, I think it's interesting, and this comes out in the blog post that you wrote, that there are some things that are very specific to the voice space and the work that you do. It's funny, because I read your blog post on building versus buying, and all of the things that you have to do to do voice moderation well seem horrifying to me, right? It's just this huge list of things. So, not that I'm in the space where I need voice moderation, but for me it's an easy call: there's no way I want to build. So do you want to talk a little bit about some of the specifics you discussed in that blog post, about what companies should be thinking about specifically in the voice moderation space?

Mike Pappas:

Sure. Yeah. I think, broader than voice moderation, the build or buy decision really comes down to three major components. There's the data: what decisions do you want to make, and how can you make the best decisions? There's the technical capability to actually build those tools and have them function. And then there's the user interface: what's the experience of this, and do we know how to make it efficient?

In voice, each of those things takes on an extra level, right? To get good data, you first of all need it to be voice-native data. If you're just looking at transcripts (we've worked with some platforms that took text transcripts, sent them to, you know, Amazon Mechanical Turk or something, and had someone try to read them), you're not getting the emotion, you're not getting any of what we're actually looking for in that voice. So if you want to build a proper voice moderation tool, you need data that's native to real conversations between people. This is also where I think you get a benefit from working with an external partner, because we see what's happening across so many different platforms. We can see emerging trends that haven't reached your ecosystem yet, and we can get much broader data sets that are more robust to different accents and languages. We just have a lot more ability to do that kind of thing because of the cross-section we see.

On the technical capability, I don't think I need to spell out too much there. Processing voice is really complicated; you need a specialized team on that.

But on the user interface piece, I'll call out something that's true for a lot of content moderation and becomes extra true in voice: moderator mental health is a consideration you have to make, right? If you're not just seeing this awful stuff but hearing it, over and over again over the course of a day, that can be really brutal, and it can actually lead to physical issues, such as tinnitus. So you need to be able to process that audio and make sure that if people are listening to it, they're doing so as little as possible, and that you're pre-processing it so it's not causing physical damage. That kind of extra stuff is easily missed when someone is just thinking, oh, how can I spin something up as quickly as possible here?
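
To make that pre-processing point concrete, here's a minimal sketch in Python. It's an illustration, not Modulate's actual pipeline: the function name and the 0.3 ceiling are assumptions chosen for the example, standing in for whatever level limiting a real review tool would apply before a moderator hears a clip.

```python
# A minimal sketch (an assumption, not Modulate's actual pipeline) of the
# kind of pre-processing described above: capping a clip's level so a
# human moderator never gets hit with hearing-damaging playback volume.
import numpy as np

def limit_for_review(samples: np.ndarray, peak_ceiling: float = 0.3) -> np.ndarray:
    """Scale a mono float signal (values in -1..1) so its peak never
    exceeds `peak_ceiling`, a deliberately conservative playback level."""
    peak = float(np.max(np.abs(samples)))
    if peak > peak_ceiling:
        return samples * (peak_ceiling / peak)
    return samples

# Usage: a clip with a scream-level transient gets scaled down before it
# ever reaches the moderator's headphones.
clip = np.random.uniform(-1.0, 1.0, 16000).astype(np.float32)  # ~1s at 16 kHz
safe_clip = limit_for_review(clip)
assert float(np.max(np.abs(safe_clip))) <= 0.3 + 1e-6  # tiny float tolerance
```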

Mike Masnick:

Yeah, I think that's interesting. There are a few things that are obviously specific to the voice world that I just never would have thought of. Um, it's kind of interesting. Obviously, as you mentioned, you're in the vendor space, so this is an existential question for you, but I'm curious, and not as a devil's advocate kind of question: in which situations do you think it actually does make sense for companies to build their own?

Mike Pappas:

I think the biggest reasons are where they get an edge on one of those dimensions. So, on the data dimension: if you have a really unique kind of experience, or some way that people are interacting that really is not reflected in other situations, maybe it makes more sense for you to say, hey, we need to train on our own data, because it has truly nothing to do with anything else. I'm honestly struggling to come up with a good example of that, but I've looked at some very niche apps (you know, this is specifically a chess camp only for four-to-six-year-olds), which is a very different vibe of how to have a conversation. Maybe something very niche like that. I think it's more commonly a technical advantage. That's not true for things like text and voice, which are standardized, and images as well, but coming from gaming, moderation of user-generated content is a big topic we hear about a lot. When you're thinking about what people are building in your game, you can't just print out an image of it and say, oh, run that through the "Is It Genitalia" filter. You have to look at it from different angles. You have to think about how things can compose together. You have to think about whether they're actually writing something in the game universe. All of those factors are going to be very specific to your ecosystem, and they require access to your ecosystem at a deeper level than just grabbing the text or the audio clips of what's being said. So those are the situations where it makes more sense to build in-house: when you have access to something at a deeper level than an external partner would.
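
That multi-angle idea lends itself to a similar hedged sketch: render a user build from several camera angles and flag it if any view trips an image classifier. Both `render_from_angle` and `nsfw_score` below are hypothetical stand-ins for engine- and model-specific code, not real APIs.

```python
# A hypothetical sketch of the multi-angle check described above. Both
# callables are assumptions: `render_from_angle` stands in for the game
# engine's renderer, `nsfw_score` for whatever image classifier you use.
from typing import Callable, Iterable

def flag_user_build(
    build_id: str,
    angles_deg: Iterable[int],
    render_from_angle: Callable[[str, int], bytes],
    nsfw_score: Callable[[bytes], float],
    threshold: float = 0.8,
) -> bool:
    """Return True if any rendered view of the build looks disallowed.

    A build that seems innocuous head-on can resolve into something
    disallowed from another angle, which is why every view is checked.
    """
    for angle in angles_deg:
        image = render_from_angle(build_id, angle)  # rotate the camera around the build
        if nsfw_score(image) >= threshold:
            return True
    return False

# Usage: check eight views, 45 degrees apart.
# flag_user_build("build-123", range(0, 360, 45), my_renderer, my_classifier)
```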

Mike Masnick:

Very interesting. One sort of final question here, and this is something that has come up when I've had this discussion with people as well: one of the fears some people have around making the outsourced "buy" decision is that it leads to a sort of homogenization of the tools, that if everybody's using the same tool, the space gets stuck with just one, or maybe two or three, players. Do you have any thoughts on that, or on how to think through that issue?

Mike Pappas:

I don't really buy it, to be honest. Um, you know, if you're building one thing internally, that itself is an echo chamber. And if you're pulling from external tools: if there were only one external tool that everyone was using, yes, I would have some concerns. Of course there are configuration options and all that, but I would still have some concerns. But we're so amazingly far from that world. We are very much in a competitive ecosystem, with a lot of different tools out there bringing different approaches and different mentalities, handling different kinds of harms: some focused on the financial stuff, some focused on the interpersonal. I don't think we're at any risk of everything calcifying into the exact same environment. And I think having more platforms talking openly to folks like vendors, who operate in that in-between space, actually up-levels the whole conversation we're having as a collective industry, because now all those different perspectives are being gathered together in one place.

Mike Masnick:

Cool. Very, very interesting. This is such a fascinating topic, and I think it's something a lot of people are thinking about. So I'm glad you wrote the blog post, and I'm glad you came on the podcast to talk about it as well. I'm sure there will be many more conversations along these lines, but thanks for joining us today.

Mike Pappas:

Yeah, thanks again for having me. Always happy to chat.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
