Ctrl-Alt-Speech

Look What The Chat Dragged In

Mike Masnick & Ben Whitelaw Season 1 Episode 57

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Modulate. In our Bonus Chat, we speak with Modulate CEO Mike Pappas about the evolving landscape of online fraud and how the company’s work detecting abuse in gaming environments is helping identify financial misconduct across different types of digital platforms.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Mike, this week I was shamed by a friend of mine, who commented on my lack of language skills. Uh, yeah, my holiday Spanish didn't quite cut it in a conversation we had. So I've gone back to Duolingo, the kind of famous gamified language-learning app, and I thought we could use that to start today's podcast. When you go on to Duolingo, you are prompted to kind of put in your details and start to use the app, and it invites you to learn for free forever.

Mike Masnick:

Oh.

Ben Whitelaw:

So I'm gonna ask you, what would you like to learn for free forever?

Mike Masnick:

Well, this is, I was debating how to respond to this particular prompt, 'cause there are a few different directions I could go, and I'm gonna turn it around a little bit and hope for something that the rest of the world might learn for free forever. Which, you know, on the subject that we often talk about: I think it would be nice for people to recognize that humanity is complicated and that there aren't simple answers for bad things that happen, just because of the nature of humanity. It would make life so much easier if we didn't have to explain that every week. But since we do have to explain it every week, we get to talk about it on this podcast, which is lots of fun. But what would you like to learn for free forever?

Ben Whitelaw:

Well, I dunno if it's something I'd necessarily like to learn, but I feel like I'm going to learn very quickly, and for free, how to change nappies. Uh, so, uh, that is something I will be forced to deal with. It's one of the reasons I'm taking a bit of a break from Ctrl-Alt-Speech. So yeah, when I come back in a few weeks' time, I'll be a pro.

Mike Masnick:

yes, you, you'll pick it up very, very quickly.

Ben Whitelaw:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's May the 1st, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund, and sponsored by Modulate, the prosocial voice intelligence company. This week we're talking about giant group chats, the African moderator fight-back, and the big debate around AI companion bots. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with Mike Masnick, founder of Techdirt and board member of Bluesky.

Mike Masnick:

We have a bell.

Ben Whitelaw:

We have a bell. It arrived. It arrived just after we finished recording last week,

Mike Masnick:

Actually, I, I will say it, it arrived just like, literally just after, because like seconds after we got off recording last week, you sent me a picture of the bell,

Ben Whitelaw:

Yeah.

Mike Masnick:

which we didn't have. We, I had to, I had to verbally bell last week,

Ben Whitelaw:

Save yourself. Save your dings.

Mike Masnick:

and, and this week of course, we have no Bluesky stories, so

Ben Whitelaw:

Exactly. Sod's law. But yeah, the production values of this podcast have just gone up significantly, thanks to my new bell. Um, yeah, it looks like something that I've stolen off the reception desk of a bad hotel. Um, but it's there. Yeah, nice to see you, Mike. Uh, good to have you back. And we've got a couple of literary reviews, which is exciting, although we're not gonna go into them this week because we've got a lot to get through. I'm gonna leave you to go through those when I'm off over the next couple of weeks. As I mentioned up top, I'm taking a few weeks off the podcast, and Mike, you've got a great selection of co-hosts.

Mike Masnick:

We have some great co-hosts coming, though I will say next week we are off entirely. I'm away at a conference, you have your stuff going on. Uh, so there won't be a podcast next week, but for the next few weeks after that, we have a bunch of really interesting and fascinating guests lined up, and you'll discover them as we release them. So it should be fun.

Ben Whitelaw:

Yeah, reason to come back and have a listen. That's also a reason to rate and review the podcast wherever you get it. You may have your review shared on the podcast, particularly if it has a literary review, which is something that goes back to an episode maybe five or six weeks ago now, where we talked about a Techdirt user writing a novel in the comments of the site. So, yeah, bring your best literary references and reflections. Extra points for obscure novels that we haven't heard of.

Mike Masnick:

But then how will we even notice?

Ben Whitelaw:

Yeah. Well, you can also email us to tell us that we're small-minded and, and, you know,

Mike Masnick:

Illiterate.

Ben Whitelaw:

And you win, you win double points. We're gonna whiz through today's stories, Mike. We've got a lot to talk through, and some really interesting, meaty stories that have been published by a great range of publishers this week. And we do have an excellent chat with Mike Pappas, the CEO of Ctrl-Alt-Speech launch sponsor Modulate, who is a man who talks very eloquently at the best of times, and even when he has a cold. Um, I had a chat with him about the way he thinks about fraud and what Modulate have been doing to evolve their voice detection software away from gaming, or kind of in addition to gaming, into spaces where fraud typically occurs, like social platforms and marketplaces. So stick around for that. He's a great, great guest as usual.

Mike Masnick:

Yeah, he's always, always got really deep and thoughtful insights, and a really clear way of explaining it that's usually very, you know, eye-opening and thought-provoking, so he's

Ben Whitelaw:

Yeah. And he is a reminder that if listeners like the podcast and they have the financial means to sponsor it, to ensure that we keep appearing in your feeds, please do get in touch. Go to the website, drop us a line: ctrlaltspeech.com. We are a very brand-safe podcast, I would say, you know.

Mike Masnick:

Perhaps the safest, the brand-safest of podcasts.

Ben Whitelaw:

I think so. I think so. Not like some of the other platforms we might talk about today. Um, but you are guaranteed to have your brand within a very brand-safe context. So, yeah, that's my plug. Let's jump in, Mike. We've got a lot to get through and, you know, I've got my Duolingo lessons to get to at the end of this. So we're gonna start today with some group chats. We've talked a lot about group chats over the past four or five weeks of Ctrl-Alt-Speech. There's been some stuff happening in US politics related to group chats.

Mike Masnick:

I have no idea what you're talking

Ben Whitelaw:

Yeah, I I know you're not super into it or.

Mike Masnick:

Have you been planning your, your war attacks and bombings, uh, through a group chat, maybe with, you know, relatives or,

Ben Whitelaw:

Not, not recently, but I mean, our Ctrl-Alt-Speech Signal chat is one of my only Signal chats.

Mike Masnick:

oh, you are missing out my friend.

Ben Whitelaw:

Yeah. Yeah. But the story you've brought to us from Semafor is like Signal chats on a new level.

Mike Masnick:

Yeah. Yeah. It's quite a fascinating story that came out last weekend from Ben Smith, who is the founder of Semafor. And it's this deep look into a few different group chats, in particular of the sort of Silicon Valley right, who really somewhere during the Covid pandemic started to build up a series of group chats where they kind of incubate their brains in further and further extreme content. It's in some ways not entirely surprising. I think a lot of people suspected that, you know, where you have this sort of tech VC world that went extremely, like, pro-Trump and a little bit nutty at times over the last few years, that they seemed really aligned. As if they had been talking about things in some, you know, some place that we didn't get to see, and then sort of brought it all out together. And this sort of confirms it and talks a little bit about it. And so there are some partially eye-opening, but partially like, yeah, I kind of expected that was happening, aspects to it. But, you know, the thing that really struck me about it was sort of this recognition of how much online speech has moved to sort of private spaces and away from the sort of public spaces. And it got me thinking a little bit about the evolution of online speech in this nature. And, you know, I was thinking back to, like, the early bulletin boards or Usenet time period, where they were sort of quasi-public spaces. You know, they were public, but they were sort of hard to find and not everybody was in there. You had small groups of people, and so you sort of had obscurity through the fact that, you know, not as many people were in there, and so it felt more intimate and close. And then as we got into the Web 2.0 era, people started to think that, like, oh, there should be this one giant group chat, right? And sort of leading up to that, or around the same time, maybe a little bit preceding that, you began to have the rise of instant messaging, which was usually sort of one-to-one instant messaging. You had AOL Instant Messenger, Yahoo Messenger, whatever, all these other things. Which was a really powerful tool, and I think lots of us grew up on, you know, one of those tools, and the ability to communicate with people was great, but they didn't really have the group features, or they were not quite as widely used. And then we went to this world where it was like everybody in one central group chat, you know, the Twitters or Facebooks of the world, which were like one giant public group chat, which has some powerful features, but then also some negative consequences that we all sort of began to deal with. And then I've certainly noticed in the last, you know, really since the beginning of the pandemic, more and more of the sort of interesting conversations that maybe I used to have a decade ago on, say, a platform like Twitter started to move into private spaces. And that included Discord groups, of which I'm in way too many. It included Slack groups, of which I'm in way too many. And then more recently, it's all been sort of Signal based, a little bit of WhatsApp. So you begin to have these group conversations.
And the thing that struck me as kind of interesting is, like, you know, all of the talk about regulating online speech, all the conversations we have about trust and safety and everything, sort of seems stuck in the world where all of these conversations are happening entirely in public, in the sort of grand arena of Facebook or Twitter. All of the thinking is built on this mental model that the internet and online speech are public. And yet now what we're discovering is that, you know, obviously that's still going on and people are still using those things, but more and more of the consequential conversations are actually happening in private spaces. And in some cases those are accessible private spaces; there may be quasi-public private spaces that people can join. In some cases they're entirely private. In some cases they're entirely encrypted, when you're talking about WhatsApp or Signal, so it's dark. And in some cases, especially with Signal, they involve disappearing messages. And that was a part of the Semafor story as well: what actually makes some of the people feel comfortable talking in these spaces is the fact that they're designed to disappear. And in fact, there's one part in here where they talk about, you know, people would have messages that were set to disappear in, like, 30 seconds or a minute, which is basically for when you want to say something you know is not good.

Ben Whitelaw:

Yeah, start a war, maybe.

Mike Masnick:

Yeah. Yeah. So there's a bunch of different things, but just the idea that this is happening. And I think there are good reasons for that. I guess, unlike you, I'm in a fair number of group chats, though none of mine are nearly as horrific and scary as what is described in the Semafor piece. But at the same time, we hear these stories from the media, and certainly from policymakers and politicians, about this idea that, oh, you know, social media is this horrible echo chamber and we have to deal with that. And I think the real sort of echo-chambery thing, like, the stories from this sound very echo-chambery, and certainly of a lot of very powerful people, in this case in this particular group chat, sort of egging each other on to be worse, right?

Ben Whitelaw:

I wanna talk a bit about the kind of characters in this particular group chat, 'cause it's particularly relevant and we haven't quite unpacked that.

Mike Masnick:

Yes.

Ben Whitelaw:

I think we should. We'll talk about the kind of broader quasi-private, semi-private nature of some of these messaging apps that have increasingly large limits on who can join groups. But let's start with the people in this particular group, because they have particularly strong views about censorship and what they are and aren't allowed to say, which I think is relevant here too.

Mike Masnick:

Yeah, I mean, so the particular group chat, the main one they talk about, is called Chatham House, which is, you know, sort of a wink and a nod to the think tank and the sort of famous Chatham House rules, which I know better than I know the think tank, which is basically: you can talk about the conversation that we had here, but you can't name anyone.

Ben Whitelaw:

Like, Chatham House is gonna be pissed off about their name being used for this group, right? This group of, like, increasingly volatile tech bros. That's gonna be, that's gonna be annoying.

Mike Masnick:

Yeah, and it's basically like the people in here are all the people that you've heard of recently coming from the tech world and being deeply involved in sort of political discussions, except for Elon Musk. They don't seem to mention Elon being in the chat, as far as I can tell, if I remember correctly. But there are a bunch of them: obviously Marc Andreessen; Sriram Krishnan, who was in Musk's circle and was like one of the early guys, he had worked at Twitter before, but sort of came back with Elon and was deeply involved in stuff. They talk about Mark Cuban, who has been sort of the billionaire who pushes back on them, and apparently he's kind of like the central character in this, because he's like the only guy willing to sort of fight with all the extremist viewpoints. But it does talk about how they build up certain ideas and beliefs within the chat. And so, like, the example I'd point to is the mainstreaming of Curtis Yarvin, who is a horrific human being who believes that democracy is a terrible idea and we need to move back to a monarchy, and, you know, we need a dictatorship. And he has certainly gotten a lot of press recently, because J.D. Vance is a big follower of Curtis Yarvin and Peter Thiel has been a big supporter of Curtis Yarvin. And there's this idea that a lot of what the US government is doing is based on Yarvin's beliefs, which is a little bit of an exaggeration, but he's become important in this.

Ben Whitelaw:

Partly because of this group, I think. And we won't go too much deeper on Yarvin, but there's a very good interview that the New York Times did, which is also quite controversial, which we'll include in the show notes. And these guys in this group, it's mostly men in the piece,

Mike Masnick:

Yes. Almost entirely. They did say there's like a token woman here or there, but it's almost all men.

Ben Whitelaw:

Yeah. And they end up kind of, yeah, like, juicing themselves up, as you say, and basically making out that they are saying things in that group that they used to say publicly on social media, but because of the kind of censorship industrial complex are less able to. Right?

Mike Masnick:

Yes. Which is nonsense, right?

Ben Whitelaw:

it's just nonsense.

Mike Masnick:

Yeah, I mean, that's the claim, and like, Andreessen made that claim: oh, these conversations should be happening in public, but because of cancel culture and the woke mind virus, they can't. Which is nonsense. And everyone knows that, because Andreessen is still there on X making these same comments. Elon Musk obviously has been making those comments. Zuckerberg has walked back all of that, and Andreessen is on the board of Meta. Like, all of this stuff gets left out of these discussions. He's always been able to say this stuff. What really happened was that he started to get criticized for some stuff. I mean, years ago, where I think a lot of this started, was he got a ton of criticism on Twitter many years ago when he was defending Facebook, before they were Meta, which had the very controversial program called Free Basics, where they would offer free internet access, but it was a limited internet access that was basically free access to the Facebook part of the internet. And a lot of people, including myself, had real problems with that, which is, they're sort of denying the world the wider access to the internet. And Andreessen had gone on a rant on Twitter that was basically saying, like, the poor people should take what we give them and shut up. And he was criticized for that. That's a paraphrase, right? That's not exactly what he said, but that was the implication of what he said. And he got criticized, and he ended up deleting his entire Twitter account and sort of pouting about it for a few months before slowly making his way back. And ever since then, I've noticed he just seems unwilling to take any kind of criticism. So a similar sort of thing happened in the early pandemic, when Clubhouse suddenly became like a big thing, and it was this audio chat, sort of public chat, where he was one of the big users of it, and he said some dumb stuff and he got called out on it. And it's like: if you're going to speak publicly, getting criticized for it is part of the bargain. It's not fun. I mean, I get criticized all the time. I don't like it. But do you then decide that you have to destroy a part of the world, as he's now decided he has to destroy universities, where he thinks they're turning people into the kinds of woke people who will criticize him, and that is unfair and must be stopped? He could have always spoken. His only problem was that he was getting criticized. And so the fact that he feels that he has to go into a group chat with a sort of protective group of people who would cheer him on and build up his ego is kind of an interesting development.

Ben Whitelaw:

Yeah, I mean this, these guys are a bit of a joke.

Mike Masnick:

Yeah.

Ben Whitelaw:

You know, you can't not read this article and think that they are, like, absolutely pathetic. There's a line in it where Ben Smith reports that they think of the group chat as what they call the Republic of Letters, which was a kind of 16th-century long-distance correspondence that Europeans used to send across countries. And so they're kind of equating themselves, in a group chat talking about the kind of mad shit that they do, with the historical tradition of sending eloquent, intelligent letters. And that, for me, is hilarious. This is intellectualization of literally spouting off 15 hours a day sometimes. You can't think that they're anything but a bunch of losers, basically. There's the kind of wider point that I think you make, which I think is relevant to us and to listeners, which is: to what extent is this private? You know, when you have a group of 300 people, and some messaging apps have limits as high as a thousand now, or Telegram, I think it's like 200,000 for some groups. To what extent do you think these are still messaging apps in the traditional way that they emerged, in the kind of old sense of the word?

Mike Masnick:

Yeah, I

Ben Whitelaw:

To what extent do they need to be kind of moderated in the way that public social media was? There's that tension we have.

Mike Masnick:

Yeah, I mean, it's always a spectrum, right? It's not like there are these buckets where you're obviously classified as one thing or another. It's always a spectrum, and as people change and innovate and change features and affordances, it's always going to adjust and change. But it creates different challenges, and, you know, it is funny, there is a sort of meta story to this, which is not Meta as in the company, but about the fact that Chatham House leaked. Some of the chats leaking to Ben Smith says something about, like, when a group gets to a certain size, there will always be some sort of challenges. And it's funny, there is one group chat I'm in, a small group chat, but it's a very active one and a fun one, where we had a debate not that long ago about adding people to the group. Some people were like, you know, we all know people who would be good to add into this group. But the general conclusion was like, no, we have this group, and if we add anyone to it, it just changes the dynamic. And we're sort of used to this, and there's this whole debate about, like, would you add any more people to the group. So in any sort of group, you have group dynamics, and when you have group dynamics, you have to have some sort of governance to some extent or another. And the smaller they are, the more you can do, the more sort of norms take hold. But as it gets larger, it switches from norms to some sort of form of governance, or not. Which is what everything that we're talking about is always about, which is the competition between rules, norms, governance, regulations, and where do you apply those things. And they apply at different levels, at different stages, for different things. And it's a constantly changing situation, constantly evolving, and you sort of have to figure out how that works. And you can't treat a group chat of five people the same as a group chat of 300 people, versus a group chat of 200,000 people, versus a totally public social media system of a billion people, right? They're all different, and they have different things and different affordances. It's sort of interesting to see the way speech is sort of divvying up across these different kinds of platforms and different tools.

Ben Whitelaw:

Yeah, and the kind of arbitrary cutoffs, I guess, in the way that those different-sized communities or spaces get treated, in terms of the safety protocols, in terms of the policies. You know, there's always gonna be areas that are not covered maybe as well as you'd like, because there's a point where it moves from a small group to, you know, a larger group, and it's hard to detect that as a platform and to kind of manage that as well as maybe everyone would like. Fascinating piece, Mike, probably the first time we've heard about this group in such detail. Definitely worth going away and reading this weekend, listeners. You said that that wasn't a Meta story. The next story is very much a Meta story, about Meta the company and the platform. When you were sunning yourself on your holiday a few weeks back, Mike, which I know you never do, Prateek Waghre and I talked a bit about a story that's linked to this next one. We mentioned the fact that in Kenya, the high court had given the go-ahead for two cases to be heard about Meta, the cases being brought by moderators who have suffered mental health trauma and PTSD as a result of work they've done for a company called Sama. I mentioned at the time that I thought it would open the door to other cases in other countries. I mentioned Brazil, I mentioned South Africa. I didn't mention Ghana. And lo and behold, this week we've had stories about the potential of Ghanaian moderators taking Meta to court, and these are linked, Mike. It's almost frightening. I'll just recap what this is. This is a story that the Guardian have done with the Bureau of Investigative Journalism, which has done some fantastic work about the role and the kind of plight of moderators in lots of different countries over the last four or five years. And the case is directly related to the Kenyan one, in the sense that after Meta got out of Kenya because of these two cases, it set up shop in Ghana, in Accra, and it hired a bunch of moderators through a company called Majorel, which is another BPO, a business process outsourcing company. These big companies used to do kind of customer service stuff, call center work, and they've increasingly done moderation work. That was in 2023, and already we're seeing moderators come to the press and to legal firms and talk about the horrific conditions they've been facing. This latest group have said much the same as the Kenyan moderators did: that they've been forced to moderate content without breaks for hours upon end, and they have had significant mental health issues. There's a piece that the Guardian published about one particular moderator who tried to commit suicide after undergoing trauma as a result of the content that he moderated. This is all the same stuff that we saw before, and it's like Meta haven't learned their lesson at all. They've literally set up shop in a different African country with a different BPO and have learned nothing from before. In comments to the Guardian, they have said that they have wellness and mental health support available. But judging by the comments from the moderators themselves and from Foxglove, which is a legal firm that has been doing some really good work here and which has been supporting the moderators, that doesn't seem to be as good as they've made out. So again, you know, just as this kind of Kenyan moderator story was gonna come to a head, Mike, you've actually got another one happening right at the same time.

Mike Masnick:

Yeah, I mean, this is the nature of content moderation. It was funny, I was reading through these stories and it reminded me, I'm sure I've mentioned it on the podcast before, of the documentary film The Cleaners.

Ben Whitelaw:

Yeah, yeah, yeah.

Mike Masnick:

You know, from like a decade ago, and that was Meta, or, you know, then Facebook, moderators in the Philippines working for a BPO. And the same exact pattern, where they're working for some sort of BPO, they're not allowed to mention that they work for Facebook, but it's obvious that they work for Facebook and all the details are that they're working for Facebook. And they all sort of go through these stages of, like: we're seeing 15 seconds of absolutely horrific content over and over and over and over again, and it changes the way they think. You know, some of it in the expected ways, where you sort of become desensitized to horrific things, or then you begin to have problems in your life, but sometimes in other ways, where it's just distorting your view on reality and sort of impacting your interpersonal relationships, and all of these kinds of things. That was, you know, a decade ago that they made that film. And this is the same story, and it just feels like they just sort of move around. At the same time, you have the flip side to this equation, which is that if companies are not doing this, then they get screamed at for not taking content moderation and trust and safety seriously. And so how do you deal with those two things? And one element of it is, like, certainly the technology's gotten much better over the last few years, and the artificial intelligence aspect of content moderation has become better and better, but it's still limited and you still need humans in the loop. But when you put humans in the loop, the issue is that there are humans out there who have done terrible things and then want to post about it online, or you have humans who want to find terrible things. And there's, like, as I started out this podcast saying, there's no easy solution to it. Because everything is horrible, and, as I said last week, you're always looking for the least bad solution to different problems that are out there. So I read this and it's like, yes, this is horrible. It's horrible the way the company seems to treat some of these moderators. They definitely, as is mentioned, appear to treat employees as sort of throwaway people: you know, sort of wear them out, expose 'em to horrible things, destroy them, and then dump them on the side of the road when they're no longer useful. I mean, some of the stories in here are horrific, where people are really affected by this, and then the company's like, okay, well, you can't moderate content anymore, here's this even lower-paying job that we'll give you. And people are like, well, I don't wanna take that. And they're like, well, then you can't work for us at all anymore. And it's like, well, I understand where that comes from, because as a company they're sort of trying to figure that out, and if the person can't do the moderation, then what are they gonna do? But this seems like a problem. But it also feels like one of these problems that we get when we recognize that there are people doing horrible things out there, and people who are trying to find horrible content or share horrible content. How do you stop that without then exposing other people to the horrible content? And there's no good answer here.

Ben Whitelaw:

Yeah. Naturally it comes down to, you know, one of our famous phrases: content moderation is very difficult to do at scale. The reason why Meta, and any platform, has to, or feels like it has to, bring a BPO on board is because it needs extra capacity to look at all the content, all the reports, all the, you know, egregious posts that come through. And that makes sense. And we've talked a bit, through the antitrust case that's happening at the moment, about how large platforms are much more difficult to moderate in that sense. The thing I wanna talk about is that this is essentially a labor story. You know, this is about working conditions that are suitable for people, ensuring that they're paid a fair wage. And Meta have said that the employees get paid, I think, double the average salary in Ghana. But that doesn't seem to me to reflect the kind of work that they do, and it seems to miss out all of the context, which is that the average in Ghana is probably very low. And I also note that, you know, Meta has the money and the capacity to be able to pay people more. It has the capacity to pay more to the BPO. The BPO is relatively small, but was bought recently, I noted before we started, by a big French company called Teleperformance, who had an absolutely gangbusters year last year: revenue went up 23% year on year. They also can pay their employees more. So it feels like some of this is relatively easy to address, even if it's hard to deal with that intractable issue that we have, of content moderation at scale being tough.

Mike Masnick:

Yeah, but you're running it up against the other intractable problem that we have, which is the nature of capitalism. I mean,

Ben Whitelaw:

Ah,

Mike Masnick:

So, like, you know, these businesses are told that they need to make money, and they need to make increasing money, and they need to have increasing margins and all this kind of stuff. And you have big companies like Meta and Teleperformance that have shareholders that they have to account to, and those shareholders wanna see number go up and line go up. You can make a very strong argument that that leads to bad situations in the world. But unless you're dealing with that underlying issue, you can say, like, sure, they can pay more and they should pay more, and I would agree with you just on a conceptual basis. But you have to put that into the context of the system and the world that we live in, which is that these companies are owned by shareholders who are less interested in that, right? And are just interested in seeing the profits go up. And so the natural state of things, and this is true of any sort of outsourcing that has happened in all different industries over the years, not just in this one, is that you look for cheaper labor that will do the work effectively enough, on a global scale. And so, again, I'm not saying this is the right thing to do. It's easy to say they should pay them more, and they should. But you have to put that into the context of the system that these companies operate in, and you begin to realize these are systematic, global, society-level problems that aren't so simple to solve.

Ben Whitelaw:

Do you think shareholders of Teleperformance or Meta would mind taking a slightly smaller dividend if they knew that, I don't know, moderators in Ghana weren't, like...

Mike Masnick:

It's possible, right? I mean, this is why we now have activist investors who are, like, pro-social activist investors, who look to invest in companies and say: no, we want you to change your policies and we're willing to take a slightly smaller profit. Or you could make the argument that in the long term it is better, because you don't have these front-page stories in the Guardian that slam your company for being evil, horrible people, which causes disruptions to your business. If you want to have a good reputation in the world, you can still make money if you continue to treat employees well. There are certainly companies in the world that have embraced that. Here in the US, certainly, there's been a lot of conversation about Costco in particular, though there are some issues there right now with, like, trying to unionize workers at Costco. But historically, they've always tried to pay their workers more, and they've had a long stretch where their employees were not only paid well, but they tended to stay at Costco for a much longer time. And yet Costco has been incredibly profitable over that time. There are ways to do business right and still be profitable, and sort of make that a part of your brand and your business, because that helps in terms of the media aspect of it. But again, we're talking about fitting that into the overall system, and, you know, you hope that articles like this and media attention lead companies to realize, like, wait, it's more costly than it appears to go for the sort of cheap solution. Maybe it does make sense to pay people more, to treat them better, because then, it may feel like it cuts into our bottom line in the short term, but in the long term we could get good press out of it instead of this horrific press. But that's a sort of strategic decision that these companies need to make, and it's a little bit more challenging.

Ben Whitelaw:

yeah. If Costco did moderation,

Mike Masnick:

Yeah.

Ben Whitelaw:

I'd read that piece. Uh, other wholesale companies are available, although if Costco, you are listening and wanna sponsor the next episode of Ctrl-Alt-Speech, please do reach out. Um,

Mike Masnick:

I do think they're having a labor dispute currently, so I don't know where that is now, but

Ben Whitelaw:

Okay. We'll figure that out and then we'll reach out. Um, okay, cool. So, you know, that's a big story that continues to kind of rumble on, and we'll come back to it in future episodes of Ctrl-Alt-Speech. We've got time, Mike, before we dive into our chat with Mike Pappas, for a couple more quick-fire stories, which we never manage to make very quick-fire, uh, particularly when there's, like, a kind of gaggle of them, as the next story is. So

Mike Masnick:

I'm gonna do my best, and admit that this story is one that probably deserves a longer, more thoughtful conversation, but I wanna try and go through it quickly. It started with a big piece in the Wall Street Journal by Jeff Horwitz, who has done really astounding work over the years, sort of exposing where Meta falls down. I think Meta hates him with the passion of a thousand suns. But he's really exposed where there are breaks and problems in Meta's trust and safety work. And in this case, he was looking at their push for AI bot companions. And apparently, you know, there are a lot of companies doing this. We talked about Character.AI; there's a legal controversy about them. But what the article by Jeff found was that there had been, like, some event where a bunch of these bots were sort of shown, and Meta's performed poorly, in part because they had locked it down and sort of tried to make sure that nothing bad would happen and the bots would not stray into more dangerous territories, often, you know, sexual adult-content kind of territories. And because they performed badly, Mark Zuckerberg got upset and basically told the team: take off the restrictions, let's just accept some risk. Because of that, the Wall Street Journal were able to get these bots to have conversations that are very problematic, and you can look at the article. For one, Meta had licensed famous people, John Cena, Kristen Bell, some other folks, to be, you know, the identities of these chats. And they were able to have those chats engage in sexual conversations with people who identified as being underage. It's bad. Like, there's no way to look at this and say that it's good. Meta's response, a very self-serving response, is effectively: this almost never happens, you really sort of have to push the model to have it happen. And there is some accuracy to that. Like, there are lots of stories of AI doing bad things where, when you look at the details, it's always because the AI was pushed to do the bad thing, and you keep pushing and you keep sort of twisting it and trying to trick it. In some cases, you had to do this semi-famous red-teaming trick where the AI tells you "I can't do that" and then you have a way to get around it. I'm not gonna explain what that is here; it is in the article. But there are a number of little tricks to sort of get around when the AI tells you it's not gonna do that. You know, clearly there were some protections put in place. But, you know, I think the bigger story here was they have examples of Zuckerberg basically saying, like, remove the guardrails, and at one point even saying, according to someone who apparently told Jeff, or told the Wall Street Journal: I missed out on Snapchat and I missed out on TikTok, and I'm not gonna miss out on this. And so it's like, you know, we need to be the leader in these AI chatbot things, and the way he's trying to do that is to throw away the safety concerns about it. That's really problematic and really dangerous.

Ben Whitelaw:

Yeah, really great piece, and it really shows Zuckerberg's character in all of its glory. And there's a couple of other pieces that we'll include in the show notes, Mike. Do you wanna just allude to those briefly?

Mike Masnick:

Yeah, there was also a study that came out this week from Common Sense Media, basically saying that kids should never use AI chatbots. I think that's wrong. I'm not gonna get into it; we could go into a whole discussion on it. I think that there are obviously cases where kids should not be using AI chatbots, but I think a wholesale condemnation, you know, it's the same story: prohibition, banning, whatever. I think there are ways to teach people to use things right. There are ways to use tools properly, and educating everybody, having the tool makers be better, having the people who are using it be better, having parents teach kids, having teachers teach kids how to use these things better, is a better approach. But there was a study this week from Common Sense Media, in association with Stanford, that argues that kids should never use these things. We'll have a link to that story; it's worth reading as well. And I think that's basically it. Also, there was a Business Insider story of a mother who works in tech, who talks about how to raise a kid to understand the limitations and problems and risks of these technologies, and saying that's a better approach, which I also think fits into this story of kids using these technologies, especially AI chatbots. It's something new. It is something of a challenge, but I think there are ways to approach it intelligently and carefully and thoughtfully, rather than just saying: this is bad, this is horrible, we have to wipe out the whole space.

Ben Whitelaw:

Yeah, it would be really interesting to see what other research comes out about the rise of AI companions, 'cause I've seen initial research that it's a kind of use of AI that is increasing very rapidly. Um, we just don't know enough about the effects of it. I'll just wrap up, Mike, by sharing a couple of stories about fraud, again related to Meta. So we've actually had three in a row of negative press for Meta. That's not usually how we do it; it's not been on purpose. But a colleague of mine at the FT, a writer called Martin Wolf, wrote an op-ed this weekend about how he had been alerted to deepfakes on Instagram and on Facebook, advertising a completely bogus investment scam with his face on it. Essentially, somebody's created a very close-up video of him saying: join my WhatsApp group, I'll tell you all the stocks that are about to go up, I've got decades of experience in this stuff. And he's a massively respected columnist in the UK. His piece is kind of interesting in and of itself. You know, he's a 79-year-old man; I don't imagine he uses many of these platforms in as much depth as many of our listeners. And, you know, his kind of surprise at the fact that this is able to happen is kind of interesting in and of itself. Ofcom and Meta were alerted, and the ads have come down. But like all of these things, really, as we talked about today, Mike, it's a whack-a-mole approach: things will crop up, and he will, I'm sure, be back in fake ads on the platform soon. It's also linked to a new report that came out yesterday in the UK from the Financial Conduct Authority, which is basically the financial watchdog in the UK, who do a lot of the relationship management with the social media platforms related to financial scams and, you know, the kind of influencer trend that's taking shape. And interestingly, they have come out and said that Meta is one of the slowest to react to its alerts about these scams when they're on the platform, often taking six weeks at a time to take down content that the FCA finds and deems to be improper. So we're seeing there, I guess, despite the size of the company, despite the kind of firepower that it has in terms of its trust and safety approach, it being very slow to deal with financial scams at large, and for Martin Wolf, having a very personal effect on him. So again, we don't have much more time to go into scams. It's something that I would like to go into a bit more, Mike, 'cause I think there's been a big shift across the industry. People are working much more together, and there's a couple of really interesting industry groups that have emerged to deal with scams. But today, I think we'll use that neat link to shift from our stories, which we've covered from the FT, from the Guardian, from the Wall Street Journal, from the Bureau of Investigative Journalism, and to jump into our bonus chat with Mike Pappas from Modulate, about how his experience of detecting abuse in gaming is translating to tracking financial malfeasance. Take it away, Mike.

Speaker:

[music]

Ben Whitelaw:

It's great to have you back on the podcast, Mike. Thanks for taking the time to talk to us on Ctrl-Alt-Speech. You are here to talk a bit about online fraud, which is a topic that we continue to speak about every week. And you've been busy in the last year since you helped launch the podcast; Modulate was a launch sponsor. Talk us through how your thinking and the company's thinking has evolved, and what you mean when you talk about online fraud.

Mike Pappas:

Yeah, thanks again for having me back, Ben. Excited to be here. So, as you may remember, Modulate is a prosocial voice intelligence company. What that means is we have this technology we've built that's able to analyze voice conversations in very, very rich ways: understand the emotions, the conversational dynamics, the back and forth. And what we've been trying to do is leverage that in different environments where we can recognize the deeper details of a conversation than you would get from a superficial text analysis or something like that, and say: where can we deploy that level of intelligence and insight to help protect users or give them a richer experience? As you know, we got started in the game space, looking at toxicity detection as well as positivity detection. But in the last few years, we've been working with a number of folks outside of the gaming space, in more classic enterprise use cases around things like call centers or the gig economy. We're frequently asked: hey, can you help us navigate some of these challenges around financial malfeasance? That sort of fits nicely into this bucket of things that we're trying to do, but is a new challenge, because, as I'm sure you know, there's a lot of different types of fraud or scams or abuse, which I can talk more about. Figuring out the right way to partition those things required us to bring in a new level of expertise and really think through how those kinds of categories and detections should look different from a typical harassment or hate speech detection. So that's a lot of the work we've been doing over the last several months, in the lead-up to our recent VoiceVault announcement, which is that fraud product. But let me pause there and let you ask some additional questions.

Ben Whitelaw:

Yeah, it's fascinating. I mean, we touched on this before we started recording, but you have this kind of breakdown, this almost-taxonomy between fraud, scams and abuse, that you use to kinda shape this work. Talk us through how you see those differently.

Mike Pappas:

Yeah, so when people talk about fraud, the picture that comes into their head is usually: you are trying to defraud me. And that is absolutely one of the kinds of things that we need to watch for when we're listening to these conversations in an enterprise setting. What we are listening for is: is Ben calling this call center to claim he never received an order that he did, and that he needs his money back? Or is Ben calling an elderly person and trying to claim that he's actually their nephew and needs them to wire him money? There's all these kinds of scams that, I should say, Evil Ben, um, would maybe be trying to deploy. That is sort of what we mean in our classification of fraud as direct fraud. We differentiate that from something that's equally important to us, which is recognizing victims when they perhaps are calling their financial provider. So if I'm calling my bank and I have been scammed in a different setting, and now I am coming in saying, hey, I got this call and I need to wire this money today, it's super urgent, the Nigerian prince needs me, then the bank has both a financial and a moral incentive to recognize: hey, Mike, let's talk a little bit before we wire this money out. And, you know, they're actually, in many regions of the world, on the hook: if they do allow me to complete that transaction, they need to make me whole, because they facilitated the process of me being scammed. So that's scamming. And then abuse is someone more directly, sort of physically, coercing me to do the wrong thing. You often see this in abusive relationships, more commonly, unfortunately, from a man to a woman, but it can be any case where one member of the relationship doesn't literally have control over the other's finances, but is deploying a lot of abusive tactics, and maybe is literally even sitting there behind you while you're on the phone with your banker, telling you what to say. And we can notice that. Or maybe it's simply that there's a lot of uncertainty from your position when you're trying to make this transaction, or you're being asked to do something that would not normally be financially responsible. So where with toxicity we can really just listen and say, hey, we can kind of hear the malevolence in your voice, a lot of the time when we're trying to understand fraud, it requires us to look much deeper at the content of what's being said, at the fact that someone behind the scenes might have manipulated the conversation we're having now. So our system needs to understand why you might be calling and sort of think about that, and it ends up being a much more nuanced risk-mitigation process. So we're a lot less likely to flag, say, to a bank or a call center: this is definitely a problem, shut it down right now. We're a lot more likely to flag: there are a lot of risk factors that we're identifying; you might want to run an extra identity check, or you might wanna put a two-day pause on this transaction. There's a lot of tools that you might have to soften that risk and make sure that you can really dig in and validate things before you do something that you can't turn back.

Ben Whitelaw:

Yeah. When you frame it like that, Mike, it's very clear how pervasive fraud is, in those broad senses, and how often we can, I guess, come up against those harms in different guises. You know, you mentioned Modulate starting its life as a gaming-focused technology. You've started to think more about marketplaces and the gig economy, and social platforms to an extent as well. How much are the foundations that you built for gaming translatable into those worlds?

Mike Pappas:

Believe it or not, we were able to deliver our first sort of fraud detection tools within about two or three weeks of getting that request, and that's the result of many years of dedicated work in building this more holistic voice intelligence platform that both ToxMod and VoiceVault are built on top of. By the time we were getting this request, we had already taught that ecosystem things like how to detect child grooming, which in many ways is reminiscent of fraud, or sort of a longer-term scam, right? So we had a lot of the core ingredients already in our voice intelligence model, and it was at that point much more about us just telling it: hey, please now look for this kind of bad thing as opposed to that one, rather than having to teach it about that new kind of bad thing from scratch.

Ben Whitelaw:

Right. And can you give us some of the most egregious examples that you've come across in these new verticals that you're working in? Because I think a lot of listeners will be fascinated by some of the examples you've shared with me previously.

Mike Pappas:

Yeah, I mean, it varies so much. And this is something that we're thinking a lot about: there's so many different kinds of customers for this product. Again, a financial institution needs to worry much more about protecting you from yourself, in a lot of ways. A retail call center needs to worry about, sort of, you trying to scam the business. In a gig platform setting, you're worried about people trying to manipulate your drivers directly, potentially. So in the food delivery space, we see a lot of folks that will, you know, get on the phone with a driver and say: you only delivered half my order, I was charged for this, you personally need to Venmo me this money. And they'll make, you know, personal, direct threats to that gig worker, who is of course not an employee and so doesn't have all the protection of the platform. These platforms do want to try and do what they can to support these workers, but it's a setting where, not that I'm gonna say any of these, you know, scammers and fraudsters have a moral leg to stand on, but I think many of them might believe that they're scamming some many-million-dollar corporation, when in fact they're taking money out of the pockets of some of these gig workers who are really just trying to get by. And of course you see similar things with telecom providers, who are worried about someone scamming the elderly or other vulnerable people very directly. So it really ranges in terms of who is being targeted. You know, we've talked to a number of folks in the online dating space about the type of scam that was called pig butchering. I know that there's been a movement away from using that language to make it, um, less demeaning to the victims, frankly, but it's still a pervasive type of scam. And so, sorry, that was more of a ramble than a direct answer, but I think the truth is it's just such a big space. And this is why we think it's really important to be using a sort of more AI-type system that can really navigate these different conversational dynamics. Because if you just try and pre-program, oh, look for the words "Nigerian prince", you're never gonna catch even a tenth of the stuff that's happening here.

Ben Whitelaw:

Right. And the nature of, I guess, voice is that there is a real-life element to it, you know. Particularly the gig economy examples there, they blur into kind of real-life potential harm and almost physical violence, which I think is so fascinating about the work you do. You talked there about, um, your kind of new product. Just give us a sense of what it does and, maybe, you know, if listeners are looking for support in this area, what it could do for them.

Mike Pappas:

Yeah, so VoiceVault, akin to ToxMod, is a system that can be plugged into any voice conversation environment and can watch hundreds of millions of conversations in parallel, yes, in a privacy-respecting way. I won't go off on that whole tangent right now, but that is something that's very important to us, and we've done a lot of work to build out those privacy settings appropriately. But what it's able to do, again, whether it's deployed into a traditional call center, in an app used by your gig workers, or directly onto sort of telecom services for the average consumer, is sit on top of those conversations and recognize the telltale early signs of this kind of fraud, scam or abuse. Once it notices that, each partner of ours will customize what we do about it. Again, a financial institution might say: let's bring in a supervisor, let's put a hold on the transaction. A telco might say: maybe we give you a configuration option, so that if someone's trying to scam grandma, it can call one of grandma's kids or send them a notification so that they can intervene. The right intervention is gonna be different in each of these settings, but that's fully tunable. The other piece of this that I'll call out is, this is fully voice-native. You mentioned voice being really important here. Some financial transactions don't happen on voice; many still do, and especially those most vulnerable populations are also often the ones who are most inclined to be using voice chat. And it's also, again, a place where you can hear emotions. So if you're looking for signs of things like financial or broader abuse, being able to actually hear the tenor of the voice and understand the emotional wellbeing of that caller can be an incredibly powerful tool in recognizing people who need help but might not be in a position to say it.

Ben Whitelaw:

Yeah, so voice in some ways reaches some of the most at-risk groups at some of the most at-risk moments, you could argue, you know, um.

Mike Pappas:

Yeah.

Ben Whitelaw:

That's all really, really interesting. Where can we find out more, Mike, about the products and the work you're doing? Where should listeners go?

Mike Pappas:

The easiest answer is our website, just modulate.ai. We've got detailed information about VoiceVault, and we'll be sharing a lot more in the coming weeks and months as we get some of our earliest customers together on things like case studies. So definitely go there for the deepest information on how this product works. We've also got a number of other resources, so you can find us on LinkedIn. We've got our newsletter, Trust & Safety Lately; this time I'm not accidentally naming your newsletter, Ben. Um, though Everything in Moderation is also an excellent resource. Um, but yeah, you can find us pretty much anywhere online, and we're always excited to talk about this. Even if we're not the right vendor partner for you, we think it's incredibly important for people to understand the nuances of how to build a voice safety system, and we're always happy even just to offer advice to folks that are looking to build something separately.

Ben Whitelaw:

Brilliant. Well, that's really great. Love to hear you talk so thoughtfully about this really tricky harm to spot, Mike. And, uh, thanks for taking the time to speak to us.

Mike Pappas:

Thanks again for having me.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
