Ctrl-Alt-Speech

Sorry, This Episode Will Not Cheer You Up

Mike Masnick & Ben Whitelaw Season 1 Episode 35

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat, Dom Sparkes, Trust and Safety Director for EMEA, and David Elliott, Head of Technology, try to lighten the mood by discussing how to make a compelling business case for online safety and the importance of measuring ROI.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So you would have seen this week, Mike, that Jeff Bezos, Amazon's founder and The Washington Post owner, has been in the news for not, not some great reasons. And, uh, in his honor, you know, to kind of herald the fact that he's lost so many subscribers in such a short space of time, we're going to use the prompt that comes up on the Amazon app to begin today's episode. So I want you to search or ask a question.

Mike Masnick:

Oh man. Uh, well, in honor of Mr. Bezos, who I wrote about a few times this past week over his decision to, uh, be the cowardly lion of this particular election, I am searching for some hope as we head into election season that, uh, I don't know, man, this is, this is a stressful one. So

Ben Whitelaw:

Yeah.

Mike Masnick:

about you? What do you have? Are you searching or asking any questions today?

Ben Whitelaw:

Well, I feel for you. I feel for all of our US listeners and anybody who's got an investment in the kind of election next week. But my question, you know, that I'm wrestling with is, how much actually should we be kind of thinking about startup culture and the focus that it has on growth when we're thinking about trust and safety? Because the story we've got today, I think, really puts that in the spotlight. Hello, and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech and content moderation. It's November the 1st, 2024. And this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund and by today's sponsor Concentrix, the technology and services leader driving trust and safety and content moderation globally. My name is Ben Whitelaw, and I'm back in the hot seat with my co-host Mike Masnick of Techdirt. Mike, how are you doing? How was your week off?

Mike Masnick:

Oh, week off. I didn't have a week off, Ben. I don't know if you noticed, but I was wrangling the robots to see if they could take over our job.

Ben Whitelaw:

Yeah, actually I did listen. Uh, I did listen. I was kind of on the beach, obviously chilling out.

Mike Masnick:

Nice. Nice.

Ben Whitelaw:

no, I was, I saw that you had this experiment that you ran with, with Notebook LM.

Mike Masnick:

Yeah. Yeah.

Ben Whitelaw:

went pretty well.

Mike Masnick:

Yeah. I thought, you know, I thought the reactions we got were pretty interesting. I mean, people I think were pretty impressed by the quality of it and that it actually sounded like two human beings having a discussion. And so that was the cool part about it. And I, I sort of discussed it in the intro last week that I recorded. It is really impressive when you look at it. It's only when you start to, like, dig deeper into the quality of the discussion that you see it's just unable to actually drive insight. But as a summary machine, and a human-sounding summary machine, it's really impressive. And so I, I really enjoyed it, it was fun. And just the experiment on my part of trying to make it, trying to force it to be insightful was, was a really interesting experiment. And I said this last week, I said, you know, it probably would have been faster for you and me to just get on, on the microphones and talk about stuff than trying to teach the AI not to be so boring and, you know, just sort of so middle-of-the-road, right? Like it just didn't, you know, wouldn't go deeper on things. And so that's kind of the restrictions that you have with an AI model.

Ben Whitelaw:

right. I mean, it's put a lot of pressure on us this week. I'll be honest because, because, you know, there will be some listeners who like maybe, maybe miss those folks

Mike Masnick:

Yeah. That's right.

Ben Whitelaw:

are sad that we're back. But I'm, I am glad fundamentally that you, having conducted that experiment, didn't think that actually you were out of a job, that you, you know, that you did that and thought, actually, there's merit in Ben being here.

Mike Masnick:

Yes. Yeah. Yeah. That's right. It could, I could have set it up so that from now on, it's me talking with an AI, you know,

Ben Whitelaw:

Yeah, exactly. Yeah, exactly. So

Mike Masnick:

half human, half robot.

Ben Whitelaw:

I'll try and bring my A game today. Don't worry

Mike Masnick:

There we go.

Ben Whitelaw:

Um, so we, you know, you mentioned the election. We're going to cover a number of stories this week that have been published about trust and safety in relation to the election, and one big story that's been rumbling on for the best part of two weeks now. But before we start, it's worth mentioning, we have a really good chat with the folks at Concentrix at the end of today's discussion about how to get the best, the most ROI from your trust and safety strategy, how to think about, you know, creating a compelling business case for trust and safety, which is going to be increasingly difficult as we go forward, as teams shrink and obviously depending on what happens in the election next week. So, really timely conversation, but we should jump straight in, Mike. There's no time to waste today, as we know. So we're going to start with the election and talk about the absolute kind of deluge of stories related to next week's vote. Try and pick your way through what you read this week. And for, for listeners who obviously can't see Mike right now, he has his hands over his eyes and is kind of bowed in quite a somber manner. So, um, just have that image in mind as you hear him talk through the coverage this week.

Mike Masnick:

Yeah. Well, I was, I was right before we started, I was counting up, I think I have nine different election misinformation related stories that I, I sort of wanted to all try and weave together because I think they're all sort of related and important, and it feels like everyone's trying to cover the story from slightly different angles and there's some really great journalism going on. But the summary is: gosh, what a mess. I, I, I mean, obviously, like, the election itself is a mess and I've expressed that already, and we'll see what happens, but everyone is sort of, you know, on edge. But I think I'm sort of going to try and bucket these nine stories into two different buckets. Um, the first is a whole bunch of, there was a bunch of reporting that was done on advertisements, particularly on Meta, because Meta has this ability for people to look at, because they publish all of their political advertising. And so reporters have been digging in on the political advertising front and seeing what is going on on the platform. And the summary is, it is a complete disaster. And I don't know exactly how much trust and safety resources Meta are putting towards this, but one of the interesting things is, as you look at some of these different articles, you begin to realize how big a challenge it is. So this is both, somewhat, a criticism of Meta and how they're handling advertising on the platform related to the election, at a time when Mark Zuckerberg has come out and said that he is done with politics and he is doing nothing with politics. And yet his platform is being used for all sorts of political nonsense going on right now. So not engaging in politics is politics and is a political decision. And we see that in terms of what is happening on the advertising platform. But because you have all these different reporters looking at the collection of ads and you're seeing all of these different things happening, it's kind of interesting to see all of the different challenges, because they're coming in from all different directions. So ProPublica had a big story about deceptive political ads exploiting Meta's weaknesses, but that one is all about just outright scams. These are fraudsters and hucksters and grifters who are using political ads to try and trick people into all sorts of things, you know, buying nonsense, signing up for lists. You know, there was one where they were selling, like, probably cheap knockoff Trump coins. But when the guy bought them, it actually turned out he was signing up for, like, a monthly subscription. So, like, every month he's being charged 30 bucks that he didn't even realize. And, just like, you know, there was another one who had, like, set up a business in the name of his girlfriend at the time, and it's just a totally fraudulent business, but they're doing all these political ads and she's getting blamed for it. There's just, like, outright scammers trying to trick people, often using sort of political theming and imagery. And you look at that and you say, wow, you know, that's kind of crazy. But at the same time, then you have Forbes, which had a really wonderful article about how Facebook took more than $1 million for ads showing election lies. And they were looking at people who were misrepresenting things about the election. And there were questions about how much money Facebook is making.
Obviously the headline is talking about the, the million dollars, but then there's a question of, like, who is behind some of these ads and whether or not they're being done for political purposes or for scamming purposes or for what. You know, a lot of them do look like they're people trying to influence the election through misinformation, and Meta's policies, in terms of whether or not they actually violate their policies, seem pretty loose. Like, some of them are getting taken down, ProPublica found the same thing, after they reported stuff, Meta took down some of the ads, but other ads, they say, oh, these are actually fine, we're trying to be as hands off as possible on some of this stuff, and yet some of it's outright election misinformation, um, not always entirely clear from whom or how. Then I'm going to move on. Like, I'm going through these, these rapid fire because the,

Ben Whitelaw:

Keep 'em coming. Keep 'em coming.

Mike Masnick:

The Washington Post had a really good article saying these look like Harris ads, but it was actually Trump backers who bought them. And this was a case where you have a political action committee set up by Elon Musk and funded by Elon Musk that has bought all of these Facebook ads. Interesting. You know, Facebook ads, even though it's Elon Musk, the owner of X; it's possible they're buying ads elsewhere also. They've set up this whole, like, fake Harris campaign page called Progress 2028. They're sort of playing off of, you know, there's this Project 2025 thing, which was set up by the Heritage Foundation and is sort of seen as Donald Trump's plan for the election, though he insists that it is not. But a lot of the people who wrote it were former Trump staffers and everything, and everyone kind of expects that they will go into the government if he wins next week. And so this completely fabricated, fictional Progress 2028 is sort of trying to be the flip side, even though it's entirely made up by right-wing interests. And it's just full of, like, extreme, stereotypical, made-up nonsense that people are claiming a future Harris administration wants to

Ben Whitelaw:

Is that just to discredit her? Is

Mike Masnick:

Yes. Yeah. It's basically trying to discredit her. And, you know, I don't want to get political, but there's this argument that goes around that she's, like, this extreme left-wing socialist, communist, whatever you want to say, which is utter nonsense. Um, but so they're trying to present it. So they're creating this fake plan that suggests, like, these sort of massive amounts of government control. They get into online censorship as, like, one of the big planks on this, this Progress 2028, again, like a made-up conspiracy theory thing that, that people are pushing. And then they're creating these ads that are designed to look like Harris ads. And in fact, they're using the targeting features to customize them in certain ways. So, like, you know, when targeting people that they think are likely Jewish, they're presenting one position on Israel and Palestine. And when they're targeting people who they think are likely to be Muslim, they're presenting the totally opposite position in terms of Israel and Palestine. And so, it's very, very scammy and very, very concerning. And yet they're using these ads. And I would guess that the ads are probably showing up on X as well, but we don't know that. Meta at least releases this library. There's something to be said about that. You know, you have this sort of looking-for-lost-keys-under-the-lamppost concept, you know, so, like, we're all seeing what's happening on Meta because Meta thankfully is transparent about the ads that show up, but, you know, we don't know what's happening on other platforms. Now, the other element in all of this, and this came up in a couple of other stories

Ben Whitelaw:

You've been

Mike Masnick:

that, Yeah, I've been busy. Uh, so the New Yorker had a really interesting story on basically, like, US government folks who are trying to counter election interference, but it has a really interesting look at different attacks on the US system and how Russia, China, and Iran in particular are trying to, to influence the election, often through social media, including social media ads. The New York Times also had a similar piece on this. Again, lots of links, how Russia, China, and Iran are interfering in the presidential election. And the general summary is that, you know, they sort of saw what happened, whether it worked or not, in 2016 with Russia trying to use ads and trolling operations, and they've all gone all in on it and are using every, every path they can to try and influence the election or just to, to sow chaos. And so all of this combined gives you this picture of a very chaotic election, and against the backdrop of, over the last four years, as we've reported multiple times this year, there's been a concerted effort, especially among Republicans and sort of the, the MAGA world, to argue that any attempt to call out disinformation and election interference from foreign players and everything is a form of election interference and censorship itself, which has caused a lot of players to sort of back down. And it's sort of cleared the field for this, right? And so you're in this world now where there is this free-for-all, where the companies are sort of taking a step back. Zuckerberg says he's not going to get involved in politics. YouTube says they're not going to police certain kinds of content anymore. All of these things are happening and now everyone is rushing in and taking advantage of it. And some of it is, depending on how you want to call it, like, legitimate political actors, if you can call, uh, an Elon Musk-funded PAC a legitimate political actor. Some of it is just outright grifters and fraudsters. Some of it is nation-states trying to influence the election. All of this is happening at once. And it is just, as you can hear from me going through these stories, it's, it is impossible to consider how much chaos is involved. And so to some extent, it's, like, easy to look at the Facebook ad side of it and say, like, how could they allow this to happen? But at the same time, you're like, they're also having to deal with all of these different things and these different influences. And how do you sort out which ones are which and which ones are legitimate and which ones are not? 'Cause there are also legitimate advertisements going on as well.

Ben Whitelaw:

Yeah.

Mike Masnick:

And, and just fighting this force. And then, I'm not letting you talk, Ben, because I have too much to say. I will let you speak in a second. There's all of this other stuff, and this is not even, this is all on just the advertising side. And we didn't even get to the fact that there's, like, so much disinformation happening directly on the platforms. And here I'm just going to focus briefly on, there were a few stories this week on Elon directly turning X into not just a full-time, 100 percent propaganda program for Donald Trump, which he is supporting and clearly has said he is supporting, but he's set up this election integrity group on X. And I find this funny because, you know, all the social media companies felt compelled to set up election integrity efforts following 2016. Some of them had, you know, some stuff before, but it was a big deal. After the 2016 election, everyone was taking election integrity seriously. And it was about, how do we fight all these things? How do we deal with all these things? And as some of these other companies are taking a step back from actually doing that kind of work, X is using that name, election integrity, to set up this group, which is 100 percent nonsense. It's just conspiracy theory after conspiracy theory, and it's, it's flooding the system. And there was a, a CNN article. Again, I'm not going to stop with links. There was a CNN article about how election officials are outmatched. And there were, you know, there were Republican election officials in Arizona who are saying they're getting death threats. There's one, uh, Stephen Richer, an election official in Maricopa County, which is sort of, uh, a Republican stronghold for the most part, who's saying he's trying to get notices, he was trying to have a friend hand-deliver a letter to Elon Musk to ask him to knock it off because he's facing death threats.

Ben Whitelaw:

Wow.

Mike Masnick:

It's, you know, the entire world is upside down in terms of everything that we've talked about in terms of like disinformation, where you now have like the owner of a platform who is actively engaging in spreading and increasing conspiracy theories, one more link in all this was the, the, uh, I see your face. I know you want to talk,

Ben Whitelaw:

Yeah.

Mike Masnick:

what, and tried to not follow any political stuff, tried to express that he wasn't interested in political stuff at all, and immediately was shown election information, Elon Musk conspiracy theory nonsense, his entire feed filled with that. It is impossible to avoid on that platform right now. And so we're in this, this zone. I mean, obviously, you know, we're a few days out from the election and it is just nonstop craziness and, and, full of

Ben Whitelaw:

Oh, what's up? Yeah, and I feel for you, I feel for you, having to have read those articles and then regurgitate them like that. Um, yeah, I want to get your kind of thoughts on, like, what are the most worrisome parts of that, because there are so many kind of pieces at play. But I mean, I guess the thing that sits across all of those different stories that you mentioned there is the fact that the election just puts so much pressure on platforms and so much kind of pressure on trust and safety teams, and affords all of these different actors that you mentioned, you know, the kind of grifters and the foreign, you know, meddlers and the kind of other discredited activists who purport to be somebody that they're not, they all provide cover. You know, they get cover given to them by the election because there is so much going on and there's so much attention on it and teams and platforms are stretched. That for me is the kind of, as you quite neatly summarized, overwhelming feeling: in some ways, there's no, you know, there's no other way of going about this. We just have to kind of almost get through it. Is that your sense, is it just, like, get through it? Or could we have predicted that this was going to happen and, two, three years ago after the last election, put a plan in place?

Mike Masnick:

Yeah. I mean, what's the saying? There's no way out, but through, I mean, I think we sort of have to go through it, but, this is kind of a reflection of what has really happened over the last four years. And in particular, I mean, I mentioned it. Back there that, the attacks mainly by Republicans on all of this, on trust and safety, on content moderation, on research around disinformation, that has been a concerted effort to create this kind of chaos, and now we're seeing it.

Ben Whitelaw:

Yeah.

Mike Masnick:

we'll see what happens over the next week, the next few months. It's obviously very much up in the air and very much uncertain, but a lot of it is because of these attacks. Like, right now would be the right time. Like, if we had, and I know that there are still people fighting the fight, I don't want to say that the entire field has been abandoned. I know that there are people within these companies who take election integrity really, really seriously. Not at X, but, but at other companies. Um, and I know that there are researchers who are still working on this and doing a really valiant effort in trying to prevent the worst things from happening and prevent misinformation from creating real harms, because that's the fear, that, you know, there is a real risk of violence. It is not certain, but there is a real risk that the misinformation and disinformation that is being pushed by any of these actors, you know, whether it's intentionally or not, is going to lead to violence. And that is incredibly scary. And it's, you know, not the kind of thing that we should be dealing with. And I don't, I, you know, yes, in a sane and thoughtful world, people would be preparing for this and there would be plans in place. And I, I sort of skipped over it really quickly, the New Yorker piece about the government agency that is tracking the foreign interference, it's the Foreign Malign Influence Center. Yeah. They seem to be doing really good work, but to what end, right? You know, if they alert social media companies to things that are happening, you know, the Republicans are going to claim it's election interference and censorship, and it's like, that's crazy, right? I mean, this is what the government should be doing. The government is supposed to be, if they find out that there's foreign influence trying to, to undermine an election, we want them to alert the people who need to know about it. And we've built this world in which an unfortunately large number of people believe that this is government censorship and interference in elections when it's, it's the opposite, it's trying to stop that. And so, yeah, I'm, I'm sort of exasperated, as I think you can hear in my voice, because what a terrible world that we've ended up in through all of this. And I don't, I don't have a good answer to it.

Ben Whitelaw:

No, no. And just this, the example you gave there of, of that Elon Musk-funded PAC setting up its own kind of community within X to talk about election integrity, inverted commas, and to kind of flag issues is of itself just a kind of microcosm of it. I was looking at, looking at it here, they've managed to nab the handle @America.

Mike Masnick:

Oh yeah. He's, he stole it. He, he's, I mean, it's his site, right? So he can do that, but he took it away from someone.

Ben Whitelaw:

Yeah.

Mike Masnick:

and I had written about this about a month ago when it happened. He just took the @America handle from this guy who had been actually critical of Elon Musk and of Donald Trump. And, and he, he just took the handle from that guy. I mean, he can do it. It's his site, but, like, I don't know. It just gives you a sense of how much he's willing to just use this platform that he bought to his political ends.

Ben Whitelaw:

Yeah. Yeah. And one of the fascinating things, to be honest, about content moderation and online speech is the way in which it butts up against politics and power. And, you know, that's always what's kind of fascinated me. That's why I started writing Everything in Moderation in kind of 2018, because you could see even back then that things were taking shape. But this is the most kind of clear and kind of obvious version of that, right, where you have speech being manipulated for the purpose of kind of political gains. And, I think I want to go back to wherever we were before.

Mike Masnick:

Yeah. I mean, you know, but there's, again, there's no way out but through. But, like, the one thing I'll note is I've seen a few people saying that this is why he bought Twitter in the first place. And I don't think that's true. I think it's just a happy accident for him. You know, I don't think he really expected to get involved, all that involved, in politics, even if it's come out that his sort of shift to the right wing predated when most people thought it did, and he was already there when he decided to buy it. But there isn't any indication that he was really looking at purchasing Twitter as, like, a political vehicle. But, you know, once he decided to go all in on politics, it became the perfect vehicle and he could pull all these levers. And, you know, what's crazy about it, and I wrote a story about it, I'm not including this link in the show notes, but I wrote a story today about just how crazy it is, if you look at what all the platforms, Twitter, Facebook, YouTube, were being accused of for the last eight years, honestly, where they really were not being particularly political and were really trying to be as unbiased and centrist as possible in how they handle all these things. And they were getting screamed at and they were being called before hearings every few months, and the Republicans filed a lawsuit against Google claiming that because spam filters were filtering more Republican emails, it was election interference. Just, just last week, the attorney general from Missouri, who is just absolutely crazy, said he's starting an investigation into Google because search results put Donald Trump further down than Kamala Harris's results, which is nonsense. You know, so they're claiming all this bias and then, like, what Elon

Ben Whitelaw:

he's seen all

Mike Masnick:

is doing with X not just goes beyond what they claimed these other platforms were doing, it's so far beyond. It's, it's like he's doing all of that and more, and the other platforms weren't. And yet the narrative for a lot of people is still that all of these other platforms were trying to undermine Trump when they weren't. And they're sort of giving Elon a pass when he is literally doing way worse than anything they even accused these platforms of. And it's like, I understand, like, hypocrisy is just, like, the nature of politics right now, but it, it is stunning and I feel like it needs to be called out.

Ben Whitelaw:

Yeah, no, you're right. And, I expect we're going to talk about this at length next week, Mike, if we, if we manage to get to next week and next Friday and who knows. So I reckon we kind of draw a line under that there, but

Mike Masnick:

Yeah. Yeah.

Ben Whitelaw:

from what you've shared, you know, it is, it is a mess and, you know, for all our listeners who are based in the States and who kind of are voting, you know, my heart goes out to you and I hope it all goes well. Um, it's a big week for everyone. Um, okay. So onto a different story now, but one that's no less.

Mike Masnick:

It's not getting any lighter.

Ben Whitelaw:

No, I'll be honest. Yeah. Um, and so we're going to talk a bit now about suicide. So a kind of bit of a warning upfront that we're going to be talking about references to suicide. Many of you will have heard about this story that has been in the news for a couple of weeks now, around Character AI. We didn't record last week, so we're going to talk a bit about what's happened and also some kind of minor updates that have emerged this week, and a little bit about what I certainly think, you know, why this is interesting and what we should be kind of questioning around this story. So, just for those who are catching up, Character AI, if you don't know, is an AI tool that allows you, as a user, to create kind of digital versions of people, and you can interact with those in the form of a kind of chatbot. There are many, many millions of these characters. You can build your own, you can kind of use others on the platform, and it has become very popular over the last few years since it was founded. So there's around, I think back in January at least, 20-odd million users, some 16 million bots. It's a big platform. Uh, it was founded by a couple of folks from Google, some actually really eminent AI researchers, one who was involved in the creation of the transformer back in 2017, which is kind of the piece of tech that sits under the LLMs that everyone kind of knows a bit about now, and also the guy who created the kind of initial version of LaMDA, which is kind of Google's initial AI product. So two really eminent AI researchers and, and kind of creators. They built Character AI. Over the last couple of weeks, it's become known for being involved in a story about, about the death, the suicide, of a 14-year-old called Sewell Setzer, who developed, over a period of time, a relationship with one of the bots. Um, The New York Times reported that he would spend a lot of time talking to the bot, telling it his feelings, developing a relationship. It sometimes became romantic. And this bot was actually a kind of character from Game of Thrones and basically became his friend, and they developed a kind of friendship, and the kind of interaction between the bot and this young teenager ended up causing him to commit suicide with his father's gun. And this is obviously a story that has been widely reported because it's very emotive. It's something that everyone can kind of understand. And there's been a court filing brought against Google for that, which I know you kind of have some thoughts around, Mike. So before we get into kind of, I guess, what's interesting about this story, like, what did you make of this when, when this story broke and, like,

Mike Masnick:

Hey

Ben Whitelaw:

talk a bit about the kind of, your thoughts on, on the actual kind of filing itself.

Mike Masnick:

Yeah, I mean, obviously a very, very tragic situation and horrifying and, in all sorts of ways, very sad. It reminded me somewhat, you know, we had had some conversation around this when we talked about the sextortion scams earlier this year, an article around that. But, um, something of a similar reaction in some ways to some of the things I thought about in that story, where, you know, what is very much brushed over in the story is the fact that this kid had access to a gun. And that seems to just, you know, everybody ignores that part of it. And yet, over and over again, we have found, and there have been studies going back quite some time, that access to a gun is one of the leading reasons why suicide happens. And yet everyone brushes over that and is quick to blame the technology. And I'm not saying that the technology is blameless here, but there are other factors. And I, I find the lawsuit itself to be not particularly compelling. I know some people have read the lawsuit and found it very compelling. Um, I think the lawsuit is really very poorly done. It clearly is taking things completely out of context. It is presenting stuff in a way that is, I think, unfairly presented. And it also completely brushes over the, the access to the weapon. It sort of says, well, it was stored in accordance with Florida law, which is, like, basically its attempt to say, like, well, you know, we did fine. Uh, not that Florida's gun laws are particularly good. Um, and, um, and it feels like,

Ben Whitelaw:

this is always very

Mike Masnick:

very tricky to talk about without sounding callous. And so I want to be very careful here and, again, point out that this is a horrible and tragic situation. One of the issues with suicide is that it is, it is reasonable and understandable to look for who do we blame when someone decides to take their own life. And that is often a really, really complicated thing. And the easy answers are almost never there. And there was a story many years ago that I had covered, and it made a lot of news, around an attempt to hold a mother legally liable for a fake account that she had helped create to get back at one of her daughter's former friends. Uh, yeah, it was a big story and there was a big back and forth. And there was this whole attempt to hold the mother liable. And there were attempts to, like, claim she had, like, violated, it was over MySpace, the MySpace terms of service. And there was all of this nonsense. And one of the points that was driven home to me at that time by some psychologists who work on issues around and related to suicide is that any attempt to claim after the fact that you can definitively say, this is why someone decided to end their own life, is almost always wrong, because it is almost always more complicated than that. And you run into really serious problems when you say, we can easily determine what is the cause of the suicide and then hold that entity liable, whether it's a person or whether it's a company. There are potentially some exceptions to that, but I don't think we reached that level here. You know, the, the New York Times article about this and the lawsuit indicate that this child was already having some other issues, he was diagnosed with anxiety and depression and was not necessarily fully being treated for those things at the time. There are things in the lawsuit about how, you know, when he turned 14, he suddenly became more withdrawn from his parents and more focused on other issues. And that happens to a lot of 14-year-olds. Um, you know, that is part of teenagerhood. Um, and so how much of it can you actually blame on Character AI and how much not? And I, I've read some of the transcripts and I think, you know, some of the transcripts are taken a little bit out of context as well, where they're really trying to make this case that the AI bot was, like, encouraging him to hurt himself, and it's not as clear as they make it out to be in the lawsuit and in the article. And again, like, I don't want to sound heartless about it, and I know that that's how this can come off, but it is one of these situations where it is a lot more complicated, and human reality is complicated. And yes, it is absolutely tragic that he ended up taking his own life, but I just feel like this rush to sort of see who can we blame for it and how do we sue this company is misguided.

Ben Whitelaw:

Yeah. I mean, the transcripts are, very, very impactful. You know, this, this character bot says, he actually says you shouldn't talk like that. You know, when he, when he talks about kind of harming himself and freeing himself from the world, but the bot actually says, don't talk like that. I won't let you hurt yourself.

Mike Masnick:

yes.

Ben Whitelaw:

And so obviously, you know, you can make a case either way, I think, as you're pointing out. But I guess child safety advocates would say that actually the fact that the bot was kind of prompting him to even get to that point, rather than go to a parent or go to somebody else, is, I think, where many people are compelled by this story. And I want to kind of just spend a bit of time, and I think, you know, we talked about that before, we could go back and forth on that. I want to spend a bit of time talking a bit about, you know, some of the other parts at play here, because Character AI has obviously subsequently come out and said that it's doing some work to improve safety on the platform. It has removed a number of bots, a number of the kind of characters. It has taken away the chat of some of the existing bots, which has really upset a number of users who pay for the platform. So their, their whole histories with all of these characters have disappeared. And there's a whole thread on, on Reddit about how upset those folks are. Do you get the sense that Character AI is, is doing much about this, or is kind of responding in the way that you'd like? And we should say at this point as well, we had the kind of head of content policy at Character AI on the podcast a matter of weeks ago, Catherine Weems, before she joined the company. So she was co-host on the podcast with me before she started her new job. So, you know,

Mike Masnick:

What a time to join.

Ben Whitelaw:

what a time to join. I haven't spoken to her since. She is a friend of Everything in Moderation, I really respect her work, but we haven't spoken. So I'm, you know, we're talking about this story without knowing much of the details,

Mike Masnick:

Yeah, Yeah,

Ben Whitelaw:

in the way that you'd like,

Mike Masnick:

I mean, from what they've said, you know, it'll be interesting to see what is actually implemented. Obviously, you know, bringing in Catherine is a good step; hopefully that leads to real things. I think there are clearly, and what does come out of it is that there were clearly more steps that could have been taken, particularly in terms of, like, intervention issues. Now, this is also tricky in lots of ways. And it's easy to say in retrospect, well, of course they should have had something that recognizes if anyone starts talking about self-harm or suicide in any way and immediately directs them towards help. That is also tricky in some ways, because, one, there are a whole bunch of different potentially bad things that could happen, and how do you train a bot to handle every bad thing, because other bad things are going to slip through. Though there is the argument that things around self-harm and suicide and eating disorders are such obvious ones and such major ones that obviously you should plan ahead for that. But then there's the question of how do you do that in a way that will be effective? And this is where there is research being done. And I, I've read some of the research and I've spoken to some of the researchers and I've spoken to some of the psychologists who have been, been looking into these things, and there is this belief that one of the most helpful ways of dealing with people who are, are having thoughts of self-harm is to meet them where they're at. And that often involves being very careful about how you suggest interventions, but suggesting interventions. So, you know, figuring out ways to get people to help, but if you jump in in a way that is seen as, as scolding or sort of shaming people, that doesn't always help. Obviously, like, different people are different. And so some of that is true, too. Like, different people react different ways to different kinds of interventions. And so, figuring out a way to, and I, I had heard an interview with somebody who runs a different, a, a competitor to Character AI, who was talking about the potential of there being problems if the intervention is done in an inauthentic way, that it just turns people off. You can look at it in different ways. Some people say, well, they're just, you know, concerned about turning people off from the product and losing business or whatever, but there is also the thing of, if the intervention is done poorly, it just drives people away, not towards help, but potentially in the other direction. Figuring out how to do interventions well is incredibly important

Ben Whitelaw:

Yeah.

Mike Masnick:

and hopefully Character AI is getting there now.

Ben Whitelaw:

Yeah, I think you're right. I mean, I want to talk a bit about other interventions outside of the ones that we use to kind of stop a user from becoming unsafe. When I was reading this story, Mike, I was thinking about the moments where the company could have avoided this situation, right? And there's three that come to mind. I just want to kind of briefly talk you through them and then get your thoughts. So, so the first is, like, Character AI was founded in 2021 by these two Googlers, um, who left the company because they actually said they didn't think that the company was capable of doing anything fun with AI. Okay. So they've got funding to do something fun, and that is Character AI. It took them over two years to hire somebody who did trust and safety. That's the person who, who brought Catherine in. And so the first thing for me is, like, why did it take so long? Why did it take over two years? They had more than 20 million users. They had more than 60 million bots. There must've been some trust and safety issues that came up where somebody was probably having to kind of do it in addition to their actual job. That for me is, like, chance for intervention number one. The second is when it received investment. Andreessen Horowitz invested in March 2023, so kind of 18 months ago, roughly. And it got me thinking, there must've been some kind of due diligence about that purchase, about the investment, sorry. And, you know, I would have loved the kind of legal firm involved in that, or parties on either side, to have some sort of questions about what safety measures are in place. If we're going to kind of invest in this company, can we be sure that it's not going to cause us, or the users who we are essentially kind of investing in, any undue harms? That's number two. And then the third one is actually when Alphabet slash Google semi-acquired slash invested, literally three months ago. So they, they, they basically bought back the Character AI team, the two Googlers who used to work at the company, as we've seen a number of tech platforms do with other smaller startups. We would know that there was a legal advisory firm involved in that. Why was the question not asked about, is this team staffed up? Do we have the necessary skills and expertise when it comes to trust and safety for this to be a responsible acquisition, whatever you want to call it? Um, those are three moments where I just think, like, more could have been done. There could have been more questions asked. And unfortunately the incentives are not in place for any of those parties to really do that. But it just seems like, from an investment perspective, from a founder perspective, from a bigger-company-acquiring-a-smaller-company perspective, those are the people who get the chance to make big calls about trust and safety. And then the outcomes of that trickle down to trust and safety professionals, some of whom are listening to this podcast, and then they're having to deal with the consequences of that. Is that an unfair kind of characterization? Is it, is it a bit pie in the sky to presume that those questions should have been asked?

Mike Masnick:

I mean, I think this, this is a much larger question, and it has to do with kind of, like, how the industry views trust and safety. And the argument that I've been making for years is that they need to view trust and safety as a marketing thing, in terms of making a platform better, because that's how you, that's how you avoid problems like this. That's how you don't get the awful headlines, you don't get the lawsuits, and you have just a better platform that people enjoy using more and more. Unfortunately, in part because of what we're talking about in the first part, the attacks on trust and safety as if it is this awful, like, paternalistic, you know, nanny state kind of thing,

Ben Whitelaw:

Compliance function,

Mike Masnick:

Compliance function, exactly. It becomes a selfless thing. And you mentioned Andreessen Horowitz invested in Character AI. Marc Andreessen last year wrote this big techno-optimist manifesto, which included the line that trust and safety is the enemy of progress.

Ben Whitelaw:

Yeah.

Mike Masnick:

So no, they don't care. Right. I mean, Marc Andreessen has made it 100 percent clear that he thinks trust and safety is anti-progress.

Ben Whitelaw:

And how, how much do you think that would have shaped a startup's kind of approach to trust and safety

Mike Masnick:

Yeah. Oh, absolutely. You know, and, and, like, a lot of people look up to Marc, and I'm sort of stunned because, you know, I don't know Marc well, but I've, I've interacted with him a few times. And historically he struck me as much more thoughtful on this stuff. He is still on the board of Meta. You know, he has to have been exposed to the necessity and importance of trust and safety. And yet he seems to have gone in the other direction and really dug into this idea that trust and safety is this, like, crazy woke idea that is, is problematic. And, I had written a response to his techno-optimist manifesto last year, where I sort of pointed out, like, you have to have a long-term view and you have to think about, if you want to be a techno-optimist, you don't want to blow up the world, right? You don't want to destroy society in the process. And so taking a few steps earlier on to say, how do we develop these things safely? How do we not have horrible things happen that lead to lawsuits and regulation and compliance and, and horrible headlines? You take, put some effort up front into doing these things more carefully. You're not going to get everything right. There are always going to be mistakes that come through and there are going to be problems along the way, but actually make a real effort up front and recognize that trust and safety up top saves you from all of these problems down below. And, and, you know, he's in his own world, and that filters down. As much as people look up to Elon Musk, they also look up to Marc Andreessen. Both of them are seen as sort of leading lights, and what they say among entrepreneurs and engineers right now is, is seen as gospel. And that can be really problematic.

Ben Whitelaw:

Yeah, I mean, the, yeah, the interventions that you mapped out there in terms of making it harder for users to get into a situation where they're unsafe, I think, is one thing. I'd like to see more interventions for companies as well. Like, I don't think it should be all about users. And I think a combination of the two is necessary if we're to avoid situations like this. So, I mean, I think we probably could add a few more stories to today's episode, Mike, but I think we've probably upset our listeners, quite frankly, enough this week. We've probably exacerbated the dread they feel about next

Mike Masnick:

Yeah,

Ben Whitelaw:

already. Um, so I, I feel like, we should kind of

Mike Masnick:

yeah, we'll see. I mean, I hope, I hope everything, turns out well in the next week and maybe we can revisit some of the other stories we had wanted to cover because I think there were, we had some other interesting stories, but we, you know, these ones we sort of really had to go deep on this week

Ben Whitelaw:

Yeah.

Mike Masnick:

you know, they're tough stories.

Ben Whitelaw:

Yeah. Um, so yeah, sorry, folks, if you've listened to the podcast and your head is now as low as Mike's was at the start of the episode and your hands are over your eyes as well, but

Mike Masnick:

But I will say, I will say as, as a nice segue again, that the bonus chat that we have after this is talking about, making the case for trust and safety. And, and a lot of what we talked about today in this episode was really about why, companies need to understand the, the business case for trust and safety, the long term benefits of trust and safety. And so, I think the conversation that we have after this gets at that.

Ben Whitelaw:

Yeah, you're listening now to Dom Sparkes and David Elliott, the Director of Trust and Safety for EMEA at Concentrix and the Head of Technology, who do a really good job of unpacking that. So hopefully, if we haven't left you with much hope, I hope they do. David, Dom, first of all, thanks very much for being on Ctrl-Alt-Speech. It's great to have you here. Appreciate you making the time. Yeah, so let's dive straight into the conversation. I want to really just understand a bit more about how you're thinking about return on investment of trust and safety, to kind of set the scene. We've, we've seen that there's a growing conversation around ROI and trust and safety over the last few years as platforms have assessed budgets and cut staff. Actually, at TrustCon it was actually one of the most oversubscribed sessions. So this is a really timely topic to be diving into. Let's start with how, how you're thinking about it and why the industry should be thinking about it more.

Dom Sparkes:

So for me, within my role at Concentrix, you know, solutions is, is what I do every day for our clients. And we'd all love it if trust and safety was just done for the right reasons. It needs to be there, obviously, for child safety, for user safety, et cetera. But there's a finance person somewhere along the line that needs to justify any kind of spend. So for me, over the last, blimey, 20 years of being in trust and safety, this has been a topic that has cropped up probably on a weekly basis. For a lot of organizations, I think these days, I think there needs to be a realization that, yeah, trust and safety has to be done in some shape or form. Pretty much every organization needs trust and safety in some shape or form, be it large or small. And I think increasingly the ROI of that isn't a perfect science. As we know, there's a lovely, beautiful, beautiful phrase in marketing, I think it was John Wanamaker, I think a hundred years ago, who said, half the money I spend on my advertising is wasted; I just don't know which half. And I think that quote sort of possibly applies to certain trust and safety elements. So I think it's really important. So you've got the obvious ROI in terms of things like growing users through having better services, those trust and safety ROI benefits that can improve profit. You've got those that are more cost avoidant, you know, making sure there's not brand reputation issues, regulatory fines, and those sorts of things. But the level to which they can be measured is really difficult, and I think we need a collective conversation with organizations within industry to work out what they are for different people. There isn't one answer. You know, I think we'd love to have one: this is the benefit of trust and safety ROI. And I have seen a, uh, a guy at Adevinta who created a metric, but I don't think it can be applied to every single one. So for me, this is a really important topic to help create better solutions for our clients, for our clients to understand what they're buying, how they buy it, how they use it, and increasingly to use those benefits and ROI metrics across organizations as well, not just within the department.

Ben Whitelaw:

Hmm. Interesting. We'll come back to that, I think. Um, David, from a technology perspective, how do you think about ROI? How does ROI, how can it be baked into the kind of technology services, that platforms and other intermediaries use?

David Elliott:

I dare say it might even be easier to justify ROI when you bring technology into the conversation. Companies have not just a regulatory obligation to ensure that trust and safety is part of their culture. Operating a model of, say, content moderators to ensure that their platforms are clean and they're not destroyed by bad actors, I'm sure they can do that with people, but that's going to be very costly and not necessarily as effective as using technology. I mean, I know that's bloody obvious, but with the explosion of gen AI, the bad actors have a wider army of weapons now to bypass and get past humans and get their narrative out: misinformation, disinformation, and new types of content that may not be hashed, for example, and then all of a sudden you have an explosion of new CSAM on a platform. So the technology you can justify as a weapon against the bad actors' weapons, it makes sense.

Ben Whitelaw:

So in terms of the different types of platforms that are out there, we've got obviously social platforms, marketplaces, dating apps. Is there a kind of single way that, that we can be thinking about ROI, or do different platforms have to think differently about that? Dom, why don't you have a go at that first?

Dom Sparkes:

I think it would be dangerous if people started thinking about it in the same way. Um, I think that could lead people down the wrong path, because it's got to be dependent on their goals. Obviously for certain platforms, you mentioned dating there, obviously there's a huge safety component without a doubt, and that, that needs to be looked after and that should be measured and monitored. It's always harder for platforms starting out, or trying to create ROI when they haven't done anything at all. So from day one, that's really hard. So I think if you're looking to create ROI metrics, mixing and matching the word ROI with benefits is the first thing I'd say. And for those industries to then start looking at, well, what is that for them? What can help their services? What creates a safe environment for their users? Safe environments, without doubt, improve businesses and improve bottom lines. How provable that is will take time, because you need something to compare against. But yeah, I don't believe any one metric or selection of industry standards can necessarily help us. That said, a collection of different things that could be applied and measured in different ways and different approaches can be incredibly useful across dating or gaming or retailers.

Ben Whitelaw:

And, and can you give a kind of good example of a platform that, that's thinking about ROI, or, or things that you've, you've really thought, okay, this company, this platform, this org has a clear sense of how they want to position ROI within their organization? Anything you've seen that strikes you? Mm-hmm.

Dom Sparkes:

Well, I have, I have seen one recently. It's more on the vendor side, actually, rather than an actual platform. So yeah, a company called Pasabi, who do, um, fraud and review detection, and they have created a rather lovely model for working out the ROI on application fraud. So if you use their platform, and I'm sure this would apply for other similar platforms as well, if you use their platform, then the amount it can save in terms of cost of fraud, it can show that by the investment you make in them, what it can do on the platform, uh, and then what that can save you,

Ben Whitelaw:

Mm.

Dom Sparkes:

that sort of stuff, that's really clear, and it's probably easier for tech providers to do ROI than, than service providers, as

Ben Whitelaw:

David, any other examples that come to mind?

David Elliott:

I do like Pasabi, Dom's example there, and what they're doing in the marketplace world, and trying to, uh, assign a metric in terms of ROI when it comes to fraud is interesting. There are other companies, uh, Arwen spring to mind, and they have, as part of their standard solution, I think, developed a way of measuring the impact of bad actors on your own social media channels and the impact that has on your reputation, and then that can be defined by a monetary value and a reputation value. They have built solutions like that into their social media moderation solution. So they spring to mind. There's, uh, players now, I don't know if you've seen in the past week, they've developed that appeals board for Europe, that would be driven out of Dublin. A slightly different conversation, but one could argue that there will be cost savings in terms of what they're doing with that appeals board: eradicating red tape and bureaucracy and getting to the cause of user frustration quicker, and for a nominal fee. Then that will have a positive impact on the social media channels that are signed up for it, initially Facebook, Twitter and YouTube, to make those platforms that bit more transparent, easier to use for the user, get rid of those cases and the impact that that has on cost and resources quicker, and that will have a positive impact on cost, which can be justified with a monetary value, which equates to a positive ROI. Those two examples spring to mind.

Ben Whitelaw:

Obviously, it goes back to your point about regulation often being the driver of further focus on ROI, and platforms being more careful with how they're spending their resources and being clear on what outcomes they're driving. How can businesses and people listening to the podcast, folks who are, are based in platforms or other intermediaries, who are making a case to their bosses for investment, how would you advise that they use kind of ROI metrics to be more compelling in doing so? What would you say to them? And you can use your own experience here. I'm sure you've both done it at Concentrix and other places of work too. Hmm.

David Elliott:

I'm working on a case at the moment, a very manual exercise of user-generated content on social media channels for an NGO. Essentially, what they're doing is any case of racism or a bad actor acting improperly on their own social media channels is manually moderated by a human. And I just think that this NGO, they haven't invested the time and effort into looking at whether or not this sort of process can be automated, and the mind boggles that in today's age, with the technology we live with, someone hasn't looked into this, and that's costing them a fortune. These are pretty highly skilled individuals that could be working on more paying activities outside of manual moderation, when 99 percent of what they're moderating could be outsourced to a platform. Their time is then freed up to do other paid activities. I mean, if that isn't a case in point: okay, the day it goes live, you've got a setup fee, you've got a license fee for the solution they use, but in truth, you look at the cost saving in six months' time, maybe even before that, and very quickly you see the return on investment of embracing a piece of technology for very obvious content that should be moderated. Bear in mind, Dom and I come from a BPO, a CX company; we live by bums on seats, and that's essentially, for want of a better way of putting it, I hope I don't get sacked for this, but that's essentially our product. But that doesn't mean that there isn't a need for humans to be involved, even when technology is able to automate 99 percent of the moderation needs for a platform, or for an NGO in this case. And that 1 percent is the tough bit; that's moderation that requires a human nuance and the understanding of colloquialisms, empathy. Language, for example, evolves. We need people to understand the new terms that come up to the fore of that day. And there will always be people involved in the moderation piece. There will always be, uh, unfortunately, the term I use is, you know, I hate it, a cost, a human cost, uh, involved in moderation, and technology will only do so, so much. But the human-in-the-loop element will remain, I think.

Ben Whitelaw:

Yeah. So, so the kind of advice there is to review almost all processes that are existing right now to see if you can get a deeper ROI, a clearer ROI. Um, which I think is a good one. You never, never stand still really. Dom, what, what do you think? Interesting.

Dom Sparkes:

I've got, um, a slightly different case where we're looking at helping a client with their trust and safety journey. And actually we're not starting with the trust and safety department, we're starting elsewhere. So if you imagine T&S is a red thread throughout this whole organization, you know, how can, or how does T&S impact the legal department? How does it impact IT? How does it impact wellbeing? And looking at all those bits first, before looking at the actual trust and safety department. And I think part of that is, we'll do a number of things: look at those synergies, where we can make sure their value is placed across the organizational chain. Hopefully that will bring T&S to life a little bit within the organization. So, you know, to be able to say to the legal department, if we do this right, you'll have less work to do and less risk. That should be a tick. If we can get it right for IT, there's gonna be better security. Tick. If, on the wellbeing side, you know, the teams are really well looked after, tick. And then after that, well, how are we gonna do it in terms of the moderation, that sort of stuff, because that's relatively easy,

Ben Whitelaw:

Yeah.

Dom Sparkes:

and if you can get buy-in from all those other departments first, it's almost much easier to buy. T&S is often considered like an insurance: no one wants to pay for insurance, but we have to. So how can we make it drive more value from day one?

Ben Whitelaw:

Yeah. Okay. So thinking about the different kind of matrix of metrics that you can draw upon from across the organization as part of that ROI case, good stuff. I mean, you have this workshop coming up. What is it you want people to take away from that? In a nutshell, what's the reason for coming along in Amsterdam and chatting about this?

Dom Sparkes:

I would love to garner the room's collective intelligence, to bring together different viewpoints and share those with different people. So if there's people in the room, you know, from the dating industry, they're going to have certain metrics, definitions, benefits that the retail industry could benefit from, and likewise the other way around. I think, like I said earlier on, I don't think one metric exists or should exist, actually; it's probably more and more. So I would love people to walk away and be able to go back to their organizations, be able to sell trust and safety a little bit more confidently, or, if they're a vendor, to be able to have better cases to sell their services to their, to potential clients.

Ben Whitelaw:

Brilliant. I think it's a great point to end on today. Dom, David, thanks so much for your time. Thanks for sharing those cases and your knowledge today. It's been great to have you here. And, uh, thanks for taking part in Ctrl-Alt-Speech.

Dom Sparkes:

Thank you.

David Elliott:

Thank you.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
