Ctrl-Alt-Speech

Live at TrustCon 2025

Mike Masnick, Alice Hunsberger & Ashkhen Kazaryan Season 1 Episode 67

Our second annual live at TrustCon recording of Ctrl-Alt-Speech! Ben was unable to make the trip halfway around the world, but Mike was joined by trust & safety influencer Alice Hunsberger from Musubi and Ashkhen Kazaryan, a Senior Legal Fellow at The Future of Free Speech at Vanderbilt University.

This week’s sponsor is Modulate. In our bonus chat Mike Masnick talks with Modulate founder and CEO Mike Pappas, live at TrustCon, about the kinds of voice scams they’re seeing, with a focus on scams using social engineering techniques to pressure people to do things they probably shouldn’t do.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick (Intro):

Hello. You are about to listen to the live recording of Ctrl-Alt-Speech at TrustCon 2025, but I just wanted to pop in really quickly before we go to the live recording to let you know a few quick things. First of all, our sponsor for this week is Modulate, so check out the absolutely fascinating bonus chat that I had at TrustCon with Mike Pappas, the founder and CEO of Modulate, which is at the end of this podcast. Please do not miss it. We talk about all the different kinds of voice scams that are happening today, including social engineering scams and how Modulate tries to detect those in real time. As someone who is obsessed with understanding how these scams work and all the tricks that the scammers use, it was one of the most interesting conversations I had at TrustCon, of all the conversations I had, and we actually got to record it, and now you get to listen to it as well. So that was really, really fun. Also, if you wanna see a video version of the live recording at TrustCon, we will be putting that up on YouTube, but it is going through a bit of an edit to make it look nice and, uh, do the multicam effects to focus on who is speaking at which time. So that probably won't be up for a few days, hopefully sometime next week or so from when this audio podcast is being released. But you should go check it out 'cause it was really fun, and it was fun to do it in front of a crowd and to see everyone. Also, another programming note: Ben and I are both traveling quite a lot through much of August, and so Ctrl-Alt-Speech will be going on a mini break, I guess, from our usual schedule and our usual format. But over the next few weeks we will be having a few special episodes dropped in here or there, so we will not let you miss us for too long. Uh, these will have a slightly different format, including some behind-the-scenes details about Ctrl-Alt-Speech, and those conversations have been really fun to record and really interesting, and we hope that you will enjoy them as well. But for now, here is Ctrl-Alt-Speech, live at TrustCon 2025.

Mike Masnick:

Hello, and welcome to Ctrl-Alt-Speech, live at TrustCon 2025. Woo. That was great. We should do this all the time. Uh, this is your weekly podcast on the news about online speech, trust and safety, internet regulation, and all things of that nature. Uh, folks who are regular listeners to the podcast will note that normally the intro is done by Ben in his wonderful British accent. Ben is not here, as you'll notice. Uh, we have talked about it on the podcast that he won't be here, but he did send in this picture of the reason why he is not here, which has Ben and his brand new, uh, lovely child wearing a onesie with the Ctrl-Alt-Speech logo on it, and it says, uh, "Just a baby" (this is for people who are listening to the podcast, I'm explaining the visual imagery) "but already into speech podcasts." So, uh, Ben cannot make it. He sends his apologies for not making it. He is definitely suffering from FOMO, based on the number of text messages I have received from him today. Uh, but instead of Ben, we have a lovely panel here. Um, I will switch Ben off the screen and back to our logo. Um, we were supposed to have, uh, four panelists. Unfortunately, one of our panelists had to drop out at the last minute. Uh, but I think we can fill the space just fine with who we have here. So,

Alice Hunsberger:

I'd say we'll do our best.

Mike Masnick:

You'll, you'll do fine. Uh, so, uh, again, I don't know if I said, I'm Mike Masnick, uh, from Techdirt and, uh, the Copia Institute, and we have Alice Hunsberger, who, again, everybody knows you here, right? Woohoo.

Alice Hunsberger:

If you don't know me, uh, you should subscribe to Everything in Moderation, which is Ben's newsletter, which he lets me write for. Um, so yeah. Yes. That's how you can learn more about who I am, if you don't know.

Mike Masnick:

And then at the end we have Ashkhen Kazaryan, and, um, now I feel like I have to say what you write, but you do everything. So do you wanna just give a quick intro on your current work?

Ashkhen Kazaryan:

Uh, I'm a First Amendment lawyer, which means I'm having a great time in the year of 2025. And most of my work focuses on digital free speech at The Future of Free Speech at Vanderbilt University.

Mike Masnick:

All right, so regular listeners of the podcast will know the way this works is that we generally have a discussion about things that have happened in the news this week. Um, we have picked out a bunch of articles that we have decided are interesting, where there are complicated things we don't have easy answers for; I think that will be the quick upfront summary of the things we are about to discuss. Uh, and we are going to try to make sure that towards the end of this we will have some time for user questions. There are microphones on the side, and so we will give you a little bit of a heads up and people can line up. There are very bright lights shining in our eyes, so we will try to be able to see you; my eyesight is not as good as it used to be, but, uh, hopefully we can see you and have you answer a question, uh, or ask a question. We will do the answering. Already I'm messed up with all the lights. All right. So, uh, we're gonna start with a story, or a couple of stories, that Ash, you found, um, having to do with our favorite property to talk about, uh, which is X, and, uh, the Grok AI that is on X. So do you wanna give a quick summary of, of the stories that you found?

Ashkhen Kazaryan:

Sure. I'll be very quick. Uh, one of the stories, from Al Jazeera, talks about how much Grok has been used by users for fact-checking. Um, and they mostly draw on two events in our history. One being the National Guard being in LA, and, uh, the governor of California, where we are, Newsom, uh, posting two pictures, and then a lot of users using Grok to see if the pictures were real, and Grok incorrectly saying that they weren't, and then the fallout from that. And then the second one had to do with Elon Musk making a statement about the president and the Epstein files, and the fallout from that. Um, and overall, I think the article talks about how, more and more, because Grok is integrated into a social media platform like X, um, users use it more and more for fact-checking and quick answers, and then rely on that, and the upsides and downsides of that.

Mike Masnick:

Yeah, and, and I thought this was really interesting, both of 'em in fact, because, you know, there's been all this talk for years about the importance of fact-checking, and like, you know, there's a lot of mis- and disinformation, as people at this conference are very well aware of, and we're always saying, like, oh, you know, better fact-checking is important, and here's a tool that in theory helps people fact-check. And then, you know, what the, the Al Jazeera article was noting was that it, it was, you know, giving wrong answers. It wasn't fact-checking. It was doing the opposite of it. But I also thought what was interesting was the fact that so many people are using the tool that way. The, the, like, "Grok, is it true?" response to any particular piece of information is becoming something that a lot of people are using. And I thought it was just sort of an interesting element that, that there's at least some element of people wanting to get a fact check, wanting to try and get some sort of confirmation or some way to know if what they're looking at is true. And I thought that was a positive, but the fact that Grok sometimes responds with nonsense, yeah, is a problem.

Ashkhen Kazaryan:

I just, I was looking for a number: between June 5th and June 12th, Grok was called on 2.3 million times, which is a wild number. I did not realize that. I see it all the time on my feed, and I judge people who do it, to be honest. But yeah, that's a number I did not imagine.

Mike Masnick:

Yeah. Do you have thoughts?

Alice Hunsberger:

Yeah, I mean, I, I always have thoughts. Uh, I think it speaks more widely, I mean, we all know, if we use AI every day like I do at my job, um, we know that you have to fact-check the AI, even if it's supposed to be fact-checking for you. Like, I always take this extra step to check. And I think this sort of speaks to a lack of media literacy and AI literacy generally, and I wonder, in the trust and safety space, what is our responsibility to try to teach people how to use these tools and understand what the limitations are, and how much is it something that, like, we as a society more generally have to do? And it's really not, like, a trust and safety issue so much as it is just new technology that people haven't figured out yet. And so I have a couple stories coming up next that also speak to a similar theme, so,

Ashkhen Kazaryan:

And as a First Amendment lawyer, uh, I also feel very con, you know, conflicted about this. But first and foremost, I think it's an additional tool that some of our institutions can use and hopefully perfect. And maybe it's not just Grok; maybe our social media platforms implement something like that. We now see community notes being the most popular way to do some of the content moderation around this. So I think it's just another tool in the toolbox as we keep adapting to a new information ecosystem. Um, and as long as it's not the government that tells Grok what to do, I'm okay with that.

Mike Masnick:

Well, uh, yeah, I mean, the government made some statements about AI today, um, which may not influence this, but I, I think it actually does raise a larger point, which is: who has control over the AI systems? Which is something that people don't think about. And I know, you know, with Grok in particular, I think there was a, a post that Elon made today, that I, I think it was today, it was one of these things that went across my feed, I haven't fact-checked it, I could be wrong. Uh, but where, um, someone was saying, it was like yelling at Elon, like, you have to get the bias out of Grok. And he said, we're working on it. And there's this element to all of this, which is, you know, who is controlling the tools that you're using for these purposes. And that's, I mean, that's always been an element of any of these things. Like when, when we had, you know, back when there was fact-checking on social media, and, uh, Meta had, you know, fact-checking (RIP, yes), uh, had, had their fact-checking programs in place, there were still questions about it, and people would say, well, you know, oh, but they would disagree with this fact check, you know, it depends on what context you're adding or, or how you're looking at it. But now we have these tools like, like Grok, and there's a question of who is in charge of the dials in terms of how those things are being tuned and what they're being tuned for. And obviously, you know, a few weeks ago, Grok was referring to itself as MechaHitler, which, you know, do you trust MechaHitler to be your accurate fact-checking thing? You know, hopefully not that many people in this audience, uh, would, would assume that that would be a good fact-checking thing. But I, I think there's this, this element of, like, it's one thing to say, okay, yes, you should be, um, checking on things, and you should be curious, and you should be trying to figure out, you know, is this a trustworthy source? And so it's good to see people, you know, trying to find that, but the fact that they're then relying on an untrustworthy source to determine whether or not the sources they're looking at are trustworthy leads to this sort of inception level of, of do your own research.

Ashkhen Kazaryan:

At the same time, I do wanna say, I don't think this is, like, a new problem. Maybe the scale is different just because of the internet, and its scale is humongous, but at the same time, 20 years ago, 10 years ago, if you were watching Fox News versus watching MSNBC, you were maybe getting a slightly different version of the events and description of the events. And the same applies to newspapers and book publishers. And the question of who your audience is, and where you go to get information, adjusts how you get that information and what's in that information. So I do wanna reiterate that this is not a new problem.

Mike Masnick:

Yeah, but is it, is it different in a way? Because it's one thing to say, okay, yeah, I'm gonna pick, you know, I'm gonna watch Fox News, or I'm gonna watch MSNBC, or I'm gonna watch CNN, or whatever. But there's something different about the, the fact that it's, like, an AI, and even the fact that people believe, because it's like, oh, it's an AI, it's an algorithm, that there's some sort of truth behind it. And, and I, I mean, this is just what I'm noticing, people sort of, and this is going to come up in, in later stories that we're talking about today as well. There's this sense that, like, people know to not always trust a human, but they seem to be, in some cases at least, certainly not always, but perhaps more willing to trust, well, you know, the, the, the machine told me. And, you know, it's, it's got some sort of, uh, you know, qualitative nature that makes people, you know, believe that it must be true,

Alice Hunsberger:

I think too. And yes, and we'll talk, we'll talk about that. And also, um, there's something about truth and facts that is very interesting right now, and I think a lot of it is kind of self-reinforcing. And so, for a lot of people, they're looking, they're really looking for reinforcement of their own beliefs and their own worldview, more so than actually wanting the truth to challenge them. And so there's this feedback mechanism that can get pretty sketchy if it goes too far down any one direction. And what I've said for a long time is, I think trust and safety teams are, like, the way that companies put their values into action. So if X believes in whatever Elon Musk believes in and wants his worldview to be out in the world, and Grok is checking Elon Musk's tweets to, like, reinforce that in the bot, like, that's kind of what people are coming to X for, and they're being pretty upfront about that, and we're seeing all the ways that it comes out wrong, but it's also the values of the company, even if some of us don't agree with that. Yeah. And so it's an interesting thing to think about: like, is it a feature, is it a bug, or is it both? Like, hard to say.

Mike Masnick:

And, and there's an element of confirmation bias in the users too, right? Because, you know, in some cases users are looking for, this is something that I think is somewhat underexplored, just even in the social media space, where people talk about concerns about filter bubbles or echo chambers or whatever you wanna call it. But a lot of it is people searching out confirmation bias, looking to sort of reinforce their own beliefs, rather than necessarily being, you know, um, you know, impacted by the algorithm, as people say.

Ashkhen Kazaryan:

I would be very curious to see a study done, uh, that looks into those two different camps, because I have, you know, anecdotally, have seen a lot of people be very scared of AI, right? And everything that they don't like, they say, oh, that's AI, like, that's not real. And then there's, like, this big contingent that now relies on AI for everything, from what they eat for breakfast to, to, you know, how they're gonna travel. And I think the solution to this is the most boring solution, which is digital citizenship. Uh, educating our society, everyone from, like, fourth grade, on how to, uh, parse information, how to analyze it, how to critically think, which is the hardest part of that.

Mike Masnick:

Yeah. Well, that's a, a perfect segue into, Alice, some stories that you found this week. So if you wanna talk about the, the stories that you had.

Alice Hunsberger:

Yes. Um, so I have two stories. I'll talk about 'em super briefly, and then we can, yeah, we can, we've already kind of made the point, which is, like, yes: if you have kids, teach them how to think critically, please. Um, so, for anybody who doesn't know, I worked at dating apps for 13 years, and a lot of the work that I did there was sort of grappling with this expectation from users of really strong safety mechanisms, which were really useful, and also a kind of lack of control, of, like, well, humans are gonna be human and they're gonna do really weird stuff, and especially if they're meeting in person, like, we can't really control that, and, you know, all the things that can come with those issues. And now we're seeing some of that with AI, which is, like, you can put safety guardrails in AI, and that's really important, and that's actually, like, a lot of what I do in my day job at Musubi is thinking about those things, and humans are gonna be weird. And there has to be this balance between, um, letting people explore and use AI in creative ways and also having guardrails. So there's two stories. One is from Rolling Stone, um, which is, people who have body dysmorphia are using AI to, like, brutally rate their looks and explain to them all the things that are wrong with them. And it was just, like, obviously causing harm, uh, for those folks. And so the article is sort of, like, you know, where's the responsibility to prevent this kind of harm? And this is, again, like you were saying, some of these problems are not new. Like, people in trust and safety have been dealing with eating disorders and, and other harm types forever. But now you have this, like, interaction with, with a bot that is reflecting back what you wanna hear, even if it's really dangerous. Um, so that was one. And then the other's in Forbes, and, um, it was announcing that, uh, one of the co-founders of Casper is starting an AI therapy company, and talking about how so many people don't have access to mental health care, and you can really revolutionize things by using AI, which is great. And, like, I've personally used AI for, like, oh my God, my son just had, like, an hour-long meltdown and I don't know what to do, and what, what can I try tomorrow to try to calm him down easier? And also used AI for, like, hey, help me figure out, like, healthy, balanced meals and how many calories are in them. And, um, you know, those are really, really good. And also, you know, the trust and safety side of me is, like, no, like, don't use AI for therapy, like, that is gonna go so wrong. Um, but, uh, and similarly, like, don't use AI to count calories, like, it's gonna send people down rabbit holes, like we saw with the body dysmorphia thing. So I think the common thread here is, like, is the user, the person, able to think critically and recognize how to put up their own safety guardrails, and how to kind of think critically and be, like, oh, this isn't so good for me? And unfortunately, especially in the US, like, that is not something that we teach or support or celebrate with people at all. At all. Um, we're kind of just, like, left to find scraps of, you know, how to learn these things. And now we're faced with this unprecedented technology that can mirror all of those problems back at you so quickly. So I don't have answers. I don't like the idea of having such strict controls with, uh, any technology that, like, it becomes so restrictive people can't use it for good and for helpful reasons; that's, like, the free speech advocate in me. Um, and also, as a safety professional, I'm like, we really need those guardrails and we really need to figure it out. And I don't know how to reconcile it.

Mike Masnick:

Yeah, I mean, I think this is the theme of the podcast since we started it, which is, like, gosh, there are no easy answers here. There's just a whole bunch of complexities and trade-offs, and I think these two stories were, like, really good examples of that, because you can look at both of those stories, the, the body dysmorphia one is a little bit trickier, but see where, like, there are benefits to using the technology to, to help people when it's used well. And the problem is that there are all sorts of examples of it not being used well. And then the question is, what do you do about it? And there is the, the sort of, um, instinctual reaction, which is, like, well, if it can be used badly, we have to figure out a way to stop it. But the only way you're really going to stop it is to then take away a whole bunch of the good uses as well. And as you said, you know, if you were trying to use it for checking calories or something in a good way, and some of the AI systems that you were using wouldn't let you do that, and you know that that's because there's a, a safety professional who looked at it and said, well, oh no, no, we have to make sure that the AI is not encouraging, um, you know, eating disorder behavior.

Alice Hunsberger:

Yeah. But then it also feels really frustrating, yes, and restricting as a user. It's like, I'm not, you know, I, I just wanna figure out how to eat a healthier breakfast without, like, totally overeating today. But yeah, it's tricky.

Ashkhen Kazaryan:

This is, again, most of my job is fighting with the government, so I'm just gonna talk about fighting with the government. Um, a few years ago, a California state legislator, um, introduced a bill that didn't go anywhere, but the bill wanted to fight body dysmorphia and all the problems a lot of young girls face, um, by forcing social media companies to somehow figure out, when someone uploads a picture, if it was photoshopped. Uh, it didn't go anywhere. I had to spend a lot of time explaining why that, A, doesn't work, and, B, doesn't help. Um, and I feel like it's stuff like that, but, like, okay, we have this issue with chatbots, and them probably triggering a lot of issues that people have. Or maybe even, you know, maybe someone doesn't have an issue and that, like, leads them down the rabbit hole, you don't know. But at the same time, I know I sound like a broken record, but I grew up, I'm a nineties kid. All of my body image issues came from TV and newspapers and magazines telling me that Jessica Simpson was bad when she was size six. Um, so again, like, it's a repetitive thing, where maybe trust and safety teams at generative AI companies can figure out what is the model that they wanna offer, and then we as users figure out which model do we wanna use.

Mike Masnick:

I mean, I think, I think there's, there's, you know, there are a whole bunch of diff-, different issues at play here, which also makes it more complex and more confusing in terms of how do you, how do you deal with it? Because all of these issues have, you know, parallels that predate AI, predate the internet. You know, uh, issues with eating disorders go back...

Ashkhen Kazaryan:

There's a forum on Reddit, I'm pretty sure, where you upload your photo and you ask others to rate you.

Mike Masnick:

Yes.

Alice Hunsberger:

Well, does anybody remember hotornot.com from forever ago? If you're my age, probably. Yeah, like, I totally used that. It was terrible. I was a teenager. Bad idea.

Mike Masnick:

But so then, what, what, what I think, you know, and so there are, there are definitely people who will say, well, but the technology makes it different. The fact that the technology company is there as an intermediary then puts some level of responsibility on the technology company. I don't think any of us think that the technology company should be, like, well, we're not gonna do anything. But I, I think that there's this over-focus on the, the intermediary of the technology company to solve this problem. And there's this idea that, oh, if only the company could, you know, magically step in, they could fix it so that the good uses are no problem and the bad uses will be blocked. And I think that's an unrealistic expectation. Um, and I don't know how you, how you deal with that, because there's always going to be, there's always going to be some problem, and there's always going to be, um, and forgive my profession, but there's always gonna be some journalist who's gonna write a story that is, like, oh, here's this big, huge scandal. There's always going to be politicians who are going to say, oh my gosh, there was this horrible thing that happened, you need to do something, how could you have let that happen? I mean, how do we... And, and I, my, my concern about all this is that I still think, and I get yelled at, and people in the audience are free to yell at me if you want, uh, we did say you could heckle. So, um, that, um, I lost my train of thought, but I, um, my, my, my fear is that in, in doing that, and getting to the point where all of the decisions are being made because someone gets angry, there's a media story or a politician who says something, we take away from the discussion about, like, how do we teach people to use these tools properly in the first place? You know, is it citizenship? Is it, you know, the, the media literacy? Is it technology literacy? Is it, you know, when and how in schools do we train people to, to better understand, to be appropriately skeptical when they should be appropriately skeptical, to be willing to test out and try and experiment in ways that are useful for them to learn and experiment, you know, and sometimes learn where things go wrong? I think all of those things are important, and yet that discussion gets pushed away in favor of, oh my gosh, there's this horrible thing happening that needs to stop. And, like, so I, you know, I, on this podcast, many times, I've talked about the importance of, like, media literacy, and I've had people yell at me saying that, well, that takes too long.

Alice Hunsberger:

Yes, but it takes too long!

Mike Masnick:

Thank you.

Alice Hunsberger:

But we've also had these kinds of scapegoats forever. Yeah. So when I was a teenager, it was, like, you know, heavy metal music and Dungeons and Dragons were gonna, like, make everybody terrible. I think I turned out okay. But, you know, it's like that, um, scapegoating of social responsibility, and saying, like, we as a society are doing just fine, it's just this one thing that's turning everybody bad, I think is, like, a natural thing, and we're gonna keep seeing it, and there's gonna be something else next time. And if we had been providing some of these tools and some of these skills the whole time, or, like, proper mental health support for everybody, universal health insurance, cra-, crazy ideas, um, then it might not be such a big problem. But I think, you know, it's normal for, for people in charge of the government to kind of want to make it somebody else's problem and not their own.

Ashkhen Kazaryan:

Yeah. My parents wouldn't let me have a Tamagotchi because they saw on TV that, like, two kids killed themselves when their Tamagotchi died. And I still want one. If anyone has one, um, just let me know.

Mike Masnick:

I was gonna say, and look how you turned out.

Ashkhen Kazaryan:

Um, but I, I feel like that really brings me to the idea of, like, with great power comes great responsibility, but the responsibility should be on us, right? Like, we have this great power of generative AI now at our hands, and the responsibility shouldn't be in the tool, it should be on us. And that's the hard media literacy answer; I know, it takes too long. But no matter how much you limit the tool, it will always have problems, because humans always have problems. Yeah.

Mike Masnick:

All right. With that, I wanna move on to our, our last story. And this is not officially about AI; there, there's always an AI element to absolutely everything these days. Um, this we thought would be a really good story to sort of close out the discussion and to close out TrustCon, uh, because it was a, a really interesting, sort of thought-provoking piece in Tech Policy Press by Dean Jackson. Um, I'm a huge fan of, of Dean. I think he does some really, really great, um, reporting, and is very, very knowledgeable on, on trust and safety issues, including a lot of the, the nuances. But this one, I was like, I'm not sure I agree with it. And because it's one of these things where I'm not sure I agree with it, it's, it's fun to talk about it with people. Um, and so it's called "A Realist Perspective on Trust and Safety," and, uh, it's, it's worth going and reading. I have a couple quotes from it, but part of what he's arguing is that over the last four or five years, with things like TrustCon and TSPA and the Integrity Institute and all of these other organizations that have sprung up to sort of support the trust and safety field, there was this feeling of this sort of groundswell of support for, like, oh yes, let's make the internet safe, and that maybe has changed a bit, uh, in the last year, as some of you may have noticed. Uh, it may have infected some of the way that people have felt about this conference, and the conversations that we're having, certainly a part of the discussion. And so his argument is, like, if you take a realist perspective on this, this idea that this effort to build up these organizations to help, you know, make the internet safer, in a way that doesn't involve necessarily the government coming in and telling people how to do things, that, that, that may have failed. Um, and it's sort of suggesting that we have to take a realist view and, and suggest that that experiment is over. And so one quote he has is that a realist assessment of the current moment suggests that the one force capable of moving tech titans in a better direction, it says, perhaps the only force short of a mass consumer movement, is state power. Uh, and then he, he addresses this, but it strikes me that, at a moment when we're, uh, some of us in this audience are quite concerned about the nature of state power and where it is today and how it is being used, and how many things that, you know, may have been put in place for good reason are being weaponized to target people who are, um, challenging the, the, you know, the, the lines or, or policies of this administration, that the idea of saying, well, we can't do it collectively as, as a group here, the only answer is state power... Um, I hope that's not true, and I'm hoping that there, there are other ways to deal with this. And that's kind of what I'm hoping we can discuss a little bit. You know, how, how, how did you, I know, you know, I sent this to you and I had both of you read it. What was your reaction on, on reading Dean's piece?

Alice Hunsberger:

I think, similar to you, there's some regulation that we've seen that is helpful, that teams can sort of refer back to and say, look, give me more resources, I have to do it. Um, and that can be really good. I think also, similarly, especially in the US, if you read, um, some of the things that are happening, it's like, all pornography is totally evil, and some of the goals are to, like, shut it down completely, for everybody and not only for adults. And, you know, it's like, we can't rely on, on government to be sensible, especially when it comes to protecting marginalized groups. So I agree, I, I'm worried about that. I think the other part of it is, um, one of the points that he makes in, in the article is sort of this idea that, um, proving the ROI of trust and safety didn't work. Like, people tried to make that argument, and it just ended up that, like, nobody... okay, like, the business motive for investing in trust and safety just isn't there, because you see companies like X, who are, you know, laying off tons of people and really cutting down, and so therefore it's proof: like, people still use it, so why do it in the first place? I don't think that's the full story. I think for a long time we've seen, especially, the big global platforms have these one-size-fits-all policies. They're all kind of, they were the same. They had, like, very generic, everybody's-welcome, everything's-cool policies, but they weren't gonna take hard stances. Now we're seeing X go off in one direction, but we're also seeing other companies start to, like, take a really solid stance and say, like, no, trust and safety is really important, and we're gonna use that as a differentiator, and we're gonna invest in our teams, our CEO is gonna talk about trust and safety openly. Um, some of it might just be PR stunts. Like, I don't work for these companies, so I don't know for sure. Um, but, like, Roblox is the sponsor of, um, you know, the main sponsor here. I also see other people who are talking; like, Pinterest's CEO talks a ton about trust and safety, and I find it really fascinating, because a few years ago that was, like, not something that CEOs would talk about that much. So I think there is... I think the business case is still open. Um, and I would love to see more companies differentiate themselves in that way.

Mike Masnick:

Yeah, I, I, I think, um, the, I think the story on the business case has, has not been told yet. I think it's way too early. I mean, TrustCon has, you know, been around for four years. The, the term "trust and safety," as generally applied to the field, has been around, like, about that long, right? I remember a few conferences before TrustCon existed, and the, the trust and safety was not even sort of widely accepted as a term. The idea of, like, figuring out metrics and ROI on it is not there yet, and it's one of these things that I think, I hope, maybe this is wishful thinking, that, you know, companies will sort of naturally realize that, yes, there are these efforts to, like... and, and I know, like, last year I tried to go to a panel on ROI on trust and safety, and I couldn't get in 'cause it was packed, and I showed up early. Um, so there's obviously a lot of interest in sort of figuring that out, but I think it, it is still early, and people are still trying to figure that out. But, but we have a parallel, which is the sort of, uh, customer support one. In the eighties and nineties, you know, a lot of companies viewed customer support as a cost center. And, you know, you wanted to make it as cheap as possible, offshore it as much as possible, you know, tell people, uh, you can only be on the phone for 30 seconds, you have to get people off the phone as quick as possible. And then some, some companies smartly began to realize, like, wait, this is, like, one of our major customer touchpoints. We should be viewing this more as a marketing function, and, and a beneficial function, rather than a cost center that we have to try and minimize. And we're seeing, I think, as you noted, some companies are recognizing that for trust and safety as well. So I think it's, it's a little premature to say, well, the, you know, the whole attempt to prove ROI, you know, was a failure. I think, I think there's still time on that.

Alice Hunsberger:

And also, not to set you up for a big rant here, but there's a lot of really promising stuff happening with, um, like, user-based moderation, or decentralized platforms. Discord's doing a lot of really cool stuff, Bluesky's doing a lot of really cool stuff. You see, a lot of claps. Yeah. That, that was, that was...

Mike Masnick:

And Ben, Ben is not here with a bell, but I, I am on, Mike is on the board, Mike is on the, I'm on the board of Bluesky, so, consider me biased.

Alice Hunsberger:

Yeah. Uh, no, but I think, like, we're getting the tools, finally, for people to be involved, like you were talking about before: like, give users choice, give users autonomy over their own experience, give them tools. Those are finally technically possible now, and that's setting up, I think, for hopefully this, like, consumer revolution where people are like, I, I can have a hand in creating my own experience, in making things better. And that's trust and safety not just as, like, a centralized function within a company, but also as, like, a back-and-forth with users, which I think is really promising. We're, like, super at the beginning of possibilities there.

Ashkhen Kazaryan:

I was having a lot of cognitive dissonance when I was like, okay, so this piece says (love you, Dean), this piece says, um, this is the issue: trust and safety has been suffering because of the current political climate and everything that has happened, so the solution is to give a government that I'm currently criticizing power over this. So that was the part that I was, like, really struggling with understanding. Um, but the second piece is, um, depending on the business model, of course, but even if maybe platforms are not advertising or saying in DC loudly that they're pro trust and safety, they still have advertisers, they still have money to make. And if there is something hateful or horrible next to an ad, probably the advertiser is not gonna want that. And that's gonna be another piece of, uh, kind of influence over them. And maybe we are in the, currently the mood and the vibe is bad, right? Like, historically, we're just, like, here, whatever the, like, lowest point is, um, on Earth. Like, it's like some, some, what is, what is it called in English? It starts with an M.

Speaker 4:

I don't know. No. Okay.

Ashkhen Kazaryan:

Trivia for later. Thank you. Um, but yes, we're here, but, like, historically, we're probably gonna go up. Like, when you hit the bottom and you knock, then you go up. Um, so maybe the mood around trust and safety is gonna change. And, uh, if we don't wait for it to change, if we want these discussions to stop and we give the government the power to decide, like, then we're really screwed.

Mike Masnick:

Yeah. I, I think there's an element of, you know, I think, and everybody does this, and I certainly do this, where you sort of over-index on the way things are now and sort of assume that we're in a steady state and that the world isn't changing. But the world is changing, and the world continues to change, and these things do cycle, and there is some sort of pendulum. And so, yes, obviously, I think, you know, within certain circles trust and safety is looked down upon, um, or considered the enemy of progress in, in certain circles. Um, but I, I don't think it needs to stay that way. And I think there are these other forces at play, and some of it is advertisers, but it's also the users. And again, like, Dean, actually, he, he makes this point in there, that, like, you know, he seems a little skeptical that the users will actually stand up and do things. But I am hopeful also, and again, bias, whatever, like, with, with things like Bluesky, you know, which has custom feeds and, and, you know, third-party ability to create different moderation systems and all of these other tools, people are beginning to discover that you can do it. And I've said this a few times before, but, like, we've had, you know, 15, 20 years of these centralized systems, where people were sort of taught, and, and became accustomed to the idea, that you were not in control of your online experience. That someone else was in control, and they might not have your best interest at heart, but there wasn't much you could do about it, other than maybe use those same tools to yell at the people who run those platforms, or to yell at the government to make the people who run those platforms change it. Um, whereas we're beginning to see these elements that are showing up where, like, wait, you can make a difference yourself. You can take some control over it, or third parties can come in and, and build other services on top of it that can take control. And that creates this element of, there's, there's innovation. So you can see, like, different approaches to dealing with these things, and decide that you might like the way somebody else wants to handle moderation or, or handle these things. And that will spur more people to begin to recognize, this is theoretical, this is hopeful, but, like, we'll begin to recognize that there are other ways to do this, rather than saying, you, giant company, have to solve this, or, you, government, have to tell the company how to solve this thing. I agree. You're supposed to yell at me about that. Um, I think that that's a good sign that we can start moving to audience questions. Um, if anyone wants to... Gosh, you guys are nervous.

Ashkhen Kazaryan:

Any content moderation stories from the past week or two weeks that you guys wanna discuss?

Mike Masnick:

There, there are microphones. I'll remind people there are microphones over on either side. I see at least one person running to a microphone. Um, and so we will, we, we can take questions. We have another 10 minutes or so, I believe. Um, so we have somebody lined up at the microphone. You there, go first.

Audience Member:

Hi, I am Ann Collier with The Net Safety Collaborative, and I think partly it's just, we're all very tired.

Speaker 4:

Yes.

Audience Member:

But, um, thank you all. This is a fascinating discussion, and I just wanted to follow up on that last point. I am such a fan of decentralized. I think that we need to give, we need to empower the user more, and I think we need to even get down to giving individual users their own, um, their, their very own bots, or their very own AI, to do their own personal content moderation. But then there's children, right? And we do need to empower them as well. But I'm not sure decentralized, or, you know, having these fantastic sort of, um, third-party apps in a decentralized space, is gonna keep them safe. So I was just wondering if you all could comment on that. Um, those who aren't properly educated yet, um, to take control, and also how to, to keep adults from taking so much control that kids don't have their own agency and freedom of expression.

Mike Masnick:

I can start on, on that, and then, um, yeah. I think, you know, that gets back to the discussion that we had in the, in the first part, the first two parts of, of this, where it's like, these technologies can be used for good and bad. And, and really, sort of, a lot of what it comes down to is, is, you know, helping people understand, um, you know, teaching them how to use it well. Also, part of that is not, not expecting that everyone will use it perfectly, but that there is this sort of learning experience. And this is something that, like, you know, a lot of us who are older, um, ancient, you know, learned as a child, is that, you know, you're, you, you sort of learn as you grow about things that are appropriate, sort of age-appropriate, as you go. And sometimes you make mistakes. And part of growing up is learning, is making those mistakes and, and learning. Now, it's not saying, like, well, it should be a free-for-all, and that, you know, well, kids will learn, they'll, they'll mess up and encounter terrible things and they'll learn from that. No, not always, right? But, like, I think that there's an approach that has to do with not just, like, the tech companies being responsible, or the AI bot companion being responsible, but family, society, schools sort of saying, we're going to teach you how to use these things appropriately, and we're, we're not going to lie to you that everything is perfect online, that you have to be aware that occasionally you might come across issues, and sort of figuring out, you know, in an age-appropriate way, begin to train people to learn how to use these tools properly. And I think that, you know, it's not a, it's not a sexy solution. It's not an easy solution, and it's not gonna work perfectly, but I think it's the, the most reasonable solution.

Ashkhen Kazaryan:

The two things, as we have that discussion, very important discussion, that I wanna flag: currently the government is trying to mandate age verification, um, and basically get rid of anonymous speech online and encroach on a lot of other rights. Um, and I think a centralized system in the name of kids... a lot of really bad laws are being passed in the name of kids. Um, so just keep that in mind. When you have that conversation, discuss this piece too.

Alice Hunsberger:

Yeah, I think we can go to that.

Mike Masnick:

We can go on to the next one.

Audience Member:

Uh, hi, everybody. Um, my question might be a bit, uh, strange. Um, uh, I remember, uh, during, uh, the sort of takeover of, of Twitter, uh, and all the changes and, and the shift that went on, the users were leaving and looking for something else, and someone made a comment: why are we trying to replace Twitter? Um, isn't the problem maybe systemic in digital systems, how they work, how they remove humanity in, in conversation, in interactions? Um, could the answer then be to just embrace more offline community? And if that is the case, then how would you suggest that happen?

Alice Hunsberger:

I, I mean, again, no, no easy answers at all. Um, I think there's some element of, you know, yes, like, go touch grass and live in the world. Uh, I moved to the middle of the woods so that I would have a nice balance of my online life and my offline life. Uh, and also, you know, I remember being a teenager in the South who was, like, figuring out my sexuality and feeling really out of place and feeling really lonely. Nobody understood me, and people on the internet understood me and gave me support, and that's where I found who I was and found, you know, myself able to express myself in a way that felt authentic, where I couldn't in person. And then I became brave enough to also do it in person, but it took a while, and it was the internet. And I think, like, a ton of people have those experiences, and going back to the child safety issue, like, especially as teenagers. And so you have to kind of balance these things, especially as our IRL lives become more polarized and people become more close-minded and more hateful. Um, it may become harder and harder and harder for people to find that support and acceptance that they need in their own communities. And so I wish that wasn't true. I, I wish we, you know, didn't have that problem. But while we do have that problem, like, the internet is a solution as much as it is a problem.

Ashkhen Kazaryan:

It was Tumblr for me, I think. Um, for Twitter, yes, there's this very specific community that exists that sometimes is still very fun. Like, the Astronomer CEO thing, that was a great day on the internet. Um, I'm sorry to him, whatever he's going through. Um, but at the same time, I think, for example, I'm a millennial. My younger siblings are all, like, very Gen Z. They're not on Twitter. They have their own communities on Discord, on TikTok, that have completely different dynamics. Maybe there's an exodus from one of those communities they like; they face similar questions. So I think it's also a generational thing, where it's not like Twitter's gonna be around in this form forever. I think it's, like, technology comes and goes and people find community online.

Mike Masnick:

Yeah, the only thing I'll add is that sometimes, you know, technology can also help people meet in real life and go outside and all those kinds of things. Uh, I wanna go to the question over here.

Audience Member:

Sure. So, a, a lot of the latter part of the discussion was, I think, kind of selling the idea of empowering users, letting them choose, creating competition, essentially. I mean, there's definitely a, I think, a, a strong kind of critique of that view. Not that it's not a good idea, it's a great idea. Um, just that it, it's really difficult for it to happen in practice when your friends are in certain places. Like, if your friends are on TikTok, you're not gonna go to Twitter. I mean, it's, it's very difficult to move. Um, and, and options like Bluesky are really good, but it's still difficult for people to move. Um, one of the areas where maybe state intervention could be considered is in terms of, like, mandating, say, competitive interoperability. And I wonder if that's something you've thought about, um, if you have reflections on, on that.

Mike Masnick:

Um, yeah, we have, like, two minutes left. I could go on for about an hour on that particular subject. Um, the really quick version is, um, I, I, I think incentives work better than mandates. And there, there are attempts to mandate those kinds of things, and I think what will be found is that they fail. And if they fail badly, they'll be damaging to the, the wider space. And the idea of doing these things where, if, if you put in place a mandate for, like, interoperability or, or data portability, companies will often figure out the least friendly way to do it, the way to comply, as opposed to the way to actually make it useful. And so I fear where that ends up, even though I sort of appreciate the, the, the thinking behind it, and, and, like, yes, we're trying to get there. I would love to think through, and I have, as I said, probably an hour on this, of, like, ways to create better incentives to, to make companies think it's a great idea for them to do that, without it being the government saying you absolutely have to do it this way. But that's a, a much longer story. I think we have time for one more relatively quick question, and then we will close up and let everybody go to happy hours and drinks.

Audience Member:

Thanks so much. Um, you mentioned earlier that advertisers, you know, hold significant power in recentering the importance of trust and safety, and, you know, that their spending can have this major influence. Um, I was at a recent brand safety conference, and I heard a very high-profile, um, brand suggest that, actually, you know, audience tolerance has shifted and we're no longer quite as concerned, you know, the tolerance level isn't there. So when you mention our ad, our ad for our product, appearing beside or around content that would previously have been an absolute no-go, they're more open to it. What is your response or reaction to that kind of narrative?

Ashkhen Kazaryan:

I think if that's true, if, I mean, maybe someone is just trying to get a big bonus back here by being original, but if that's true, if that really reflects where the society in that country is, maybe that's what it is. I grew up in Russia. Um, I'm Armenian myself. There are so many different cultural things that have changed over the course of my life, and what society accepts. And then I moved here and I learned a lot of new things and a lot of new words. Um, so I think there's just, like, something about, if that, um, statement is true, um, then they win, and they're ahead of the curve. And if it's not, their business model fails as advertisers. And,

Speaker 5:

Uh, there is a big, we've seen this a lot. Just for context, and you don't have to include this on the podcast, but that was Unilever. Fascinating. Um,

Ashkhen Kazaryan:

Um, well, but we've seen a lot of times, uh, society negatively react to advertisers thinking it's going one way and then it's going the other way. For example, there's always conversation about tradwives and how that's, like, very mainstream right now, and I actually don't know if there's enough data to say that it is.

Mike Masnick:

Yeah, I think, I, I think it's an interesting question, and I think we'll see how, you know, society is changing, the culture is changing, and sort of how people respond to it. It is something that we're, we're all gonna learn together, but we're all sort of able to influence that in our own ways as well. And, and so I think, you know, we'll, we'll sort of see how that goes. Um, we, we are out of time, but before we close, I do, uh, this is, this is, Alice had told me she was going to do this at the beginning, and then I forgot at the beginning. I forgot at the beginning.

Alice Hunsberger:

Can I do it now?

Mike Masnick:

So, you can do it now.

Alice Hunsberger:

Okay. Yes. Um, before we go, a huge thank you to TSPA and everybody involved in TrustCon, all the volunteers who have been making this possible.

Mike Masnick:

And then thanks to all of you for coming out. We are the final thing at TrustCon, and we assumed that many people, you know, would wanna leave early and not hang out. And so we are incredibly grateful to a crowd this big coming out to see us, and, and to cheer and yell and occasionally heckle.

Alice Hunsberger:

Yeah. Not as much heckling as I hoped.

Mike Masnick:

So, next year? Yeah, next year. Come with heckles.

Alice Hunsberger:

Thank you.

Mike Masnick (Intro):

I hope that everyone enjoyed listening to that as much as we enjoyed recording it. And now stay tuned to hear me talking with Mike Pappas from Modulate regarding different kinds of voice scams and what to do about them.

Mike Masnick:

Welcome back to the podcast again, Mike. It is great to see you, of course.

Mike Pappas:

Great to be here in person this time, Mike.

Mike Masnick:

Yes. Yes. It's always fun to do it in person. We don't do it in person very often, but we are obviously recording live from TrustCon. So first of all, how's your TrustCon been?

Mike Pappas:

It's been busy as always, but wonderful conversations through and through. And we're only on day one, so I'm sure many more are to come.

Mike Masnick:

Yes. Yeah, it is crazy and hectic and all of that fun stuff, I know. So today we wanted to talk more about Modulate's voice fraud detection. Back in May, we had you on to talk about how you had, you know, moved into doing things from detecting toxicity through voice and all of those kinds of things into fraud detection. And then a few weeks later, your CTO and co-founder Carter Huffman was on to talk about some specifics of how it's possible to detect fraud by voice. Mm-hmm. Because I think that's something different; people haven't really thought about it as much. And I will say, of all the bonus chats we've done, that's the one that we've heard the most feedback on. People loved it. It was really interesting. It was very different. It was absolutely fascinating. The stories about things like intonation and pauses and the different trade-offs involved in all of this were absolutely fascinating. But now you're back to talk even more about fraud, our most fascinating subject, including how you're seeing very different types of voice fraud that maybe go beyond what I think people normally think of when they're thinking about fraud. So can you explain some examples of what you mean?

Mike Pappas:

Yes, absolutely. So, excited to be back. I'll do my best to live up to Carter's legacy here. Um, but when we've talked about fraud before, Carter really went into the idea of sort of deepfakes and synthetic voices: how do you detect that kind of stuff? So when most people think of voice fraud, they're either thinking of that kind of deepfake detection, or they're thinking of, you know, metadata analysis: this call is coming from the Philippines and it's not supposed to be, or something like that. Right. What I wanna talk about today is, what about actually listening to what's being said in that conversation? What kinds of things can that reveal to you? You don't have the magic bullet of, oh, I know for a fact that thing you're claiming is a lie, and no one's coming into the call saying, I'm about to fraud you. Right? Um, but there are giveaways that you can look for in the actual sort of conversational back and forth, and that's the other big part of what we're trying to introduce, pulling again from that heritage of understanding conversational dynamics in social settings for sort of toxic or positive behavior. So, happy to dive straight into a couple examples of that, or...

Mike Masnick:

Yeah, yeah, yeah. Let's, let's get into examples, 'cause I think, yeah, I think practical examples would be really helpful to sort of lock in what you mean by all this.

Mike Pappas:

Awesome. So probably the broadest example is sort of the concept of social engineering in general. Mm-hmm. So there's a broad set of applications of social engineering out there, but usually it involves something like: I'm going to try and apply pressure to you and create sort of a hot seat that inclines you to deviate around some process. Sometimes that pressure is me pulling on something social, so you and I are now best friends and I've convinced you, or, you know, I've told you this tragic story about my children and you really wanted to do your best to help me. Sometimes it's more sort of direct and antagonistic: hi, I'm definitely calling from the IRS, and you are about to be in grave trouble if you don't do this right now. With all of those techniques, though, we can look both at the specific claims being made and say, hey, are these in fact pressure tactics? We can hear that both from the actual content of what's being presented, but we can also hear the agent on the other side saying things like, I really need to check with my manager about that, or, I'm not sure if I'm allowed to do that this way, and especially look for the response of that person who's trying to manipulate you, really trying to push past those with some practiced techniques. Of course, there's emotional aspects of this as well. You hear the sort of use of things like anger and aggression, or kind of overplayed tragedy in some cases, mm-hmm, to try and trick people into kind of letting themselves open up and stepping around that policy that, it turns out, was there for a reason.

Mike Masnick:

I mean, it's interesting, right? Because, and we've talked about this on the podcast a bunch, the way these kinds of social engineering things work is that there are sort of tried and tested manipulation methods that get past people's defenses, but that also means there's some sort of pattern there.

Mike Pappas:

Exactly, yeah.

Mike Masnick:

So is that what you're doing? You're sort of training on the kinds of patterns that have been proven to commit fraud?

Mike Pappas:

Yes. Both at the literal script level. Mm-hmm. So this is obviously not real, but if someone wanted to bring up that they were a Nigerian prince, we sure would recognize that. But it's also more at the meta level, where the pattern is how you're navigating the conversational flow. Are you jumping straight into these kinds of appeals? Are they actually coming up organically in the conversation, or does it feel like you are using them in a strictly weaponized way? We can tell from how the conversation evolved and say, okay, even though your acting is spot on, the way you're actually introducing this into the conversation makes it feel like you're trying to work around something. And of course we'll never know for sure, but when we notice this, we can be doing things like giving that agent an alert, or if they're particularly junior, giving an alert to their supervisor saying, hey, there's a call happening right now that seems suspicious, that seems like it might be using these manipulative tactics. You might want to listen in and see if you need to step in and support your junior agent. Or, to the agent: you want to take a moment to really reflect on this before you actually authorize anything. Maybe you wanna run an extra identity check or some other kinds of things. It doesn't have to be black and white, as long as it's pointing you in the right direction of how to mitigate that risk now that we've identified it.
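A rough sketch of how that kind of real-time alert routing could work, based only on what's described here; the thresholds, action names, and junior/senior split are all hypothetical, not Modulate's actual API.

```python
# Hypothetical sketch: route an in-call suspicion alert based on how
# confident the detector is and how experienced the agent is.
def route_alert(suspicion: float, agent_is_junior: bool) -> list[str]:
    """Decide who gets notified about an in-progress suspicious call."""
    actions: list[str] = []
    if suspicion < 0.4:
        return actions  # below the alerting floor: stay quiet
    if agent_is_junior:
        # A junior agent may not recognize the tactic; loop in a
        # supervisor so they can listen in and step in if needed.
        actions.append("notify_supervisor")
    else:
        # A senior agent just gets a nudge to pause and reflect
        # before authorizing anything.
        actions.append("prompt_agent_reflection")
    if suspicion >= 0.8:
        actions.append("suggest_extra_identity_check")
    return actions

print(route_alert(0.85, agent_is_junior=True))
# ['notify_supervisor', 'suggest_extra_identity_check']
```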

Mike Masnick:

Right, it's not necessarily about identifying everyone 100%, but if you can raise a flag and warn people and say, like, maybe this needs a little bit more attention.

Mike Pappas:

Exactly. Another classic example that we're thinking a lot about is protecting the elderly from a lot of these scams. Mm-hmm. And again, deepfakes are a big part of that scam strategy, but you frankly don't even need deepfakes sometimes if you're trying to scam someone who's in that vulnerable position and really not gonna understand. One of the things that we've been looking into, and obviously we need the right partners for this, but we have the detection capability to say: okay, you know, Mike, someone is on the phone right now with your mother claiming to be you, saying you just broke your leg and you need her to wire you a bunch of money ASAP. One of the things we could do is recognize that claim being made and actually prompt a notification to the real Mike Masnick, right? Saying, hey, FYI, someone's on the phone with your mother right now claiming this. If that's not true, you are best positioned to convince your mother that it's not true and prevent something bad from happening.
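As a sketch of that notify-the-real-person flow, under the stated assumption that the person has opted in and there's a directory mapping them to an out-of-band contact; the directory, function names, and delivery channel are all stand-ins, not a real Modulate or telco service.

```python
# Hypothetical sketch: if a caller claims to be a protected, opted-in user,
# ping the real user out of band so they can debunk the claim themselves.
PROTECTED_USERS = {
    # claimed name -> out-of-band contact for the real person (opt-in only)
    "mike masnick": "mike@example.com",
}

def send_notification(contact: str, message: str) -> None:
    # Placeholder delivery channel; a real system might use SMS or push.
    print(f"-> {contact}: {message}")

def on_identity_claim(claimed_name: str, callee: str, claim: str) -> None:
    contact = PROTECTED_USERS.get(claimed_name.lower())
    if contact is None:
        return  # person hasn't opted in, so do nothing
    send_notification(
        contact,
        f"FYI: someone on a call with {callee} is claiming to be you "
        f"({claim}). If that's not you, warn {callee} directly.",
    )

on_identity_claim("Mike Masnick", "your mother",
                  "broken leg, urgent wire transfer")
```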

Mike Masnick:

Right. Interesting. And so in a case like that, though, where does your system sit? Like, how does that get into the process?

Mike Pappas:

Yeah. So that example is why I mentioned we do need the right partners for this, right? Um, so we're having some conversations, both at the telco level. Mm-hmm. Is there an opportunity for us to make this offering? You obviously can't be listening to every conversation; that's a huge privacy violation. But for the right users who are in that vulnerable space and who might opt in to say, I want that extra layer of protection, could we offer that for them? And we're looking into, is there a way for us to even stand up an isolated app or something like that? But there's a number of technical complexities there. I think realistically, though, you could spin this and say, all right, what if you're not talking about the elderly? What if you're talking about an intern or a junior employee who's being pressured into going and buying gift cards on behalf of their CEO, or any of these kinds of stories? Now you can apply that same logic, but in this enterprise setting, right? And now we are hooked directly into the enterprise phone lines, we do have consent to be listening to all of that, and we know exactly how to route it to everyone.

Mike Masnick:

Yeah. There's a part of me that wonders if you could just build that as a simple text match, like flagging the term "gift card."

Mike Pappas:

I'll admit, I have almost never heard someone legitimately say, we need to go buy a bunch of gift cards. Um, but I mean, Modulate's been the subject of a number of these scam attempts, I'm sure. Um, I recently had a surprisingly compelling one masquerading as one of our investors, basically coming in and saying, hey, I just need to do a quick call, we're trying to actually update our investment profile here, but we need to move some money around, and would you be willing to do X, Y, Z for me? Once we got into the weeds of it, it started to become pretty obvious, right? But I was actually impressed at the amount of polish in the initial outreach. They clearly had learned something about how investors communicate, just the writing style and the way they presented it. So yeah, there's a lot of sophistication in this stuff these days.
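For what it's worth, the naive text-match idea Mike floated really could serve as a cheap first tripwire ahead of any conversational modeling. A minimal sketch, with an invented phrase list:

```python
# Illustrative tripwire: flag utterances containing high-risk scam phrases.
import re

HIGH_RISK_PHRASES = [
    r"\bgift cards?\b",
    r"\bwire (?:me|the) money\b",
    r"\bread me the code\b",
]
PATTERN = re.compile("|".join(HIGH_RISK_PHRASES), re.IGNORECASE)

def flag_high_risk(utterance: str) -> bool:
    return PATTERN.search(utterance) is not None

print(flag_high_risk("I need you to go buy a bunch of gift cards"))  # True
```

The obvious limitation, and presumably why the conversational-dynamics approach matters, is that a phrase list only catches scripts someone has already seen.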

Mike Masnick:

Yeah, it's fascinating. And then the idea is that what is done if you detect something is basically up to the partner, in terms of what kind of level of response or what type of response.

Mike Pappas:

That's right. So again, going back to the toxicity world that we started in, it was: hey, we've noticed this person seems to be violating your code of conduct. Do you want to review that for yourself? Do you want to take action on it? In the case of fraud, it's again: hey, we see this person is trying to perpetrate fraud, or is likely to be trying to perpetrate fraud. Do you want to take us at our word and immediately shut that down? Do you want to provide a warning? Do you want to get a supervisor in? All of those options are in play, but the really important part is that this is happening in real time, right? So we're not saying after the fact, man, it's a shame that that conversation happened; we're actually giving you that opportunity to intervene.
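One way to picture leaving the response up to the partner: the detector emits a fraud-likelihood event, and a per-partner policy maps it to an action mid-call. A sketch with made-up partner names, thresholds, and action labels:

```python
# Hypothetical per-partner response policies: (min_likelihood, action),
# sorted from strongest to weakest so the first match wins.
POLICIES: dict[str, list[tuple[float, str]]] = {
    "cautious_bank": [
        (0.9, "terminate_call"),
        (0.6, "pull_in_supervisor"),
        (0.3, "warn_agent"),
    ],
    "light_touch": [(0.95, "warn_agent")],
}

def act_on_detection(partner: str, likelihood: float) -> str | None:
    """Return the strongest action this partner wants at this likelihood."""
    for threshold, action in POLICIES[partner]:
        if likelihood >= threshold:
            return action  # fired in real time, while the call is live
    return None

print(act_on_detection("cautious_bank", 0.7))  # pull_in_supervisor
print(act_on_detection("light_touch", 0.7))    # None
```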

Mike Masnick:

And do you have a sense now of, like, how you balance the sort of false positive, false negative thing in terms of determining any of this? This is the problem all of trust and safety has, right?

Mike Pappas:

Yeah, I think the false positive, false negative trade-off comes down to what kinds of interventions you're deploying. So for instance, with some of our partners in the insurance space, they're really worried about these organized cabals of fraudsters that are coming in and saying, I'm definitely Mike and my house has definitely burned down and you need to pay me at this new bank account, right? For something like that, the real failure mode is they actually execute a transaction, and so we have a good amount of time to dig in. So it's okay for us to say, hey, we're not super sure yet that this is definitely fraud, but we wanna recommend that you look a little closer, and then a little closer still. All we need to be sure of is that by the time you're making the transaction, we're ready to actually get in and stop it. But we don't need to be very disruptive until that point. Whereas with, you know, hey intern, go buy me some gift cards, that's something that's happening in a much faster interaction. So for that, we want to be disruptive if we're doing anything at all, so we need to err on the side of high precision, even at the risk of missing a little bit on the recall side. Which is why, of course, you still train your team and you still provide all these other tools and don't just rely on our ability to catch it. Because as good as we are, the more this stuff rolls out, the more fraudsters will find new and creative ways to, you know, work around it.
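That trade-off is essentially choosing a different operating point per scenario: slow-moving transactions can absorb early low-confidence nudges, while fast interactions only justify high-precision disruption. A sketch with invented scenario names and thresholds:

```python
# Hypothetical per-scenario operating points:
# (nudge_threshold, hard_stop_threshold); None means never nudge.
SCENARIOS = {
    "insurance_claim": (0.3, 0.9),    # days until payout: nudge early, often
    "gift_card_rush": (None, 0.85),   # minutes to act: disrupt only when sure
}

def decide(scenario: str, fraud_score: float) -> str:
    nudge_at, stop_at = SCENARIOS[scenario]
    if fraud_score >= stop_at:
        return "block_and_escalate"  # must be confident before money moves
    if nudge_at is not None and fraud_score >= nudge_at:
        return "recommend_closer_look"  # cheap, non-disruptive review
    return "no_action"

print(decide("insurance_claim", 0.5))  # recommend_closer_look
print(decide("gift_card_rush", 0.5))   # no_action: precision over recall
```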

Mike Masnick:

Yeah, they're fairly clever. Yeah, I mean, this is all really fascinating. Is there anything else we didn't cover that you wanted to?

Mike Pappas:

No, I mean, there are so many different subtypes of this stuff that we could get into. Um, you know, we talked about insurance and those kinds of suspicious things. Yeah. We've talked a lot about vishing. One of the other categories that we look at here is what we call return fraud. There's different manifestations of that. It could be literally, hey, your delivery never showed up at my doorstep and you definitely owe me money, even though I have it in my hands right here. Or it could be more abstract. So we see a lot in the food delivery space. Something like, hey, I just placed this order and the app charged me a hundred bucks, but it only registered 50 bucks worth of my order. So you, delivery man, need to go into the restaurant and get the other 50 bucks worth of food for me. And if you don't, then you're gonna show up to my doorstep with the wrong amount of food and I'm gonna be angry, and you don't want to take that risk. Wow. So you'd better go do this for me. That's really tough, 'cause the delivery apps, they don't see that, right? They don't have the ability to protect against it. But because there's going to be that physical interaction, there's a much greater sort of threat, right? That compels these drivers to just say, look, you know, it's ultimately not my money that's getting scammed; I'd rather just do the thing that keeps me safe. So, right, I think, again, there are so many different kinds of these manipulative conversations. Yeah. And especially in the world of things like the gig economy, or multi-party insurance setups and things like that, you end up with so many different people involved in the process. It's easier and easier for fraudsters to find someone who has downside risk if they don't listen to the fraudster, but who doesn't actually bear the cost of the fraud. So even if they kind of know something's sketchy, they might be more inclined to just let it happen to protect their own personal bottom line.

Mike Masnick:

Right. Yeah. I mean, that's where those scams are the most successful, where you align the incentives. Yeah. So that it's easier for them to take part in the scam than anything else. This is, again, absolutely fascinating. I think it's a space that I'm not sure people have thought about as much, and yet it is really, really interesting. And it's exciting to hear that you guys have come up with a solution, because it feels like one of these things that has been taking over more and more of people's lives, and there are more and more of these stories of these kinds of scams. And sometimes it felt like, especially with voice communications and social engineering, well, how do you fight back against that? So having a solution that can sit in there and do something is really, really interesting. Yeah.

Mike Pappas:

Well, appreciate it. We're excited to be, you know, able to have more and more of an impact in the space and

Mike Masnick:

yeah.

Mike Pappas:

always love the opportunity to be here and hear your insightful questions and dig a little deeper into what we're doing.

Mike Masnick:

Yeah, that's great. And people can find out more on the Modulate website,

Mike Pappas:

as always. Yep.

Mike Masnick:

And so, uh, check that out and we'll have a link in the show notes as well. So thanks again for joining us.

Mike Pappas:

Yeah, thanks for having me, Mike.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
