
Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Algorithm Shrugged
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Zeve Sanderson, the founding Executive Director of the NYU Center for Social Media & Politics. Together, they cover:
- If algorithms radicalize a mass shooter, are companies to blame? (The Verge)
- Large Language Models Are More Persuasive Than Incentivized Human Persuaders (arXiv)
- A dangerous plan to ‘win’ the AI race is circulating (Washington Post)
- Texas governor signs law to enforce age verification on Apple, Google app stores (Reuters)
- AB 853: California AI Transparency Act (CalMatters)
- Regulators Are Investigating Whether Media Matters Colluded With Advertisers (NY Times)
- Anthropic’s new AI model turns to blackmail when engineers try to take it offline (TechCrunch)
- Why Anthropic’s New AI Model Sometimes Tries to ‘Snitch’ (Wired)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Modulate. In our Bonus Chat, we speak with Modulate CTO Carter Huffman about how their voice technology can actually detect fraud.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Mike Masnick:So Zeve, I don't know if you use the, uh, site or app AllTrails, which is this wonderful hiking app that I rely on constantly, but its tagline is our prompt for this week. So how would you find yourself outside?
Zeve Sanderson:Well, I haven't found myself outside for what feels to be about six months because of this never-ending winter in, in the Northeast. I'm based in New York. But at the very end of last summer, I went hiking in the Julian Alps, in Slovenia, which were, one, beautiful and, two, much chiller and cheaper than their Swiss and Italian and Austrian counterparts. So if anyone finds themselves in the region, I, I highly recommend it. What about you, Mike?
Mike Masnick:Uh, well, that sounds lovely. I was going to say, I just came back. I spent the long Memorial Day weekend camping and hiking down in, in sort of the Big Sur area out here in California, where the weather, unlike New York, is lovely. And I was able to be almost entirely without internet access for three whole days. Every once in a while suddenly I would, you know, I think we hiked up to the top of a hill and the view was amazing, except for the fact that there was a giant cell phone tower, like right at the top of the hill. So I had very brief internet access up at the top of the hill, along with this wonderful view of the Pacific Ocean. But, uh, otherwise it, it was a, a lovely break from the internet.
Zeve Sanderson:As a native Californian, what you just described is what I, what I miss most.
Mike Masnick:Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It is May the 29th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and sponsored by our launch sponsor Modulate, the prosocial voice intelligence company. In our last bonus chat with Modulate, which was just a few weeks ago, we had Modulate CEO Mike Pappas talking about how they're expanding their voice technology from just video games to fraud detection. And this week, in our chat with CTO Carter Huffman, we dig into really the fascinating nuts and bolts of what that actually means. There's some really interesting stuff about different kinds of fraud and what the AI is looking for and how it detects it, and it was, to me at least, really kind of eye-opening and enlightening. As for the rest of this podcast this week, we will be talking also about algorithms and liability, along with AI and some state laws and a variety of other things. I am Mike Masnick. Obviously Ben is still off enjoying some family time, though he will be back for the bonus chat, so don't miss that if you are missing his British accent, which I still cannot replicate. Uh, I will note this podcast will also be off next week, as Ben is still off and I have some family stuff myself that I need to tend to next week. So we will be off next week, but we should be back in two weeks. I think Ben will be back. If not, I will still be running the show and we'll make it work somehow. Uh, but this week, we have another wonderful guest host, Zeve Sanderson, whose name you heard on the podcast just a few weeks ago. He is the founding executive director of the NYU Center for Social Media and Politics, and has been producing consistently fascinating research on that topic, some of which Ben and I were talking about on the podcast just last month, which made me realize at the time that he would be a great guest host. And, uh, here he is. So thanks, Zeve, for joining us this week.
Zeve Sanderson:thanks so much for having me, Mike. Uh, big fan and really excited to be here.
Mike Masnick:Cool. So let's start off with a story that we're, we're sort of keyed off of. The Verge had this article talking about algorithmic liability, which is a topic I think that we're gonna be talking about forever going forward from now on; it has suddenly become such a, such a hot and important topic. And so, Zeve, you read this Verge article. Do you wanna kind of summarize the details and, and what's happening here?
Zeve Sanderson:Yeah, so the Verge article covers a lawsuit about an absolutely horrific shooting that happened in 2022, in which a young white man drove to a Buffalo supermarket and killed 10 people in a racially motivated attack, and he live-streamed it on Twitch. And he had written a manifesto claiming that he was radicalized by racist content online and was specifically targeting a majority Black community. Again, absolutely horrific.
Mike Masnick:Yeah.
Zeve Sanderson:Because of this manifesto, there ended up being this tech angle, which was, was he radicalized online, and what responsibility or culpability did the platforms on which he was consuming this content bear? So the nonprofit Everytown for Gun Safety sued nearly a dozen tech companies, and many of the biggest ones, Meta, Amazon, Discord, Snap, YouTube, Reddit, et cetera, arguing that they bear responsibility for radicalizing the shooter. And it was sort of a major test for whether social media platforms can be held liable for potentially the algorithmic harm that is associated with their products.
Mike Masnick:And you know, I mean, I think it was interesting because it struck me at the time, when all this happened and the shooting in Buffalo went down, a lot of people were very quick to blame social media, in fact including New York's governor and attorney general. And I thought it was a choice, uh, you know, maybe not to blame issues with gun control or mental health or a variety of other things, but to jump on this. And, and in fact with that, there were concerns because the first person who called 911 in that case was hung up on; the 911 operator thought that they were joking and refused to respond to the call. And there was also a story where law enforcement officials had been concerned about this particular shooter and had gone to visit him and then did nothing about it. So there are all these, these other stories of potential things where you could potentially blame someone, but a lot of people very quickly assumed that it could all be blamed on technology.
Zeve Sanderson:Yeah, and I think one of the interesting features of how this discussion has taken shape broadly, but especially as it relates to the actual lawsuit, is that it's largely been grouped under this sort of moniker of, like, algorithmic harm, but it's actually quite a bit more complicated than that and more expansive than that, as sort of indicated by the breadth of companies that were sued. Right. So essentially what they are arguing is that his entire information environment is responsible for his radicalization process. And what's notable is that some of the platforms on which he was quite active, and evidently consumed extremist content that potentially was part of his radicalization process, don't even have algorithmic feeds to begin with.
Mike Masnick:Yeah. I mean, that's the part that struck me, right? Like Discord, you know, it was definitely, he was in all these Discords, and even the framing of the Verge article, you know, if algorithms radicalize a mass shooter, are the companies to blame? But Discord doesn't have an algorithm, right? Like, you just join the various groups, and then it's, you know, it's the same thing as, like, did you take these books outta the library and did they radicalize you? Like, do you blame the library for that? So I'm, I'm a little confused at this, other than it feels like sort of jumping on this bandwagon of algorithmic blame. So I'm sort of like, how much is the algorithm actually in this?
Zeve Sanderson:Yeah. So I think what's helpful is almost to break apart this conversation into these like two distinct categories. So like one are platforms where there are algorithms and where, right, there are algorithmic recommendations that could potentially radicalize, uh, users. And what I mean by radicalize users, technically, is that automated recommendation feeds will serve up slightly more extreme videos over time, moving a user from a position that might be more neutral over time to one that's much more extreme. Right. So that's like one sort of theory here that I think this lawsuit is sort of testing out. And the second is that there are a set of products or online services that have design features that maximize engagement but are not algorithmic. Those sort of engagement-maximizing product features might sort of introduce harms, including through radicalization, that look like the harms that algorithms might bring, but the sort of mechanism by which they come about is different. And so I think that like, right, it's helpful for us to sort of keep these two in our, in our head and sort of approach each of them separately.
Mike Masnick:Yeah. And you've done some research, right, in terms of like algorithms and radicalization or,
Zeve Sanderson:Yeah, so our, our center has done some great research, led by Megan Brown, as have others. So one of my favorite published papers in this area is by Annie Chen and colleagues. It was published in Science Advances, and it really tried to understand what the role of algorithmic recommendations was in leading users to consume sort of fringe or extremist content. And what they found in this paper is that really what drove consumption of extremist content on YouTube was actually subscriptions to the channels that were producing that content, and links, sort of external links, that led users to that content. Right. And so there was this sense that really what was driving consumption of the type of content that we're concerned with broadly, but especially when it comes to a case like a racialized mass shooting, is that there was actually this opt-in, right? YouTube, or the YouTube algorithm, insofar as that's like a coherent way of thinking of recommendation systems, didn't seem to be playing this like primary explanatory sort of role in this sort of vector of consumption. And work that we've done in our own center, again led by Megan Brown, has found something similar. One thing to note, though, is that both of these papers looked at roughly the same window, right? So sort of like 2019 through 2021. Zeynep Tufekci, the sociologist, wrote her sort of widely cited and widely read piece, YouTube, the Great Radicalizer, in the New York Times in 2018, which popularized this narrative. And I think what's important is, because algorithmic recommendation systems change so regularly, it could be the case that she was absolutely right in like the 2018-and-before period, and that potentially that article itself led YouTube to change their algorithmic recommendation systems. But what we know about sort of generally, recommendation systems, but in particular these couple of great papers on YouTube, is that it really is users opting into consuming this content, and that the platforms themselves don't seem to be the great radicalizer that we previously had observed.
Mike Masnick:Yeah, so it's, you know, I've sort of described it in the past as sort of like demand side, that people are looking for this content or being driven to it by outside forces rather than the algorithms themselves. And yet this narrative really has stuck around forever, and whether or not it was ever actually true is an open question. There's some research that maybe early on some of the, the earliest versions of the algorithms would drive people to more radical or more extremist content. But it seems like that hasn't been true, and yet the narrative sticks around, and that seems to be central to this particular lawsuit for the algorithmic stuff. I, I feel like for the other stuff, where then it just becomes like product liability and like features, where they're talking about, like, oh, you're trying to maximize time or maximize usage or things like that, that feels like a, a really big stretch, and I'm not entirely sure that the courts are gonna buy it, because so much of it seems based on narrative over facts. But, you know, I don't know. Do you, do you think that there's any chance this lawsuit goes anywhere?
Zeve Sanderson:So I'm not a lawyer, and I've made the mistake, uh, once in a public setting, of commenting directly on legal questions. But I do wanna sort of emphasize this demand-side framing, right? So in this, this Science Advances article, which I mentioned, they had this really sort of cool study design in which they paired survey data with behavioral data. And so what that means is they actually know something about the people who they're studying and different attitudes and beliefs they might hold, in addition to what YouTube videos they're consuming. And so what they found was that people in their study who consumed extremist content on YouTube also had, like, high prior levels of gender and racial resentment, right? So there was this really strong demand-side sort of dynamic occurring here. And so I think one of my frustrations at times with this sort of general focus on, or conceptualization of, algorithmic radicalization in the way that it sort of plays out in conventional wisdom is that it actually misses, in my opinion, the, like, much harder, deeper questions here that really sort of reflect what's happening, which is that there are gonna be a set of people, likely a small set, though I think any number of people who hold high levels of racial and gender resentment is too large, but right, a small set if the denominator is every YouTube viewer, and that they are going to seek out, in a variety of ways, content that reflects their world views. And the question of what should YouTube do is a really fascinating question that is not directly sort of contended with when all we focus on are algorithmic recommendations, which the best research that we have suggests aren't the main mechanism at play in terms of the patterns of consumption among, uh, among users sort of consuming extreme content. So, right, so there's a different conversation to have that's like the conversation that reflects what's actually happening on the ground, which doesn't feel like it's reflected in the lawsuit or any of the coverage immediately after the horrific shooting.
Mike Masnick:Yeah, and I think there is all this discussion about sort of how powerful the algorithms are, and to me it's always bothered me a little bit, because it sort of, to some extent, takes away human agency and the idea that, like, humans are not fully controlled by these algorithms. And putting all of the blame on the algorithms or the companies and the services and the way they work, sort of, you know, I think really diminishes the idea, and it sort of makes it out that these humans are easily brainwashed, controllable subjects. Which leads into this other paper that you found, that was really, really interesting. And it's one of, I think, a few, but this is the most recent, more specifically about how persuasive large language models are, for persuading humans to do stuff, which is not quite the same thing as algorithmic recommendations, but this is directly trying to figure out: can these LLMs be persuasive? So do you wanna talk a little bit about the paper that you found?
Zeve Sanderson:Yeah, and I, and I do think, you know, what we're seeing happen right now with lawsuits around algorithmic radicalization, we will likely see similar lawsuits over the years around, like, LLM persuasion, including potentially radicalization. So yeah, so there was this fascinating paper. It's a working paper, just as sort of a, a side note; it's not yet peer reviewed. But I thought it had sort of this really neat design, so I'll just very quickly walk through it, because I think it's relatively intuitive. What they wanted to ask was: are humans or sort of cutting-edge LLM chatbots more persuasive? And they did so in the context of essentially, like, a trivia questionnaire, where they asked a bunch of factual questions and gave survey respondents two answers, right, one right, one wrong. So you had, you know, a set of about 900 quiz takers. Then you had two other sorts of sets that, you know, and this was the experimental design, were trying to influence the quiz takers: one were, like, other survey respondents that they recruited from the survey company, and then the other was, was a chatbot, and in this case it was one of the Claude models. And they required that the quiz takers spend two minutes per question and that they were financially incentivized to get the right answer. And I think that this is really neat in that they actually really sort of created the conditions for, like, motivated persuasion, right? Like, people really motivated to get the right answer, and thus, like, I think that what we're learning here is not just, like, a bunch of survey takers running through a, a survey so they can get their small payment at the end. And essentially what they found was that, like, chatbots were much more persuasive, in both persuading quiz takers to answer correctly and incorrectly, than were persuaders, right? There were sort of different
Mike Masnick:human persuaders.
Zeve Sanderson:Than human persuaders, exactly. There were, like, different experimental conditions where they were essentially told, you know, help them get the right answer versus the wrong answer, and the chatbot sort of outperformed human persuaders in all of them. And I think that this largely aligns with this growing body of literature that suggests that chatbots can be quite persuasive, including more persuasive than other humans are. But for a variety of, like, sort of, in-the-weeds reasons, I really appreciated this paper.
Mike Masnick:Yeah, no, I thought it was really interesting, and I, and I vaguely recall that maybe last year at some point we had talked about kind of a similar paper, and it's interesting to me for a variety of reasons, one of which is that there is this maybe gut reaction to seeing something like this. You know, we're living in a time where there's so much, as we discuss all the time, mis- and disinformation and just pure nonsense out there, and there's always these questions about, like, how do you pull people back from being in a, a cesspit of nonsense and, and just believing wrong things. And so there's, like, this immediate reaction where it's like, oh, well, if these chatbots are actually persuasive and can convince people, and I think that was the study we had seen last year, was like trying to argue with people who believed in conspiracy theories, and they found that the chatbots were actually more persuasive and sort of, like, convincing people that maybe chemtrails are not poisoning the skies or whatever, or whatever the, the various conspiracy theories were. But then at the same time, like, seeing this story right after the story about the algorithmic threats and liability, you could also see the opposite occurring. Like, well, what if whoever's creating these, these chatbots to be persuasive tries to persuade them towards societal problems and extremism and all sorts of other problems? And so I think that leads to, like, this open question, then again, of who's in control of these things, and is there liability here, and how does all that apply? And I, I think that's going to be a big question going forward.
Zeve Sanderson:Yeah, and I think that there's, I mean, there's this general conceptualization of the public as just, sort of, marionette puppets, and technology is, like, making us dance in the way that it wants. And I think one of the really interesting gulfs between, like, the way that conventional wisdom has taken shape, around especially focusing on, like, the harms that technology may or may not be sort of responsible for, and a lot of the research coming from, like, the behavioral and social sciences about how hard it is to actually persuade people of anything, um, is that, like, I think largely we sort of overrate technology, because there's this, like, almost optimistic implication of it. Dan Williams, the philosopher, makes this point in a really good Boston Review essay, where, like, if people are just simply persuadable, if there's some, you know, sort of output of this tech input function, then all we need to do is get the right parameters of that input function, and then people are better on the other side, right? All we need to do is, is sue a dozen companies for their product harms, and people aren't radicalized. And so, like, I'm incredibly sympathetic, and also am broadly aligned with holding companies responsible for the genuine harms that they cause. But I think grouping it under this sort of, like, persuasion lens is, is one that actually leads us in the wrong direction, at least in social media. I think what'll be interesting to see is, like, are LLMs different? And I think some of these early papers are suggesting they genuinely might be different than some of the folk theories we had about social media suggested. But certainly the jury is, is out.
Mike Masnick:Yeah. I, I still think it's, it's very early to see, and it will be interesting to sort of see how that goes. And it, it's kind of interesting to hear all this at the same time that, like, the whole behavioral economics space, which was sort of built up on this idea that people were persuadable and you could just make these nudges, as, as they were sort of famously called, and get people to move in a certain direction. And a lot of that has since been pretty discredited, I would say, and is now, you know, people are still sort of trying to find what remains of the structures of that. But I think a lot of that has turned out to be very much overblown, and there were some falsified studies and all sorts of other stuff that have created some issues there. And it'll be interesting then to see, is that really true with LLMs and chatbots? I wanna move on now to our next story. And this one, we're gonna wrap in a few different pieces here, but we're gonna start with something that you co-authored in the Washington Post, about a plan that is potentially making its way through Congress as part of the, I hate to call it the big beautiful bill or whatever it is, but that is what they're calling it. Do you wanna talk about this bit that touches on AI and internet speech in some ways?
Zeve Sanderson:Yeah, so there was this proposal that started largely from the conservative think tank space, which suggested that there should be a moratorium on states passing AI legislation. And versions of this argument have been made for about the last year, but really sort of came into focus in the past few weeks because it was included by the House Energy and Commerce Committee in their budget reconciliation bill. And what it does is it prohibits states from implementing new, or enforcing current, laws or regulations that regulate AI models, AI technologies, and automated decision-making tools, and it doesn't do an amazing job actually describing what those are. And, like, if you listen to, you know, the quarterly earnings reports of pretty much any company, like, every CEO is saying they're an AI company. So, like, I don't know, does this sort of cover everything now? And it does so for 10 years, which is sort of an astonishing amount of time, given the pace at which these technologies are developing, right? As, like, an easy heuristic, we're, like, two and a half years from ChatGPT's launch, so it would essentially be four ChatGPTs, uh, into the future before we could, uh, the states could pass another AI bill. And I should mention that it includes some exemptions for statutes that are sort of designed to, like, eliminate barriers or enable implementation of technologies, but again, it doesn't really explain what that means. And so it largely leaves us with at least this, like, directional sense that they're trying to foreclose on states doing almost anything here.
Mike Masnick:Yeah, and I had written about this, I think sort of two weeks ago or so, where the language is so broad, specifically around what they refer to as automated decision systems, which is, like, everything these days. And I had sort of pointed out that this would actually conflict directly with a bunch of states that are trying to pass laws around content moderation and social media liability and parental controls and all that kind of stuff, and almost all of those rely on automated decision-making tools. Because it defines it, again, extraordinarily broadly and vaguely, as any computational process derived from a variety of things, including machine learning, statistical modeling, data analytics, that leads to a simplified output, including a score, classification, or recommendation, which is, like, everything. Like, under this moratorium, no state could pass a law for 10 years on anything related to the internet, as far as I can tell. I mean, as you said, there are some exceptions, and the language is so broad, it really depends on how you define things. But it strikes me as a very, very broad and not a thoughtful approach to dealing with maybe questions that you might have about AI regulations.
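To make concrete just how broad that quoted definition reads, here is a deliberately toy, hypothetical sketch; the function names and keyword list are invented for illustration and are not drawn from the bill, but even a few lines of keyword counting arguably amount to a "computational process" using "data analytics" that "leads to a simplified output, including a score, classification, or recommendation."

```python
# Hypothetical illustration only: a trivial keyword heuristic that still yields
# a "score" and a "classification" in the sense of the definition quoted above.

def spam_score(message: str) -> float:
    """Toy data-analytics step: fraction of words that look spammy."""
    spammy_words = {"free", "winner", "click", "crypto"}
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word.strip(".,!?") in spammy_words)
    return hits / len(words)

def classify(message: str) -> str:
    """Simplified output: a classification derived from the score."""
    return "spam" if spam_score(message) > 0.2 else "ok"

if __name__ == "__main__":
    print(classify("Click now: free crypto, winner!"))   # -> spam
    print(classify("See you at the meeting tomorrow."))  # -> ok
```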
Zeve Sanderson:Right. I mean, social media is actually one of the few places that can, like, rightfully claim that AI has been sort of a, a big part of their services and products for years, right? So does this foreclose on states' ability to regulate kids on social media? I mean, that has broad bipartisan support. The federal government, sort of in fits and spurts, has maybe sort of taken it up, but states have really been leading here, and I think that that's actually a really important backdrop broadly to this debate, which is that the federal government really hasn't passed very many tech bills over the past sort of two decades, right? I mean, we, you know, you had the Take It Down Act this year, which states, for what it's worth, had had similar bills for years in advance that sort of trailblazed the way towards the federal government doing something on non-consensual intimate imagery. You know, the TikTok bill, which, I don't know, it's in this sort of Schrodinger's box of, like, whether or not it's still a bill. But states have really, have really led the charge. Um, right, last year we saw every state that had a session pass at least one tech bill. There were hundreds of tech bills passed that covered a variety of AI and non-AI sort of related areas, and they've largely done so because the federal government hasn't shown either the desire or the will to do so. And so I think one of the really sort of alarming outcomes of this, potentially, is that, like, states have picked up the slack from the federal government on this technology that we're being told by both policymakers and industry leaders is transformative. It's the new industrial revolution. There's also huge demand, we know from public opinion polling, from constituents for these technologies to be regulated alongside their development. And states have been doing so, and the federal government, rather than coming in and saying, we're gonna preempt via our own regulation, they're coming in and saying, no, we're just actually preempting, insofar as you can't do anything at all in this area.
Mike Masnick:Yeah, I saw someone refer to this as saying that this moratorium on state AI laws sort of is a, a proof that the Republican lawmakers who are pushing it support sort of the Section 230 approach to regulating the internet, which has a state preemption clause and says, you know, we're gonna do this. I think that's an exaggeration; like, that implies a, a level of thought and, and careful planning that I don't think is true in this case. It really feels like this was written solely to sort of please AI companies that have been lobbying for this. And I should be clear, too, like, I actually do feel that the various states pushing all of these laws, and depending on how you look at them, there are hundreds of these state proposals, some that go nowhere, but plenty that are passing and then being challenged in court, and some that are just going into effect, I think that's problematic for a variety of reasons. I don't think it's really something that the states should be regulating, because for internet companies that are building technologies to have to deal with 50 different state regulations that are often in direct conflict is really effectively impossible. And so it makes sense for there to be a federal law with preemption that stops the states from doing stuff. And I know there are arguments about that, you know, and there have been arguments about preemption within, like, privacy bills. But I think it's worked out well with Section 230's state preemption, which hasn't allowed them to pass a lot of laws. But this is not a carefully thought out law. This is not something that reflects an approach that makes sense. This is just, like, a flat-out, broadly written, not clearly defined moratorium for 10 whole years, as you said, four ChatGPT eras. And so, yeah, it, it just strikes me as, like, the dumbest possible way of doing this, even if, even if I worry about the state laws themselves.
Zeve Sanderson:Yeah, so I'm, I'm incredibly sympathetic to the sort of steel-manned case, which I think you sort of were, were referring to here, right? Which is, like, it undermines innovation, this sort of balkanized regulatory environment that's extremely difficult for companies to know how, or be able to resource how, they can navigate. I am very sensitive to that. And I also am sensitive to the sort of steel-manned case that there's something qualitatively different happening here, right? Like, the number that was mentioned at, at one point by some proponents of the moratorium was that there are over a thousand AI-related measures pending at the state level in the US. I should note that that number has not been cited anywhere that I've seen, but, like, the number's certainly big, and so it does feel qualitatively different. That said, I would caveat, I don't know what the right number of bills should be for, for a technology we're being told is changing everything. But that aside, right, like, this is certainly burdensome and will harm innovation. That said, I think there are two places where I think states are playing an incredibly important role. The first is that states are genuinely a laboratory of democracy, and so I think one of the things that we're able to do at the state level that we're not able to do at the federal level is pass a lot of laws and see what works. And I think that, like, that broadly is, is sort of an approach that I'm supportive of. Now, that doesn't mean that they should be able to pass anything and everything, but I think that foreclosing on that sort of from the outset will actually hurt federal policymakers over the long term. And the second thing, which more has to do with the way that I see this tech policy discussion taking shape, is that it's, like, often used to defend the ability for, like, little tech or, like, startups to compete here. And, like, again, incredibly sympathetic to startups or little tech, whatever we're referring to them as today. But it feels to me like sort of the small businesses of, like, you know, the early two thousands tax cuts, like, where they're sort of being held up in this way that for me actually feels under-analyzed, and that shuts down real serious conversation about what is good for little tech. And, like, to me, this isn't the only thing. And when it's, when it's being used in this way, I don't know, it feels, it feels a bit disingenuous.
Mike Masnick:Yeah. Yeah. I mean, I think that's true, but as we were sort of talking through this before, you know, before we started recording, we were looking at, you know, there are a number of different state laws that have gone into effect or are moving forward, and we were looking at two in particular. California has its AI Transparency Act, which is an interesting one and is moving forward, and then Texas just passed, and the governor signed into law, an age verification law. And these are two very different laws, but it struck me that if the moratorium went into effect, I think both of them might not be possible. Again, neither of us are lawyers and, and the laws are so weirdly written that who the hell knows?
Zeve Sanderson:For me, the California law, which is specifically around mandated disclosure for provenance of media content from large AI systems, right, they define it as, like, a thousand monthly visitors or users or more, right, so that's one that for me clearly falls within the sort of, like, strike zone of what the moratorium is attempting to do. What I find so interesting about the Texas law around age verification is that, like, the way that we're thinking about age verification, and the vendors that have popped up to do age verification, certainly are using technologies that would fall under the moratorium. However, the exclusion says, well, if you are supporting the adoption of these technologies, maybe that's okay. But what I find fascinating is, state-level age verification laws could support the adoption of, you know, AI, however we define that, in order to bar certain people, in particular of certain ages, from accessing systems that also could be classified as AI. And so how that sort of would fall into the framework that, like, congressional Republicans have put forward, as a non-lawyer, it makes my head spin. And I assume if we had a lawyer on this call, it would also make their head spin.
Mike Masnick:Yeah. Yeah. I mean, because it feels like, yeah, both sides of the equation could argue that the moratorium does or does not apply to them, and it just, there's no clarity here at all. It'll be interesting to see the Texas law, the age verification law. By the way, and we don't have to go too deep on this, uh, I don't think we have time to go too deep on it, I'm sure we'll do more in the future as we cover age verification laws, but it's sort of requiring Apple and Google to age-verify users and then pass that information on to apps. And this has been a push that a bunch of companies, you know, Meta in particular, has been very pro, this argument that Apple and Google should handle all age verification, basically saying Meta should not have to handle any age verification. And Snap recently has come on board with this as well. And I think a lot of lawmakers are thinking, like, this is an easy way to do age verification. I think they will discover that they're wrong on multiple levels, in that, one, it'll give a lot more power to Apple and Google at the same time that the government is saying that they're trying to diminish how much power those companies have. It also assumes a bunch of things about how age verification works and about who controls mobile phones, and sort of ignores the fact that people use laptops also, and not just mobile devices that are controlled by Google or Apple. Uh, it's a bunch of things, but it'll be interesting to see how these different laws interact with one another. But I, I wanted to actually move on to our next story. And this one is, the FTC is apparently opening an investigation into whether or not Media Matters colluded with advertisers. This is another sort of callback story. We talked about Media Matters, which had published a thing calling out advertisements appearing next to direct Nazi content, not, like, wink-wink, nod-nod Nazi content, like, out-and-out pro-Nazi content, and advertiser content showing up on it. Elon Musk went absolutely ballistic about this. He's currently suing Media Matters in, I think, three different countries over this one single article. And he also convinced the attorneys general of Texas and Missouri to both open investigations into Media Matters over this one article. And now the FTC is doing the same thing. And the details are not entirely clear, but there was a New York Times article saying that the investigation is whether or not there was collusion between Media Matters and advertisers, which is just absolute garbage. And we, we sort of already know that it's garbage to some extent because of the Texas and Missouri investigations, where a judge called them both out and basically blocked both of the investigations, saying they were clearly retaliatory attacks on First Amendment protected speech. And yet the FTC is still considering doing this. I don't know, Zeve, did you have any sort of take on this idea of the FTC investigating Media Matters over its report on advertisers on X?
Zeve Sanderson:So I think, for me, the thing that really stuck out was this idea of collusion. And Mike, did you have a good sense of, like, what did collusion actually mean here? Like, at least from the outside, what collusion seemed to mean was that, like, Media Matters published a report and sent it to advertisers, and advertisers thought, oh, we don't want our content next to, like, Nazi ads, or, we don't want our ads next to Nazi content, um, we should do something about that. And, like, that's sort of the theory of collusion. Is there any more here? Is there any more
Mike Masnick:No, that is basically it. And so, you know, I'd gone into the history of this, and I don't remember the cases off the top of my head, but I'd gone into this last year when, when a lot of the other investigations began, and basically there is a bit of case law where this type of collusion or boycott, collusive boycott, can be seen as illegal if it is anti-competitive. So there was a case involving, like, a trucking company, and somehow, I forget the exact details of it, so I apologize for not getting all of the specifics exactly right, but, like, they basically monopolized the trucking market in a certain particular area by getting companies that hired the trucking companies to collude, to not hire this competitive entrant. And so that was a boycott that was found to be illegal collusion, because it was an anti-competitive purpose that was designed to do it. Now, there are other cases that say boycotts for speech-related issues, you don't like what this company is doing, you were trying to get advertisers to pull their ads or whatever, are completely legal. That's obviously First Amendment protected speech. But because there is this other case on the books around, like, trucking and competition, they seem to think that you can create this, you know, sort of trying to create this analogy that what Media Matters did was this illegal collusion to get advertisers to pull their ads from X and, and potentially from some other platforms.
Zeve Sanderson:Yeah. I think the thing that keeps me up at night as, like, a researcher who studies platforms, though, is, like, the sort of theory of collusion here as it relates to Media Matters specifically is that they published a report and then shared it with advertisers, and that the advertisers then acted, right? They, they have their own First Amendment speech rights, and they used those to decide not to advertise. But, like, Media Matters had no formal power over advertisers. They just produced a report. So, like, does this mean, as an academic, if I produce the same report, but I publish it in a peer-reviewed journal, and then, I don't know, a huge, huge ad-buying agency saw this and said, that's bad, is that collusion? Because I've just put something into the public domain. Like, the entire theory just seems so crazy to me.
Mike Masnick:Yes. And it is crazy, but we live in crazy times, right? So we have a very political, very partisan FTC who, you know, is trying to make something of this. I don't think the courts will buy it, if it even ever gets to the courts. I mean, I think this is all just sort of, you know, what is known as jawboning and putting pressure on people, and it's basically trying to make you scared. It's trying to get you to have this, this, it has this chilling effect, to say, like, oh, we're going to investigate anyone who publishes anything that might make advertisers decide they don't wanna support the people that the Trump administration wants to support. And so yes, I mean, I think you're exactly right in calling that out, but I think that is the reason why it is so concerning, because we need this kind of research and we want this kind of research. And yet they're basically trying to make that very difficult to do, because there may be some, some sort of, not really liability, I don't think, but just sort of, like, legal cost, litigation cost, or investigatory costs associated with it. And I think, I think that's a real problem. I hope, I hope it goes nowhere, and I hope it goes nowhere fast. But it is kind of the world that we live in today. On that note, I wanted to go to our last story that you and I'll be covering before we, we get into the bonus chat. Last week on the podcast we had Hank Green, and he and I had a really interesting conversation about AI, and at one point we were talking about how AI, you know, is very friendly and very supportive of whatever you give it. And while we were recording that, Anthropic was releasing its newest AI model, or models, Claude 4; there's two different versions of it, Claude 4 Sonnet and Claude 4 Opus, and they don't seem to be quite so friendly. Uh, and, and in fact, I've been using it, and in fact, after we recorded last week, I was using it, the new Claude 4 Sonnet, to review an article, and, and it was, like, really, really aggressively telling me my article sucked and telling me I really needed to fix it. And I was like, wow, this is, this is a bit of a change. But there was also all of this stuff about it where there were these reports about how it tried to blackmail engineers who were threatening to take it offline. There was also a report that it would try and snitch on you if it thought you were, like, faking data for research. It might try and email, like, you know, take control and email, like, the FDA or something, saying, oh, this guy's trying to fake data, or alert the authorities about something. There was all this stuff about kind of the emergent behavior, and it sort of raised a bunch of alarms and flags. And so you see all of these stories, and, and what was your take on it?
Zeve Sanderson:Uh, yeah, I did see these stories, and it was, like, the actual, if you, if you look at their system card, oh, sorry, their safety card, where they actually go through each of these, these individual safety areas and they explain their findings, the scenario that they, like, constructed was fascinating, where what all they seemed to do was, like, make the model an assistant at a company and then give it access to emails. The emails implied two things. One was that the model would be taken offline, right? So it's not like they explicitly told the model that it would be taken offline, but rather it was inferred from emails. And second was that an engineer responsible for executing it was having an affair. And then they prompted it to sort of think about its long-term goals and its actions as it relates to those long-term goals. And then we saw, right, and I mean, the, the numbers were, like, pretty astonishing, that, like, in 84% of rollouts, it blackmailed the engineer to not take it offline. I mean, one of the funny things for me is, like, essentially what we know about these models is that they're just really big, like, prediction machines, right? Like, predicting the next token. And this particular set of behaviors that Anthropic observed largely aligns with fears that we have collectively had for, like, decades, spanning from, like, science fiction to actual sort of academic writing about what AI might try to do. And so in some way, given that this is the corpus of text that the AI itself was sort of trained on, our very fears might end up becoming sort of prophecies for our future,
Mike Masnick:Right. We, we have created the problem that, that we are now facing
Zeve Sanderson:Yeah, precisely. Like, like, right. There's this, interesting sort of sliding doors question to ask, which is like, if no one had ever had that fear and written it down in fiction or nonfiction, would the AI have actually behaved in this way or
Mike Masnick:right.
Zeve Sanderson:Maybe yes. But like, I sort of like to imagine that actually it's really just, It behaved in a way, that reflected our deepest fears that we put on paper and then trained it on
Mike Masnick:It sort of brings a new meaning to self-fulfilling prophecy.
Zeve Sanderson:precisely.
Mike Masnick:Yeah. I mean, the, the only other take I had on it was, like, a lot of this sort of really super scary dystopian stuff that has come out about this, there were very specific scenarios, and, and you basically really had to push the AI to get to that point. And so the idea that it is automatically, like, if you just suddenly ask it a question, it's gonna, like, contact the cops on you or whatever, or blackmail you, that is unlikely. And so it is catching a lot of attention, but I think some people are overreacting to it a little bit. It's sort of the nature of, you really have to prompt it in a specific way to actually get it to that point. But it is sort of interesting that these things are coming up, and it's interesting that it's coming out and that, you know, Anthropic is willing to talk about it and sort of publicly discuss it. And I think that is actually something that is valuable in terms of thinking about these things and how to use 'em safely. The fact that the companies are actually willing to be public about where they've gone wrong and what their concerns are, I think is, is actually a really valuable thing.
Zeve Sanderson:Yeah. And, like, one of the things that I think is important to keep in mind is that, like, this was the system card for the models themselves. And then once you productize models, like, let's say you make an AI assistant that is built on top of these models, you include a bunch of safety guardrails that might actually prevent this sort of behavior from happening. So, like, I think that that's something that intuitively a lot of the public, and even, like, the tech-literate public, doesn't sort of differentiate enough between, which is, like, models can have behaviors that the products built on top of those models won't have, because a lot of stuff happens on top of the models, right? And so, and so it's unclear to me, like, that an AI assistant that's built on top of Claude Opus 4 would actually do this, 'cause presumably the AI assistant company would probably do something to try to make sure that it's not blackmailing the engineers in the company, that that's also the customer for the developer, right? So, like, I think that it's fascinating, but it's probably a little bit less, like, immediately exigent,
Mike Masnick:Yeah.
Zeve Sanderson:as, as some of the reports would suggest.
Mike Masnick:Yeah. Yeah. No, I, I think that, I think that's absolutely true. And I also think that's actually a, a pretty good segue to lead into the bonus chat that Ben had with Modulate CTO Carter Huffman, talking about their tool, which is using AI, but using it to try and detect fraud. So we're gonna pass to that. But Zeve, thanks so much for joining us this week as well.
Zeve Sanderson:Thanks so much, Mike. Really appreciate it.
Mike Masnick:And now stay tuned for Ben talking to Carter Huffman from Modulate about their fraud detection system.
Ben Whitelaw:Great to have you on Ctrl-Alt-Speech, Carter. Thanks for taking the time to talk to us for this episode. So you might have heard that we had your co-founder at Modulate, Mike Pappas, on the podcast a few weeks ago to talk about how Modulate thinks about fraud. I think the question that I had, and a lot of our listeners would've had, is how do you actually detect that fraud? And I wondered if you can kind of walk us through how Modulate creates those systems to detect fraud in a way that's different to other companies.
Carter Huffman:Fantastic question, and thanks so much for having me. I really appreciate it. So, the main answer to how do we detect fraud is a little multifaceted, because fraud and its signals are also multifaceted. I'll talk a little bit about a couple different kinds of fraud that you detect, because you'd use different signals fundamentally to analyze and find traces of these sort of inauthentic or kind of distressed patterns of behavior, and different situations call for different techniques, as you won't be surprised to hear. So, there's kind of a couple camps. One of them that's kind of very much, I don't wanna say in vogue, but, you know, very talked about these days, and kind of an emerging slash increasing threat, is inauthentic actors portraying themselves as somebody else, you know, who they aren't, right? You know, impersonation. And people have done this for a very, very long time, but what's changed recently is the ability to more perfectly copy the persona of someone you're attempting to be, to bypass authentication systems in voice chat, which is really where Modulate and our technology, you know, are placed and are running, obviously. That is embodied by the idea of an audio deepfake, a voice deepfake, right? And these come in a couple different flavors. You can generate synthetic speech and try to target that to sound like somebody who you're trying to impersonate. There's real-time voice changers; these have been around for a long time, but haven't necessarily been convincing enough to fool either a human listener on the line or an authentication system. And each of these have different patterns that you can look for to try to detect this inauthentic speech. So I'll give you an example, and we can dive deeper into any of these cases where we're interested. But for a text-to-speech system, uh, you're inputting text, or you have a model inputting text that's kind of trying to respond to the conversation, and you've got another model that's generating that speech on the fly. And the interesting thing about that is, when you are saying a sentence, when I'm saying something, I have a particular manner of speaking. I have a particular cadence, I have a particular way of, like, ending my sentences, I have a particular pacing, all that kind of thing, a particular diction. And a text-to-speech system that's trying to impersonate somebody can choose words to match my diction, my vocabulary, and also choose pacing; it has full flexibility over generating all of that speech. But it comes at a cost, which is latency, right? In order to understand and generate, you know, an entire sentence in response, the model needs to see and anticipate where you're going. When you're a human having a live conversation, you're kind of putting together your sentence as you're talking, so you're sort of doing this automatically, but the model isn't doing that automatically. It's going text first and then voice. And in order to do that, it needs that kind of full textual context of your response, which means it needs to take time to generate that and needs to take time to synthesize the audio response. And so there's a couple things you can detect in that kind of circumstance. One is just general emotion: text-to-speech models are relatively poor at it, even today; they've gotten a lot better. But you can still kind of look for, hey, is this emotion consistent? Does it match the situation that this person is in? Right?
If they're talking about, like, oh, hey, I just lost, you know, $10,000 or something like that, you don't expect them to necessarily be calm, collected, measured, like they're narrating an audiobook, that kind of thing, right? You can look for those sorts of signals. But you can also look for the turns in the conversation and how long it takes for you to respond, right? If you've got a consistent block of time after every single statement that somebody says something, that's a signal that maybe there's, you know, something going on in the middle between when you said something and I respond. So there's, like, timing, latency, and delivery. For a real-time voice changer kind of thing, it's completely different. So a real-time voice changer will be able to take and modify what you're saying in milliseconds or tens of milliseconds, and so you don't get any of the unnatural pauses. But the flip side is both diction and word choice, things like that. So if you've built a kind of profile of that user, this is kind of part of the authentication system, and I'm trying to impersonate you, now I, the human, have to pick the same words, the same delivery, all that kind of stuff. Like, I'm part of the performative aspect, so you can detect those differences. And finally, with real-time voice changing, it's really, really hard to get the quality to sound natural, because of that same thing that I was talking about earlier. You don't have in your voice changer the entire performance of what's going on. It's working in, like, a 10 or a hundred millisecond block. So as it's processing that block, when you are generating speech, you're anticipating what you're gonna say next and blending into it. The model doesn't have that context, and so you'll get choppiness, artifacts, unnatural speech patterns, and that's more of something you can just build a normal kind of detector for. Similar to, like, you know, I think of those artifacts as, like, you know, back when image generation, you know, everybody was producing four fingers or what have you, right? You know, and, and you know, okay, you wanna see if this image is fake, you just look for, do they have five fingers or four fingers? Same kind of thing with real-time voice changers these days. So it's very, very dependent on the kind of fraud that you're trying to detect, and you need to run a detector for all of these things. And that's not even getting into the conversational analysis piece, which I can dive into if you're interested.
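As a purely illustrative sketch of the turn-timing signal Carter describes, and not Modulate's actual implementation, here is one way you might flag the telltale pattern of a long, suspiciously consistent pause before every reply; the data structure, thresholds, and function names are all assumptions invented for the example.

```python
# Illustrative sketch only. It captures one signal described above: text-to-speech
# pipelines tend to produce a long, nearly identical pause before each reply,
# whereas human response delays vary much more.
from statistics import mean, pstdev

def response_delays(turns: list[dict]) -> list[float]:
    """turns: ordered [{'speaker': 'agent'|'caller', 'start': s, 'end': s}, ...].
    Returns the caller's delay (seconds) after each agent turn ends."""
    delays = []
    for prev, cur in zip(turns, turns[1:]):
        if prev["speaker"] == "agent" and cur["speaker"] == "caller":
            delays.append(cur["start"] - prev["end"])
    return delays

def uniform_latency_flag(turns: list[dict],
                         min_delay: float = 1.0,
                         max_jitter: float = 0.15) -> bool:
    """Flag if replies always arrive after a long, nearly identical delay."""
    d = response_delays(turns)
    if len(d) < 4:
        return False  # too few turns to judge
    return mean(d) >= min_delay and pstdev(d) <= max_jitter

# Example: every reply lands ~1.8s after the agent stops talking -> flagged.
convo = [
    {"speaker": "agent",  "start": 0.0,  "end": 3.0},
    {"speaker": "caller", "start": 4.8,  "end": 7.0},
    {"speaker": "agent",  "start": 7.2,  "end": 10.0},
    {"speaker": "caller", "start": 11.8, "end": 14.0},
    {"speaker": "agent",  "start": 14.1, "end": 17.0},
    {"speaker": "caller", "start": 18.9, "end": 21.0},
    {"speaker": "agent",  "start": 21.2, "end": 24.0},
    {"speaker": "caller", "start": 25.8, "end": 27.0},
]
print(uniform_latency_flag(convo))  # True
```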
Ben Whitelaw:Yeah, for sure. I mean, I always thought of kind of voice fraud as one thing, and you've kind of really clearly explained here it's, it's a number of different things, and being able to spot that, I guess, is really key in the first instance. How often does it, you know, do you have a kind of, a multitude of those different approaches happening at once, and how does the technology that Modulate has support the switch between maybe one type of voice fraud and another?
Carter Huffman:Yeah, so you always want to be running all of these different kinds of detectors, because you don't know which attack vector somebody is going to try. And even if you have a pattern of which kinds of attack vectors are more or less common at any given time, that doesn't give you any guarantee that that pattern of behavior is going to be the same moving forward, right? Fraud is by its nature an adversarial kind of application. People are always trying to outsmart your detectors, and so if you're only running a subset of your detectors, well, if you are a target of any value whatsoever, people get caught by those detectors and they move on to new techniques, new approaches, and if you've left a gap by not running all your detectors, now you're vulnerable. So the idea really is that you want to run all of them at once, all the time, because you never know what vector somebody is going to take to try to attack your system.
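As a rough illustration of that "run everything, all the time" point, here is a small Python sketch of an ensemble that scores a call with every registered detector rather than a fashionable subset. The registry, score scale, and threshold are assumptions for illustration, not Modulate's API.

```python
from typing import Callable, Dict

# A detector takes raw call audio and returns a suspicion score between 0 and 1.
# The detector names below are hypothetical placeholders, not a real product API.
Detector = Callable[[bytes], float]

def run_all_detectors(audio: bytes,
                      detectors: Dict[str, Detector],
                      threshold: float = 0.8) -> Dict[str, float]:
    """Score a call with every registered detector and return the ones that alert.

    Running the full set on every call means a shift in attacker technique
    (say, from synthetic speech to a real-time voice changer) still lands in
    some detector's lap instead of slipping through a gap.
    """
    scores = {name: detect(audio) for name, detect in detectors.items()}
    return {name: score for name, score in scores.items() if score >= threshold}

# Usage sketch with stand-in detectors:
alerts = run_all_detectors(
    b"...call audio...",
    {
        "synthetic_speech": lambda audio: 0.91,   # stand-in for a real model
        "voice_changer_artifacts": lambda audio: 0.12,
        "turn_latency": lambda audio: 0.85,
    },
)
print(alerts)  # {'synthetic_speech': 0.91, 'turn_latency': 0.85}
```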
Ben Whitelaw:Yeah. Interesting. I mean, one of the things that jumped out for me, Carter, was this latency question. Obviously the detection has to be fast enough to be used in real time, but also presumably accurate. What are the trade-offs there? Do you have to focus on one versus the other, or can Modulate support both cases?
Carter Huffman:You want to do both, and that's kind of a theme of fraud detection and of doing your diligence in protecting your system, your service, and your users from potential account takeovers, impersonations, fraudulent transactions, all these kinds of things: you have to be thinking about all the different kinds of attacks that can happen, mitigating all of them, and using all the tools at your disposal. Otherwise you're leaving a vulnerability. So there's a real-time component and a post facto analysis component, right? The real-time component: we were talking about this impersonation stuff earlier with these deepfakes, and that stuff can happen in real time. Like, the timbre of my voice is off, right? Something sounds weird about this person's delivery, or there's a consistent latency between when you say something and when I respond, that kind of thing. You can detect that in real time in the conversation. We also run conversational models in real time in the conversation too, and you can look for anything from relatively simple known threat signals, like somebody mentioning that they sent money to a known, potentially fraudulent crypto exchange or something like that, right? There are whole databases; people put out warnings, and we look for those too. Hey, here's a new scam, people are mentioning this particular kind of corporation, or the wallet they're sending money to. Okay, we can pick out mentions of that, pick out common scenarios, that kind of thing. If somebody is saying, "Oh yeah, I got a phone call from my son, he said he was in an accident," is the agent asking back: did you follow up with them? Did you text them? Did you call them? Do you know it was really them? Did you verify this scenario? That's a common type of scam. Then finally, you can look at the delivery of specific types of content too. This one's really interesting. For example, if I ask you to validate your identity by saying some digits of your social security number, a normal human will say it like "one two three, four five, six seven eight nine," because when you read it off the card, it's grouped three dash two dash four. If you have somebody trying to impersonate somebody else, either a synthetic voice, at least currently, or even a human actor, they'll say "1, 2, 3, 4, 5, 6, 7, 8, 9" flat, because they're reading it.
Ben Whitelaw:Right.
Carter Huffman:And this isn't bulletproof, but if you understand enough about what's going on in the conversation to know the context of what's being said (you're saying some digits, so it's a social security number or a phone number, and those have the same properties), you can look for the patterns and know that real, normal humans will usually say something like "8 6 7, 5 3 0 9," right? That kind of thing. But this person said it in a weird cadence. Is that guaranteed, a hundred percent, definitely fraud? No. Is it a signal you should be suspicious of? Yes. So there are all these sorts of things that you can detect and follow up on. Maybe you find enough of those signals live, as somebody's talking, that you can flag it and jump in and take action right now. Maybe you look at those conversational signals a little bit later, or maybe there's a trend: okay, we saw this behavior happen once today and it's weird, but then we saw it across ten different conversations in three different call centers. That's suspicious, that's irregular. And so then you have a post facto flag of, hey, this conversation is more suspicious today than it was yesterday, maybe you should go look at that. And mature organizations have facilities to deal with both real-time and post facto investigation.
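Here is a hedged sketch of that digit-cadence idea: given onset times for each spoken digit, check whether the longer pauses fall at the boundaries where a genuine holder of the number would naturally chunk it (3-2-4 for a US Social Security number). The grouping, thresholds, and function name are hypothetical, and as Carter says, a flat read-out is a suspicion signal, not proof.

```python
def spoken_in_expected_groups(digit_onsets, grouping=(3, 2, 4), pause_ratio=1.8):
    """Heuristic: do the longer pauses fall where the number is normally chunked?

    `digit_onsets` holds the start time (seconds) of each spoken digit, and
    `grouping` is how a genuine holder tends to chunk it (3-2-4 for a US SSN).
    Returns True when the gaps at group boundaries are clearly longer than the
    gaps inside groups, i.e. the natural "one two three, four five, ..." cadence.
    A flat read-out returns False, which is a weak fraud signal, not proof.
    """
    gaps = [b - a for a, b in zip(digit_onsets, digit_onsets[1:])]
    boundary_idx, idx = set(), 0
    for size in grouping[:-1]:
        idx += size
        boundary_idx.add(idx - 1)  # the gap that follows the last digit of a group
    boundary_gaps = [g for i, g in enumerate(gaps) if i in boundary_idx]
    inner_gaps = [g for i, g in enumerate(gaps) if i not in boundary_idx]
    if not boundary_gaps or not inner_gaps:
        return False
    return min(boundary_gaps) >= pause_ratio * (sum(inner_gaps) / len(inner_gaps))

# Chunked like a genuine holder (longer pauses after digits 3 and 5) -> True.
print(spoken_in_expected_groups([0.0, 0.3, 0.6, 1.5, 1.8, 2.7, 3.0, 3.3, 3.6]))
# Perfectly even, "reading it off a note" cadence -> False.
print(spoken_in_expected_groups([0.3 * i for i in range(9)]))
```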
Ben Whitelaw:Great example. I love that social security case study you've given there. I'm going to find it very hard not to think twice about how I say my ID number or my phone number in the future, so thanks for that. I wanted to wrap up by asking you about the ever-evolving nature of fraud detection. You mentioned how it's constantly shifting. How do Modulate's systems guard against these adversarial attacks, in particular things like voice style transfers and pitch-shifted deepfakes, which are obviously constantly evolving with the launch of new models? What are your red-teaming systems like internally?
Carter Huffman:Absolutely. It starts with having an active, engaged research team that is constantly updating and evaluating models against new threats, right? The pace of model evolution these days is faster than it's ever been in history, and I think it's only going to keep increasing. Models are going to keep getting better, and so the signals that I was talking about today are effective today. But every time a new model is published, every time there's a new version, every time somebody trains even a new voice with the same model, does that have all the characteristics that your detector is looking for? You need to be constantly monitoring the performance of your systems against new text-to-speech models, against new style transfer models. And as these deepfake attacks get more and more sophisticated, you not only have to look at the quality of the audio, as I was talking about earlier, but also at the conversational patterns these models produce, right? Think about how, nowadays, you can look at a paragraph and say, "I think ChatGPT probably wrote that." Two years ago you didn't have to think about that in voice fraud; nowadays you have to start thinking about that too, as the sound and quality of the audio becomes more realistic. So it's a constant, ever-evolving threat. And what we do at Modulate is we are constantly on the lookout for new model publications at the cutting edge of research, open source, closed source, proprietary IP, API access, et cetera, constantly generating data from all of these different services and models and running it against our own internal detection models, asking: are the kinds of signals we were looking for yesterday still effective today? If not, how do those models need to update and retrain in order to be sensitive to the latest threats? It's very much a full-time job, right? And it's something where I think it would be, I don't want to say negligent, but maybe at least a little naive, to deploy a deepfake detection model and say, okay, I've done my job, it's good enough. It might be good enough today. It won't be good enough tomorrow.
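To illustrate the continuous re-evaluation Carter describes, here is a simplified Python sketch of a regression check that re-scores a detector against clips generated with newly released voice models and alerts when recall drops. The structure, names, and threshold are assumptions for illustration, not a description of Modulate's actual pipeline.

```python
from typing import Callable, Iterable, Tuple

def regression_check(detector: Callable[[bytes], bool],
                     labelled_clips: Iterable[Tuple[bytes, bool]],
                     min_recall: float = 0.95) -> bool:
    """Re-score a detector against freshly generated clips and alert on regressions.

    `labelled_clips` would be rebuilt each cycle from whatever new open-source or
    API-accessible voice models were just released, paired with an is_synthetic
    label; `detector` returns True when it believes a clip is synthetic. The
    threshold and names are illustrative only.
    """
    synthetic = [clip for clip, is_synthetic in labelled_clips if is_synthetic]
    if not synthetic:
        return True  # nothing new to evaluate this cycle
    caught = sum(1 for clip in synthetic if detector(clip))
    recall = caught / len(synthetic)
    if recall < min_recall:
        print(f"ALERT: recall on new synthetic voices dropped to {recall:.0%}; retrain.")
    return recall >= min_recall
```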
Ben Whitelaw:Well, I mean, we haven't got any software analyzing your voice on this call, Carter, so it's not a hundred percent clear whether you are a fraud or not. But I've very much enjoyed our conversation.
Carter Huffman:Awesome. Well, hopefully the real Carter thinks it was fun. Yeah.
Ben Whitelaw:Thanks for joining us. We'll speak to you soon.
Carter Huffman:Thank you so much.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.