Ctrl-Alt-Speech

Don't Believe What This Podcast Says About Misinformation

Mike Masnick & Ben Whitelaw Season 1 Episode 64

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

Mike, you're back in the chair after a few weeks of travels. Brilliant to have you here. Um, don't sound so scared. But in honor of your recent travels, I thought we'd use an app that is used on both sides of the pond, Waze, which helps you get from place to place. And its easy prompt is: where to?

Mike Masnick:

Well, as you mentioned, I've been traveling for the last few weeks, so "where to" is nowhere, finally. I have been living out of a suitcase and on the road for almost three weeks, and I am happy, if exhausted, to be home and not trying to figure out train schedules or traffic or where the hell I was going. I can just be home.

Ben Whitelaw:

That's nice.

Mike Masnick:

What about you? Where to?

Ben Whitelaw:

Well, um, where to? Well, we've got a bunch of stories packed up in our suitcase. We're ready to jump into the Ctrl-Alt-Speech car, and we are heading to a better understanding of this week's stories about content moderation. So jump in for the ride. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's July the 3rd, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. This week we're talking about the panic over misinformation, why it's still a bad time to be a moderator, and what Brazil's Supreme Court judgment means for the internet. I'm Ben Whitelaw, the founder and editor of Everything in Moderation, and I'm with Mike, who is in his office again. I can see him; he's not got a suitcase near him. Uh, Mike, great to have you back.

Mike Masnick:

Yeah, it's good to be back. There are suitcases actually right on the floor over there, because I have not fully unpacked everything.

Ben Whitelaw:

You got back late last night, didn't you? You're doing this with very little sleep.

Mike Masnick:

Yeah. I mean, I actually did get a fair amount of sleep last night, but that was just because I collapsed really early. But yeah, it's been a wild few weeks. It was fun to listen to the two episodes that you did without me, which I thought were both really fascinating and very interesting, and I enjoyed listening to those while I was out and about. But, uh, happy to be back, happy to be doing this again. I miss it when I'm not doing it.

Ben Whitelaw:

Yeah, I think it's a good way to kind of process thoughts about some of these topics. I think Bridget Todd actually said that to me last week, perhaps when we weren't recording, but she says that the podcast that she does is her way of consolidating and kind of organizing her thoughts on stuff. So I agree: when I don't do this, it all becomes a jumble. Um, listeners will tell us whether it actually sounds as coherent as it feels.

Mike Masnick:

Right.

Ben Whitelaw:

But yeah, it's nice to have you here. We won't cover this story as one of our chosen stories today, but we wanted to touch in the intro on the big story that happened just after we recorded last week's podcast: the Supreme Court judgment on age verification, which came just at the back end of last week. I know you've written a bit about it, you've thought a lot about it, and you've obviously been covering this topic a lot on Techdirt over the years. Do you wanna give us a brief overview of its significance?

Mike Masnick:

Yeah. This is the Free Speech Coalition versus Paxton case, and it came out last Friday. We thought it was gonna come out Thursday, and both Thursday and Friday morning I was sitting in my hotel room while the family was trying to busy themselves, dealing with the expectation that the ruling would come out. And then it did come out on Friday. You know, I would say it was a very bad and very problematic ruling, and we're gonna be living with it for a long time. The nature of the case specifically was about adult content websites and the law in Texas that said there had to be age verification on those websites. And that went against a number of precedents, specifically around the internet and attempts to deal with child-appropriate material, and age verification in particular. The Fifth Circuit in Texas had originally ruled, arguing that a different case, from like the 1960s, I think, about age verification for adult magazines, meant that this was justifiable and that they could ignore the precedent. And what it really got at was what standard you use to view these kinds of cases and whether or not they violate the First Amendment. And so the ruling that came out of the Supreme Court was 6-3, the sort of Republican-appointed justices against the Democratic-appointed justices. And the ruling was written by Clarence Thomas, and it really made a mess of a lot of First Amendment law, in ways that I think we're gonna be hearing about and dealing with for a long time. It didn't wipe out the idea that age verification could violate the First Amendment, and for people who think the case means you can now pass laws that say social media requires age verification, I don't think the ruling goes that far. But it does open up a lot of language that means states are gonna try, and we're gonna see a whole bunch of cases challenging them, and we're gonna go back on this again. And people are going to quote some of the, I would argue, very sloppy and inexact language that Clarence Thomas used, and the failure of the majority opinion to really grapple with what it was ruling on, or to even really apply the standard that it said has to apply here. So I think it's a very messy case. I think it was a very bad ruling. I think it's very problematic for free speech on the internet in general. I think it upends, and really doesn't deal carefully with, earlier precedents, and in that sense actually weakens a lot of important precedents about speech online. And so it worries me a lot.

Ben Whitelaw:

Yeah. And lots of other privacy and security experts who've focused on age verification and age assurance are also very concerned. It's worth saying, however, that there is a large group of child protection advocates who are celebrating this and what it could mean, and I think there are a lot of those people out there. There are probably people who listen to the podcast who also feel positive about this judgment. But you're right, it does open up a huge can of worms in terms of legal precedent. There are a couple of really good pieces on it. We decided not to talk about it as one of our main stories today, partly because we did a big episode last week about age verification, where we talked about being a teen on the internet with Bridget Todd. Definitely worth going back and having a listen to that, so we didn't wanna do it twice in two weeks. But there is gonna be more to say, and we expect this to come up in a future episode.

Mike Masnick:

Yeah, it's definitely going to, and there will be other cases, and the interpretation of what was said in this opinion is really gonna matter. So this is an issue that isn't going away, and we'll hear plenty about it. But yeah, I was pretty disappointed in the ruling. I think it's gonna have widespread impacts that will generally be negative, and I think that even the people who are celebrating it will regret that celebration over time.

Ben Whitelaw:

Yeah, indeed. You know, last week's episode was pretty punchy, a sprightly 45-minute run-through of the week. I don't expect this week to be the same, Mike, partly because you're back in the chair.

Mike Masnick:

Are you saying I'm a little bit more verbose?

Ben Whitelaw:

I'm saying that you have a lot to contribute. Um, and also 'cause the topics that we're gonna be looking at are, you know, pretty contentious, starting with the first one we're bringing up today. This is, I guess, an equally difficult topic to really reconcile, and that is: how damaging is misinformation? This comes on the back of a big report by a think tank, and then some additional commentary that you've noticed, which we're gonna weave together to kind of help us understand this.

Mike Masnick:

Yeah. So what happened was, last week, David and Sarah at the Cato Institute put out this report called The Misleading Panic Over Misinformation and Why Government Solutions Won't Work. And the Cato Institute, for folks who don't know, is a very traditionally libertarian think tank in Washington, DC. They have a lot of stuff that I sometimes agree with, and a lot of stuff that I disagree with. But traditionally, unlike some of the other think tanks in DC, they have tended to be more principled and actually do stand by the things that they talk about. And again, I hesitate to say this because libertarian means different things to, like, everybody; there's the famous quote, which I won't get exactly right, which is that every libertarian has their own definition of libertarian, and it's what they are, and everybody else is wrong. That is effectively the intent of the quote. But the Cato Institute, you know, they've always been very strong on things like being pro-immigration; they are pro criminal justice reform. A lot of libertarian groups are really just right-wing front groups, and the Cato Institute is not like that. And so this report is, I think, fairly typical traditional libertarianism, and there's fewer and fewer of those voices in the online speech world, in that it goes through a pretty detailed analysis of how there is a moral panic about mis- and disinformation. And this is certainly something that I've talked about in the past: there's sort of this pervading belief out there, at times, that misinformation is like this toxic poison, that if you are exposed to it, your brain will melt and you will believe any nonsense that is out there. And that's wrong, right? And I think it's important to state that that's wrong, because when you believe that misinformation is so toxic and so poisonous that it will just melt anyone's brain if they come into contact with it, then the solutions for how you deal with it are very, very different. And so I think it's important to have these kinds of things that push back on it. And I'm reminded of Joe Bernstein, who years ago wrote a really thoughtful piece for Harper's. He actually spent about a year researching it, arguing that misinformation was overhyped, often actually by the social media companies themselves. Which sounds counterintuitive, because they're often criticized for allowing misinformation. But his argument was that they actually somewhat enjoyed the idea that misinformation, that any information on their services, was so powerful that it could change minds and change perspectives, because that allowed them to go to advertisers and say: hey, if the information on our platform can bring about crazy conspiracy theories, just think of what it will do to get people to buy your widget. Um, and so I think there's a lot of really thoughtful analysis in this piece about why people are overreacting in some ways to misinformation; that the term misinformation is often used in a broad and vague way that allows people to fold in all sorts of other information that some other people wouldn't consider misinformation; that there's a lot of overreacting, or giving too much power to the information, when, let's say, electoral things happen that people don't agree with. People are looking for something to blame.
So I think the piece is really good in laying that out. It also has some interesting ideas around solutions, including things that I have talked about and believe in strongly, around giving more power to the end users and empowering them to determine what sorts of content they wanna see and what sorts they don't. And so, overall, I think there's a lot of really good stuff in here.

Ben Whitelaw:

And just to clarify, Mike, you're talking about the kind of protocols paper there, right? Those are some of the ideas that you wrote about, which have gone on to be fundamental to services like Bluesky. Those are kind of picked up by this paper.

Mike Masnick:

Yes. So this paper certainly talks about those kinds of things, and I think there's value in that. But the paper isn't perfect, and it's received some criticism, and some of the criticism I actually think is interesting and correct as well. One of the most vocal critics of the paper is Eliot Higgins, who's from Bellingcat. For people who don't know Bellingcat, which you should, it's a journalism outfit, but they use open source information to expose all sorts of stuff. Is there a better way to describe Bellingcat?

Ben Whitelaw:

No, I don't think so. I'd say they're the crème de la crème of OSINT and investigative journalism.

Mike Masnick:

They're investigative journalists. They've exposed all sorts of crazy Russian conspiracies; they've done some really amazing work. I mean, I think they figured out who the assassins were who targeted Russian expats hiding out in Britain, who were assassinated and so on. They've done some amazing work, and Eliot is very, very thoughtful. And so he has a criticism of the piece that basically says that they underplay the impact of misinformation. And I think he's also right. So even though Eliot really criticizes the paper, I think he's right that the Cato piece sort of brushes off the actual concerns with misinformation and assumes that, because there is an overreaction, and there is a moral panic, and there are people who act as though misinformation is this toxic, all-powerful, brain-controlling substance, that means we can dismiss it entirely. And Eliot's point is that that's not exactly true. And I think he's right that there are communities of folks who are so built up around misinformation that supporting their viewpoint becomes tribal, such that they're just looking for confirmation bias on everything, and anyone who challenges those things must be wrong or evil; that it creates this sort of fractured society. And that's true, and we're seeing that. The question is: how do you deal with it? People who look at Eliot's view might think that the way to deal with it is, well, you have to crack down on misinformation, and that's certainly the view that a lot of politicians are taking. But that doesn't deal with the difficulties of that, or how that expands, or what the Cato paper fears, which is that when you say the regulations have to crack down on misinformation, well, who defines what that is? It opens up this possibility of governments coming in, and, you know, Trump runs around all the time and says fake news this, fake news that. If you allow regulations to take that on, you actually may empower more of the kind of problems that people are afraid of here.

Ben Whitelaw:

Yeah, and the irony is that these two pieces, this kind of broader debate, are happening in the same week that the EU's voluntary code of practice on disinformation, which we've talked about in various guises in the past (it was at the center of the beef between Thierry Breton, the former EU commissioner, and Elon Musk, and there have been many battlegrounds over which it has been fought), actually became part of the Digital Services Act. So it became a kind of formalized part of the DSA this week, which is only going to, I think, further anger folks working for platforms, and probably US elected officials, including Jim Jordan, who I'm sure has taken note of this. But yeah, the timing is very ironic, as you say, and there's a whole kind of sideline of concerns about what it might lead to.

Mike Masnick:

Yeah. And there's a battle here, and I think it's actually important to lay out one of the factors of this battle, which is that you can argue that misinformation and disinformation is having an impact, and is a problem, and is something that society needs to deal with. But then the question is: who and how? Right? I've already raised the concerns that I have about putting that emphasis on the government, because they will abuse that power to their own advantage, often to the disadvantage of, say, their political rivals. But then the question is, should it be the platforms themselves that have to deal with it? And that leads to other problems and other concerns. And so the sort of DSA approach is effectively: we're gonna force the platforms to deal with it. Which I think is not a good solution either. I do think that platforms may have their own responsibility to deal with certain kinds of mis- and disinformation, but that is something that they need to figure out on their own, effectively. You know, where this all comes down to is that all of these things, if you take a step back, are societal-level issues. Society is a mess right now. People are extremely messy, and we're dealing with the consequences of some of that globally, and discovering that there are a lot of messes to solve. And the idea that we're going to solve them by the EU telling Mark Zuckerberg to remove disinformation, I don't think that fixes things. And I think that the people who are complaining about the Cato paper seem to think that there's some sort of magic wand you can wave: if only social media were better about handling this stuff, it would magically solve these things. And I don't think we've seen that to be true. In fact, while people will argue with me, people will look at the world today, and they'll look at Trump and everything that he's doing, and say, well, that is because of social media misinformation. But you could just as easily, and with just as much credibility, probably, argue that Trump being elected last year was backlash to the companies overreacting, right? And this was the argument that Mark Zuckerberg made: that they got so much pressure around COVID and Trump and other things that they overreacted and pulled down too much information, and there was a backlash, and there were alternative services, and people were mad and saying, oh, they're trying to censor us. A lot of that is not true, as we've discussed and detailed, but it gives off that narrative. And if the idea is that, oh, the EU now steps in and tells these companies that they have to do this thing, you're gonna get a similar kind of backlash. Even if it's bad faith and it's motivated by ridiculous people, you have to take that into account. These are larger societal things, and you don't fix them with a magic wand, and you don't fix them by telling Mark Zuckerberg or Elon Musk that they have to moderate in the way that you want. That's just not how it works. You really have to get at the underlying issues, and nobody really wants to talk about that, including the Cato paper, which talks about some of the problems. And Eliot's piece talks about some of the problems. I sort of joked before we started recording that it's like the classic blind men and the elephant: everyone's looking at different parts of the problem, but nobody has a complete solution, because the world is messy.

Ben Whitelaw:

Yeah. And there's a third set of eyes looking at the elephant that we should talk about, a third piece that we'll bring into this. But just on Eliot's piece, for Byline Times: the point that he makes is that the fact that misinformation proliferates, in cases like COVID, in cases like QAnon, in cases like elections, has led to people becoming kind of unable to have conversations and reasoned debate with each other. So basically these two pieces, the Cato paper and the op-ed, are almost suggesting that the cause is different: they're putting the cause as either misinformation itself, or misinformation being the result of the societal problems that you mentioned. And I just think that's almost unresolvable, you know? And then the third piece that we'll come onto is something that I was thinking about as well. We just don't have a collective, agreed view of the problem that is misinformation. It's like the definition of a wicked problem, you know, in that it's difficult to define, there's no solution, and there are conflicting interests about how to resolve it. Are we just gonna go round and round in circles forever talking about the problem of misinformation? Or is it just because this is a relatively new phenomenon on the internet?

Mike Masnick:

Yeah. I mean, it's interesting, and I'm gonna take a slightly circuitous route to answer your question here. But, you know, I noticed a thing: I sort of presented Eliot's piece unfairly, as if he was supporting the idea of regulatory demands on moderation. But he's not. And it's funny in some ways, because he is kind of arguing for better discourse, right? More thoughtful discourse, more of an approach towards truth. And you could argue that the Cato piece is arguing the same thing; it talks about the marketplace of ideas and more speech. But Eliot is arguing that right now we're seeing bad faith actors take over these marketplaces, leading to situations where it's all tribal: it's not about actually finding truth, it's not about using the marketplace of ideas to find truth. But he still wants this wider discussion that is focused on truth. He's looking for ways to create more of a liberal approach towards, you know, how do we get the world to agree on certain basic facts?

And so, to get to your question: we're in this moment of upheaval. The comparison has been made many times by many people, and I'm certainly not the first, and I'm cribbing on the work that lots of experts have done. But when the printing press first came about, it took society a very long time, arguably a hundred to two hundred years, to figure out how to deal with it. Before that, all of the information was held by a few people, and everybody else, the populace, had no access to the written word, other than what was shared with them by, you know, often priests. And there was a lot of upheaval in those hundred to two hundred years as people learned to deal with it. Eventually we came to the conclusion that more information is probably better, and that access to information and educating the populace and things like that was important. But it took time and it was messy. And I think we're going through a similar sort of thing right now, where the means of production of information has expanded massively, along with the ability to distribute that information and also to monetize it, which is relevant because you have this sort of grifter class that weighs in as well, which will do anything for money, as we've talked about. You have this very messy situation where it has to be a societal-level conclusion: we have to agree on what is ground truth, and how do we view it, how do we learn about it, and how do we teach people about it? And it's why I talk about media literacy as being such an important component of this. People always mock that and say, that's no answer. And basically what people are saying when they say that is that it's too difficult an answer, right? And it takes too long. Like, we gotta start with kids and teach them how to understand what's real and what's not real, how to understand when people are lying to you, and what the motivations and incentive structures are out there. And again, not everybody's gonna agree, but that is the nature of humanity. And this is where I also think we do ourselves something of a disservice in the debate over mis- and disinformation, with this idea that there are certain things that are facts and certain things that are untrue. Yes, there are facts and there are lies.

But there's so much in between, where you take something that is factual but present it in a way that distorts it. Or you take something that is false but has an element in there that's true. Or you present something that doesn't tell the whole story. That's the stuff we're dealing with all the time. That's the stuff that's really problematic, and that's the stuff you can't deal with by saying, oh, that's misinformation. If you only present half the story, you haven't presented anything wrong, but you've left out a lot of the context, and that's a different problem than just pure misinformation. If it's pure misinformation, if it's just blatantly false information, then there are ways to deal with it. But that's not where the really problematic stuff tends to come in. It's when you're presenting stuff out of context, or in a distorted way, or to aim for confirmation bias. There's a whole bunch of things like that, and we don't have easy ways of dealing with them.

Ben Whitelaw:

Yeah, it's the messy middle again. And if I was to characterize these two pieces simplistically, it would be the Cato paper being pro marketplace of ideas, pro free expression, pro speech in its broader sense, and the Eliot op-ed being slightly more interventionist, a bit more about creating structures, ways and means of countering misinformation to some degree. Then our third piece that we want to flag is almost somewhere in between, isn't it? It's about the idea that there are stakeholders, as you say, who are sitting in between, making the difficult decisions about what speech to allow, and this idea of editorial discretion and, fundamentally, content moderation. Talk us through this piece by Renée DiResta.

Mike Masnick:

Yeah, so this is by Renée, and lots of people listening to this, I'm sure, know Renée DiResta. She wrote a piece for Law & Liberty, which is also another sort of traditional libertarian publication, called Content Moderation Is Not Censorship, and it's actually a response to somebody else's piece, Tori Tinsley's piece called When Social Media Obscures Truth. And that piece argued, sort of, the idea that social media content moderation is censorship, and that platforms shouldn't be in the business of determining what is and what is not true. And so Renée wrote a piece criticizing that, basically saying that's not a fair way to characterize what content moderation is, because everybody has to do some sort of content moderation; you're making decisions, and everything is a choice. And also, not all content moderation is just taking down content. And this is a piece that I think a lot of people... you know, there's this concept, like, beyond leave up and take down,

Ben Whitelaw:

mm-hmm.

Mike Masnick:

And Renée talks about how actual enforcement tends to fall into three different buckets: remove, reduce, or inform. And everybody focuses on remove, which is the takedown. But reduce means reducing the visibility of the content, you know, whether or not it trends in an algorithm; and inform means adding more information to it, which is the marketplace of ideas, it's more speech: flagging the content and adding some context, or something along those lines. And these are all things that are part of that discussion. And it's interesting to then compare that back to the Eliot piece, where he talks at one point about how we need systems in place that help people evaluate competing claims. What he's really talking about is adding context, which is a form of more speech. And he mentions that he doesn't want to impose a single truth, which is what the Tinsleys of the world, or the Cato piece, sort of suggest the social media companies are trying to do: that they're trying to enforce, this is the single truth. But he's saying we need to defend the process by which free societies function, which is slightly different than just saying, let there be an arena where everybody fights it out. Because when you have the setup of the arena where everybody fights it out, you're suggesting that everybody has equal value in that conversation, and that, magically, the ones who are most correct will win. That's not always true. What he's arguing is that we need to have at least some societal structures, signposts, tools, that help people understand. I think of this as the media literacy component: how do you determine which things are true, which commentators do you trust, where do you look for more context, and how do you understand these things? I think that stuff is really important, and that's where I agree with Eliot. And I think the Cato piece underplays that part, and the Tinsley piece also. Renée is pointing out that this is what good content moderation, good trust and safety work, is really all about: adding in that additional context and adding in more value to help people evaluate content. And I think that's a much more nuanced and complex discussion, but an important one.

Ben Whitelaw:

Yeah, and obviously Renée really understands the way that platforms work in terms of how they manage content, and I think that's reflected in that piece. You know what I was thinking, Mike, about the topic of misinformation, having read these pieces: is it in anyone's interest to really end misinformation? We're almost suggesting that that is the end goal. And I remember reading a piece, I think it was Ryan Broderick on Garbage Day, that basically said that misinformation is fun; it's entertainment. It's not necessarily, or not always, the political or societal, grandiose misinformation that we're all concerned about. It's pop stars and salacious rumors about who they've kissed; that kind of stuff, he claimed, is the bread and butter of most people's experience of misinformation. And I wonder, until that changes, until people really understand that it goes beyond that, whether anybody has sufficient motivation to really end it. I think the incentives are not strong enough.

Mike Masnick:

Yeah. And again, it depends on how you define mis- and disinformation, right? I mean, parody and satire, you could technically qualify them as mis- and disinformation, right? The Onion: nothing in The Onion is true, but it's funny; we like it because it's misinformation that is entertaining. There is value in that. And that's why I think the concern about misinformation, again, is that people are so focused on the idea of just the blatantly false statement. But sometimes we like that; sometimes that's parody, sometimes that's satire. Where the concern comes in is when people are being tricked or confused, right? And that's where you want the societal pieces in there to help people understand what is true.

Ben Whitelaw:

Mm-hmm.

Mike Masnick:

It's not that the response should be, well, just get rid of the false information, because that doesn't work, right? That's the point we keep going back to: as soon as you're focused on, we have to remove this information, you're going to run into problems again. Because some of it will be partially true, and then you remove it, and people are like, well, what are you hiding? And then it becomes a big conspiracy theory. So no, I don't think the goal should ever be to make misinformation disappear. What it should be is to arm people to better understand what is real, how to put things into context, and how to come to a strong opinion that is their own and not one that is purely tribal, right? It can't be: I need to believe this because Donald Trump tells me to believe it.

Ben Whitelaw:

Yeah. So even if the definition of misinformation is, like, misleading information shared by people who believe it's true, you think there's still a case for that?

Mike Masnick:

I think you run into problems once you define it that way. Then the question is: who gets to determine that that is misinformation? Who gets to make the decision? Right? Because, you know, I can say stuff here, and people might argue that stuff I just said is misinformation, right? Because they'll say, well, no, there's a clear thing that is true and a clear thing that is not true, and we should totally get rid of the stuff that is not true. And so, once you can accuse me of misinformation, you can accuse anyone of misinformation. At some point people are gonna believe something based on confirmation bias, because everybody does it; everybody falls for something that confirms their priors in some sense or another. And that is misleading, but the person believed it was true. So should that always be stopped? Because then we're just gonna be in this constant game of policing what everybody says, and people are gonna get stuff wrong. And you lose the situations where you want people to be able to stand up and say, hey, this is wrong, or this is different, or whatever. And I go back to the case, in the early days of COVID, of the Chinese doctor who spoke up, publicly went on the internet, and said: we have this new disease that is very dangerous, people are dying at a much greater rate, it's very contagious, and it's a concern. And the Chinese government went to him and said, that's misinformation, that's health misinformation; we need you to go online and say that you were wrong, and apologize, and take it all back. And like three weeks later he died. And then COVID spread around the globe. So you can't say, well, just get rid of misinformation, without understanding the context of it and the consequences of saying that. Because then somebody has to decide what is misinformation, and they're gonna get stuff wrong as well. And you lead to this world where a lot of really problematic things result when the goal is: we have to get rid of misinformation. Again, the best way to compete with it is to give people the tools to put it into context, to understand it, and to evaluate it. The more we can do that, the more it lessens the power of the misinformation, because people aren't as willing to be taken in by it.

Ben Whitelaw:

Yeah. It definitely feels like there's a kind of US versus rest-of-world discrepancy there in terms of those views, and I think that's borne out in these pieces, which is what makes them so interesting and why we wanted to talk about them today. Um, also, imagine if one of our listeners, or somebody, decided to accuse the content moderation podcast of misinformation. That would be,

Mike Masnick:

It'll happen.

Ben Whitelaw:

That'd be bad for business. Or maybe it wouldn't be; maybe we'd get a lot of press coverage for that. Who knows? Um, okay. We've gone deep on that, very deep on that, but it's a great series of long-form pieces that are well worth reading. As you can tell from our conversation, there's lots in there, and I really recommend taking some time on those. Onto now a topic that I talked about with our guest host a couple of weeks ago. Ctrl-Alt-Speech isn't a labor rights podcast, but it is somewhat turning into one. And

Mike Masnick:

You know, I have a degree in labor relations, Ben. Right?

Ben Whitelaw:

Do you? I had no idea about that.

Mike Masnick:

Yeah.

Ben Whitelaw:

Tell me more. What made you go into studying labor rights?

Mike Masnick:

My undergraduate degree was in industrial and labor relations, because I thought I was gonna become a lawyer, and this was actually a good pre-law program. And then I didn't go into law. But yeah, I have books on my shelf that are, like, the history of labor, and I had to take two years of labor history. I had to take collective bargaining, negotiation, arbitration, a whole bunch of other stuff, labor economics. This is my wheelhouse, Ben.

Ben Whitelaw:

Yeah, I'm talking to the converted, then. Well, this is good. So this is a topic that I have long covered as part of the newsletter, in various guises. But the latest story that I wanted to talk about today was an investigation by an Irish publication called The Journal, which has done another investigation into the mental health of moderators at an outsourcing company working for a platform. In many ways, this is no different to the others that we've talked about on the podcast, where moderators in Kenya and in Turkey and in Colombia and the Philippines have all suffered mental health issues as a result of looking at egregious content that goes on to be published on platforms. I think what's interesting with this story, Mike, is the fact that content moderation and AI data labeling and annotation are becoming so symbiotic now. We've known about that in contexts like Kenya and Ghana, where a lot of those teams working for outsourcing companies do both. But this is actually happening in Ireland as well. A company called Covalen, which is not a very large BPO but certainly has some big contracts, including with Meta, has essentially been accused of making their staff write prompts about suicide and self-harm, which have led to suicidal ideation and self-harm among the workers themselves. So again, it's a very similar pattern of behavior, where a worker is having to man a queue, doing tasks that have an effect on their mental health. And there's lots of detail in there. It's interesting to me that there are more moderators speaking out about this; probably five or six years ago that wasn't gonna be the case. The original story about content moderators at big BPOs was a real, surprising story, because Casey Newton, at Platformer, had somehow got them to talk. But we're seeing more and more of this happen all the time, and it tells me that there is a kind of movement building, I think, around what's acceptable. The fact that we haven't learned anything from the issues that content moderators at BPOs have faced in the past, and that we're now going into this AI era, where companies are being paid a lot of money to label data and build models that are being rolled out globally with the same issues at play, the same mental health concerns and the same lack of wellbeing checks, is really the story here, I think. And it goes into a lot of detail: employees were asked to essentially pretend to be pedophiles; they were essentially asked to seek out violent or graphic material and write prompts that they might submit. And they were doing that 150 times a day, you know, without the kind of protections, I guess, that you would've learned about, Mike, in your undergraduate degree. So it's an additional story in that longstanding narrative

Mike Masnick:

Yeah,

Ben Whitelaw:

about the rights of workers both in content moderation and AI labeling, which I think are almost the same thing now, to be honest.

Mike Masnick:

Yeah, there's certainly a lot of overlap. I mean, the thing that I think is really interesting, and that struck me, is again how disconnected this work is from the platforms themselves, right? Just the fact that there's a BPO in the middle adds this level of plausible deniability, but it also adds a way for the platforms to wipe their hands of the responsibility, because they can contract out and say: we need this kind of labeling or this kind of moderation, is anyone out there willing to do it? And some company is going to step up, and then they might even outsource to somebody else as well. There are these layers which allow you to push the responsibility and the concern out, away from the actual platform itself, and that creates this sort of weird incentive structure that is leading to harm to workers. But at the same time, you're balancing that against the thing that's always come up with the human end of content moderation, which is: if you don't do any moderation, then you're exposing lots of other people to harm. So effectively you are putting people in harm's way to protect other people. And you could argue the same thing is true of some of the AI stuff. I don't know for certain, but it sounds like the AI prompt efforts that we're talking about were designed to figure out how to deal with a situation where someone is looking for illegal content through AI, or looking to generate illegal content through AI, or is looking to engage in self-harm, and the system needs to understand what that looks like and react to it. That feels like what they were probably training for. And then you think, well, in the long run that should be good, because if the systems recognize that better, they don't have to put humans in harm's way, or they can actually help prevent harm, and that's a good thing. But are we harming people in the process of doing that? And that's where you go, oh wow, these are really, really tricky and dangerous trade-offs. It's not so simple, as with anything I talk about, right? None of these things are so simple that you can say, well, this is obviously bad and horrible and should stop. Because what if it is being used in pursuit of building better systems to actually prevent more harm in the long run?

Ben Whitelaw:

Yeah, definitely. I mean, you know, the other thing to this, the thing that I'm increasingly becoming aware of, is that this is not like an incidental thing happening in one or two parts of West Africa. This is a story that's happening in Ireland, where there is, obviously, a large technology

Mike Masnick:

Well, it's happening everywhere. I mean, I think the tech industry has been able to hide some of it by, you know, pick your country, right? It's all over the world. There are stories in the Philippines, there are stories in India, there are obviously stories across Africa, South America, the US; every country has people in these kinds of jobs. And it does feel like, for horrific reasons, maybe companies try to hide it in parts of the world that get less media coverage and less attention. But I think it is happening everywhere, because there's so much that's happening.

Ben Whitelaw:

Yeah. And some of this experience is actually borne out in another piece that came out this week, a long feature in the magazine Jacobin, which has its own political leanings; it's a kind of leftist magazine. You know,

Mike Masnick:

They believe in labor rights.

Ben Whitelaw:

They believe in labor rights. You know, you can't say that we're not balanced here on Ctrl-Alt-Speech, talking about the Cato Institute and also Jacobin

Mike Masnick:

in the same episode. Yes.

Ben Whitelaw:

In the same episode. Um, but some researchers have written a piece about a report that they've just done, in which they interviewed 113 data labelers and content moderators, again, roles that are almost interchangeable in a lot of these businesses now, from Kenya, Ghana, Colombia, and the Philippines, and documented all of this mental health harm: PTSD, depression, et cetera. It brings to life a lot of what we've talked about in a really effective way. Again, Mike, one thing that they talk about, which we don't really discuss a lot on Ctrl-Alt-Speech, and more broadly, I think, is the impact of the NDAs, the non-disclosure agreements that tech workers at all levels, including these content moderators and data labelers, sign as part of their employment. And, you know, these researchers, who are working for a human rights organization, make the point that these NDAs basically maintain the existing structures, the inherent power structures, and give power to these big platforms, and hide the abuse, as you say, that does happen when BPOs don't provide sufficient protections. And so the NDAs are kind of key to maintaining the silence of these workers. I mentioned the fact that more and more seem to be breaking those NDAs to talk to journalists and researchers, but a lot of people aren't. They note that when they reached out to 105 workers in Colombia, 75 of them were too nervous or too scared to talk to the researchers; in Kenya, it was 68 out of 110 that didn't want to do it. So almost two-thirds of workers feel kind of held back from talking about their experiences labeling data and moderating content. We're really looking at the tip of an iceberg here as to the experiences of these people, and I wonder at what point that dam will break.

Mike Masnick:

Yeah, I think the piece is really interesting, and the researchers had released an underlying report that this was based on, called "Scroll. Click. Suffer.", which is also worth reading and is actually a thoughtful analysis of the impact on these workers. But I do think the NDA piece is a big deal, and I understand why companies do it. But I do feel like that maybe is a point that should be contested and should be talked about, because the companies effectively are trying to hide this, and I think we're all better off if there's an open discussion about it. If these workers were able to talk about their experiences and the struggles that they had, then maybe we could figure out: are there better ways to deal with this? Are there better ways to approach it? We could talk about the actual trade-offs and understand them. When it's totally hidden, we're guessing, and either we're assuming the worst of the worst, or we're assuming that it's exaggerated, or whatever, but we don't have the real information, the real data. So I appreciate that these researchers talked at length about the NDAs limiting their ability to get the information that they need, and how that is problematic. I don't think the companies are gonna be quick to say, we're gonna get rid of the NDAs. But I think it is a pressure point that we should be talking about. The BPOs especially, but the platforms as well, all claim they believe in protecting the health and wellbeing and mental wellbeing of their workers; they always claim that they have rotations, and mental health help, and all these things. If that's true, let them talk. Let's get rid of the NDAs and let them speak out, and let's have these conversations publicly, so we can figure out what is the better way to actually deal with these kinds of jobs.

Ben Whitelaw:

Yeah. And we spoke about, you know, Sarah Wynn-Williams, the Facebook employee who worked in the public policy team, whose book attracted a lot of media attention, and there was a legal case that we talked about, where the company tried to prevent it from being published. These NDAs are affecting people not just at the lower levels, but even really senior folks who have subsequently left the companies and have been out in the world for years and years. So they're a powerful thing, and these two pieces definitely highlight that, I think, in a way that I hadn't really thought about before. So lots of good stuff there. Mike, we've come to the point of the podcast, as we always do, where we will attempt to quickly go through some stories that we will inevitably struggle to

Mike Masnick:

I think we have time for one quick story, maybe.

Ben Whitelaw:

Yeah. Well, it depends how quick we wanna talk. So let's start with the one that you flagged, the Brazil Supreme Court story. This is our second Supreme Court story of the day.

Mike Masnick:

Yeah. And in some ways it's related to what we've been talking about throughout the podcast today. So Brazil, and I forget when, I should have looked it up before we started recording, years ago, about a decade ago, maybe a little longer than that, passed a civil liberties law referred to as the Marco Civil, which presented a bunch of civil rights and civil liberties for internet usage. And it was generally considered really, really good. Very, very thoughtful; a lot of work went into it, a lot of thought went into it. Very supportive of civil rights and free speech for people in Brazil. There's a Supreme Court ruling that's been in process, where different judges would report where they stood on this particular issue over the last few weeks. So we knew that this was coming, and I wrote something about it two or three weeks ago. They've effectively overruled a piece of the Marco Civil, which had been an intermediary liability rule, similar in some ways to Section 230, slightly different, in that what it said was that if a court rules that some content is illegal, then a platform intermediary will have to take it down; but before that, they're protected from liability if they don't know whether or not the content is illegal. The Supreme Court has effectively ruled that out, and now says that the platforms need to be much more proactive in removing content that may later be deemed illegal or harmful. That is the kind of thing that sounds good to a lot of people who have no experience in this space and don't understand the actual consequences of it. And I talked about this a little bit and got yelled at by a lot of people in Brazil.

Ben Whitelaw:

Yeah, talk about when you wrote a thread about this on Bluesky.

Mike Masnick:

So I wrote about it on Bluesky, and I basically said there will be very bad consequences to this: it will make it much harder for social media apps to operate in Brazil, and there will be other problems. And, you know, it's tough to write it all out in detail in a Bluesky thread. So people took issue with it, and I certainly got accused of colonialism and, uh, trying to enforce the First Amendment on Brazil and things like that, which was never my point and never my intent. In fact, the Marco Civil is not consistent with the First Amendment, but I still thought it was a very good law, and I actually supported the existing approach. And so the problem is that I think people are looking at this through the lens of Brazil having gone through quite an upheaval: they had a Trump-like figure who is now being tried and has been indicted and all this kind of stuff, and they're trying to deal with the amount of misinformation and an attempted coup, and they've had dictators in the not so distant past. So the country's trying to deal with the fact that you have a number of bad actors. And my point is that when you set up a system like what the Supreme Court has now just blessed, when those people inevitably get back in power, they will abuse those tools, and claim, as Donald Trump is doing now, that any kind of critical information is misinformation, is harmful, and needs to be taken down, and they will threaten the platforms. It's the same mechanism that China used to create the Great Firewall, which was basically saying: we're not gonna tell you what to take down or what to leave up, but if you leave up any content that we later deem harmful, you'll be punished for it. And that leads very quickly to overblocking and removal of all sorts of information, including information that may be truthful and maybe critical of, let's say, a wannabe dictator. So you have to be concerned about the consequences of this. And as this ruling has come out, we're seeing more and more people, including people from Brazil who are not colonizers from the US, talking about its potentially dangerous consequences. Rest of World had a really good piece that had quotes from a number of people. EFF had a series of posts. Tech Policy Press also had a series of posts that look at the potential negative consequences of this ruling in Brazil, which will force intermediaries, force platforms, to be a lot more aggressive in pulling down content before it's ever been deemed harmful or illegal by a court.

Ben Whitelaw:

Yeah. And if we try and place this story, I guess, in the spectrum that we discussed earlier around misinformation, 'cause this is kind of a misinformation

Mike Masnick:

Yes, sure.

Ben Whitelaw:

result, really, isn't it? Like, you know, you've got the Cato Institute thinking that misinformation is an overstated threat; Eliot Higgins thinking that it's a significant threat, but that there need to be some key foundations and underlying changes to the way that we communicate; you've got Renée DiResta treading a fine line, with platforms deciding; and then you've got Brazil,

Mike Masnick:

Yes.

Ben Whitelaw:

you know, at the kind of opposite end from Cato in many ways, saying, no, no, we'll have the courts decide what the platforms can pull down, uh,

Mike Masnick:

I mean, the old system was that the courts decided, but here they're saying the platforms have to decide before the courts decide. So it's even more extreme. And again, I want to be clear, because it's been sort of beaten into me by people yelling at me: I understand why people in Brazil are concerned about misinformation online, and that it has been abused and has led to really bad situations. I just think that this will lead to further abuse, and I think people really need to be aware of that.

Ben Whitelaw:

Yeah. Okay. Well, that's an interesting update to a long-running story. I will just finish by flagging a story. I dunno how often we've mentioned Elon Musk on this podcast, Mike, but it feels like a long time ago. Not at all today. I'm,

Mike Masnick:

Yeah.

Ben Whitelaw:

I'm gonna bring him up.

Mike Masnick:

But, like, in the past few weeks, yeah, we've been sort of Elon-free for a while.

Ben Whitelaw:

Yeah, exactly. I'm feeling better for it. Um, but X, slash Twitter, has actually announced this week that they will be opening up Community Notes to AI note writers. And I think this is a really interesting development that many people will be fascinated by. You might have seen, I don't think we talked about it on the podcast, that it was reported about a month and a half ago that the number of Community Notes had completely fallen, completely tanked; it had dipped, I think, 50% in the course of four or five months, partly as a result of, I think, Elon saying that the note system had been commandeered by some governments and was being used for kind of interference, which is quite a thing to say about your own product. Yeah. Either way, there was a huge decrease in the number of notes being shared. And so they've come up with this ingenious idea to have AI note writers, and people will now be able to create bots that check posts for their validity. The notes will still be voted on, with the kind of bridging algorithm that Community Notes is known for, so they won't be immediately available, but there'll be essentially a kind of increased throughput of notes that will then be voted on by humans. Lots more to say about that in the future. But basically we're all gonna be critiqued by AI bots, uh, in various forms, mate. So, you know, just a flag. Get used to it.

Mike Masnick:

Yeah, I mean, I think it's interesting, right? As an experiment, it's really interesting, and it has the potential to actually make Community Notes more useful, but also the potential to make it complete garbage, right? And so we really won't know until it's in place. The big complaint with Community Notes was that it was a very slow process, and by the time a note would show up, it was probably too late; most people had already seen the mis- or disinformation. If you could have the AI be, one, accurate, and then, two, quick, it could be more useful. But does anyone actually believe that's gonna be the case? I don't know.

Ben Whitelaw:

No, it doesn't seem like it, does it? Anyway, that brings us to the end of today's episode. Mike, it's been great to have you back here with us. I hope that the holiday, and this podcast, have left you in a kind of improved state of mind. I just want to thank all the publications that we've been able to reference today: the Irish Times, The Journal, Jacobin, the Cato Institute, Byline Times. You know, again, it's important to give these guys a shout-out. Thanks for listening to us, and if you enjoyed today's podcast, don't forget to rate and review us wherever you get your podcasts, and we'll be back next week. Take care. See you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com. This podcast is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
