Ctrl-Alt-Speech

Moderation has a Well-Known Reality Bias

Mike Masnick & Kate Klonick Season 1 Episode 31

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Professor Kate Klonick, who has studied and written about trust & safety for many years and is currently studying the DSA & DMA in the EU as a Fulbright Scholar. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor TaskUs, a leading company in the T&S field which provides a range of platform integrity and digital safety solutions. In our Bonus Chat at the end of the episode, Marlyn Savio, a psychologist and research manager at TaskUs, talks to Mike about a recent study they released regarding frontline moderators and their perceptions and experiences dealing with severe content.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick:

So one of the OG social media sites that we all know and love, I guess, was LiveJournal. And right before we started recording, we just spent way too long trying to figure out what LiveJournal's prompt was. But Kate, in the spirit of LiveJournal, what is your current mood?

Kate Klonick:

I'm feeling angsty, which I think was mostly what I was feeling every single day that I had a LiveJournal account, when I was in high school. But what, what is your current mood, Mike?

Mike Masnick:

I think, I think it has to be angsty, right? But in the, in the interest of... I'm feeling a little bit, uh, overwhelmed, uh, I think would be my, my current mood. There are so many things going on and so much stuff to cover that it's impossible to keep up with everything right now.

Kate Klonick:

We have a lot of stories, a lot of stories, and a lot that we're not even going to get to.

Mike Masnick:

Yep. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund, as well as this week's sponsor, TaskUs. So please stay tuned at the end for my really, really fascinating talk with Marlyn Savio, who's a researcher at TaskUs, as we talk about their recently released study looking at the impact of severe content on moderators. There's a lot of really interesting stuff in there, so please stay tuned for that. I am Mike Masnick, the founder and editor of Techdirt. As you may have noticed, Ben is off this week. I know that last week I was off, so we are ships passing in the night. Ben will be back next week, but instead we have a wonderful replacement for Ben. We are joined by Kate Klonick, who hopefully many of you listening will know. She's a professor at St. John's, and is currently in France as a visiting professor at Sciences Po. I can't pronounce the French; that's my high school French coming back. She is there as a Fulbright scholar studying the DSA and the DMA, and is one of the most thoughtful, most knowledgeable people on all sorts of things related to online speech and regulations and trust and safety, and has written some of the most wonderful, enlightening pieces on it, from her original The New Governors to her work in The New Yorker. Just everywhere that Kate shows up, you will learn something about online speech and trust and safety. So I am thrilled to have you on the podcast, and welcome.

Kate Klonick:

Well, thanks so much, Mike.

Mike Masnick:

So since you are in the EU and studying the EU, before we get into all of the many stories this week, I did want to ask you what, what's going on in the EU?

Kate Klonick:

Yeah, I could give you kind of a roundup of various things that have been happening that you can read in the headlines, but I think the more interesting thing is, like, what am I hearing from the people I talk to that are working at the big tech companies and small tech companies in the US, and that are working for the various regulators that are charged with enacting the Digital Services Act and stuff. And I guess the man on the street, or I will give you a woman on the street, view: the things that I'm hearing the most about are kind of the concerns about innovation, and all of this regulation really hurting, um, European innovation. And really this question of whether or not these products are going to still be any good in Europe, and whether or not Europeans are going to notice if, like, they end up getting really, really shitty. Um, and so this has kind of been the conversation that has been circulating in my European tech circles for a while, but, you know, with all of the fines and everything else that's been happening recently, this is by far the thing that's kind of pulling away from the pack as the long-term effect, and what will happen with that giant game of chicken.

Mike Masnick:

Yeah. I mean, and it's interesting timing-wise, because, not really going deep on this story this week, but one of the stories that just came out yesterday, that we did talk about covering, but I think it's maybe a little bit early to cover, was this consumer fairness fitness check that came out of the EU. It was looking at different things like dark patterns on the internet and basically saying that, even though we now have the DSA and we have the DMA and we have the AI Act and everything, we probably need another law around things like dark patterns and other things under the banner of consumer fairness, and the idea that this is unfair to consumers. And sort of my first reaction to that was, yes, there are legitimate concerns about dark patterns, though I often feel that because the idea is so broad and people aren't entirely clear what it means, it allows people to say anything I don't like is a dark pattern, which I think is problematic. But the EU is now sort of talking about creating another regulation on that, and I'm wondering if it feels like regulation for regulation's sake.

Kate Klonick:

Yeah, I mean, like, I just feel as if there is just a ton of redundancy in regulation here. It's just part of the reality of the situation. I mean, the e-Commerce Directive existed, and then they passed the GDPR, and now they have the DSA, and all of these things do kind of feed into each other and build on each other. But I do think that in general, especially with the fines being levied and the rollout of some of the longer-term cost stuff that's going to be coming in the DSA, like the out-of-court dispute bodies that are going to start getting rolled out in the next couple of months... Europe is the next largest market after North America, obviously, I'm sure this is something your listeners know. Um, but by, like, a lot, uh, North America is like four times the average revenue per user that the EU is, and Europe is becoming a giant, giant, giant cost center. And so either you make your product worse and invest less in it, or they make it so that it's so expensive to operate there that you just don't bother. And I just think that, like, these companies mint money, but they have bottom lines, and you're not gonna, you know... so we'll see, we'll see.

Mike Masnick:

Yeah. No, it's interesting. It's definitely gonna be something that's really interesting to watch. I keep waiting for Elon to take X out of the EU, because I feel like that might make the most sense for him. And

Kate Klonick:

a hundred percent. Yep.

Mike Masnick:

Yeah. All right. Well, let's move on to stories that we're going to talk about this week. There's a whole bunch, as we were saying; we spent a lot of time sort of going through. There are so many different things to talk about, but we want to start with the study that came out in Nature this week, which is really interesting. I think in some sense it confirms some of the things that we've thought, but actually with some empirical data to back it up, which is really nice. So it's the study on differences in misinformation sharing can lead to politically asymmetric sanctions, which is a very sort of academic way of saying Republicans lie a lot more than Democrats. And so if you're seeing results of moderation that seem to say that Republicans are getting suspended, or right-leaning accounts, conservative accounts, are being suspended more often than left-leaning accounts, it's not because of bias within the moderation. It's an asymmetry within the quality of the content that they're sharing. So, Kate, what was your take on this particular study?

Kate Klonick:

So first of all, this is a Rand and Pennycook study. They have been doing great work in this area, I think they're both at Yale, um, they've been doing great work on this stuff for a while. So I was excited to see this from them, 'cause I do think it's really robust. And, as I said to you before we started, this is the study that just proves empirically what I have been telling people since 2016 who are conservatives, which is essentially like: listen, they put out nets that were like three standard deviations away from the mean, and the conversation moved to the nets. And so, like, the conversation got more extreme, it got more violent, the whole mean moved, and so all of a sudden we're catching more stuff. And now you could say, well, you only put the nets on one side of things, but that's also just the nature of this type of speech, is that, you know, you don't have violent speech or low-quality speech or these low-quality, um, sources for information that are giving the same type of news on both sides. And so, yeah, it's basically the opposite of causality that this is showing. They reliably found that there is no causality between wanting to take down more Republican speech or conservative speech, and that instead it was that, like, Republicans follow people who lie more. I mean, it's essentially like if you had banned a bunch of National Enquirer stories, but it turns out that 70 percent of the National Enquirer's readership is Republican. Like, right, do you know what I mean? It's not because they're Republicans that you ban the National Enquirer; you ban it because it's the National Enquirer, and, by the way, it turns out that they're Republicans. So this is a really important causality point that I feel like a lot of people have missed in the last eight years. Oh my god, eight years. And I'm, like, thrilled that there's finally a study that you can point to about this.

Mike Masnick:

Yeah. Yeah. I mean, they had released, I think, a preprint of this study like two years ago, and they've since done a lot more research on it, and I think it's really useful. And one of the things that I thought was really nice about the specifics in the research was, people might raise questions about, well, who's determining, like, what is the low-quality content? And they engaged with a bunch of sort of neutral fact-checkers to do that. But then they also had a group of Republicans determine what was low-quality content, and then ran the same study based on what the Republicans said was low-quality content, and found the same result. So even when you sort of bias the system that way, where you say, like, let the Republicans determine what is the low-quality content and then run the same analysis, you still get that Republican content is more likely to be banned because it is low quality. And so the key finding here is that the number one predictor of whether or not an account would be suspended was not whether or not they were Republican or Democrat; it was whether or not it was low-quality content. And it just so happens that because Republicans are more likely to share the low-quality content, that's what you got. And so it's really useful. I don't think this is going to change the minds of people who run around claiming that there is, you know, anti-conservative bias. And the other thing that struck me as interesting about this is that there have been these other reports, not studies, but reports, that a lot of the platforms actually, rather than having an anti-conservative bias in their moderation policies, actually sort of bent over backwards to let, you know, sort of the MAGA world break the rules more, because they were afraid of being accused of having an anti-conservative bias. And so there could be this world, and I think this is a description of reality based on all of this, in which the platforms actually did set the rules so that conservatives were more able to break the rules and get away with it. And yet, because they share so much more low-quality content, they end up being much more likely to be suspended. So even though the rules favor them, they still end up being suspended more. And so the idea that this is an anti-conservative bias is actually the opposite of that. And the report sort of makes that really clear. I think it's really nice, useful data points on that.
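To make the causality point concrete, here's a minimal sketch, a toy simulation rather than the study's actual data, sharing rates, or thresholds: a suspension rule that never looks at party, only at low-quality sharing, still produces very different suspension rates by party when the underlying sharing behavior differs.

```python
import random

random.seed(0)

def simulate(n=10_000):
    """Toy model: each account shares 50 links; suspension depends only on how
    many came from low-quality domains. The party-level sharing rates below are
    assumed numbers for illustration, not figures from the paper."""
    accounts = []
    for _ in range(n):
        party = random.choice(["R", "D"])
        rate = 0.30 if party == "R" else 0.10   # assumed asymmetry in behavior
        low_quality_shares = sum(random.random() < rate for _ in range(50))
        suspended = low_quality_shares >= 10    # neutral, content-only threshold
        accounts.append((party, suspended))
    return accounts

accounts = simulate()
for party in ("R", "D"):
    flags = [suspended for p, suspended in accounts if p == party]
    print(party, "suspension rate:", round(sum(flags) / len(flags), 3))
# The rule never reads the party label, yet suspension rates diverge sharply,
# because the behavior being sanctioned is unevenly distributed across parties.
```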

Kate Klonick:

Yeah, it's really robust, and I think that they do an excellent job. But, unfortunately, as someone who, like, trained in cognitive psychology for a long time, in empirical psychology, which is what the authors here do, and that's what this area is, I think your point that this isn't going to change minds is unfortunately very true. I mean, this is motivated reasoning all the way down. And so, I mean, we'll see if it gets pickup. We'll see if it does anything, but I don't think so. Everyone kind of thinks that all of these elite schools are in the bag and everything else. So it just kind of, you know, that's, that's just...

Mike Masnick:

Yeah. Well, I think I might bring this back up again in a later story that we're going to cover. But for now, I want to move on to our second story here, which is one that I've actually kind of wanted to talk about for a while, over a month now, and couldn't quite fit in. And there's been some more news since the original story that I saw. And this is all about GIFCT, which, I hope a lot of our listeners know what it is, but it's the Global Internet Forum to Counter Terrorism. It's an organization that was set up by a bunch of the tech companies, and it was really in response to anger and demands from regulators, going back a while, about, specifically, terrorist content appearing online. And the idea was that this group would work together and create hashes of particular terrorist content and share that around, so that each platform didn't have to discover on their own, you know, oh, ISIS is pushing this particular kind of material, but they could share that, and share signals with each other, so that, you know, if ISIS is doing a beheading video, everybody can take it down together very quickly. When this came about, I was initially somewhat concerned about it, because whenever you have all these groups getting together and creating hashes of content that is getting taken down, and there's not much transparency about it, there are reasons

Kate Klonick:

could go wrong, Mike?

Mike Masnick:

could go wrong, right? There are reasons to worry about how that could very much go wrong. And specifically, like, terrorist content has always been this sort of weird category of content for me, and I've talked about this going back ages. I mean, I remember then-Senator Joe Lieberman absolutely going apeshit over terrorist content on YouTube in the, you know, sort of 2008, 2009 timeframe, and demanding that they take it down and threatening them if they didn't take it down. And then, of course, like, the first account that YouTube took down in response to Lieberman doing that was a Syrian human rights group that was documenting war crimes by terrorist groups. And they got taken down because content that documents war crimes is also the same content that terrorists are, you know... the context really matters. And especially when you're doing, like, hashing and sharing these things, it's impossible to share context with it as well. And so that was always my concern, and sort of the lack of transparency around this. And so this week there was this wonderful, really thorough article in Wired looking at, it says, two years of turmoil at GIFCT. And the reason I said I wanted to cover it before was that about a month and a half ago there was an article in The Times of London talking about how there was turmoil because, of the four founding members, one of them is

Kate Klonick:

Four founding company members, you know: Meta, uh, X, YouTube, and Microsoft.

Mike Masnick:

Right. And X being the concern there, because since Elon took over, he basically was ignoring GIFCT. They weren't contributing back to it, and they weren't using the hashes anymore. But because of the structure of how it was founded, they were sort of a permanent board member, and that made for a very awkward situation. And so the news that's revealed is that X has left the board, but also a bunch of other things: that TikTok wanted to join and was rejected, that the parent company of Pornhub wanted to join and was rejected, and there were a bunch of other concerns. So this is a really thorough, really interesting article about some of the trouble that GIFCT has had over the last few years. What was your take on it?

Kate Klonick:

Okay, so I think that we also have to mention the fact that, in trying to talk about this for the last couple of weeks, you asked some of the smartest people in the room, who I know know about GIFCT, but who do not feel like they know anything about GIFCT. And when we started talking about talking about this today, I was like, uh, you know, um, I guess I can talk about GIFCT. And I want to say that that's an important thing. I don't know a lot about governance structures that have existed, uh, and I study them, specifically ones that are started by for-profit companies around speech and content moderation, and I have on my, like, long-term to-do list to figure out what the hell is going on with GIFCT and its founding and governance. Because GIFCT really was, like... there was, like, a friendlies chat that existed for a long time that was just, like, former trust and safety people who had worked at the companies, who passed hashes back and forth to each other under the table. I think there's eight people that are employed, according to, like, their website right now, who actually work for the company. And then it has a board, um, which is also strange. And then it has these four founding members. To me, it's very unclear who pays for what and who then has governing power for what. And so apparently it was the decision of the four founding members, not the board of, like, advisors that is listed on their website, not the eight staff members that are also listed on their website, but the four pay-to-play players, whether or not they were going to let TikTok in. And so they decided to not let TikTok in. And you can't wave your xenophobic wand and just make me think that what makes sense in any given situation here is that you just have to exclude TikTok from everything because of China and human rights policies, especially when it's very straightforward, like, you would know and be able to check the hashes that they were adding to the database. And they're the largest video platform in the world. Like, why would you not want them using the technology? It makes no sense. It's not like a bidirectional risk to let TikTok in. It doesn't make any sense. So that's crazy. And then, like, the Wired article basically blows up the fact that it turns out only two of the members made the decision at all, and then quietly X left the board, and that there's just, like, no transparency on this incredibly important, incredibly important... like, all of your points about how difficult it is, what terrorist content means, and blah, blah, blah aside, it's still an incredibly powerful thing, and it is a tool that people use out of the box for all different sized platforms. And we don't know: why does it exist as a nonprofit entity? Is it a nonprofit? Why is a nonprofit just accepting four million dollars a year from four major tech companies, and then not paying? I just don't understand how that works as, like, any type of legitimate tax structure. And then, like, why has it not done more? I mean, it has all this money from these companies. Why hasn't it done more? Why isn't the hash database more open and transparent? Like, why would that be terrible? This is also the same problem that we see with PhotoDNA and NCMEC. And it is this weird, weird little... and, like, NCMEC has got its own kind of First Amendment issues, um,

Mike Masnick:

about on the podcast.

Kate Klonick:

which was one of the reasons we don't talk about NCMEC. It's like the first rule of, like, you know... but this is, I mean, this is less.

Mike Masnick:

At least with NCMEC, you're talking about CSAM, where it is, like, clearly illegal content; terrorist content, it depends. And there's a whole bunch of questions and oddities. The other thing that sort of surprised me about the article is, I think they said there are 25 members of GIFCT, but only 13 have access to the hash database. And it's like, well, why? Who makes that determination? Like, if you're a member and you don't have access to it, what are you doing? And so there's all of these concerns. And I was concerned already about the idea of, like, what if content that is borderline, or not clearly, you know, in context, shouldn't be hashed? And yet in this article, they talk about how they discovered two years ago or last year that there were false things in there, including the song "Never Gonna Give You Up," the

Kate Klonick:

the yeah, the Rickroll song, and it had

Mike Masnick:

song got into the database as a hash. I mean, that's crazy.

Kate Klonick:

Yeah, so, like, one of the things that is frustrating about this is that this is an area of immense power and non-visibility, hashing. It is an incredibly powerful tool to identify and, yeah, remove from publication an entire world of content, and people just don't get excited about it. So this is, like, this incredibly kind of salacious story, where these reporters at Wired worked on it for two years and did an incredible job reviewing what is, frankly, kind of really dry stuff. And I don't know that this is gonna... like, you haven't seen this in, like, the news at all. This is still, like, market and industry. But this is what people should be worried about, and, like, this is where we should have a focus. If we want good governance policies, like, why can't we have hash governance? Why can't we have something that revolves around having some type of skiff of individuals that has a legitimate kind of governance system in which they get put into place, that does review this content and figures out what goes in and what doesn't? And I just don't understand it. Anyways. Except to say that all of these systems, as you know, got built to put out fires, and that there's no, yeah, there's no grand plan.

Mike Masnick:

Yeah. And we're not saying that there's no usefulness here, because there are things, you know, when they're talking about things where people are live-streaming attacks where they're killing a whole bunch of people, like, there are obvious reasons why companies want to learn about that content very quickly and be able to take it down. And there are reasons why it is useful to share stuff. But there's a whole bunch of questions, and there's just so little transparency about this. And the fact that we didn't know that TikTok tried to join and couldn't, we didn't know that the Rickroll was in there, we didn't know any of this. We don't know how companies are using it. We don't know who has access to it. It's just this complete darkness over how this system is working. And yet it has an impact on the kind of speech that we can see online. And so, yeah, there's a transparency element that's really missing here.
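For listeners who haven't worked with hash-sharing systems, here is a minimal sketch of the basic mechanic being described, using a plain SHA-256 digest purely for illustration; the real GIFCT database relies on perceptual hashes (such as PDQ for images and TMK+PDQF for video) that tolerate re-encoding, which an exact hash does not. It also shows why context can't travel with the hash: members exchange only digests, never the underlying material or the reason it was listed.

```python
import hashlib

# Shared database of digests contributed by member platforms.
shared_hash_db = set()

def add_to_db(media_bytes: bytes) -> str:
    """A member flags content by contributing its digest, not the content itself."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    shared_hash_db.add(digest)
    return digest

def matches_db(media_bytes: bytes) -> bool:
    """Another member checks an upload against the shared digests. No context
    (why it was listed, or by whom) is available at this point."""
    return hashlib.sha256(media_bytes).hexdigest() in shared_hash_db

add_to_db(b"<bytes of a known violating video>")
print(matches_db(b"<bytes of a known violating video>"))   # True: exact match
print(matches_db(b"<same video, re-encoded or trimmed>"))  # False: exact hashes miss edits,
                                                           # which is why perceptual hashes are used
```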

Kate Klonick:

A huge transparency element that's missing. And I'm going to say one last thing about it, but I know we have to kind of move on. I am just going to say that there is a jawboning issue here, because also on their list of board members and on their advisory committees are a bunch of governments. There's Australia, Canada, France, Japan, New Zealand. Okay. Like, there are governments. And so, like, I don't understand how they're not having some type of say in, like, what is determined to be terrorist content. And those aren't countries that are, like, hugely affected by terrorism, you know what I mean? Like, I mean, domestic terrorism or international terrorism. And so why isn't, like, Somalia represented on that list? Like, I don't understand why they don't have a say in the type of terrorist content that, like, their entire country is living under. Anyways, so that is my last kind of point, but there's something a little bit rotten in this, all the way down, in an organization that should be, that really has every reason to be, a pretty, like, neutral good of the content moderation, trust and safety world. It could do better, and that is, I guess, how we'll wrap on this story.

Mike Masnick:

Yeah, yeah. I hope it leads to more transparency. I hope stories like this actually do lead them to be more clear and transparent, and have their governance structure be a lot more clear as well. But we'll see; definitely something to follow. Um, moving on from that, there was a story I wrote about this week which was very frustrating to me. I think I increased my blood pressure, uh, sitting there writing about it. It had to do with... for the last few months, there have been stories making the rounds that, apparently, hackers from Iran tricked some Trump campaign officials into sharing information, which then allowed them to hack into the Trump campaign and get a bunch of information. And the hackers have been pushing it to news organizations. And the main thing they were pushing was the internal dossier that the Trump campaign had put together on JD Vance, about where he might be problematic. And a lot of people had seen it. A lot of reporters had seen it and said they weren't reporting on it, because, one, there wasn't that much that was newsworthy in it, and, two, they didn't want to sort of assist the Iranian hackers, which is somewhat understandable. But of course, when you have something like that, eventually someone is going to bite. In this case, the independent reporter Ken Klippenstein decided to post it on his Substack, and promoted it widely, and it got a bunch of attention. Now, this story seems remarkably similar to me to a story from four years ago, which was that some folks got access to a laptop that Hunter Biden, the son of President Joe Biden, had left at a computer repair shop, and went around shopping the information from that hard drive to a whole bunch of news sources. And many of them turned it down because they didn't quite know the provenance of this, and they were worried that maybe it was also a sort of similar hack-and-leak kind of operation. And eventually the New York Post bit on it and reported on it. And that became a big deal, because, due to the concerns about the provenance of this information, where it came from, Twitter first decided that they would block links to that New York Post story, and they locked the New York Post Twitter account so that they couldn't post until they deleted that tweet. And they did this under their hacked materials policy, which had existed before, had been used before. People claim that it was made up there on the spot; that's not true. It had existed, it had been used; in fact, it had been used against a much more left-leaning news source a few months earlier, and so it was not an ideological thing. But it has since become gospel among the sort of MAGA Republican crowd that this was Twitter trying to influence the election right before the 2020 election, and that they censored this story and didn't allow it to be talked about for two weeks, which is not true. Twitter did block access just to the links for 24 hours. Then they admitted that that was a mistake and they changed the policy. They adjusted the policy, and they said that the links could be shared, and all of that. Also, as soon as they did decide to block it, searches for that story went way up on Google, and it apparently drove a lot more traffic to that particular story, all this kind of thing. Anyways, we have a very similar situation: a dossier related to a campaign for president, all this stuff.
And Elon Musk had cited the Hunter Biden laptop New York Post story as one of the reasons why he had to buy Twitter, to never allow that to happen again. He claimed that was one of the worst things that happened. And so clearly he would allow the JD Vance story to go forward, right? Of course not. No, he blocked all links to it. He not only froze Ken Klippenstein's account, he suspended the account. Any link to his Substack was blocked, even links not to his story. And so you can say, well, okay, that's Elon being a total hypocrite. Not all that surprising. That's what he does. And of course he's all in the tank for Trump and Vance this year. But then, also, Meta started blocking links to the story, and Google would not allow you to share links: if you uploaded it to Google Drive, it would not allow you to share links to the dossier. And this came just a week after both Meta and Google had appeared before the Senate and were quizzed by Senator Tom Cotton about the New York Post Hunter Biden story, and would you ever do that again? And Nick Clegg from Meta said, no, we would never ever do that again. And Kent Walker from Google said, we didn't block the original Hunter Biden story, we didn't see a reason to, and we wouldn't do it if it happened again. And yet here they were, a week later, with basically the same fact pattern, and they both blocked it. And so this, to me, is just, like, so much hypocrisy. And Kate, I'm curious, I've been talking now for a while, like, what's your reaction to this particular story?

Kate Klonick:

I mean, same old shit. Like, I don't know... I guess this is partly why I'm just, like, uh... I've enjoyed watching you get exercised on this as you've been describing it. I can, like, literally see you're turning into, like, a little emoji, with, like, your cheeks getting redder. But no, I'm serious, I think that this is, unfortunately... yeah, I'm not surprised. Like, I'm not surprised. Like, why would you be surprised? There is one way of understanding content moderation governance, and that is one of just who has the power. And one of the oldest things, before all of the Hunter Biden stuff and keeping stuff up and taking stuff down, was, like, the friends of Sheryl Sandberg, right? Things that the FOSes... And one of the best-known stories at Facebook that people talk about, in which they had kind of the no-insides-on-the-outside rule around violence, and they decided to break that because there were news headlines during the Boston Marathon bombing, and pictures attached to those news headlines, that had people's legs blown off. And it was against Facebook's content moderation policy to have the insides on the outside shown. It just... they wouldn't show it. And like,

Mike Masnick:

Just explain the insides-outsides for people who

Kate Klonick:

Oh, sorry. Yes. The shorthand instrumentation of the nonviolent content rule at Facebook was kind of instrumentalized as: if you can see the insides on the outside, then it comes

Mike Masnick:

of a person. So

Kate Klonick:

The insides of a person on the outside, like, if you can see, like... and so there were certain kinds of exceptions for that, like medical videos or, um, images from things, but whatever. But if you could see the insides, it came down. But when there was a bombing in Boston, and it was huge news, um, Facebook wasn't running Boston Globe stories where a guy's leg was blown off and you could see the bone and, like, the muscle. And there was no newsworthiness exception. And, like, you know, Sheryl made a call that was, like, no, we're putting this up. Okay. Well, like, sorry to be cynical about this, but there's always been a moment like that. There is just a lot of, as, uh, my dad used to like to say, there's a lot of power at the end of a nightstick. There's a lot of power in just having, like, having the enforcement button, you know, or the non-enforcement button. And so I guess that's my reaction to this. Sorry to be so kind of blasé about it, but.

Mike Masnick:

I mean, I think there's something interesting there which gets at the sort of newsworthiness question, right? You know, there's a lot of these things where people say, well, you have to have sort of a newsworthiness exception. But then there's a question of, like, what is newsworthy? That's a totally subjective standard,

Kate Klonick:

And, like, nothing is... nothing was newsworthy... I mean, uh, a beheading by a Mexican cartel is, like, certainly newsworthy, but they didn't show it, right? They had a rule that that was not something they were going to show. And that is why newsworthiness cannot be this trump-anything card. Like, newsworthiness is, and has always been, a very soft, very, very subjective, very messy type of excuse to kind of break rules or not apply them in a principled fashion, in my opinion. And so that's... so you have to come up with something better, I think.

Mike Masnick:

Yeah. And to sort of tie this back to the Nature study that we talked about at the beginning, right? I mean, to me this is just, like, another example where the rules are sort of different. If anything, the bias is not an anti-conservative bias; it's the other way, right? And this seems to be an example of that, where it's not just Elon Musk, who you sort of would expect to do this, but the fact that Google and Meta also were willing to block this content. And again, like, literally a week after they promised, under oath in a Senate hearing, that they wouldn't block similar content. And nobody's making a big deal out of it other than me. Like, that's the thing that gets me, is that, you know, nobody's willing to recognize the hypocrisy here. And I know that, I think, Elon claimed that this was the most egregious form of doxing they had ever seen, which, I mean, is bullshit. First

Kate Klonick:

This coming from the guy who, like, personally targeted former employees of his, and, like, yeah, great,

Mike Masnick:

drove, drove, drove them from their homes

Kate Klonick:

Yes, exactly. Okay.

Mike Masnick:

Anyways. All right. That's me getting out my rant of the day. So let's move on. There's a really interesting article in 404 Media talking about Meta smart glasses, and some students. Basically, as a demonstration, like, knowing that this was problematic, but to demonstrate how problematic it could be, they took Meta smart glasses and they built an app for it that would connect to PimEyes, which is a publicly available facial recognition engine, and some other sources. And so you could be walking down the street with these glasses, take a picture of someone without them knowing it, and get back all sorts of information about them, including sometimes their address and their social media feeds and everything like that. And this was designed to alert people to the idea that something like Meta glasses and this kind of app could exist, and people could be walking down the street and identifying who you are and information about you. You had thoughts on this as we were talking about it beforehand, so do you want to give a quick summary of how you reacted to this particular story?

Kate Klonick:

I mean, privacy is something that's so hard to define, famously hard to define, and famously hard to, like... it's an ever-moving target, how we define it. And so, on the one hand, I thought that this is obviously, like, of course this is going to happen, because of course it's going to happen. Obviously this is something that was going to happen, and I've heard, actually, that it was actually a feature that Facebook thought of putting in the glasses, like, actually having it with the glasses, and someone was, like, absolutely not. So that's good. But I did this experiment with my students once in my information privacy class, a number of years ago, and I wrote about it in an op-ed for The New York Times. Uh, I asked them, I said: without eavesdropping, if people are just having a conversation loud enough to hear in public, and using identifiable things that are printed on their clothing, or where they went to college, or, like, whatever else, things like that, and using only your smartphone and Google, see if you can, not dox, but de-anonymize a person in public, like, figure out their name. And this was actually very easy for my students to do. Most of them were able to do it with someone over the course of a week; it was, like, a thing to do if you're sitting on the subway or sitting on a train or sitting in an airport. I think I gave it over spring break, so they all were kind of traveling and in places where there were moments where you were going to have these opportunities. But it's not that different. There's a level of friction between, like, typing some keyword searches into Google and having someone's face and their name pop up next to you, but I think that this is technology that we have, unfortunately, been very, very close to for a very long time. I don't know what we're going to do about it. Like, I don't know how you're going to... I mean, that's awful. It's awful. We don't want this. But I don't know how you stop it. Like, I actually can't think of how. Can you? And I'm also kind of curious, like, you said you were going to surprise me with what your reaction was to it. And so now I'm like, maybe, Mike, maybe this is one of, like, Mike's libertarian, kind of, let-them-do-their-thing, don't-try-to-stop-progress takes. And I'm like, no, de-anonymizing glasses are not progress. Like, Mike, like,

Mike Masnick:

No, no, no, no. I'm not, I'm not that evil. Uh,

Kate Klonick:

Well, I gave you the opportunity.

Mike Masnick:

Yeah. Yeah. No, I mean, I read this story and I had sort of a similar reaction to reading Kashmir Hill's book from last year, Your Face Belongs to Us, which was this great book, mostly about Clearview AI, which is this facial recognition company. And it really does a great job sort of presenting this as this impossible situation where the technology has become so easy. Like, one of the key things in that book is that Clearview AI is sort of considered this big, scary facial recognition company, but the reality was it was just, like, some random dude with very little technical skill, sitting in a cafe somewhere in Manhattan, downloading code off of GitHub and just training it on way too much data that he could scrape from different social media sites, and he was able to do that. And if he was able to do it, tons of people are able to do it. And so eventually, even if it's somehow cabined off and considered, you know, kept secret, somebody does it. One of the other things in that book was that Google and Meta had basically both invented the same thing, and both of them decided it was too problematic to release to the public, but eventually somebody does. And so this is another example, and PimEyes has been out for a while, and there's been a lot of concern about PimEyes, and it's run by just, like, some dude somewhere, I forget where he is, maybe in Russia or Ukraine or somewhere. And, you know, sometimes it's responsive to concerns that people raise and sometimes it isn't. And so, yeah, I don't have a good answer to this. This technology is coming and it's definitely going to be there. And, like, I remember when Google had their Google Glass, and people were getting beat up for wearing them, because it was really obvious you were wearing them. And where this becomes trickier is as the glasses become less and less obvious. And, you know, Meta smart glasses, if you're looking closely, you can see them, but I've talked to people and not realized they were wearing the Meta smart glasses. Like, they're pretty close to looking like normal glasses. And so you could see this being done in ways that are really, really creepy. And the demo video that these students put together included them going on the subway and identifying random people on the subway. And I don't have a good answer. I don't know how you deal with that in any way. You know, there's some people who make the argument that, like, if you're out in public, it's not private anyways, but connecting all of this information together suddenly is really, really challenging. And so I don't know how you deal with that. Yeah,
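For a sense of how little code the matching step actually takes, here is a minimal sketch with made-up data rather than anything from the students' app or PimEyes: the core of these systems is just comparing a query face embedding against a scraped gallery by cosine similarity, where the embeddings would come from an off-the-shelf face-embedding model rather than the random stand-ins used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in gallery: name -> 128-dim face embedding. A real system would build
# this by scraping photos and running them through a face-embedding model.
gallery = {
    "alice_example": rng.normal(size=128),
    "bob_example": rng.normal(size=128),
}

def identify(query: np.ndarray, threshold: float = 0.6):
    """Return the best-matching name if cosine similarity clears the threshold."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = float(query @ emb / (np.linalg.norm(query) * np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# A frame captured by smart glasses would be embedded and looked up the same way.
noisy_query = gallery["alice_example"] + rng.normal(scale=0.1, size=128)
print(identify(noisy_query))           # matches "alice_example"
print(identify(rng.normal(size=128)))  # a stranger not in the gallery: no match
```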

Kate Klonick:

Having conversations... Kashmir Hill is a friend, and we used to be journalists together way back in the day, and we've talked about this for years, like, talked with her friends and just kind of people who watch the space. And, you know, one of the things I remember her saying to me at some point was, like, you know, one of the things that people always say to me is that it will normalize, norms will respond to and enforce against these bad actions, and these things will start to sort themselves out. Like, it'll just be bad karma to do this type of thing, or, like, people will judge you and people just won't do it. Norms are thin. Norms are a very thin, very, very soft type of thing to rely on for something that is as incredibly invasive as what we're talking about in this article, with, like, the ability to facially recognize people walking down the street. And I think the other part that's worth mentioning, that you kind of point out, is, like, you could see the people who were wearing Google Glass, and you get beat up. Now it's like you can't see the people wearing them or anything. Wait till they have contact lenses, you know? Like, wait till there's some type of thing where it's just like you can't know, you can't know if you're being considered, in order to opt yourself out of it. And, you know, I know that Kash often talks about the arms race, also, of some type of technology that would, like, exempt your face from this. But, like, that seems also just like a cat-and-mouse game I'm not interested in starting to play. But maybe we'll have to. I don't know, it's just

Mike Masnick:

yeah, there are really

Kate Klonick:

ever a not depressing podcast? Actually.

Mike Masnick:

are times there

Kate Klonick:

Okay.

Mike Masnick:

Um, but yeah, I mean, there are real questions about whether there are legal ways to deal with it, and I'm not sure that there are effective ones, or how much of this is just norms, right? And how do we build norms in society that recognize, like, hey, this is not cool? I don't know. I don't know. Anyways, moving on to our next story. This was really interesting. This is from Katie Harbath, who's a friend of both of ours, who had this wonderful Substack post this week, where she was able to work with Sarah Hunt at the Rainey Center, who was doing a poll of Americans on a bunch of different topics, and allowed Katie to drop in a few interesting questions having to do with tech policy and related issues. And so she got some really interesting answers, in particular about the top tech issues that people were concerned about this election season. And so, Kate, do you want to talk a little bit about what Katie found from the results of this survey?

Kate Klonick:

Yeah, totally. I will just say that the name of her Substack is Anchor Change, at anchorchange.substack.com. It's very good. And Katie is one of the most kind of pragmatic and even-handed people who has been working in this space and knows exactly what's going on. Like, you know, she's one of my go-to people if I need anything on, like, what's happening in, like, the Wisconsin general election for governor, and, like, what's happening in the Mexican election, and what's happening in... and also what is the most recent update on, like, TikTok's stance on political speech. And, like, she knows it all. And so her Substack is great. But there were a couple of things that I think were kind of super interesting. One was that the top tech issues are kind of predictably child safety, election integrity, and privacy, and, like, lagging behind were cyber attacks and disinformation and censorship and content moderation. I think that content moderation is still a buzzword that people don't 100 percent know, so, I don't know, but I guess... I think that censorship was a little bit at, like, 20 percent or something, and child safety was around, like, 55 or 60 percent. So, to give you the idea of the spread of what people were concerned about: like, you know, 65 percent of respondents thought that kid safety was really important, 55 percent thought election integrity was, 50 percent thought privacy, and then after that, like, only 20 percent thought censorship was the most important issue. I guess I'm optimistic about that. Like, I'm glad... so that's something to be optimistic about on this damn podcast. Uh, I'm optimistic. But I guess I would say that I'm a little pessimistic, because although I understand that people are concerned with kid safety, I'm concerned about it being a tool with which to jam through a bunch of bad regulation on other things, like censorship broadly. So that wasn't surprising, but I thought that it was an interesting study.

Mike Masnick:

I mean, I was wondering how much of that is because, like, the kid safety topic has been really a mainstream story in the news for the last year, like, if that's why people really centered around it. Now, I've had many conversations on this and talked about how I think a lot of the discussion around kid safety is really a moral panic. It's not actually supported by the data, but it's a really nice narrative for politicians and the media, especially. In fact, this week there was also a giant New Yorker story, and I wrote an article about it on Techdirt, where I went through it. It was this 10,000-word story by Andrew Solomon giving all of these horrible, traumatic, heart-wrenching stories about kids who took their own lives, and suggesting that social media was to blame. And it took 71 paragraphs, and I counted twice to make sure it was actually 71 paragraphs, before the New Yorker article included some experts, who were all there saying, no, there's no evidence that this is because of social media. So the kid safety narrative is very, very popular. And so seeing that at the top of the list was a little bit concerning. I think it's driven by sort of the media, and yes, I fear that it leads to really bad legislation. We've talked about KOSA and we've talked about other sort of kid-safety-focused legislation on the podcast before, which I think are really misguided. And so it doesn't surprise me, but yes, it also concerns me that in the name of kid safety we will get problematic regulations. I think somebody was talking recently about, like, the four horsemen of internet regulation being, like, kid safety, terrorist content... I forget what the others were, but just this idea of, like, if you want to create sort of a moral panic to put in place bad regulation, you bring up this sort of list of horribles. And so the fact that kid safety was at the top of this list was not surprising, but I am still concerned about it. I did think the fact that censorship is sort of at the bottom of this list, and content moderation, sort of suggests that the attempts by very disingenuous people to turn those into stories, that that is not really sticking outside of a small crew. Probably, again, to go back to what is now becoming the theme of this podcast, kind of a MAGA, right-wing, dishonest crew of people. They're trying to make that a story, and they haven't been successful in doing that. But it is still really interesting to see which issues are really sticking with people and which ones aren't.

Kate Klonick:

Yeah, 100%. And I thought that this was a good little collection of data.

Mike Masnick:

Yeah. So, interesting stuff. We'll have a link to it in the show notes, of course. And yes, definitely check out Katie's blog, because it is always wonderful and she's always thoughtful on

Kate Klonick:

Anchor Change.

Mike Masnick:

Anchor Change. Yes. And so, Kate, thanks so much for jumping on and taking Ben's place and giving us the view from that side of the Atlantic Ocean while I'm stuck on this side of the Atlantic Ocean. Really appreciate all of your insights and thoughts on all of these stories.

Kate Klonick:

It is always really fun to hang out and talk to you and we should do it more often.

Mike Masnick:

Yes, absolutely. Well, we will have you back on the podcast at some point soon. And for folks who are listening, please stay tuned. We have a really, really interesting conversation. There are some really interesting findings in this study about how content moderators who are exposed to severe content are dealing with it, and what the most stressful situations are that they're dealing with. It was really eye-opening to me, and so I was really excited to talk to Marlyn Savio from TaskUs. And so we will have that now. So, you have this new research report out, Perceptions and Experiences of Severe Content in Content Moderation Teams, which is a qualitative study. Can you describe just the basics of the research and what it is that you found?

Marlyn Savio:

Sure, Mike. Thanks for having me. First off, very excited to be here, and I'm very happy to discuss this particular study. So, on a high level, just as the title you read out reflects, this study focused on moderators' lived experiences of, you know, reviewing different types of content. And this brought out a very interesting finding: that less severe content tended to be potentially more distressing for moderators to work with, and that it calls for proactive wellness support. So, kind of going into the specifics, as the research project set out, we were interested in finding out how moderators discern what content seems more difficult for them to work on, and the impact the different types of content leave on them, right? But we also additionally surveyed professionals who support moderators, and this includes team leaders, learning trainers, and wellness experts, because we wanted to understand the kind of support that moderators most often sought from these support professionals. And we did this in two regions, the US and the Philippines. We noted that groups in both regions somehow had very similar concepts of what they thought was extreme content and what was not. So, for instance, you know, violence, gore, abuse, all of this clearly to them was egregious or severe content, while material that involved more political or hate-based themes tended to be considered content of moderate severity. This in no way was their intention to suggest that, you know, this is not harmful or whatever; it was more that the shock element, or the immediacy of action needed, was less as compared to severe content. And interestingly, among all our participants, regardless of what content severity they were working on, there was this consensus on the need for formal wellness support on the job. And, you know, as part of TaskUs's Frontline First provisions, we prioritize preventative wellness screening and interventions for moderators right from the recruitment point. And our participants expressed that without this sort of active wellness support, it would be difficult for them, first, to get adjusted to the job, as well as for them to meaningfully process and come to terms with, you know, whatever it is they were seeing, hearing, reading, and, by and large, witnessing as part of their work.

Mike Masnick:

It's really interesting. There are a bunch of different things that are worth sort of dwelling on, you know, diving into a little bit more and thinking about. Of the findings, you mentioned some surprise in the sort of less severe content sometimes creating more stress. Was that the most surprising finding, or were there other findings that were the most surprising to you?

Marlyn Savio:

Sure, I can get into that. So, yeah, like you rightly pointed out, it truly caught us by surprise when, you know, moderators seemed to be more bothered by moderate-severity content. So, how this went about was that, while our participants knew and understood that severe content is truly problematic and, you know, needs to be removed and all of that, they seemed to be working more efficiently while reviewing that type of content, right? There was this conviction already in them that this is wrong and needs to be acted upon immediately. So, you know, there's that chop-chop sort of thing: action immediately. On the other hand, when it came to working with moderate content, however, you know, the typical gray areas, they felt a lot more affected and under strain. This was because they ended up spending a lot more time and thinking in deciding the appropriate action for this particular sort of ambiguous content. And if I have to add another layer or nuance to this finding, it was the participants reporting that the risk of inaccuracies, when it came to moderate content review, was higher when compared to severe content decisions, right? And if I think back to what our participants disclosed, they would say something like: graphic content is easier, because there the violation is explicit, you know what action there is to take, and so on, but on moderate content queues there was a need for more engagement with the material itself, and this, by extension, took a lot more time away from them, as well as, you know, exerted a lot more of a toll. So we sensed that they needed psychological safety here, because there's a lot of uncertainty, a lot of volatility when it came to making decisions about this, as well as healthy coping skills, you know, just as much as someone on a severe content queue would require. So, if I have to kind of sum this up: we understood that there's a need to kind of move beyond this fixation with, oh, egregious content is the one that's harmful. No, there's more than this, you know, one big C called content that plays a role in moderators' well-being as well as their performance, really. And this then cascades into the quality of moderation, moderation outcomes, and whatnot, right? I can, in fact, think of a couple more Cs, right? So moderators require clarity right from day one, right? In terms of the processes, in terms of policies, in terms of what they need to do when it comes to ambiguous content. Also, how the content moderation team itself manages change, right? Because change is a constant in trust and safety, as you are very well aware, right? There's always policy revisions, there's workflow updates. So, when all of these are present, how does the team overall cope with this? How does it manage change? How is it communicated? How is it implemented? All of these play a role. And finally, of course, we're talking of coping skills that moderators need, regardless of what kind of, or what severity of, content they are working on: egregious, benign, whatnot. All of these together work in tandem. So there's clarity, there's change management, there's coping, much beyond what we talk of and focus on, that is, content, that plays a role.

Mike Masnick:

Because the assumption is always that it's the most egregious content that's the problem, and yet I wonder if people working in that, while it's obviously distressing, have sort of steeled themselves to it. They're sort of ready for that, they understand what they're getting into. When it's more moderate content, there's also the lack of clarity, as you were describing. It's just a lot more involved. Like, am I making a mistake? What decision is the right one? It just involves a lot more. And I think that sort of plays out in one of the other findings that I took as really interesting out of the report, which was this idea that the protocols and guidance are often just as important as the content. Some of the biggest stressors that you found in this research were when content moderators felt that they didn't have clear protocols or clear guidance on how to deal with things. And that stress was actually greater than the stress from the content itself. Again, a little bit of a surprising finding to me, because most people think about the content being the main thing. And this sort of raised a different issue, but in some ways a related one. Is that your take on it?

Marlyn Savio:

Yeah, I agree. But I also want to bring in another detail here, right? When we say that egregious content is easier to work with, there are some circumstances within which that happens, right? And of course we talk about time being an important factor. People take this amount of time to habituate to it and whatnot. But it's what you do with that time, right? You cannot just sit back and say, oh, time will heal it all, moderators will find a way to cope and adjust to this. That really can be counterintuitive and can backfire. Because whether moderators really meaningfully habituate or are just plain indifferent to the content, I think, will make a difference in the long term, both for their well-being as well as, you know, their performance and whatnot, for the organization as a whole. For instance, at TaskUs, we use an entire employee-lifecycle-focused well-being approach, right? So even at the point of hiring candidates, we use a scientifically, psychometrically validated pre-hire screening instrument called CARES, which allows us to look for traits that can set these candidates up for success in content moderation environments. So already there we have identified people who can, you know, thrive in these environments, right? So that's one. And then this, along with sort of a proactive intervention program, we have found has been successful in bringing down this kind of startled response that moderators have, right? And we have actually been able to see that this has come down from, say, six months, which is considered the norm for, you know, moderators to adjust to that kind of work, and we've seen it happen within like two to three months, if you have proactive wellness support along with the whole people-first kind of provisions, the culture, the leadership, all of this playing a role, and not just, oh, you know, it is content, I've taught you what content you're going to see, you're trained in this and you're ready to go. No, I don't think it works that way. It's more important for us to become more intentional and to understand. For one, we can never say that policies and guidelines will stay static, right? It would be very naive to think that in a trust and safety environment. But we need to also understand that moderators are functioning in this constantly evolving dynamic, right? So are we inoculating them enough to, you know, not just function, but to really thrive, to add value to the work that they're doing, and to really contribute overall? That will come when we consider all of these factors beyond content. So yeah, what I wanted to highlight here is that yes, egregious queues as well, they can work well, but it's under conditions. It's conditions apply, right?

Mike Masnick:

Yes. Yeah. No, I think that's a really good point to reinforce. So, one of the other things I think was great about this is that, you know, you did this research and you published it publicly for anyone to read, which I think is great. And I'm sort of curious what's next. Are you planning more research? Are there going to be more things, and you don't have to say what they are necessarily, or if you want to. But, you know, getting the research out there for other people to learn from, I think, is really important, especially in this space where everybody's still trying to figure things out and the world is changing rapidly. So what's next for the research efforts there?

Marlyn Savio:

Yeah, sure. I mean, our mission as a team, so we are a dedicated research team at TaskUs focused entirely on the wellness of frontline staff, and we really want to bolster employee well-being. And this happens by bringing about data and evidence that we can use to influence business decisions. But our hope also is that we're able to influence the industry's perception of and approach to psychological safety. We have generally been at the forefront of establishing best practices for well-being, especially when you think of trust and safety, where there are always new jobs emerging. It's not just new content, right? Earlier it was, you know, a certain type of audiovisual content. Now you're having AI and whatnot. So there's safety and trust needed in all of these spheres, but how the frontline staff in each of these spheres adjust to the job, the kind of well-being support they need, differs across these. And for all of that, you need research to come through. So for instance, earlier this year, we came out with a white paper where we gave well-being recommendations for AI safety annotators, right? We thought that would be similar to content moderation, but it's not. It has a lot more active involvement with the content, you're working with a model, which is non-human, all of these factors, right? So when you consider all of that, there is research needed. And we're hoping not just that we conduct research internally, but how we do it is also important. We're keen on research collaborations. Since the inception of our team, we have tried to work with academic institutes. We've worked with industry partners. For instance, recently we came up with a preprint on how disinformation has an impact on moderators, right? Because they're constantly exposed, firsthand, to all of this unfiltered content, right? And how best we can mitigate this. We did this in concert with MIT, uh, Boston, and Google. So, yeah, these are ways where we can, you know, expand this outreach, make sure that, you know, the industry as such is maturing in terms of the practices that we have. Because all of our guidelines and policies in trust and safety cannot just be about legality, cannot just be about regulation. They need to consider this human perspective, right? And that can come from incorporating all of the learnings that come from research studies and whatnot. So the what's next is, yes, we want to look at innovative tech. For instance, how can AI be used, or how can its potential be harnessed, to promote well-being for moderators? The use of wearables, even virtual reality, if that can be used for well-being interventions, right? All of these things can be looked at. But we would definitely look for collaborations, and we're already doing that, but the hope is that the industry as a whole picks this up.

Mike Masnick:

As I said, the research is really fantastic. It's really interesting. I'm glad you're doing the work, and I'm glad that you're releasing it publicly and are now out here talking about it as well. So, Marlyn, thanks so much for doing the work, and then for joining us on the podcast as well.

Marlyn Savio:

Thank you so much, Mike. Thanks a lot. This has been wonderful chatting with you.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
