Ctrl-Alt-Speech

The Internet is (Still) for Porn, with Yoel Roth

Mike Masnick & Yoel Roth Season 1 Episode 13

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Yoel Roth, former head of trust & safety at Twitter and now head of trust & safety at Match Group. Together they cover:

 This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick:

So we are increasingly digging deep, um, finding prompts to start this podcast. But in the words of the popular task site, Thumbtack, what's on your to do list this week, Yoel?

Yoel Roth:

This week and every week, I am trying to figure out what does or doesn't count as a very large online platform under the Digital Services Act. Mike, what's on your to do list?

Mike Masnick:

Well, right now it's trying to record a podcast without Ben, and having you here instead, which is fun. But you know, I sometimes rely on Ben to handle some of this stuff. And so keeping this podcast going is on my to do list right now.

Yoel Roth:

I'll do my best to make it as painless for you as possible.

Mike Masnick:

Hello, and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. I am Mike Masnick from Techdirt, and once again, we are without a British accent on the podcast as Ben is busy, as I just noted. However, we have secretly replaced your usual co-host with trust and safety expert Yoel Roth, who somewhat famously ran trust and safety at the, I will say, now deceased Twitter platform, and currently runs trust and safety for Match Group, which is the company behind a few different dating sites, including Tinder and Match and, basically,

Yoel Roth:

Hinge and OkCupid, and just keep naming them; they're probably Match Group.

Mike Masnick:

Yes, yes. So if you are dating thanks to the internet, you have Yoel to thank for keeping you safe.

Yoel Roth:

Doing our best.

Mike Masnick:

Excellent. Excellent. So we have a bunch of stories this week, and some of them are quite fun, and I'm happy that we have you here as an expert. So I'm going to jump right in with our first big story, which I want to hear your thoughts about. This is a story from TechCrunch, which noted that X, which is the platform formerly known as Twitter, where you used to work, has apparently tweaked its rules to formally allow adult content. And so, uh, can you help me out here? Because I was under the impression that Twitter, somewhat famously, always allowed adult content. So what's happening here?

Yoel Roth:

Part of it is that it just sort of is what the status quo has always been. You're right, like, Twitter has, somewhat famously in the world of social media, always had a stance of, if not actively encouraging, then at minimum turning a sort of knowing blind eye to the subject of adult content and nudity. Personally, that was one of the things that attracted me to working at Twitter in the first place, that the company had a more sensible approach to this type of content than Facebook, which at the time was mired in these endless debates about nipples and breastfeeding. And it just felt like Twitter had landed somewhere more sensible. Which is why I was somewhat surprised this week to read that the company triumphantly has decided that it's okay with adult content and that they, quote, believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires and experiences, including those related to sexuality, which has to be just about the least erotic way to talk about porn I have ever read. But like, it's just a little confusing, because I think my experience, and the experience of anybody who's ever used Twitter, is that you don't have to dig very far to find adult content.

Mike Masnick:

Yeah. And so, I mean, the suggestion in the news reporting is that there is some sort of major change in terms of, like, formally allowing it within the rules. But it was always allowed in the rules, right? Is there any real change here?

Yoel Roth:

It's hard to say. And this is where, you know, I think you have to position what X is doing against a backdrop of less and less transparency about their operations, their rules, and even the basic mechanics of the product. There have been longstanding broken windows when it comes to how adult content works on social media generally, and also on Twitter, right? There are questions around age gating and age assurance. Big, thorny, messy, unsolved questions, but open questions. There's questions about how to label that content, and whether you give content creators the ability to self-label adult content, or you rely on machine learning to do it...

Mike Masnick:

Right.

Yoel Roth:

it's not really clear what changed here. Is it that now this content can be monetized, and X is moving in the direction of trying to explicitly compete with OnlyFans? That was certainly something that the previous administration at Twitter considered, and then, now famously, after some of these documents were leaked, the company decided not to pursue these plans because it felt it couldn't do so safely. Have they now decided that they've solved that problem, and that they can monetize adult content? Like, maybe that's the subtext here, but it's really not clear what is different now versus a year ago, or frankly, versus 10 years ago.

Mike Masnick:

I mean, I find the suggestion that the current administration of that platform has thought through any of this, in terms of whether they can do something safely, feels unlikely to be happening with the current folks there. But maybe that's just my outside view of how the platform is currently being run. I mean, there was talk about this when Elon first took over, that he obviously needs to find a business model, you know, especially considering how he's effectively driven away half the advertising, and he wanted to explore other business models. So there is that question of maybe this is a business model thing, where he thinks that monetizing and doing something along the lines of the OnlyFans model is their way to make up some of the lost revenue. But that's just guessing at this point for us, right?

Yoel Roth:

Really hard to say. I mean, I think what we know, and what you can observe on Twitter, is that a lot of adult content creators are using the platform. But right now, most of them are using it as a way to redirect traffic to other sites like OnlyFans. This is something that Twitter studied and understood years ago, right? Like, we saw that a significant portion of media results in search were adult content, and most of the accounts posting that adult content were saying, hey, look at this trailer for content you can find on my OnlyFans account. And there is a robust ecosystem there: discovery and interaction on Twitter, redirecting to content consumption and monetization on OnlyFans. It's this very symbiotic thing, except that Twitter wasn't making any money from it. And so there is a rational case to say, you should vertically integrate these things and actually have monetization on Twitter. The difference is that OnlyFans have invested extraordinarily heavily in trust and safety to make sure that when they are offering monetization capabilities tied to adult content, they're doing it in a way that assures that performers are producing consensual content, that people are over the age of 18, that everything is above board legally. If you talk to the folks who work at OnlyFans, they'll tell you, like, a majority of the staff at the entire company are working on trust and safety issues. Like, their entire business is rooted in trust and safety. I don't know that we're seeing that same level of rigor and diligence coming from the new administration at X.

Mike Masnick:

That's a diplomatic way of putting it. Uh, I mean, there is a part of me that wonders if, and this was not really mentioned in the story, but Elon likes to see himself as this freedom-loving, freedom-supporting person, and his whole reason for purchasing Twitter in the first place was to allow freedom of speech. I almost wonder if this is just him trying to get that story out there, that adult content is okay, because he, you know, wants to continue to present himself as this freedom-supporting person.

Yoel Roth:

Look, if that's the stance that X wants to take, and this is a fight that they want to have, I think it's a fight worth having. I think we've, for many, many years, had the biggest voices in speech governance through technology taking a pretty broadly anti-porn and anti-nudity stance. You had, in the early days of the App Store, Steve Jobs saying that he wanted to give consumers freedom from porn on the iPhone. You've had Mark Zuckerberg taking a very pointedly anti-nudity stance on Facebook and sort of creating policies that pretty comprehensively restrict this. I'm glad that there are sites that take a different viewpoint. I think there was a strategy to not drawing attention to this issue, and I suppose the word of warning I will offer here is, as the guy who on a couple of occasions dealt with threats to have the Twitter app kicked out of the App Store due to the presence of adult nudity, allow me to just say that perhaps drawing attention to this particular issue may be picking an unwinnable fight.

Mike Masnick:

Ah, very interesting. Well, I will note, I will call back to a New York Times piece that you wrote soon after you left Twitter, in which you noted that there are some regulating forces on how speech is handled on different platforms. And one of them that you very clearly noted was the app stores, which often do have, perhaps problematically, and in many ways I think problematically, an awful lot of say here. And so I'll go further than what you were just saying: it is possible that Apple and/or Google could, in theory, using this news, start to push back on X and threaten to do something.

Yoel Roth:

Yeah, I mean, we've seen this play out with perhaps less prominent platforms before. We've seen Tumblr sort of go through a whole series of contortions with the App Store, ultimately leading the platform to ban adult content altogether. We've seen that OnlyFans have generally opted not to take an app-based approach at all. We've seen that most actual porn sites are totally unable to offer an app-based experience. And again, we can question the legitimacy of the underlying policy there. Like, should adults be able to view legal content through apps on their phones in a native experience? My opinion is yes. But given the ecosystem we all live in, and the governors of that ecosystem and their policies on porn and adult content, you sort of have to dance with the one that brought you. And if you're going to operate an app that is used by hundreds of millions of people who are accessing it primarily through an app store, testing the boundaries of that doesn't seem like the best business strategy.

Mike Masnick:

All right. Well, it will be a story to follow, certainly. Uh, let us move on to our next story, which is not about the platform formerly known as Twitter. Rather, this story sort of broke more or less five minutes before we started recording last week, and so we mentioned it very, very briefly in last week's episode, but we wanted to go a little deeper here. Which was that Temu, the sort of suddenly-out-of-nowhere, incredibly popular Chinese-based shopping app, is looking like it is, I guess, being declared a VLOP, a very large online platform, facing all sorts of additional scrutiny on its moderation practices under the EU's DSA. And it struck me as generally interesting because there's also Shein, which is another, you know, Chinese-based shopping app, and that was announced a little bit earlier. In my head, the sort of framing that I had is that we have these two new European regulations, the DSA, the Digital Services Act, and the DMA, the Digital Markets Act. And I sort of mentally was putting e-commerce-related stuff into the DMA. That's, you know, the Digital Markets Act, that's about e-commerce, that's where these kinds of things are going to be regulated. And the DSA was always about social media. And yet we're seeing now Temu and Shein are being included under the DSA and are going to face regulatory scrutiny under that law. And so, Yoel, you had some thoughts about kind of where this is heading and what's happening over there?

Yoel Roth:

I had to wonder, are we scrutinizing Temu and Shein for the same reasons that TikTok is drawing an enormous amount of scrutiny by regulators, which is, like, these are giant apps used by lots of people around the world that happen to be based in China? And I think you can ask whether that's legitimate, given some of the opacity about labor practices, which has emerged as one of the big driving concerns here. But also, like, lots of European and American companies engage in exploitative labor practices to produce cheap fashion. And so it's not clear to me that these things are unique. But then, more generally, if you set aside the "do we care about these because they're Chinese," you're left with this question of, like, is one law that is built to govern TikTok, and Twitter, and YouTube, and Facebook, and Instagram, and also Temu, and Shein, and the App Store, and Amazon.com... like, can one law really do a good job with all of those services? Because they look extraordinarily different. Um, I'll say from my own experience, having gone from working at one of what Kate Klonick calls the big speech platforms, the platforms that primarily are focused on what people are saying and how they're expressing themselves, to now working on dating apps, which certainly have trust and safety complications: nobody is saying that you have a foundational right to free expression on Tinder. Like, the stakes are just a bit different. And so you have to think, like, are these laws that are built for sort of omnibus regulation of the internet going to be able to account for the diversity of all of these different services? And that's sort of a tricky proposition.

Mike Masnick:

So, I mean, are there particular aspects of the DSA that you think don't really match with the kinds of things that, say, a Temu is doing?

Yoel Roth:

I mean, I think there's some elements that do match up and that I think are worth interrogating, right? So we've seen in some of the early commentary coming from the Commission that what they're most worried about are things like dark patterns, and they're worried about some of the interface design and algorithmic components that perhaps are encouraging people to buy all sorts of crap they don't really need, and that may be obfuscating or obscuring the origins of products, and that might not be doing enough to protect people from counterfeit goods. Like, all of that feels like a legitimate, consumer-centric set of concerns. Let me bracket for a moment that, like, we're using a new term, dark patterns, to describe exactly the kind of behavioral marketing that people have been engaged in for decades. Like, if you walk down the aisles of a supermarket, the experience you are having is arguably a dark pattern. Your eyes are being drawn to the cereals and the mayonnaise brands that marketers want you to buy. And by the way, they've done a ton of research about the most effective ways to make you buy them. And so, like, if that happens in an app, all of a sudden we lose our minds with concern over it. These aren't necessarily new things. So it's worth, like, backing out of this presentist moment.

Mike Masnick:

Yes.

Yoel Roth:

Those things, I think, line up. What I'm less clear on are some of the things like transparency requirements. Um, that's one of the flagship parts of the DSA, and if you look at it from the perspective of platforms that aren't the big speech services like Instagram or TikTok, things start to feel a little bit murkier.

Mike Masnick:

Yeah. And I mean, I guess I kind of wonder where this ends up, because, and this has always been my complaint with the DSA, and I know I get some pushback because I'm always complaining about the DSA on this podcast. And to be clear, I think that, certainly compared to internet regulations elsewhere, the DSA is a much more thoughtful approach than, I mean, what we're kind of used to seeing in the US, which is just, you know, complete nonsense, completely disconnected from reality. So there is some thinking, and there is this intent of, you know, how can we create a relatively flexible regulation that is trying to minimize bad stuff that is absolutely happening online. My complaint about the DSA has always been some of what you were talking about, which is that some of this is just things that we've been dealing with in other contexts for a really long time, and I'm not sure that just because they've gone digital, they're suddenly as horrible as some people make them out to be. But also that how you define the internet changes the way you think about these things. And we went through this sort of 10-year, maybe 15-year period where people sort of started to define the internet as just being about those large speech platforms, and the simplest way of saying that was, like, the internet is Facebook. And for those of us who believe that the internet is a lot more than Facebook, and should be, and hopefully will continue to be a lot more than Facebook in the future, I've worried about that. And one particular aspect of that is that when the conception that you have of the internet is Facebook, and you regulate the internet as Facebook, you actually turn the internet more into Facebook, because you're saying Facebook has to do these things to be safe in the minds of European regulators. And then you sort of start to force everybody else into those patterns. You need to follow these best practices, you need to have these transparency efforts, you need to have these, you know, appeal processes and all that kind of thing. And of course, Meta will comply with the DSA, and then everybody will feel that they have to do the same thing that Meta does, because, okay, Meta did X, Y, and Z, and that made them okay under this. And so do we lead to a world where Temu and Shein, even though they're totally different kinds of platforms, and then everybody else as well, start to look more and more like Facebook? Does that make sense?

Yoel Roth:

Yeah, no, I mean, this in a way connects back to the first topic we discussed, which is that so much of what the policy and regulatory landscape around the internet looks like is informed by our monomaniacal focus on the biggest social media platform. And obviously Facebook matters, because there are billions of people who use it. But also, I think the internet is magical precisely because it's diverse. And when the internet starts to seem more homogenous, especially when it comes to its governance, I think that's a big, bad thing. You know, you and I spent some time this week at a convening that was looking at policy and governance issues for Threads. And on one hand, like, I'm glad that the team over at Meta are being so thoughtful about these issues. I'm glad that they're engaging on them, they're being really principled, and I'm glad that they're building towards integration with ActivityPub and the protocols that power a lot of the Fediverse. But also I have to ask, like, part of what made Twitter great was that it wasn't Facebook, and that there was this other corner of the social web that wasn't controlled by the same company with basically the same policies. And what we're now seeing is that corner of social media is starting to look more like Facebook too.

Mike Masnick:

Yeah. Yeah. It's going to be interesting to see how that all plays out, but that is actually a really good lead into our next story, which is somewhat about Facebook. It's a story from 404 Media that was interesting; the title was that Facebook's Taylor Swift fan page is taken over by animal abuse, porn, and scams.

Yoel Roth:

The author sort of starts the story by describing something I do every day, which is going on Facebook and typing Katy Perry into the search bar, which is how I started my morning. Um, but, you know, on some level, you read the story, which sort of describes the kind of very obvious scammy stuff about how you go on a page that seems like it's about your favorite pop musician, and then all of a sudden it's like, ta-da, hardcore porn. And on one level, you read that and can say, like, oh, another instance of the Masnick impossibility theorem. Like, content moderation is really hard. And that's true, but I think there's two nuances to this story that I found really interesting. The first one is that it's poking at this ecosystem of very, very old accounts on Facebook. Um, you think back to the earliest days of the service and the ways that, you know, we would express some of our interests on our profiles, and, you know, I would say, like, hi, I'm Yoel, I like the Rolling Stones, and then that would link out to a page for the Rolling Stones. And some of what we're seeing in this article is that some of those oldest pages, that were maybe set up automatically as, like, the hub for people who said they were into the Rolling Stones, have now been hacked or taken over or compromised in some way and have been overrun with, like, bestiality content. Um, which is sort of a, you know, things on the Internet don't ever really go away; they just change forms, and usually they get hacked and turned into spam. So there's kind of an interesting link rot, Internet rot dimension to all of that. But then the other bit of it that was so interesting to me is that this is also a product of just how big and complicated Facebook is. Um, you know, when Facebook rebranded as Meta, they were signaling in a lot of ways that they are not just one app. They're not the blue app anymore. They're Instagram and they're WhatsApp and they're, you know, their VR products. But we've also seen that within Facebook, Facebook isn't just Facebook anymore. It started as a news feed, and now it's groups and communities and shopping and dating and all of these other surfaces. And those things become incredibly difficult to govern effectively, because, at the basic mechanical level of trust and safety, you have to build moderation systems that remember that there are all of these different parts of your product. And by the way, you have to moderate all of them. Um, and that's just a hard proposition for any company, even one at scale.

Mike Masnick:

Yeah. Yeah. And it's incredible how... I mean, I've joked in the past, not joked, but sort of told the reality, which is that if you have any platform that allows any sort of user speech or uploads in any way possible, you have a trust and safety issue, because people will abuse it. We had discovered this, and I don't think I've spoken about this, maybe a year ago: just on Techdirt, for commenters, if you set up a profile, you could upload, like, a user image. And someone, some organization, group of people or whatever, had figured out how to sort of abuse that process to upload a ridiculous amount of spam content. And then they were somehow connecting it to, like, weird LinkedIn pages. And so we were suddenly getting these things that were pulling from Techdirt-hosted images that were appearing on Skype, GAMI, LinkedIn; it was like this whole ecosystem of stuff that was all happening behind the scenes. And we wouldn't have noticed it at all, except that somebody found something on LinkedIn, connected it back, and alerted us. We had to go through this whole process to find it. But you see it in this article, which is a fun read, because there are all these descriptions, and I'm just going to read a little bit of this, and it goes on for a while, but it's like: on Facebook, a user can post, or they can create a page or a group, which are two different things. The pages can post images or they can post posts, or you can post in groups, which can be public or private. A user or a group or a page can all create events, which have their own posts, images and comments sections. And it goes on and on and on like that. And there are these different layers, and having to understand and police and follow all of those things and how they can be abused, and, you know, if they fall into other hands or they're set up by scammers in the first place... you begin to realize how overwhelming the challenge of all of this is. Especially when you have, I mean, I guess Google shuts down products like crazy, but most companies, once you have features that some people actually do rely on, it's very difficult to shut down those features. And so, you know, if they're still prone to abuse, they're going to be abused. And on something like Facebook, at massive scale, yeah.

Yoel Roth:

People joke a lot about this at Google, where it's like, you know that it's promotion season for product managers when you see this proliferation of new features that sort of seem like things that already exist, but now they're different and they have a new name. And by the way, Gchat is now called Meet, and Meet is now called Hello, and Hello is called Talk. And, you know, we all kind of roll our eyes and laugh at these things, but, one, I always find it interesting to think about the organizational dynamics that lead to that. Like, what are the incentives that lead these giant corporations to have these proliferations of features? But then also, not through malice, not even through bad intent, you just can end up with moderation that doesn't work, because all of a sudden your product is too complicated for any one human to reasonably understand. And when you're building and designing these systems, somebody has to understand how all the pieces fit together, so that you can make sure that the weird, scammy bestiality stuff doesn't end up on the Taylor Swift fan pages.

Mike Masnick:

Yeah. Yeah. I mean, I think there is the other element to this story, which is that there is a genre of these kinds of stories, which is: reporter goes on random platform, searches, finds something bad, and writes about it. And always part of that story is then, like, you know, then we alerted the company and then that bad stuff disappeared, leading to a bunch of people saying, well, why doesn't the platform itself just do the same thing that the reporter did? And it's like, well, you know... I would love to hear your take on it.

Yoel Roth:

I mean, actually, like, by the way, platforms, if you're listening, y'all should be doing that. And if you want to dress it up in fancy terms, call it red teaming, call it whatever you want. But if you're thinking about what are going to be the best ways to avoid PR risk, consider doing what the reporters do, which is, like, use your own products and search for stuff, and then make sure that there are people who are doing that every single day.

Mike Masnick:

So, well, here's my question then: you know, from your standpoint, how many companies are doing that? 'Cause my assumption was basically that a lot of companies actually are, but that the issue is so large, right? I mean, you can do this, you know, for Taylor Swift or Katy Perry, but there are a million other people that you also would have to be searching for as well.

Yoel Roth:

I mean, this gets us to the stories about Google AI search and, um, putting glue on pizza, and Google's response of, like, these are very uncommon searches, and these are people who are just trolling us, and it's not representative of the product experience. And Google got roasted for that response, because really, your search engine shouldn't be telling people to eat rocks and glue. Like, that seems like a reasonable and trenchant critique. Um, but the flip side of it is, like, yeah, there is a little bit of trolling here. Like, we're not using these products organically. That said, I think typing Katy Perry into the search bar of Facebook is using the product organically. And so, I'll say first, like, I don't know that companies are doing enough of this. I think tech companies can sometimes be a little bit too fancy for their own good. And people will say, like, we need to build the machine learning model that will detect this at scale, that is universal and generalizable, and they'll build a thing and it has, you know, 80 percent recall and 99 percent precision. And that's great, except it's not performance on the thing that a reporter does in 30 seconds, which is type Katy Perry into the search bar. So I think you've got to do both. Um, and it's not always about the technologically satisfying solution. It's about the pragmatic solution. And I think some of these big companies, in the name of scale, will not always do the pragmatic thing. I'll just footnote this by saying, like, that solves a class of problems that is about PR exposure, primarily in the United States; this is not actually a solution to moderation. Like, there's no way that that thing scales outside of mitigating the risk of some American reporters writing embarrassing articles.

Mike Masnick:

Yes. An excellent, an important point. So now, for our next story, we're going to move back to Twitter. I swear, we did not just pick Twitter stories because you...

Yoel Roth:

It just happened to be one of those weeks.

Mike Masnick:

One of those weeks where interesting Twitter stories came up that we thought it would be interesting to hear you comment on as well. There was a really interesting study that came out in Nature, and there was also a Washington Post article about it, and some other discussions elsewhere. And it was looking at the post-January 6th deplatforming of misinformation accounts, mostly around sort of, you know, QAnon conspiracy-theory-related stuff, and how effective that was in terms of decreasing the reach of misinformation. And so, as someone who was there and was a part of that, what is your take on this particular study?

Yoel Roth:

It's an interesting study, because it looks at a particular moment in time where Twitter changed its policies to ratchet up the severity of enforcement of a long-standing policy. So let me give a little bit of context for folks who maybe weren't following this as closely at the time. Twitter, well before January 6th, had introduced a set of policies that restricted the circulation of what we called coordinated harmful activity, but that was really just a proxy term for behaviors like QAnon. And the basic intuition here is, like, we've had policies about extremist groups and dangerous organizations. Terrorism is pretty easy to ban, um, as a policy matter. Like, if you are a member of Al Qaeda, Twitter was going to ban you. But something like QAnon was a little bit more nebulous, because it wasn't always something that had that kind of really clear group identity; it was more like this nebulous universe of conspiracy theories that people sometimes subscribed to. Sometimes they were just asking questions, sometimes you would see people peripherally share or engage with themes that were QAnon-adjacent, but you didn't really know: are they actually a dyed-in-the-wool QAnon believer, or are they just, you know, asking a question about this, like, child protection issue? It was messy. It was deeply, deeply messy. And so Twitter's approach was to look for a couple of things. One of them was the sort of networked behavior of it. So, are there groups of people who behave in a coordinated fashion to advance these kinds of conspiracies, and especially, are there swarms of behavior that go after particular targets, right? So, like, Chrissy Teigen was targeted a couple of times by this activity. Large brands like Wayfair and Walmart were targeted by this kind of activity. Kooky conspiracies about how, like, you could order a cabinet on Wayfair that was mispriced by a bug, and that mispricing was a signal that inside the cabinet there was a trafficked child. Um, and so you would see these moments where it was truly a swarm of users amplifying that conspiracy. And Twitter decided that it could target that behavior in a way that allowed for intervention. And so, before January the 6th, Twitter's approach was to broadly de-amplify and reduce the circulation of that content. The intuition behind that, the goal, was, you know, not to drive this content underground, not to try to make it so that this whole thing became topics you could never, ever, ever talk about, but rather to say, we're not going to recommend it, we're not going to amplify it, we're going to make it harder to find, but we're not going to run this risk of, like, aggressive deplatforming that has a giant backlash effect that ultimately drives more attention to these kooky conspiracies. You can question whether that was the right call, and I still do question whether that was the right call, but QAnon was weird and complicated, and it was not always obvious what the right intervention here was going to be. And so that's where Twitter was pre-January the 6th. On January the 6th, we all saw very clearly that QAnon stopped behaving like this nebulous universe of conspiracy theories and started behaving more like a plain old extremist group. And so Twitter took exactly the same detections it had been using for months already and said, alright, we're going to start banning these users, and carried out a set of large-scale content moderation actions. At first it was about 50,000 users. It grew to 70,000. And the interesting thing that's not in the Nature article is that this network kept coming back again and again and again, and in the weeks after January the 6th, Twitter ended up suspending nearly 150,000 of these accounts, because the network just kept reconstituting. And...

Mike Masnick:

As...

Yoel Roth:

a long...

Mike Masnick:

The same people?

Yoel Roth:

Generally, yes. What we saw was not that there were 150,000 people who believed in QAnon. It's that there was a much smaller number of people who had lots of accounts. So it'd be, like, one person with 15 Twitter accounts. And then when you would take away those 15 Twitter accounts, they would just sign up new ones, and they'd get banned again, and they'd come back again and get banned again. And so, in a way, the number, the 70,000 or the 150,000, is misleading, because it makes this sound like it's a bigger network than it actually is. And I'm not saying that to trivialize the problem. I'm just saying, like, understanding the behaviors here is important. Which kind of brings us to the Nature study, which is super interesting, because they find that the large-scale deplatforming effort reduced the prevalence of certain types of misleading information. Now, I'll note, they're not saying that it necessarily reduced the circulation of the QAnon conspiracies themselves. Like, Twitter believes that that content substantially reduced in volume, and that was what we found. But the article says, look, we found that other low-quality domains started to circulate less. So if you say this domain, this website, is strongly correlated with misinformation, there was less of that available after Twitter carried out this action. That's an interesting finding. They also found, and this was the real kicker, that there were people that Twitter didn't ban who stopped using the platform as much, or at least they stopped posting as much. Um, and so, actually, by disrupting these networks and taking away the source of content, there's this really interesting behavior where some people who have historically spread misinformation will stop doing so. And they speculate a bit about what some of the causes of that are. Those are interesting findings about this, like, most controversial of things, which is: does deplatforming actually work?

Mike Masnick:

Yeah. It's interesting to see. There's been some research on this before, but this is a pretty big study and a pretty big look at it. And the thing that I'm sort of curious about, which I don't think the study gets to, is: did the effect really extend outside of Twitter, right? Did all of this just move somewhere else? We've had other discussions in the past where deplatforming on one platform works to some extent and may reduce that misinformation on that platform, but does it just reconstitute elsewhere? I mean, you talked about it reconstituting within Twitter, and then it became this sort of whack-a-mole game, but does it just reconstitute elsewhere? And if it does, is it in a different form?

Yoel Roth:

You know, this is what Renée DiResta has called, like, platform regulation arbitrage. It's, you know, if you get banned on Twitter, then you move to Telegram or to Rumble or whatever. And by the way, you also then have this new grievance, which is that you got banned on Twitter, which is a rallying cry. Um, and I think there's a lot of merit to that argument, right? Like, we shouldn't understand these things platform by platform. We should understand them as ecosystem-level dynamics. It is, in my opinion, still worth thinking about deplatforming on specific platforms as an intervention that matters, right? Twitter mattered, at least historically, because it was where conversations on the Internet intersected with media and cultural and political elites. You would see that the conversations on Twitter drove the conversations in the news that day. And so what happens on Twitter, and the prevalence of harmful content and conspiracies on Twitter, matters, because it played a role in defining public discourse. And so fixing Twitter, even if the underlying behavior migrates elsewhere... like, journalists weren't lurking on Telegram the same way that they were obsessively looking at the trending topics page on Twitter, and that matters.

Mike Masnick:

Yeah. Yeah. That makes sense, but it also works as a really good lead into our next story, which is a New York Times story about how Israel is secretly targeting US lawmakers with influence campaigns around Gaza, Palestine, and Israel issues. And, you know, basically it sounds like they're doing what we've heard of in lots of other contexts, where they have sort of, you know, secretly organized and paid for influence efforts, using a variety of different social media platforms, and trying to push pro-Israel messaging. My first reaction on seeing this was, you know, of course, like, this is what happens. You know, we've had Russian-organized efforts, we've had Iranian-organized efforts, and there have been, as has been exposed, US-organized efforts as well. But, uh, was there anything that struck you as different or noteworthy in this particular story?

Yoel Roth:

This is, like, the classic example of how, you know, we use big words like disinformation campaign and influence operation to describe what are fundamentally just these amateur-hour, sort of Keystone Cops efforts. Let me read one of my favorite bits of the Times piece. It says, quote: in at least two instances, accounts with profile photos of black men posted about being a, quote, middle-aged Jewish woman. On 118 posts in which the fake accounts shared pro-Israel articles, the same sentence appeared, quote: I gotta reevaluate my opinions due to this new information. In a way, we have to remember to put these things in context, and there's two pieces of context. One of them is, like, some of this is incredibly shoddy. Uh, and that's not to say there aren't more sophisticated, better-done disinformation campaigns out there. There absolutely are, but some of them are just absolutely abject, and this one was clearly an amateur-hour operation. The second is: how effective is any of this stuff, right? If you see these types of messages, even if you are, uh, you know, a member of Congress exposed to this, and you're like, I'm on social media to hear from my constituents, my constituents who have profile photos of black men who claim to be middle-aged Jewish women... even if you see that, does that change your mind? Are you going to vote differently as a representative because you saw that? Or, as an American holding an opinion about the conflict, if you saw some rando posting nonsense generated by ChatGPT on Twitter, are your opinions different? A lot of times we talk about disinformation as if exposure to it drives a difference of viewpoint, and one of the trends I've been really glad to see lately is that we're seeing talented, smart researchers in the space say: we shouldn't take that assumption for granted. It doesn't mean we don't care about disinformation. It's just that we shouldn't give it power it doesn't actually have.

Mike Masnick:

Yeah. I think that's really important. There is another study in Nature this week, which we had decided we weren't going to discuss in depth on the podcast, but it's relevant here, so I'm going to bring it up. It sort of gets a little bit at that, looking at how far mis- and disinformation flows, and found that, within the context of the study, it perhaps was not as effective at convincing people; it was useful at sort of rallying the troops who already believed in certain stuff, but it wasn't... It's sort of like the whole story of cops and fentanyl, where there's this belief that, like, any exposure to it anywhere, you know, causes them to go into shock or whatever. A lot of people get exposed to some mis- and disinformation; how many it actually impacts seems to be, according to a lot of research, fairly minimal. The effectiveness of this is not that strong. That's not to say it has no impact, and certainly on the margins I think there are examples that you can point to, and obviously people are coming to believe mis- and disinformation somehow. But the idea that its existence alone is this automatic poison, that if you touch it you will go into convulsions, like the police and fentanyl, you know...

Yoel Roth:

Yeah, no, I think that's right. Like, there's been sort of an overheated reaction for the last few years that's been rooted in debunked theories of media effects, right? Like, if you go back to the 1950s, when we were talking about, you know, TV and radio, there was this idea of hypodermic media influence, like ideas are injected into you like a syringe into your arm. And researchers found that that's not true, and that's not how the brain works, and that's not how media works, and it's not how ideas are formed at all. And we're going through that cycle again with the Internet. But it doesn't change that I actually still believe platforms need to be taking action against this type of content, even if you don't buy some of these overheated arguments about media effects, right? Like, the reason that platforms should be removing these types of fake accounts isn't because they're persuasive. They should be removing them because they make platforms feel crappy to use, right? Like, it's a bad user experience to come across these obviously fake and manipulative accounts. And if you're a platform that wants to be attractive to users and a credible space for conversation, then you need to be governing this, whether or not you believe it's going to have a deleterious effect on democracy.

Mike Masnick:

Yeah, yeah, absolutely. All right. Well, that takes us to our final story this week, I think, which is about Twitch. In the past week, they terminated all the members of their Safety Advisory Council, which was a group of, supposedly, experts in the field who were supposed to provide some advice, being an advisory council, about keeping Twitch safe. And I know that lots of companies have these kinds of advisory panels and groups and organizations. I know Twitter had them for a while, and other platforms have as well. And so I was kind of curious what your take is on Twitch's decision to end that.

Yoel Roth:

Yeah, you know, I think we don't know a lot about what's going on within Twitch at the moment. We know over the last few months they've laid off a lot of their trust and safety staffers, seemingly as part of a broader industry trend of cutting back in these areas. Um, I think that's broadly not a good thing. I think it's worse for the companies and worse for their users. But we don't really know the specifics of what led to this. We can look at what Twitch said, though, which is that in place of their Safety Advisory Council, they want to move in the direction of having a new advisory council made up of what they call Twitch Ambassadors, which I gather to be sort of prominent streamers and users of Twitch. And there's a certain logic to that, right? On one level, I buy the idea that members of a community should be involved in its own governance. If I had a criticism of the advisory council we had at Twitter, it's that it didn't aspire to be representative of the Twitter platform. And you can ask whether that's ever going to be possible. Like, can you reasonably represent hundreds of millions of people in this kind of way? And I generally don't know that there are great direct democratic solutions to that problem. But I understand where Twitch are coming from. They want to make governance decisions in a way that is more inclusive of their community. But you need both, right? Like, you want the perspectives of your community represented, and also, safety advisory councils have typically been made up of people with actual expertise, people who have dedicated their lives to studying and understanding these complex issues with non-obvious trade-offs and dynamics. I have said sometimes, like, opinions about content moderation are like elbows: everybody's got them. And that's true. Everybody can offer an opinion about governance, but that doesn't mean it's an informed opinion. And I think platforms need both the expert perspectives and the community perspectives. And we're seeing Twitch shift pretty strongly in one direction there.

Mike Masnick:

Yeah, I mean, I think it's one of these things where they're different perspectives, right? And each of them is valuable in its own way. Because oftentimes the user perspective on safety things may pick up on certain specific nuances about how the particular app works, or certain affordances, that somebody who isn't an active and regular user wouldn't realize; this is how this plays out in reality. But on the flip side, you have experts in other areas, or academics, who have spent time understanding the deeper nuances of some of the other stuff that we talked about, like, you know, do certain kinds of interventions actually work. I mean, I've had conversations with users of platforms who insist that certain interventions absolutely must work, based on their own experience as a user, who have no idea how poorly those interventions actually work in practice. And so it feels like it would make sense to have a balance of people who actually do understand the platform deeply, but also understand the power and impact of different interventions deeply, rather than overcorrecting on one side or the other. And it feels like maybe Twitch is switching from one extreme to the other.

Yoel Roth:

Yeah, and again, there's merit in both of those, but I would argue you want to have that balance. Like, a little while after I left Twitter, I wrote a piece in Lawfare that was sort of making, I thought, a provocation for platforms to have a public editor, where there was somebody whose job it was to explain governance decisions and be the voice of the community in those governance decisions. And I still think there's value in that, and that there's an element of that that is, if you put it in the most positive light, what Twitch are doing here. But I don't think you can just have one, in the same way that I don't think you can make big governance decisions by, oh, I don't know, fielding a poll on Twitter.

Mike Masnick:

Just randomly chose an example. Yes.

Yoel Roth:

I don't know why that one came to mind, but, um, the point is, there are tools here. There's lots of different approaches you can take to inclusive governance. It should be a yes-and, not an either-or.

Mike Masnick:

Yeah. Yeah. Oh, that makes sense. That makes sense. Well, I think that brings us to the end of this particular episode of Ctrl-Alt-Speech. It was great to have you on the podcast. Your, uh, you know, knowledge and insight on this is super valuable, and I think people will have really enjoyed it.

Yoel Roth:

My pleasure, with apologies for not bringing the usual British accent to this conversation. Um, it's always fun talking through the news of the week.

Mike Masnick:

Yeah. Yeah. And, you know, getting at the nuances and different challenges. But thanks again, and thanks to everyone for listening as well.

Yoel Roth:

See ya.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
