Ctrl-Alt-Speech

The Bell Tolls for TikTok

April 26, 2024 Mike Masnick & Ben Whitelaw Season 1 Episode 7

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.


Ben Whitelaw:

In honour of the recently shuttered Post.news app, Mike, what is on your mind?

Mike Masnick:

Uh, well, I'm wondering what's going to happen with all these apps that are trying to come around. But the bigger thing is that I think there's just too much news and figuring out what we're going to cover and how we're going to cover it is becoming a real chore. And I'm wondering if we're going to have to turn this into a two hour podcast, a three hour podcast, maybe we need to be like broadcasting all the time, Ben.

Ben Whitelaw:

Yeah, it's been a hell of a week, isn't it? Yeah.

Mike Masnick:

It has been one hell of a week. What is, uh, on your mind, Ben?

Ben Whitelaw:

I'm wondering if we can get our very learned audience involved in the choosing of our opening gambits. Um, I think part of our issue is that we spend so much time researching old, deceased social media apps and the calls to action that they used to use. So listeners, if you have any thoughts, you know where we are: podcast@ctrlaltspeech.com. Um, do get in touch.

Mike Masnick:

Yeah, I am wondering when the post prompt began, because I was looking at some of the old, deceased, uh, social media apps and messaging apps, and the early ones didn't have any sort of prompt. It is a newish thing, but I'm not quite sure where it started.

Ben Whitelaw:

There was a history lesson there, I'm sure. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you, as ever, with the financial support of the Future of Online Trust & Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, a weekly newsletter bringing you the best of the trust and safety stories from the last seven days. And I'm joined by Mike Masnick, who is always on the go and could be live-streaming constantly from Techdirt if he wanted to. How's it going, Mike?

Mike Masnick:

I'm good. I'm good. I'm exhausted. Busy as always. Uh, and, uh, there's just always, always, things happening.

Ben Whitelaw:

Yeah, that's the default nowadays, isn't it? It's like: exhausted, overcome with the news. If I had to respond to a post on Post.news, I would just say "exhausted", I think. But we're here, and we're bringing our best selves to the podcast.

Mike Masnick:

Yes, absolutely.

Ben Whitelaw:

We've got a lot to cover

Mike Masnick:

We'll need a shot of caffeine to be doing the...

Ben Whitelaw:

Exactly, exactly. Whether you're listening to this on Friday evening, or across the weekend, or on Monday morning, we hope this brings you a little bit of a perk-up and all your news for the week. Let's get started, Mike. Let's get right into the mix of things. There's a whole bunch of stories that we want to talk about this week. The story that we both agreed was one of our big two stories this week was a new report from Stanford that you have really gone into the weeds on. Um, you were telling me before we came on and started recording that the researchers of this report said that, you know, you might be one of the few people who've gone through it in detail. Is that right?

Mike Masnick:

Well, yeah, I don't know about that, but yes, some of the authors of the report were happy that I wrote two separate posts about it, and I have recorded, though it will not be released by the time this podcast comes out, an interview with a couple of the authors of the report. It's from the Stanford Internet Observatory, which, you know, does a ton of really great work on a bunch of stuff. But the report itself is looking at the CyberTipline from the National Center for Missing and Exploited Children, which has become sort of the de facto global system for how we deal with what's known as child sexual abuse material, or CSAM, online, in that U.S. companies are required to report to the CyberTipline if they come across any CSAM, and then NCMEC and the CyberTipline are supposed to review the information that is reported. They are the only entity, effectively, in the world that has a legal right to store and obtain child sexual abuse material without facing criminal consequences for it, uh, for, you know, for the purpose of investigation, and then they coordinate with local law enforcement around the world to try and deal with whatever sort of criminal violations have happened.

Ben Whitelaw:

And before we jump into the report, Mike, when did NCMEC come about? Because there are people listening who probably don't know exactly what this body is or where it comes from, and it's useful context before we jump into the actual research.

Mike Masnick:

Yeah. So it's, uh, I forget the exact timing on the year, but it was set up as a private, non-profit organization, but set up by the U.S. government, effectively, to coordinate around issues with missing and exploited children. And it's really taken on an increasingly larger and larger role as a bunch of the issues around, you know, child sexual abuse material went online. So as the internet grew, NCMEC's role really changed and expanded somewhat with it, though its funding hasn't necessarily kept up with that, um, nor its technology, as the report detailed. But it is this organization, and it's structured in a way to, in theory, try to deal with, and this is some of what comes out in the report, some of the legal challenges and the privacy issues around finding information, what is being shared, who is sharing it, and sharing that information with law enforcement in a way that lives up to, in the U.S. certainly, Fourth Amendment scrutiny, though there's some confusion over where those lines are drawn. And then around the rest of the world as well, it comes up in questions around data privacy and data protection and how information is handled in various countries.

Ben Whitelaw:

So because the major platforms are all based in the U.S., they are obliged to pass on CSAM, right, to the organization via the CyberTipline.

Mike Masnick:

Yeah. And so it's, you know, it's a little tricky, but some of the Fourth Amendment issue that's involved here, and it's sort of important to understand this and to go a little bit deep into the weeds, is that the government cannot force companies to search for this stuff, because then they would effectively be government agents. And if the government is directing you to search for content, then you have Fourth Amendment concerns, and they cannot require a search without a warrant, and a warrant requires probable cause for criminal behavior. So there are a lot of people who keep demanding that there should be a law passed that requires Meta and Twitter and everybody to do searches for child sexual abuse material. And you totally understand the thinking behind that, because you say, like, oh, this material is out there, it would be better if it was found. But requiring them to find it would then effectively ruin that evidence for any criminal prosecution, because if that information is found and it was done at government request, without a warrant, without probable cause, the remedy for that is excluding the evidence. So requiring these companies to search for stuff would then make it harder to actually prosecute the actual criminals, which is very problematic. But the way we've sort of gotten around it is that, one, we have this private non-profit where you have to report this stuff. And then the issue is that if a company finds child sexual abuse material and they do not report it, they face very stiff penalties. And so the requirement is that, as soon as they come across it in any form, they have to report it. And of course that leads to the potential, and the report sort of details this, of significant over-reporting. If you are unsure, you report: the punishment for reporting something that shouldn't have been reported is nothing; the punishment for missing something that you should have reported could be massive.

Ben Whitelaw:

Right. So let's get into the weeds of the report then. Um, just before we do so, I looked it up: 1984, it was founded. Co-founded, apparently, by Ronald Reagan. Interesting. So, but let's get into the report, because there's a lot of stuff in there. What were your key takeaways from what they put out?

Mike Masnick:

This is a super, super important addition to the knowledge on these things. Uh, obviously, you know, one of the biggest issues that faces everyone who does any sort of trust and safety work is dealing with child sexual abuse material. I've heard from people that if you have a platform that allows the uploading or sharing of any kind of visual or video content, you have child sexual abuse material. There are no two ways around it. It is just, that is what happens. It is one of the first things that platforms realize they have to have some sort of trust and safety effort to deal with, but the whole sort of CSAM ecosystem and how it works has been incredibly opaque. And for a variety of reasons, people just have not been particularly aware of it. Some of what comes out in the report is that even people who are on the reporting side don't fully understand what happens when they report, and people on the law enforcement side don't fully understand what the motivations of the companies are when they're reporting and what information they're providing. So just having that out there, and having a way to actually understand it, matters. They spent many months on the report. They interviewed dozens of people, all across the board, who were involved in the system. They spent a lot of time really trying to understand it. I know that people at Stanford spent a few days at NCMEC, like, shadowing the CyberTipline employees and seeing what they were doing every day. They really got deep, and they really reveal a whole bunch of stuff. So just having it out there is, like, a major contribution.

Ben Whitelaw:

Something new.

Mike Masnick:

Yeah. The second part that really stood out to me was, everybody knew that the system is not doing as well as it should. I think everybody understood that, you know, obviously there is a large CSAM issue with the internet and it always feels like more could be done, but understanding where the problems lie was never entirely clear, and some of that has to do with just the opaqueness of the system itself. And what you learn from reading the report is that there are a whole bunch of potentially problematic incentives. You know, I mentioned one already, like the over-reporting, and so everybody gets overloaded. All of the incentives sort of conflict with one another, and it creates a really challenging situation where there are things that could be made better, and the report has a bunch of recommendations, but there is no easy fix. A lot of these are sort of naturally occurring incentive structures that are not something you fix easily by, like, passing a new law or upgrading technology or something like that. Those can help, and can significantly help, and when you're talking about protecting children, it's probably worth doing these things to help. Um, but there is no, like, silver bullet, this-is-going-to-fix-everything kind of thing.

Ben Whitelaw:

So one of the key findings, to that point, is the fact that law enforcement officers who receive tip-offs from the CyberTipline don't actually know which of the tip-offs are more serious, right? This is kind of a key finding. And it's really interesting to me, as somebody who has no idea how the system works, like many of us, to see that, yeah, if two reports come across your desk and you don't have a clear sense of which offender is more likely to commit further CSAM-related crimes in the immediate future, you can't make an informed decision about that. So talk more about that, and some of the other insights.

Mike Masnick:

And again, like, you begin to get a sense of how challenging this is, because there were a couple of different things there. One is that on the platform side, the platforms reporting this, there are, like, different check boxes in the system, and often they don't fully understand which check boxes are important for letting law enforcement know, like, this is really serious, this is happening. But also, even more interesting to me, was a part of the report that detailed that some people in trust and safety roles within the companies felt that they didn't want to add in more information than the purely factual information. So they didn't want to say something to the effect of, like, I have a hunch that this person is very serious, and something needs to be done about this person, because they were afraid that if they got that wrong, the consequences could be massive. But, you know, people who work in trust and safety, who have done this work for a while and seen this stuff, will often have a good sense of which ones are the more serious ones, which ones they need to deal with, but there are reasonable incentive structures in place that make it hard for them to get across which ones they think are more serious. And the system is designed so that, on NCMEC's side, they don't want to provide too much information to the companies, because again, then you run into the Fourth Amendment problem. Like, are they instructing the companies how to do this? They don't want to do that. And then law enforcement is receiving these reports from NCMEC, but again, without all of that other information, like, you know, what the employees are actually thinking, which ones are serious, and there's no way for them to convey it, so they're just sort of going by gut. And they have issues on their end, on the law enforcement side, where they have to prioritize a whole bunch of different things, some of which are related to dealing with CSAM, and some of which are dealing with, you know, somebody's house got burgled and they have to go deal with that. And one of the things that comes out in the report is that police feel that dealing with, like, local burglaries is often more important, because the people whose houses have been burgled say that they need a police report in order to go to insurance, and so they're getting all this pressure to do that. Whereas dealing with CSAM, even though it is a major, major important issue, and if police can rescue someone and stop an abuser, that is obviously super important, they said that often police chiefs just don't prioritize this issue. Local prosecutors, in some cases they found, might not prioritize these issues, in part because being able to seat a jury that is going to be exposed to CSAM is a huge issue, because it's very tough to find anyone who's willing to do that. Even some judges refuse to look at CSAM, even though, to some extent, to understand what they're judging, they have to. There are all sorts of issues where it's, like, a really, really difficult situation. And again, there's so much more in the report. These are just a few examples where you begin to look at this and you're like, oh gosh, at every step of the way, it's really, really difficult to figure out a way to actually make this system work that doesn't cause other problems.

Ben Whitelaw:

Yeah. And we're going to come on to a story later on about the prevention of CSAM. And this report makes clear, and what you've spoken about there makes clear, that really the onus should be on preventing CSAM being distributed, right? Because once it has been distributed, it sets off this chain of events via NCMEC to law enforcement that, as you've detailed, is really, really hard to resolve and to catch up on and to address, because it runs into the kind of murky waters of the legal system. So we'll come on to that. I was really interested in the platforms and the distribution of tips to the CyberTipline, and how Facebook and Instagram, as part of Meta, were responsible for 82 percent of all tips. Why is that, do you think? Is that because, because other platforms are as large in some respects, you know, you've got Google, you've got WhatsApp, but they're responsible for 4 percent of all platform reports.

Mike Masnick:

Well, uh, WhatsApp is part of Meta, right? So WhatsApp does report CSAM, and they are included in the Meta batch, and they do report a lot. And just as an aside, an important part is there have been all these issues around encryption and end-to-end encryption and the fear that that is harmful for the fight against CSAM, yet WhatsApp does have end-to-end encryption and yet they report a very large number of CSAM issues as well. So, again, I think the fear about end-to-end encryption appears to be a little bit overblown. But, you know, I think there are a few different issues here. One is that Meta is much larger than any of those platforms. Like, you know, there are lots of large platforms, but Meta is bigger, and the structure of Meta and its various apps, Instagram and Facebook and WhatsApp, all in some ways are conducive to people trying to share and distribute CSAM. But part of it also is that, for all of the crap that Meta takes, they also have put a tremendous effort into finding and reporting this stuff. So the large number of reports is partly giving credit to Meta for how seriously they actually do take this issue, and reporting it. Now, I think the downside of this is that politicians and the media flip that around, and they show all of those reports and they say that is proof that Meta is not doing enough. There is the flip side to that argument, which I think is stronger, which is, like, no, the reason you're seeing all those reports is because Meta does take it seriously. They have all these tools to find and report this material, and that's why they are the largest reporter, as compared to, and the report mentions this a little bit, an example like Telegram, which does no reports. Now, Telegram is not a US-based company, but there are lots of reasons to think that Telegram should be reporting CSAM to the CyberTipline as well, and they simply choose to do zero reports. And so one of the things that the report mentions is that there are situations where discussions may start on one platform and then push people to move to Telegram to go further with regards to CSAM content, because users of the platform realize that Telegram does very little to nothing to police CSAM.
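[To make the detection side of this a bit more concrete, here is a minimal, purely illustrative sketch of hash-list matching against known material, the basic idea behind the tools platforms use to spot re-uploads before filing CyberTipline reports. It is a toy under stated assumptions: real systems rely on perceptual hashes such as PhotoDNA that survive re-encoding and cropping, plus classifiers and human review, whereas this sketch uses a plain SHA-256 digest, an invented KNOWN_HASHES set, and hypothetical function names, and it does not reflect Meta's or NCMEC's actual code.]

```python
# Illustrative only: exact-hash matching against a hypothetical list of known
# fingerprints. Real CSAM-detection pipelines use perceptual hashing (e.g.
# PhotoDNA-style hashes) plus classifiers and human review; none of that is
# shown here.
import hashlib

# Hypothetical fingerprints of previously identified material, shared with
# platforms so re-uploads can be detected without re-reviewing the content.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest used as this toy example's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def matches_known_material(data: bytes) -> bool:
    """True if the upload matches the known-hash list and should be escalated
    to human review and, for a US platform, a CyberTipline report."""
    return fingerprint(data) in KNOWN_HASHES

if __name__ == "__main__":
    sample = b"example upload bytes"
    if matches_known_material(sample):
        print("Hash match: escalate for review and reporting")
    else:
        print("No hash match: no automatic report triggered")
```

[The point of the sketch is just the shape of the pipeline: a match triggers escalation and a report, which is also part of why heavy automated matching at Meta's scale produces the large report volumes discussed above.]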

Ben Whitelaw:

Yeah, interesting. And so, it is worth noting that despite all the challenges related to the CyberTipline system, the report does note that it is worth preserving, it's worth nurturing, and this is something that was mentioned by Alex Stamos and Evelyn Douek on the Moderated Content podcast. But, like, this is a system that needs investment, right? It needs investment in staff and investment in technology in order to be able to scale the, uh, the work that it does, right?

Mike Masnick:

For all the challenges of the program, it does work in many ways, and it could work much better. And a lot of the limitations are around the technology side. There's a whole thing we don't need to get into where, effectively by law, they're banned from using the cloud. So they have their own servers where they store the material that they have. And if they were able to have cloud-based systems, they could have some better technology. They could use AI classifiers that were designed by others that they could pop in. There are a bunch of other things that they could potentially use that would improve things. There are some other things around, like, being able to hire good technologists to help them update their systems. They use XML, which is now effectively obsolete and very outdated. There are a whole bunch of things that they could improve, which would help on the reporting side. And then hopefully things like this report, which is giving some visibility for trust and safety people to understand things, and for law enforcement to understand things, could also lead to better situations. But if there was more funding for NCMEC, I think that could go a long way towards helping. Again, there's no silver bullet, there are always going to be issues here, um, but it could be really, really helpful, you know, based on the report. It's pretty striking.

Ben Whitelaw:

Yeah, great. Okay, so a really helpful, important addition to our understanding of how CSAM can be avoided. We'll park that and we'll move on now to our next big story, which is across the waters, over in the EU. And I noted this week, Mike, there's been a flurry of interesting DSA-related news, which feels like the commission cranking up its implementation of the DSA. I was in Perugia last week and met a number of people who are really closely involved, and there's a massive focus, and a number of EU officials have come out in the last few weeks saying that the focus now is on making sure that the raft of regulation that's come out is implemented properly. So we're really starting to see that play out. I will mention the fact that there are six countries who have kind of had their wrists slapped by the commission this week for not appointing digital services coordinators, which are the kind of heads of enforcement in the respective countries, the people appointed to enforce the regulation itself. And so there's been a bit of a line drawn by the EU Commission to make sure that these people are put in place as soon as possible, because without them, you can't really enforce the DSA. But the story I really wanted to bring to you was a TikTok one. Again, we're going to talk a bit about the TikTok story in the US, we can't not, but we've seen this week that the commission's opened a second investigation against TikTok, about the launch of an app that I had no idea about, and we talked before the podcast, that you had no idea about either, called TikTok Lite, which is a kind of lightweight version of TikTok for countries and people with lower internet capacity. And it's super interesting to see the commission go after TikTok under the DSA again, just a few months after it opened its first investigation, into breaching rules around moderating content and transparent advertising. So we have a situation here where the European Commission is going after TikTok on two fronts. And it's basically not dissimilar to what's happening in the US, but it's the kind of wheels of regulation turning very slowly and putting pressure on these apps to moderate content in a way that is more transparent and more visible to us. So I found that really interesting. I think the outcome of this TikTok Lite story is to be seen, basically. After the investigation was launched, the TikTok policy team announced that they had removed the aspect of the TikTok Lite app which had caused the commission to go after it. There's a thing called the task and rewards program, which incentivizes users to use the app more, and the commissioner said that this was potentially addictive and was causing users to be using the app more than necessary. So TikTok has pulled that away. We're in a position now where the app exists in a couple of countries in the EU, but these investigations will be ongoing, and they'll be reported out over the next couple of weeks and months. How does this fit with what's happening in the US, Mike? Give me the view from across the Atlantic.

Mike Masnick:

I mean, it's interesting, right? So this is the same week that the U.S. passed the bill, which they don't want anyone to call a TikTok ban, but which everyone recognizes is effectively a TikTok ban if ByteDance doesn't divest from its ownership stake. I did think, just to comment a little bit on the DSA, EU version of this, it was interesting that TikTok, you know, recognizing that it is under significant scrutiny everywhere in the world right now, chose to release this task and reward program. Which, I always have a few issues with, like, politicians jumping in and saying, you're making your app addictive, because, to some extent, where is the line between making your app useful, so people like it, and making it addictive? And the issue with addiction is when it is harmful and problematic. And if you're making an app that is legitimately useful, you know, how do you distinguish between those things? And I'm not fully confident that politicians are really the best at determining where that line should be drawn. But at the same time, it strikes me as, like, sort of tone-deaf for TikTok to think now is the time to introduce this sort of feature. People have complained, especially in, like, the video game context, you have similar things with, like, loot boxes and these kinds of things that sort of try and get people to spend more time on games, and there have been all of these complaints about it and concerns about it raised by governments around the world. And you would think that maybe TikTok would take a breath and say, like, maybe right now, where we're facing this much scrutiny, we shouldn't be launching this.

Ben Whitelaw:

This is like a red rag to a European Commission bull, right? This is provocative in the extreme. Um, launching an app in two major countries in the EU, while you're under investigation, that has potential violations of the EU's kind of biggest and newest online speech legislation. Like, what are you doing?

Mike Masnick:

Yeah. You know, at a time where it feels like, and you can tell me if you feel differently, but like the EU sort of wants someone that they can take down with the DSA, to prove that the DSA is working. And so, like, tread...

Ben Whitelaw:

Blood sport. Yeah, this is blood sport, and the EU smells blood. And, uh, yeah, somebody's gonna end up on the wrong side of a bull's horn in...

Mike Masnick:

Yeah.

Ben Whitelaw:

my analogy. I think, I think it's really interesting that they did it now. It's also interesting how quickly TikTok responded. You know, there's not many, we've had, I think, this is the third or fourth investigation that the commission's opened, and I don't feel like the others that were opened, on X slash Twitter, on TikTok or on AliExpress, yielded a reaction like this, like, within days, having them disable the part of the app that was in question. So there's an admission there, I think, of some sort, that, okay, this was us pushing it. But it will be interesting to see

Mike Masnick:

Yeah.

Ben Whitelaw:

how that plays out.

Mike Masnick:

And, you know, it is interesting to the extent that, to bring it back around to the U.S. ban, as much as I have concerns about the approach of the DSA and the framing of the DSA itself and the way it's being implemented now and being used, at least this focus is on investigating a particular action by TikTok and seeing whether or not that violates the DSA. As opposed to here in the US where, if we're going to continue the bull analogy, I guess we have the sort of bull in a China shop approach of, like, just break it all. You know, it's not like, let's investigate this particular aspect of what TikTok is doing that is problematic. Is it problematic on national security grounds? Is it problematic on privacy grounds? Is it problematic on propaganda grounds? Like, whatever, it doesn't matter. Just, you know, China's got to get rid of it or it has to be banned.

Ben Whitelaw:

yeah,

Mike Masnick:

And so it's not, in my opinion, certainly not the greatest approach. I think it's constitutionally problematic, we've talked about this before, I think there are a lot of problems with it, but it is now law. The way it became law was it got bundled with a bunch of foreign aid to Ukraine, to Israel and to Taiwan, and a few other things. That's sort of the nature of, that's the way U.S., uh, legislation gets...

Ben Whitelaw:

but baffling to me, but apparently that's how it works.

Mike Masnick:

It's baffling to me as well, and to lots of people, but it is the nature of the political process and the policymaking process in the US that these things get bundled. It struck me as notable that in the signing statement that President Biden made in signing the bill, he didn't mention the TikTok ban at all. He talked a lot about the aid, he scolded Republicans a whole bunch, and didn't mention the TikTok ban. And then later that day, his campaign posted two more videos to TikTok, which struck me as interesting, and the vast majority of the comments on both of those videos were TikTok users just yelling at him, uh, and basically saying, like, yo, bro, you posted this to TikTok, the app you just banned, you know.

Ben Whitelaw:

Yeah, let me floss, man. I want to.

Mike Masnick:

yeah,

Ben Whitelaw:

Is that the latest thing, flossing? I'm...

Mike Masnick:

I think, I think that's, that's a generation back. You're a little behind,

Ben Whitelaw:

Right. Yeah.

Mike Masnick:

So we're going to see sort of what happens there. Obviously, TikTok is going to challenge the ban. They've made it really clear that they're going to challenge the ban. There's been some discussion, there was a report in The Information that ByteDance was exploring who they could sell the app to. Could they sell it to a non-technology company, effectively? And could they keep the algorithm? That was the whole thing: like, could we sell everything but keep the algorithm? ByteDance vehemently denied that on the record to The Information, and then other stories came out, like, minutes afterwards, effectively saying that ByteDance had no intention of selling and divesting and would absolutely prefer to shut down the app. So I think there's a little bit of a game of chicken going on. They're going to challenge it in court, and when they challenge it in court, they can probably delay the timeline. Um, you know, the timeline is, effectively, they have nine months to sell it, but if they're showing in the nine months that they've made some progress, they can get another three months. So really they have a year, but they can probably stop that clock by going to court. And so I've actually been thinking about that. I originally thought they would go to court immediately, and was kind of surprised there hasn't been a lawsuit, and there might be, you know, by the time we're done with this, there might be a lawsuit soon. But there might be an advantage to, like, waiting a month or two just to extend the clock out a little further, because once they file the lawsuit, they can probably get the court to effectively stay the law and say this law is put on hold until we go through this analysis process, and therefore they may get more time if they wait a little bit. But we'll see how that goes. And I think there are some really big questions about what it means on both ends. You know, can the U.S. government ban an app, uh, or force divestiture of it? And what will that mean, you know, on a larger scale?

Ben Whitelaw:

What effect could, or does, the election in November have on this process? Does that change timings at all? I mean, Joe's gonna have to stop posting on TikTok at some point.

Mike Masnick:

Yeah, I don't know. I mean, the campaign told NBC, we will go where the voters go, and it's like, okay, so you're sort of admitting that this is an important venue for speech, um, which probably works against you in any lawsuit around whether or not there are First Amendment issues here. I don't know how much the election will directly have an impact. I mean, there's no chance that the app is banned by the time of the election, based on the timing here and based on the lawsuit. Donald Trump has sort of flip-flopped on this, where he was the originator of the idea of banning TikTok, and tried it and failed, and then recently, after he met with one of the largest American investors in TikTok, he completely changed his mind, and now he's already talking about how Biden has cost himself the election by banning TikTok. You know, we'll see. I'm certainly not a politics expert, but it feels like a weird sort of potential misstep. You know, if you're trying to get out the youth vote and trying to get the youth engaged, then banning the app that they like the most feels like a sort of dangerous thing. So it might lead to some backlash. But I don't know that it has any major direct impact on the election.

Ben Whitelaw:

Okay. Before we move on to the other stories in today's episode, we should probably talk about Project Texas just very briefly, which I think is an interesting addition to the discussion, right? So Project Texas being the quite extensive, very expensive plan to try and ensure that this ban didn't go ahead, which took place 18 months or so ago and involved the creation of a new organization which was going to house the data of the US version of TikTok, right? Which, um, was trying to kind of allay fears around Chinese government officials having access to US citizens' data. It tried to allay fears about the fact that the algorithm could be augmented to, I dunno, control US citizens' minds. And that whole plan kind of didn't get off the ground, right, before this process took place?

Mike Masnick:

It didn't, it didn't. And actually, the Project Texas story goes back further, right? It really has its roots in the attempt by Trump to ban TikTok and to force a sale in a much more clumsy way. Originally, Microsoft and Walmart wanted to buy TikTok, and apparently Trump quashed that and said, no, basically, the sale has to go to someone who I'm friends with, more or less. I mean, it feels a little corrupt, probably because it was kind of corrupt. Um, and he really wanted Larry Ellison, who has been a Trump supporter and is the founder and, not CEO, but chief technical officer of Oracle, to buy TikTok. But Oracle didn't want to spend whatever money and didn't want to own the sort of consumer app, that's really not Oracle's business. So they worked out this weird, convoluted thing where the data would all be hosted, it would go into Oracle's fledgling cloud operation. Everybody talks about Google and Microsoft and Amazon as cloud providers; Oracle's trying to be a cloud provider, they have had some success, Zoom is all based on Oracle Cloud. Um, so they basically said, oh, this is a business deal, we can get TikTok's cloud hosting business, and then we can promise that we'll host everything in Texas, in the US, and on top of that, we'll add in this ability that we will audit all of their systems and make sure that the data is kept in the US and kept private and not being sent back to China. And so they've built some of that. And my understanding is that there was an announcement last year at some point that Oracle was now officially auditing TikTok's data and that a bunch of it was stored in the US. It just feels like everyone sort of forgot about it. And it's not entirely clear, like, how do you control for all that data? It's not entirely clear.

Ben Whitelaw:

Could that plan be resurrected if the appeal is successful? And like, is it possible, do you think, that Project Texas could be allowed to kind of continue to its fruition?

Mike Masnick:

Yeah, I mean, my understanding, at least, is that that is still the intention, that TikTok is still sort of working towards that. And there are some difficulties, just kind of like the logistical aspects of it, but they've been building it, and Oracle has been working with them on it. So I don't think that that has ever stopped. It just feels like policymakers kind of forgot about it.

Ben Whitelaw:

Um, and if the divestiture doesn't happen in nine months, the app will be banned and there'll be no more TikTok in the US?

Mike Masnick:

It is sort of, I mean, even that is complex, right? And I know we don't have that much time, but it's sort of fun to get into the weeds a little bit on this. It's just that the app stores have to not allow it. So it's just that Google and Apple can't allow it, in theory. Then it begins to get complex. In theory, the website still works. The ability to sideload the app could still work. There are a couple of other ways that it could work: you could do web-based apps on phones, that could work. There are some ways of reading the law that might also force ISPs to try and block it, but, you know, probably not, and that would raise even more First Amendment issues, uh, especially the same week that the US has restored net neutrality rules, which we're not even going to touch. Um, so it's really challenging. And I'm actually kind of curious to see how Google feels about all of this, because the companies that stand to benefit from TikTok being banned in the US are Instagram, absolutely, and YouTube as well. But I also feel like Google might not like the idea of the U.S. government telling them that they have to ban an app from the Google Play store. And so Google has really competing interests, and I'm sort of curious to see how they feel about it. I don't know if they've commented on it at all.

Ben Whitelaw:

They'd need to have a social platform or app that was moderately used, probably, first, before they had it banned. I mean, that would be the starting point. No, but I take your point.

Mike Masnick:

I mean, let's be clear, YouTube is one of the most used apps in the world, and everyone sort of forgets about it. It is a social app, uh, that nobody considers, and yet it has more usage, and more time spent, than basically any other app out there.

Ben Whitelaw:

I'm just sad about Google Plus not existing. That's...

Mike Masnick:

yes, yes.

Ben Whitelaw:

Um, but talking of app stores, and the importance of app stores in allowing access to content and to apps, we're going to start there for the rest of our roundup, Mike, cause you picked out a story about a topic we don't really talk about very often, AR and VR, and the fact that Meta are thinking about how they expand their influence in the space, which is going to have potential content moderation implications in the future.

Mike Masnick:

Yeah. And in particular, they've opened up Horizon OS, which is the operating system they use for their VR headsets, the Quest, so that others can make hardware on it. And so Lenovo and Asus and a few others, I think, are making VR hardware that will now use Horizon OS. And so everything that anyone who owns a Quest has access to, these devices will have access to, too.

Ben Whitelaw:

Are you, are you a Quest user?

Mike Masnick:

I am not. I mean, I have used a friend's on occasion, but I don't have one and I am not a regular user of one.

Ben Whitelaw:

Me neither. Our producer Leigh spoke quite authoritatively about it, and, uh, so we are kind of going by what he says. But this has kind of quite big implications, right? Because app stores wield a lot of power, and the idea of Meta kind of having a role as an OS provider, and being a layer in between hardware and users, means that they could play a bigger role in AR and VR safety, right? Which we know has been a big issue.

Mike Masnick:

Well, I think the framing, and we had talked about this earlier, before we started the podcast as well, that sort of gives you a sense of what is happening here, is this idea that, effectively, before this, Meta's metaverse play was similar to Apple and iOS, where they own the operating system, they own the app store and they own the hardware, and they controlled everything top to bottom. And this is them switching from that integrated model to an Android model, where they produce some hardware, they still own the OS and the app store, but they allow other hardware players to come in. And in fact, you know, they're sort of hoping that that might really boost the market and get a lot more people using it, but it opens up a bunch more questions of what happens. You know, I think part of the reason they're doing this is that Meta made this huge bet on the metaverse. I mean, they changed their name to Meta and they sort of acted like this was the biggest thing in the world, and they've sort of come back to reality a little bit, that this didn't take off in the way that they expected and hoped. They spent billions of dollars on it and it hasn't been as successful. But they're realizing, maybe if we open it up, maybe if we let other people try and build stuff. I mean, there are obviously other VR and AR products on the market. But setting it up so that, okay, we're going to provide you with the OS, there's a whole bunch of apps, we have the app store, so other people can just build hardware. Like, maybe somebody will figure out the magic, uh, system that makes this become the next big platform, and then Meta is built in there, and Meta doesn't have to invent it in-house entirely. It really is the same approach that Google took with Android, you know, though Google started that way, and I think, you know, it would have been nicer if Meta had started there also. Um, but then there just become questions where, if that becomes the standard, and they own the app store aspect of it, you have all the safety questions around reviewing apps and determining what is allowed. There are already safety questions around the VR space, around harassment and everything, and, to its credit, Meta has taken that somewhat seriously in what they've done already in different apps, and they've talked about safety in the VR space quite a bit. And obviously they had hoped that VR would be a bigger thing for themselves, so maybe they have a bunch of trust and safety people sitting around waiting for more people to be using the system, to deal with the trust and safety problems, and are hoping to do it. But there are questions about which apps are going to be allowed. As we understand it, there are a couple of different aspects to the app store: there's sort of, like, the heavily curated one and then the more wild-west one, and they're sort of bringing those two things closer together. So there may be a few more questions about what apps are allowed in the metaverse, and how...

Ben Whitelaw:

Yeah. And you know, to liken it to the Google Android play again, you know, you don't need a very large number of apps that cause harm, or that slip through the cracks, when you're operating at that scale, for it to be an issue, right? And Meta knows that from its social products. But, um, yeah, this is a whole new frontier for them. And so, yeah, that's super interesting. Thanks for bringing that story to us. I wanted to talk a little bit about, kind of, an AI story also, but one really interesting partnership that we've seen spring up this week, which is potentially a bit frothy, but I think is worth noting: the fact that ten of the largest generative AI companies, um, in the world have come together to form a partnership to implement safety-by-design principles to try and mitigate some of the issues we talked about earlier in the podcast. So these are all names that you'll be familiar with. We've got OpenAI, we've got Anthropic, we've got Amazon, Meta, Google, all the folks who own the biggest, kind of most powerful large language models, have basically come under one umbrella, coordinated by two non-profits, All Tech Is Human and Thorn, to basically commit to trying to prevent AI CSAM. Because, as we know, as we talked about, once it happens, it's super hard to address. And so this is basically the end result of a paper, a piece of research, that a number of different people in those organizations have come together to create. And the paper's worth reading if you get a chance. It comes up with a series of principles that all of these companies are now going to adopt in their work, basically across the whole development of large language models: the creation of them, the deployment of them and the maintaining of them. And in theory, this is going to help create a standard for developing large language models, and a safety standard as well, which is something that is not really in the market right now. It doesn't really exist. So this is good. There's another part to it, which is that all of the companies have committed to being more transparent about their efforts, although it doesn't necessarily specify what transparency means. It would be lovely to have some of these efforts made available so the likes of you and I can report about them and talk about them. I wonder how much that will be true. But yeah, it's a step in the right direction, and, you know, fair play to All Tech Is Human and Thorn for kind of bringing these different companies together, who you would hope would be talking about some of these shared challenges already, but, you know, sometimes it's not always possible. So...

Mike Masnick:

I thought this was an important step. We'll see how impactful a step it is. I'm not convinced it's going to be as impactful as people hope. Um, the thing that struck me, and this does go back to, we didn't even talk about it, but there was a little bit of discussion in the Stanford paper about how AI-generated CSAM really threatens to overwhelm a system that is already overwhelmed by just the amount of content. I've spoken to a few trust and safety experts who have pointed out that there are lots of problems with AI-generated CSAM, but one of them is, you know, we talked about the difficulty of prioritizing things. The prioritizing becomes even more difficult when it's much harder to tell if it is a real child who is being abused or an AI-generated one. And if you flood the system with AI-generated CSAM, then the really scary part is that real children who are getting abused are sort of lost in the system. You know, there will be such a flood that it will be difficult to pick out which ones are real and who needs help. And so there's a real concern there. And the feeling, not that long ago, was that a lot of the AI companies did not much care about this. And so to see basically all of the big ones, all of the ones who are working on what's known as frontier models, the main models that everybody else uses, agreeing to try and put safety aspects into their models, including, importantly, the open-source models. You know, that was the thing that was most concerning to me. I'm a big supporter of open source, and I think open-source AI is really important, but there is a real fear that an open-source model is more prone to abuse because there's much less control over it. There are still questions of, like, are people going to sort of strip these safety measures from the open-source models? But hopefully there's enough that can be done to really protect and limit the ability of people to use these AI models to generate artificial CSAM. Um, we'll see. You know, the issue with AI is that there are always hacks around these things, and prompt-injection-type things that get around all of the safeguards. Like, it happens over and over again. That's still going to be a challenge, but at least they're thinking about it, at least they're working with Thorn and All Tech Is Human to try and come up with something. And hopefully they'll be transparent about how well these things work and what they're finding and where improvements can be made. Yeah.

Ben Whitelaw:

It'd be great to have somebody from this group of companies come on the podcast at some point and talk about their efforts. I think that'd be really interesting, particularly how they're working together. The next story you flagged, Mike, is not one of the big tech companies in this coalition, but a very small community called Wattpad that you've, uh, you've talked about, and a new effort that it's made to reduce a harm on its platform.

Mike Masnick:

So Wattpad has been around for a while. It's actually a really interesting platform, usually for, like, fan fiction. It's a community site where people write and share fan fiction. And they decided this week to shut down DMs. There had been some complaints that, in the past, there were attempts to groom some users on Wattpad via the DMs, and basically the platform determined, like, this is too much of a hassle. Not that many people actually use the DM functionality on Wattpad, and we're more of this sort of publishing community for public discussion, and private discussions are kind of a hassle too far. And therefore, we're just going to shut it down. It's just interesting to see, because for all of the different challenges that we see about online speech and online communications, seeing a service say, you know what, this is not core to our business, we're just going to shut it down, if people want to communicate they're going to have to find a different way to do it, our sort of mode of communication is public discussion, and that solves this one aspect of the trust and safety challenge, was interesting to see. It just sort of stood out to me as something you don't see every day.

Ben Whitelaw:

And it links neatly into the fourth story that we will round this week's podcast up with, which is an interview with Jason Citron, who's Discord's founder, which is obviously a very heavily used platform slash app. And he talks to Nilay Patel on The Verge's Decoder podcast, really, about this distinction that you're talking about here, right? Which is the kind of fork in the road that some platforms have between being a social platform that is public to most of its users and being a messaging platform. And it's a really interesting discussion. Discord has had its challenges in the past around content moderation. It was recently the subject of a Netflix documentary about some leaks of US intelligence data. And so it hasn't got it all right, but it has grown its trust and safety function largely over the last few years. And he talked about how there are basically two modes, really, in terms of its platform and how it moderates. There are the kind of very private, semi-private, um, servers and spaces, which it needs to give people the tools to moderate content on, because it's kind of really up to them. And there is the messaging function, which it feels like it needs to moderate, and it has a way of doing that. It doesn't use end-to-end encryption, because it feels like it can't fulfill its mission of keeping folks safe, gamers safe, often young gamers as well, with end-to-end encryption. And so it was really interesting just to kind of hear him talk about those two modes. And when we talk about the Wattpad story, you can see how they've really taken the decision on which route to go down there, which is really interesting. Have you ever come across an app, Mike, that's, like, you know, killed DMs, or, like, killed a feature, as part of a quite drastic attempt to reduce harm? I was wondering if there was, like, an equivalent of this before.

Mike Masnick:

Um, I mean, there've definitely been some, I'm sort of blanking on the spot of things. Um, you know, the first one that came to mind, and it's not the same in any sense at all, it's not killing a feature, but, like, just killing a point of annoyance, effectively, was, of all things, the knitting community site Ravelry, which, a few years ago, banned any discussion about Donald Trump. Just because they realized...

Ben Whitelaw:

I remember that

Mike Masnick:

Yeah. And they just realized that that was just such a, uh, trust and safety nightmare and headache that they just didn't want to deal with it. And they just said, look, you can have your political views, but this is not the place to be having that discussion, because it always goes crazy. And so it's not quite the same thing, cause that's not a feature that was banned. Um, I mean, I'm sure that there are other apps, I'm just blanking on the spot.

Ben Whitelaw:

Yeah. I mean, in some respects, the idea that all platforms should be everything kind of contributes to the spreading thin of trust and safety resources, and of knowledge about the harms at play. It feels like Wattpad is basically saying, we want to be this kind of community, we know we can deal with these kinds of harms, and this one doesn't benefit us greatly. I respect that in many ways, to kind of slim down what it offers to users as part of an effort to keep it safe.

Mike Masnick:

And I also think there's an interesting comparison there with the Citron interview, in terms of understanding what kinds of spaces you're creating, what kind of community you're creating. Are they public spaces? Are they private spaces? And what are the trade-offs for each of those? There are things that are beneficial and things that are tricky with each of them. And it's important for companies to recognize what it is that they really want to be, and figure out the best way to handle those different challenges.

Ben Whitelaw:

Yeah, no, definitely.

Mike Masnick:

I mean, I guess actually that does bring up one other one, which we could have spoken about a couple of weeks ago, which was, um, Snapchat had that, uh, I forget what they called it now, cause this just happened.

Ben Whitelaw:

It was like a map, right? There was some sort of map...

Mike Masnick:

It was, like, this universe of your friends, like, who you were closer to and whatnot. And they ended up killing it, but that was basically because they were getting a ton of criticism of the potential sort of social harm. Um, but that was a few weeks ago, and I don't remember the details on that story right now.

Ben Whitelaw:

No, me neither. Um, not going to try and look it up now.

Mike Masnick:

Yeah.

Ben Whitelaw:

I think that's a helpful point at which to kind of wrap up today's episode, Mike. I think we've really gone across the world in terms of our stories, from the US and TikTok, to Europe and the, uh, efforts to kind of regulate that. Um, anything else you want to add before we wrap up today?

Mike Masnick:

No, I think, I think that's it for this week. Obviously, as we said, lots of news, there were a bunch of other stories we wanted to cover and we didn't, but we're already going pretty long. So, there's a lot going on in the world of online speech right now.

Ben Whitelaw:

Yeah, and thanks to all for listening. Thanks for tuning in. If you have time and energy, please do leave us a review and rate us on all the, uh, podcast channels that you listen to us on. We'll be back next week and, uh, hopefully we'll see you here as well. Thanks, everyone.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.