Ctrl-Alt-Speech

The Platform to Prison Pipeline

August 30, 2024 Mike Masnick & Ben Whitelaw Season 1 Episode 26

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

Right, mate. So Substack, as you know, has attracted a fair bit of attention over the past few years for its moderation policies, but its homepage prompt offers a much more concerned, caring view that kind of belies the controversy around its moderation. So I'm asking you today: what's on your mind?

Mike Masnick:

Well, Ben, I'm wondering, as we go through this podcast, how much of it is going to cause people on all sides of all issues to yell at me and complain that I couldn't possibly understand what is happening in this or that country, because we have lots of nuanced stories today that seem to upset people, or that people have very strong opinions on.

Ben Whitelaw:

Oh God. Oh God. I can't wait.

Mike Masnick:

What's on your mind?

Ben Whitelaw:

So on my mind is really the question of how I get citizenship of four countries. How can I go about getting more passports? Because one of the protagonists of one of the big stories this week has certainly found out how to do that.

Mike Masnick:

Well, it didn't work out well for him, because he's sitting in jail. So...

Ben Whitelaw:

Indeed. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm back in the hot seat this week, Mike. Thank you for leading the charge last week with the brilliant Daphne Keller. And, uh, yeah, good to be back. How are things?

Mike Masnick:

Yeah, things are good. How was your time away?

Ben Whitelaw:

It was very good. I was in the Scottish Highlands, and yeah, I saw some beautiful scenery, ate some deep-fried pizza, all of the good Scottish things that one does when north of the border.

Mike Masnick:

Yes, yes. We were discussing that: the Scottish delicacy being deep-fried whatever. There appears to be some sort of requirement for people in Scotland to get deep fryers and then figure out how to throw absolutely anything into them.

Ben Whitelaw:

Yeah, it's true. And I can say this as somebody who's half Scottish, although my accent doesn't show it: there's a reason why Scots have some of the worst health outcomes in the world. But it was delicious and I would thoroughly recommend it. And I listened to the episode with you and Daphne on the way back; I got the train home, the Caledonian Sleeper, and listened to the two of you en route, and it was a great episode. How was it having her alongside you?

Mike Masnick:

Oh, it was wonderful. You know, Daphne's great. I've obviously known her for a long time, and she's obviously super, super knowledgeable and detailed and thoughtful on all of these issues. So having her on the podcast was great. And there were a bunch of really good stories for her to dig into, because there were a lot of details, a lot of nuance, a lot of the legal issues that she is an expert on. So I really enjoyed it.

Ben Whitelaw:

Yeah, fantastic. So we've got a whole raft of stories this week that we have to cover. Some weeks we struggle for stories a little bit; this week there are very clear stories for us to dig into, so I can't wait to get into that. Before we do, I just want to thank the folks who, on the back of our plea a couple of weeks ago to share ratings and reviews, have given us five stars on their favorite podcast platform. Really appreciate it; we're trending well on those platforms now. Not many folks, if any, have actually given us a written review, though, and it would be great to have your feedback in written form as well as in star form. We'd love to hear from our listeners. So if you get a chance, if you enjoyed today's episode, don't forget to do that. Or you can be like, you know, the listener who gave us one star right back at the start, before...

Mike Masnick:

Before we'd even launched. Yes, the Techdirt troll, our resident troll, decided that he was going to pre-poison the reviews of this podcast before we launched, which appears to still be our only one-star review. So hopefully you'll give us more than that. But also, we would like written reviews. Written reviews really do help. They give us feedback, first of all, but they also let other people know that the podcast is here, and that it is good, and that it is enjoyable. And I know that some of the recommendation systems, including Apple's, which is the biggest one, rely on reviews and ratings to help them decide whether to promote you to others. So if you would like other people to be hearing this podcast too, which helps us, then please go write a review, and make it nice and interesting.

Ben Whitelaw:

Yeah, we really appreciate it. Okay. So let's make an episode that people feel they can leave a five-star review for, Mike.

Mike Masnick:

Oh my gosh, so much pressure, so much pressure.

Ben Whitelaw:

Let's give them something to talk about. So let's dive straight in. We've got a whole tranche of stories that we want to get through, and we're going to start with a kind of story that we have touched on a few times before, about a platform that we know and love, TikTok, and a big ruling that came out in the latter part of this week, which you and many other very knowledgeable internet experts are not happy about. That's fair to say. Talk us through this Third Circuit ruling.

Mike Masnick:

Yeah. So this really was a bit of a surprise, and I think it took a lot of people by surprise because people weren't following this case very closely. Most people assumed it would go the normal way that a case like this goes, and it went a very, very different way. The details of the case are horrific, and it is often said that bad facts make bad law. This seems to be one of those cases, because there are tragic circumstances and a tragic story behind it. There are various challenges that happen on TikTok that go viral, and a bunch of people do them. There have been some, I would say mostly exaggerated, moral panics about some of the challenges being dangerous, and kids imitating them and getting hurt, or in some cases dying. The actual incidence of that appears to be fairly rare, but this is one case where it does seem to have happened. There was a blackout challenge, and from the facts of the case, it appears that it was seen on TikTok by a ten-year-old who attempted to replicate the challenge and ended up dying in the process. There's no two ways around that; it is tragic and horrible in every aspect.

The mother of the child then sued TikTok and claimed that they had legal liability for the death of her child. TikTok used a few defenses, including Section 230, and the district court threw the case out on Section 230 grounds, as has happened in every other case of this nature: the content of the challenge was not created by TikTok, and therefore they don't have liability for it. So then it went up to the Third Circuit, and I think the reason everyone had ignored this case was that they expected the Third Circuit would rule like every other court and uphold the Section 230 ruling. The liability is not on TikTok; it is a horrible situation, everybody agrees, but it is not TikTok's responsibility.

The Third Circuit did not do that. It issued this very strange ruling, and it's strange in a few different ways. One, it is extraordinarily short. There is very little in the way of detail, and in fact there are so many footnotes that there may be more text in the footnotes than in the actual ruling, which is weird. And basically it says: you know what, recommendation algorithms are not covered by Section 230. Now, there is an argument some people have made that they should not be, and that there are ways to get there legally. That is not how any other court has ruled to date, and there are reasons why I think it is wrong, but there is a way to make that argument that is, you know, understandable. The Third Circuit did not do any of that. It basically just jumped ahead and said, well, we've decided that it is not covered. Now, it does acknowledge, and I'll explain what little reasoning there is in a second, but the way appeals courts work is that a three-judge panel reviews the case, so all three judges agreed on this, and they do admit that this appears to go against basically every other circuit court ruling, including a previous ruling in the Third Circuit itself, which in theory they should be bound by. And so there is this one footnote that runs across three pages, that's how big it is, the footnote itself is three pages long, listing out all of the cases they are going against. That's just very strange. This is not something you normally see.

And that, in my mind, is the point at which one, two, or all three of the judges on this panel should have said: maybe we are doing something wrong.

Ben Whitelaw:

And so the footnote is there to basically justify the decision, it sounds like, in a way.

Mike Masnick:

Sort of, but not really, right? Because, first of all, you usually shouldn't be justifying the main issue in your ruling in a footnote; that's not usually how it works. It's basically there to say: we acknowledge that basically everybody disagrees with us, but we still think we're right. Now, normally, if you were going to do that, you would then have many, many pages of detailed analysis, with citations, explaining why you are doing it and what the legal basis is. They don't really do that here. What they do is point to the Moody ruling, the case we generally had referred to as the NetChoice cases, but which people now refer to as the Moody ruling

Ben Whitelaw:

Yeah.

Mike Masnick:

in the Supreme Court, which we've talked about a lot on the podcast: the case over whether or not Florida and Texas can pass laws that order companies to moderate in a certain way. And it's based on, I believe, just a complete misreading of the Moody ruling. It's a twisted interpretation of kind of a throwaway comment that was first made in the oral arguments, and that I've heard from, basically, crackpots in the past. Justices Clarence Thomas and Neil Gorsuch have brought this up; they brought it up during oral arguments in the NetChoice cases, and they've brought it up a few other times. The argument goes like this. Section 230 is pretty clear. It's very short; the operative part is, famously, 26 words. And it says that, as an internet service, you are not liable for the publishing activity involved in publishing third-party content. You're not responsible for the content that you are publishing. And the sort of crackpot idea was that when the companies complained about these laws regulating their content moderation practices, they argued that content moderation practices are editorial decisions, which are therefore protected by the First Amendment.

Ben Whitelaw:

Mm hmm.

Mike Masnick:

So the argument Clarence Thomas and Neil Gorsuch made was that this is a contradiction of Section 230: saying that content moderation is protected by the First Amendment contradicts Section 230. It doesn't, but their thinking for why it does is that you are saying content moderation is expressive activity, because it has to be expressive activity to be protected by the First Amendment, so the editorial choices are expressive; and then in Section 230 you're saying, we have nothing to do with this, so don't hold us liable, because it's not us. But that's not what Section 230 is saying. Section 230 is not saying we have nothing to do with this. In fact, Section 230 is pretty explicit: you are publishing this content, you are distributing this content, but you shouldn't be held liable for it because it wasn't created by you. It's a little bit in the weeds, but there's this separation between the creation of the content, the publishing of the content, the distribution of the content, and arguably the recommendation of the content; these are all different elements. What Section 230 is saying is that the only one who is liable is the one who created the content, while publishing, distribution, and recommendation are all First Amendment protected activities. The only one who can be liable for harms from the content is whoever created it. But the argument that Gorsuch and Thomas made, and that the Third Circuit is now making, is that by saying that the publishing activity, the editorial activity, the recommendation, is protected expression, you are saying that it is no longer third-party content. You are taking ownership of it.

Ben Whitelaw:

By deciding what gets distributed. Right.

Mike Masnick:

Which is the exact thing that Section 230 was literally written to overrule.

Ben Whitelaw:

Okay. And is that... because obviously Section 230 was written a long time ago

Mike Masnick:

Yes.

Ben Whitelaw:

and I have seen some commentary since this ruling came out that said, basically, Section 230 couldn't have predicted the nature of algorithms and couldn't have predicted that distribution would be so inherent in the process.

Mike Masnick:

Yeah.

Ben Whitelaw:

Is that what this ruling gets at: this question of whether back in the day, back in 1996, when it was created and written, it also covered distribution and dissemination in the way we're talking about?

Mike Masnick:

So I think what is correct is that the internet is different, and the way social media works is different than in 1996 when Section 230 was passed. But the fundamentals have not changed. The idea behind Section 230 was that the publishing activity that an internet service does for third-party content is protected from liability for any harms or violations that come from that content, and the way it has been judged, going back in history, has always been that you are protected for traditional publishing activities. There was the famous Zeran case; there's also the Barnes case in the Ninth Circuit. There are all these cases that basically say publishing-related activity for third-party content is protected. And that's not as crazy an idea as some people now think it is.

Because you have things like bookstores that recommend books, right? If you walk into a bookstore, there'll be a shelf of staff choices or whatever. If a bookstore recommends a book that turns out to have defamatory content, you would think it's probably a little weird to then sue the bookstore and say, well, they recommended this defamatory book, therefore they are responsible for the defamation. That seems strange, because they didn't know it was defamatory; they just thought it was a good book, or a book that people might want to buy. So in general, putting the liability on the person who actually created the content makes sense. And we have another example, and I mentioned this in my article. There's a case that goes back many years, pre-internet, or very early internet depending on where you draw the line, where an encyclopedia of mushrooms was published, and it said a certain mushroom was edible, and it turned out it was not, which is bad. People got very sick from eating this mushroom that the encyclopedia of mushrooms said was edible, and the publisher was sued. And what the court said was that the publisher is not liable, because they don't have a duty to have figured out whether every single thing in that book is accurate and true. That is on the author; you could sue the author, but the publisher is not liable. So we already have these concepts where publishers and distributors and recommenders have different levels of protection from the content's authors. And Section 230 is really there to enforce that, make it clear, and avoid a really complicated legal process to figure out where the liability applies. What the court is saying here makes no sense, in that it says that as soon as you recommend something, you have effectively embraced that content as your own and made it first-party speech. That is what the ruling says: once you recommend content, it's no longer third-party speech, which would protect you under Section 230; it's now first-party speech, your speech. And they're saying this is true because, under the Moody ruling, the companies were arguing that they're protected by the First Amendment. But that gets the analysis wrong, because it assumes that you're only protected under the First Amendment for first-party speech, which is not true, as all of these publishing and distribution examples going back past the beginning of the internet show: there are different levels of liability and different types of liability.

And here they're saying, no, it all wraps into one: as soon as you recommend something, you have now taken on the liability for it.

Ben Whitelaw:

Yeah.

Mike Masnick:

And that's really problematic, especially without any sort of deep analysis. They literally just rush by it and say, because of this Moody ruling, where they said this thing. Except they didn't say it: it was mentioned, not in the majority ruling, but by people in the discussion around it. And the Third Circuit seems to think it's a core finding of the Moody ruling, which is just not true.

Ben Whitelaw:

They've run with it, haven't they? And they've ended up here. What are the implications of this case? If we fast forward a little bit, and I want us to move on to some of the other stories we've got, but what are the implications of this case? What happens now? It's being handed down to a lower court, I believe. Where do you see this going, and what are the tangible impacts on platforms if this doesn't change?

Mike Masnick:

So this becomes a big deal, and it depends on how TikTok wants to handle it, because they're the defendant in the case. What will likely happen is that they'll ask the entire Third Circuit to review it, which is called an en banc review. As I said, this was a three-judge panel, but you can ask the court to do an en banc review, in which all of the judges, or a larger panel of the judges, review the case. That is most likely the next step. And once they do that, I imagine a lot of people will weigh in with amicus briefs and be like, hey, this is crazy. And then after that, it will almost certainly go to the Supreme Court. It has to go to the Supreme Court, because, as they say in that footnote, they are outright disagreeing with every other court, including the Third Circuit itself, that has ruled on this issue, and that's the kind of thing the Supreme Court likes. So I imagine that at some point the Supreme Court will take this up. If it goes to an en banc review, that would probably be next year; we'll cover that. And then a year later it'll go to the Supreme Court. So we're probably talking another two years before this is actually worked out.

But the important point is this: if any recommendation becomes first-party speech and Section 230 no longer protects it, we are right back in the world that Section 230 was created to fix, which was the Stratton Oakmont case, where the court said that if you do any moderation at all, the things you don't moderate suddenly become your speech and you take on liability for them. Here, we're saying the same thing, except anything that goes into a recommendation engine automatically becomes your speech and you take on liability for it. That's the same situation we had with Stratton Oakmont, and that was the whole reason Section 230 was written. So the idea that the court here is saying that Section 230 means the exact thing it was written to overrule seems absolutely crazy. We'll see; I hope the Third Circuit comes to its senses and changes this in the en banc review. I've seen some people saying this is a really important and substantial ruling. It is not substantial in any way; no one could read it and say it is substantial. It is important only in that it is absolutely crazy. And even people who disagree with me on Section 230, people who I think are serious scholars, have said: yeah, this ruling is crazy, it just makes no sense, and there's no real explanation for it.

Ben Whitelaw:

Okay. So hopefully we'll get to read and review and discuss a more robust ruling that maybe gets into the issue a bit more. But yeah, super interesting; thanks for talking us through that. So the next section of the podcast has a series of stories that are linked, and I'm going to call it the 'CEOs being made to sweat' section of Ctrl-Alt-Speech; listeners will understand why in a second. If you're tuning in, you will have heard about the arrest of the Telegram CEO, Pavel Durov, last weekend, and there have been a couple of other CEOs in the spotlight this week. We're going to talk about them a little bit together, because Mike and I think this is an interesting pattern, an interesting trend that we're starting to see. But to give you a bit of an update on the Durov story, if you haven't been following: the relatively major news last Saturday was that the CEO of Telegram was arrested as he landed in France on a flight from Azerbaijan, initially in connection with twelve alleged offenses. On Wednesday we found out that he had six charges brought against him, including complicity in spreading CSAM and in distributing drugs, and a refusal to cooperate with French authorities. So this has been made out to be, essentially, a story about the arrest of a platform CEO on the basis of its content moderation, which I think has really ignited a debate around online speech and what constitutes free speech. And you've written a series of really interesting pieces this week, Mike, trying to unpick what we know and don't know. What's interesting to you about this particular story as it's unfolded?

Mike Masnick:

Yeah, the main thing is how much we don't know, and this is why it's been interesting, because I've seen a whole bunch of people retreat to their corners on this and feel very sure about whether this is a travesty of justice or justice served. And I don't know that we know where on that spectrum this really lies. It could be at either end of the extreme, or more likely somewhere in the middle, because the charges are fairly vague, and as written, the charges in theory could apply to lots of CEOs, right? This is an issue that comes up a lot with social media, or any internet service that allows user content: people will do bad things and people will upload bad content, and often that content is illegal. It has been said that if you have an internet platform that allows user-generated content, you have a child sexual abuse material problem. That happens. That is the nature, unfortunately, of the internet, and we've talked about it in other contexts before. But the question is: where is the trigger between having that problem and having criminal liability for that content existing on your platform?

Now, with Telegram, we have discussed multiple times in the past the fact that they do seem to be at the center of everything terrible. The report from Stanford that we talked about found that Telegram does not report any CSAM found on its platform to NCMEC. That is probably bad, and there is a very high likelihood that it is breaking the law in doing that. They might argue that they are not a US company and don't have to obey US laws, but they do operate in the US, and there is a decent chance that some of their infrastructure is in the US. So that all seems fairly questionable, and there are lots of reports of other horrible content.

And one of the things that is important to discuss here is that a lot of people don't know how to classify Telegram, and Telegram itself always positions itself as a secure messenger, which is incredibly misleading. They position themselves as sort of like Signal, which is an encrypted messaging app, but that is not true for the most part. For the most part, Telegram is much more like social media: there are these public groups. Instead of one large network, like a lot of social media, it's more a bunch of different channels, but those channels are all public; they're not encrypted in any way. The messaging component is actually a much smaller part of it, and even that is not really encrypted. You can turn on encryption for one-to-one communications, but you have to choose to turn it on, and most people probably don't. So most one-to-one communications are not encrypted, and as soon as you add a third person, encryption is not even an option. So a very, very small percentage of communication on Telegram is encrypted; most of it is not, and therefore Telegram has visibility into this content.

Ben Whitelaw:

And it's worth noting, that's an important point to make, because the French authorities had gone to Telegram, gone to the founder, and asked for specific information about a Telegram user who had made threats of rape and sent messages. So they asked for user information earlier this year, and one of the reasons this arrest has taken place and charges have been brought is that Telegram basically refused to cooperate and didn't give the information, right? So that status, somewhere between an encrypted messaging platform, which allows you to say to governments 'we're not providing any information to you,' and a more public network, where that information is collected, is, I think, really key to point out in this case. Right?

Mike Masnick:

Yeah. So, right. With Signal, at times law enforcement will go to Signal and ask for information, and Signal will say: we don't have that information; we cannot give it to you. There are some elements of encrypted messaging on Telegram where they might be able to say the same, but for most content on Telegram, that is not true. Now, there are all sorts of reports over the years, and who knows how accurate or honest they are, of Telegram claiming to cooperate with law enforcement on certain things, whether pulling down certain content or sharing information with law enforcement. And there have been stories in the past, in Europe, in Russia, and in other places, and sometimes some of those stories have been a little bit concerning: there were stories of people planning things in opposition to the Russian government that suddenly the Russian government seemed to know about, and some people were saying, wait a second, is Telegram safe for communication if it is in opposition to the Russian government? Durov, of course, is Russian, though there are questions about whether or not he is liked by Russia; he apparently fled Russia and had to sell his first company.

Ben Whitelaw:

Yeah. He also has, as I alluded to at the top of the episode, three other citizenships from different countries, including France, St. Kitts and Nevis and...

Mike Masnick:

The Emirates.

Ben Whitelaw:

and the Emirates. Yeah. So,

Mike Masnick:

Emirati citizenship.

Ben Whitelaw:

which is where Telegram is based. So he is a man with many passports.

Mike Masnick:

Yes. And so what it really comes down to is that there are a bunch of different potential charges and other things that were brought up, including encryption. France does have an import/export restriction on encryption, which is unfortunate because it's kind of a dumb idea, but they have it. Basically you just have to register that you are importing or exporting encryption, and as I understand it, it's a fairly simple process, but Telegram didn't do it. That feels like an add-on charge. The central core of the charges is this idea that French law enforcement went to Telegram with specific information about potential criminal activity, and Telegram effectively ignored them. That's probably bad, and it is not something a company should be doing. The question, though, still is to what level, and this is what we don't know, and this is where it's hard to make a real determination on how scary this is. It is said, the reports say, that there's no indication that Pavel was involved in the criminal activity, right? When the arrest came, some people were saying maybe he was actually out there sharing CSAM or terrorist content or something like that, which would be a different story altogether if he was directly involved. There's no indication of that. But where is the line between what is criminal activity and what is just incompetent activity, in terms of whether you return information when law enforcement goes to you and demands it? The issue here is that there was an investigation, law enforcement went to Telegram, and it appears Telegram ignored them. Now, even then, there are cases where you want companies to be able to push back if they think the requests for information are not correct. That is part of a functioning society: the ability to say, we don't think it's appropriate for us to hand over this information, or we don't think the request is reasonable or legal, and then there should be a legal process for pushing back. But it appears that Telegram just ignored it. Then the question is: if that's the case, is this a criminal issue that should reach the executives, or is it something that can be taken care of through a fine or some other injunction or legal process that goes against the company itself, rather than the executives? So the jump here to actually arresting the CEO feels like a somewhat concerning jump, but again, without the underlying details, we just don't know: is there enough there? Do they have enough evidence to make that jump reasonable? So I keep trying to push back on the people saying this is a crime against encryption or an attack on free speech. It could be; it could turn out that that is the case. But it might also turn out that, no, he was doing really bad stuff, and totally ignoring law enforcement requests is kind of a recipe for really bad results. And part of the problem is that the French legal process seems to be that they don't want to tell us the details.

Ben Whitelaw:

Yeah. The details are a bit sketchy there, and it feels like there's more to come out about this story over the coming weeks, I imagine.

Mike Masnick:

Yeah. Yeah. But, you know, there are the stories where people are saying, oh, this is a dry run for arresting other CEOs, in particular Elon Musk. I saw a lot of people saying they're going to go after Elon Musk next, and one of his fans was saying, oh, don't go to Europe anymore, Elon, they're going to arrest you next. I think that's nonsense. That's not how any of this works.

Ben Whitelaw:

No, no, right. I mean, there's the other interesting part, which is Durov's links to Emmanuel Macron, the French president, right? There were rumors that, when he landed, Durov said that he was in France to meet Macron,

Mike Masnick:

Yeah. He said he was there to get dinner with Macron.

Ben Whitelaw:

Yeah, as reported by a kind of satirical French magazine, so it's unclear whether that's true. Macron has subsequently said that's not why he was in the country. But Macron has long courted Telegram and Durov as part of his efforts to make France a beacon of technology and growth, and apparently still uses Telegram, even though he's not meant to under a French government edict. So he is quite closely linked to this platform, and he has obviously been relatively hostile to other big platforms in the past. And so if French authorities are going to bring a case that could potentially embarrass Macron to the extent this one might, I'm wondering to what extent it is pretty robust. I mean, again, we don't know the details, but politically it would be a bit of a nightmare if it turned out

Mike Masnick:

I mean,

Ben Whitelaw:

to be based on nothing.

Mike Masnick:

Yeah. I mean, Macron has said that the government had nothing to do with it; the prosecution came from the court in Paris as part of its investigation. There are different ways in which investigations happen, but basically it didn't come from the government itself, it came from the court's investigation. So there is this separation, and who knows how true that is, we'll see. But he has sort of washed his hands of it and said, this is outside of my remit; it wasn't something that he ordered or whatever.

Ben Whitelaw:

Yeah. But it's a super fascinating story. It's caught a lot of people's attention, and it's really split people in terms of where they stand.

Mike Masnick:

Yeah. We'll have to see; more details will come out, and I'd just caution people not to jump to either conclusion yet, because it's really unclear what the specific accusations are, and those will matter an awful lot in determining whether this is an overreach or something totally reasonable.

Ben Whitelaw:

Yeah, indeed. Talking of CEOs scrambling under government pressure: you talked with Daphne last week, Mike, about Elon Musk and his plans in Brazil, and it's really interesting to see that we've got an update on this story this week. Brazil keeps coming up on the podcast, doesn't it?

Mike Masnick:

Yeah. Yeah. One of these weeks we'll have an Elon-free podcast, but it's not this week. So, the background, which we talked about last week: he said he was pulling out of Brazil after the Supreme Court justice Alexandre de Moraes, who he's been fighting with for months, said that they were going to arrest an X employee in Brazil. Brazil, like a growing number of countries now, has these laws that require there to be a legal representative within the country. I have somewhat sarcastically referred to these as hostage laws, because the only reason to have a law like that is to have someone you can jail as a tool to put pressure on the companies. I find these laws deeply cynical and problematic. Other people disagree with me on that, and I get it, but it feels ridiculous. So Elon's response, in classic no-idea-how-to-be-diplomatic Elon style, was to say he was going to shut down all of their operations in Brazil and remove all their employees to avoid that possibility, which de Moraes took as a provocation, and he then said, okay, well, we're going to order all of X to be banned from Brazil. As we mentioned last week, this is not unprecedented. Brazil has done this before: they arrested Meta employees because WhatsApp would not reveal encrypted communications, which it could not reveal, because that is the nature of actual encryption, unlike Telegram. So they arrested Meta employees, and for a period of time WhatsApp was blocked; I think it went through three or four different blocks in Brazil. So this happens, and I think it's problematic. I think jailing employees for not revealing information is problematic. And in this context, it's a different situation from Durov: no one is accusing the employees in Brazil of criminal behavior or any responsibility for this at all. It is entirely to put pressure on Elon Musk to change the company's policies, and that's why it feels like a hostage situation, and why adding criminal charges there seems just absolutely ridiculous. The idea, then, of banning the entire app within the country feels to me like an overreaction. Yes, the underlying content at issue here and the concerns in Brazil are legitimate; they are worried about far-right elements using X to organize a potential coup or something of that nature. But blocking the entire app just feels like a complete overreaction in an open society. And it got taken up a notch this week when, after Elon pulled out everything, the judge not only threatened to ban X entirely but also started to freeze Starlink's assets. Starlink actually has a pretty big presence in Brazil; there was a big story earlier this year in the New York Times about how Amazonian tribes were using Starlink to connect to the internet, and it was actually a really big deal for them. And now Moraes is saying, we're freezing Starlink's assets, and potentially all Starlink services in Brazil might be cut off too, which would cause real harm.

And the end result of all of this is that it feels like petty fighting between Moraes and Musk. They're arguing back and forth, and Musk was posting memes about Moraes: he had one showing Moraes in jail, saying he was going to work to make it true, and another with a video of Moraes turning into Voldemort from Harry Potter. It's just sort of petty, childish stuff. The real problem is that the people who lose out here are people in Brazil: people who rely on Starlink for internet connectivity, and people who rely on X, which, these days, many don't rely on for much, but some people do, to communicate with folks. It just feels like a petty battle over who's got more power, rather than anything legitimate, and it doesn't lead to useful policies. But it reminds me a little bit of what's happening with Durov in France, with the threats of criminal charges and jailing people and all this stuff. There are discussions we have all the time about regulating internet platforms and how content moderation has to be handled, and the failures and successes on all sides of that, but now we're moving to jailing people, and criminal charges, and blocking apps entirely in countries, and seizing other assets. It feels like we've gone up a notch, we've turned things up a pretty significant level, and that seems concerning to me.

Ben Whitelaw:

Yeah. I mean, it's funny that in the attempt to take the higher ground, both Musk and Moraes, who are obviously each working for what they believe are the right reasons, for democracy or for free speech, however they want to frame it, have kind of sunk lower and lower. We're now so deep that, like you say, Brazilian internet users are really the people losing out, and there's no upside to this, or at least no discernible upside in the future. And all because, I think, two big guys are trying to prove who's more macho.

Mike Masnick:

Yeah, and you know, again, you have these similarities to the Durov situation, where in both cases Musk and Durov take this stupid, very unsophisticated approach of, well, we're not going to do anything because we believe in free speech, which really means: I don't want to take any responsibility for stuff that I really should be taking responsibility for. And then, because that seems frustrating to the people in charge of law enforcement, they're, I think, overreacting. Again, we don't know exactly what the situation is with Durov, but there's the possibility there, and Moraes seems to be overreacting and just saying, well, okay, fine, if you're going to thumb your nose at me, I'm going to start putting people in jail, or blocking your app, or seizing your assets, or all this kind of stuff. It just feels like no one is dealing with this in a sophisticated way, and everyone involved looks bad.

Ben Whitelaw:

No, it's true. And this links really nicely to our next story, which is almost a kind of foreshadowing of what might happen in this case in Brazil, I would say. A lot of you might have seen the headlines this week about Mark Zuckerberg, who, the headlines say, regretted demoting content during the coronavirus pandemic on the back of requests from the US government, and also didn't deal with the Hunter Biden laptop story in the way that, in hindsight, he would have liked. These headlines came out of a letter to Jim Jordan, who we've mentioned many times before on the podcast, who's the chair of the House Judiciary Committee and a relatively controversial guy, as we know. And the letter is a really interesting read. It's only about a page and a half, but it essentially sets out a bit of a mea culpa for the way that Meta responded to these big events, particularly the pandemic, and how it moderated. Now, he says that he has changed the policies and processes that Meta currently uses for moderating content, so that this wouldn't happen again, and he talks about how hindsight has been a great eye-opener for him. But there's clearly something very political in this, Mike.

Mike Masnick:

Yeah.

Ben Whitelaw:

The letter reads, I've got to say, like somebody is holding a gun to his head. It's very interestingly timed. I read some really interesting insight from Katie Harbath, who writes a great newsletter, Anchor Change. She talks about the timing of this letter, coming just as we ramp up into the US presidential election and just before US Labor Day, when she expects some of these headlines will slightly die down. Basically, it's an attempt by Zuckerberg not to get drawn into the melee that will be the election in, whatever it is, three months' time, and to try and steer clear of that. You also noted a few really interesting things about his letter. Do you feel the same? Is it kind of an arm's-length response from Zuckerberg to what's going to come in the Harris-Trump election?

Mike Masnick:

Yeah. I mean, there are a few different ways of looking at this letter, and the simplest is that this letter is designed in case Trump wins. It's a sort of hedge: if Trump wins, Zuckerberg wants to be able to point to this letter and say, see, I regret what I did, I'm not that bad, I'm not working against Trump. And all of it is stupid, spineless nonsense. The most incredible thing to me, and we've discussed this before, is that Jim Jordan is doing this on purpose. He is weaponizing his power in the government to attack the free speech of companies and their moderation practices, and he has done everything possible to make sure that companies feel extremely uncomfortable doing anything to stop any kind of disinformation that supports Republican causes. So this is a pressure technique. And in this letter, this very letter in which Zuckerberg says he regrets bowing to government pressure, which is misleading in its own way, what he is doing is bowing to government pressure. He is directly supporting Jim Jordan's completely nonsense conspiracy theory that the Biden administration forced Meta to remove COVID disinformation.

Ben Whitelaw:

Is it likely that the text of the letter would have been seen by the Judiciary Committee and signed off, do you think? Because there was a sense, from some of the stuff I read, that this is a negotiation to avoid being called in front of the committee at some point in the run-up to the election. Would that have been as obvious a quid pro quo as that, or is it hard to say?

Mike Masnick:

Yeah, it's incredibly hard to say. I don't think it works, but I think there are probably people within Mark's inner circle who think this will be an effective technique for that, and maybe there is some sort of more explicit agreement. The letter serves no purpose other than to support Jim Jordan's narrative, which is not only ridiculous and misleading; Zuckerberg knows that it is ridiculous and misleading. So the decision to support that narrative has to be deliberate. Either Mark Zuckerberg and his inner circle are ignorant of how politics works, which feels unlikely, or they're doing this deliberately to signal, hey, we're not as bad as you claim. That is a sort of spineless caving, because all of the evidence has shown, over and over again, that if anything, Meta properties have bent over backwards to put in place different rules that support Republicans. There have been revelations that Zuckerberg himself ordered limits on the spread of traffic to left-leaning publications. They changed the rules; they put in place different rules so that high-level Republicans could violate them more often than not. The trending topics on Facebook throughout the last two elections were pro-Trump, MAGA, Republican kinds of things. The evidence that anything he has done is anti-Republican is just not there, but it's useful for Jim Jordan, and useful for Donald Trump, to argue that it is, as a way to keep up the pressure on those moderation practices. And then the latest thing, which just came out last night, and which may play into the timing of the letter: next week Donald Trump has a new book coming out in which he accuses Mark Zuckerberg of trying to rig the election against him, which is laughable.

Ben Whitelaw:

The 2020 election, right?

Mike Masnick:

The 2020 election.

Ben Whitelaw:

The reason being that Zuckerberg contributed money towards electoral infrastructure? Or is it about the platform?

Mike Masnick:

It's both; he sort of mixes those two things. And this is in the letter that Zuckerberg sent to Jim Jordan as well: there is this part about how, not out of Meta, but out of his private foundation, the Chan Zuckerberg Initiative, they donated a bunch of money, hundreds of millions of dollars, to election infrastructure, a lot of it around mail-in ballots and keeping them secure. That is actually important stuff. But part of the conspiracy theory lore of the Trump world is that mail-in ballots, even though Trump has used them for many years, and lots of Republicans have used them for many years, are somehow inherently insecure and inherently part of the rigging of any election. That is not true, there is no evidence for it, and all that Zuckerberg did was help fund the work to make sure that mail-in ballots were secure. So the only way that is rigging the election is if you not only believed that mail-in ballots were insecure, but also planned to use that insecurity you believed existed to rig the election in your own favor. It's a very telling thing that Trump is claiming this is rigging the election against him, because the only thing Zuckerberg funded was making sure those ballots were secure and properly counted. Part of the book claims that he has people watching Zuckerberg closely, and that if he tries to do anything like that again this year, he will spend the rest of his life in jail. And so now we are right back to jailing CEOs of social media, the through line of this podcast. And it's this idea that if you don't grovel, and it's not even that, it's that if you don't actively support my election, we will put you in jail. Just to take a step back: there may be legitimate claims against Durov, and there may be legitimate problems with the way that Musk is acting, but this idea that we are getting to the point of throwing CEOs of social media platforms in jail leads you to a place where the Republican candidate for president in the US is threatening that if Mark Zuckerberg does not directly help him in this election, he will throw him in jail for the rest of his life. There is a line connecting all three of these stories.

Ben Whitelaw:

They're like three stages of the same story, aren't they, in a way? You can almost tie them together with string. I mean, it's this thing about moderation being political, isn't it? It's a kind of political field on which companies and governments discuss and barter and come together. Are we ever going to get away from that? Do you see a future in which questions of speech, and of who gets to have a say online, aren't political?

Mike Masnick:

I mean, yeah, it doesn't feel like it. It is always difficult to tell, and when you're in the middle of things, things always seem like they'll be that way forever. You kind of do hope that we'll get past it. You know, newspapers have always been political, but we haven't seen quite the same negative reaction; people sort of know, okay, that newspaper supports conservatives, that one supports liberals, whatever it might be, and people get used to it. We haven't reached that stage with social media, and it feels a bit more visceral. So maybe there's a point at which people get used to it. But the underlying point here, which is important, is that for the most part, and this is no longer true of X, because they're explicit about it, most platforms' moderation practices are not political. They are inherently unrelated to the politics. There's a false belief that many of them are moderating because of politics, and the only platform I would say that is currently true of is X, and that's because they thought everybody else was doing it that way, which is just another example of Elon Musk having no idea how any of this actually works. So I hope we don't have to live in this world, because it would be nice to get past it. All of these battles seem like fake battles. They're not battles about what's really happening, or about the real reasons why trust and safety exists. The whole trust and safety, content moderation space is based on setting up rules of the road and then enforcing them, and that has very, very little to do with politics. But everybody wants to turn it into a political thing, and that's really a shame.

Ben Whitelaw:

Yeah. And it's a tough week to be the CEO of a platform, that's for sure. Rather them than me.

Mike Masnick:

Nobody's threatened to arrest me yet. So, you know...

Ben Whitelaw:

It's only a matter of time, Mike; they'll be knocking at your door soon. Mike, thanks for really unpacking all those stories. You've done an amazing job of taking us through what has been a really interesting week. That takes us to the end of today's episode. I think we're all talked out. We've got more next week; I'm sure there'll be more information about the Durov case, and probably a bit more about the Zuckerberg letter as well. But for now we'll wrap up and thank everyone for listening. I hope to speak to you next week, and thanks for tuning in.

Mike Masnick:

Yep. Thanks. And leave a review.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
