Ctrl-Alt-Speech

Zuck and Cover

Mike Masnick & Ben Whitelaw Season 1 Episode 42

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

If you’re in London on Thursday 30th January, join Ben, Mark Scott (Digital Politics) and Georgia Iacovou (Horrific/Terrific) for an evening of tech policy, discussion and drinks. Register your interest.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

Mike, it's not a website that I go to very often, okay? But the public access to court electronic records site

Mike Masnick:

better known as Pacer.

Ben Whitelaw:

better known as Pacer to its fans, which I know is one that you go to a lot. And we'll talk about why shortly. But when you go to the website, it prompts you at the top of the page by saying, what can we help you accomplish? So today I ask you, what can we help you accomplish?

Mike Masnick:

Well, in an ideal world, you would find me nine better justices for the Supreme Court who actually understand the Constitution?

Ben Whitelaw:

Oh God. Okay. That's that's, that's a foretelling if ever I heard one,

Mike Masnick:

yes. what about you? What can we help you accomplish today, Ben?

Ben Whitelaw:

well, my New Year's resolution, Mike, was to be much more Zen and to be much more, you know, calm and to, you know, meditate from time to time and to really kind of, just take things slowly, and this week has not helped at all. And so it's testing my resolve. And to be honest, I don't think you can help me at all. I'm beyond help. Let's get started. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's January the 10th, 2025. And this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. This week, we're talking about Meta's big policy announcement, TikTok's oral arguments in the Supreme Court, and much, much more. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation. And I'm back with Mike Masnick, founder of Techdirt. And it's 2025, Mike. Congratulations for making it this far. Happy New Year.

Mike Masnick:

yes.

Ben Whitelaw:

Is it too late to say that?

Mike Masnick:

No, no, no. It is not too late to say Happy New Year. Happy New Year to you. Happy New Year to all of our listeners. Welcome back. We are, I guess, glad to be back. It's like, there's a lot to talk about and, uh, not all of it is, uh, good news. So Happy New Year. But yeah, it is a new year and it's going to be one heck of a year, Ben.

Ben Whitelaw:

I'm going to, don't make me look up glad in the dictionary. I'm not, I'm not sure glad to be here is the thing, but no, it's lovely to be back. It's lovely to be talking to the listeners again. We had a nice break over Christmas. People continued to listen to old episodes of the podcast, which is great. So we, we saw a lot of listeners tuning in between the Christmas dinner and New Year celebrations.

Mike Masnick:

And I was just going to say, you know, we always say to rate, review, subscribe, but I'm going to add one other thing to that, which is: tell other people about the podcast. This is, this is one of the things that I think, you know, word of mouth goes a really long way in helping to spread podcasts, and let's get beyond just the rate, review and subscribe requests. If you like the podcast, please tell other people about it. We, we really appreciate it. We know that is how a lot of people find out about it. And, uh, it always helps.

Ben Whitelaw:

Yeah, I was in a, uh, I was having a chat with a tech lawyer in London earlier today. And she said that she recommends the podcast to all of her team, Mike. Um, and so whenever new starters come into the firm, she recommends Ctrl-Alt-Speech. So, if other people can do the same, we'll be much happier for it. So yeah, great to be back. And, uh, yeah, really excited about the start of this year. As well as the podcast returning, you know, we've got lots of stuff going on ourselves. Everything in Moderation and Techdirt have big years ahead of them. And we'll talk a bit about that over the coming weeks. I will flag that EIM is doing its first in-person meetup. Do you have plans on the 30th of January, Mike?

Mike Masnick:

I will have to check my calendar, but I think I will be half a world away from you in London, unfortunately.

Ben Whitelaw:

The QE might be, might be a lot, but, um, you are invited to the tech policy event that I'm hosting with Mark Scott from Digital Politics and Georgia Iacovou from the excellent newsletter Horrific/Terrific. That's going to be a kind of very informal look ahead to what's happening this year in the tech policy and online speech space. We'll be doing a bit of a Q&A. Lots of interesting folks have already signed up. Hopefully people who are listening to the podcast and are near London or in the UK can come and join us. Even if you can't,

Mike Masnick:

I'll check. I probably will not be able to grab a quick flight to London, unfortunately.

Ben Whitelaw:

well, don't, you know, just think about it. You don't have to commit now. We should note at this stage that there are some very interesting and impactful oral arguments happening at the Supreme Court right now around the TikTok ban, and you are getting kind of messages live on your screen, Mike, right, from people who are tuning in. This is being recorded as it all happens.

Mike Masnick:

Yeah. Yeah. And so, so we're not going to go too deep into that, but yeah, there is a, uh, a sort of, uh, group chat of some First Amendment lawyers I know who have been, uh, listening and sending a bunch of messages about it. And I think it's just concluding kind of as we're recording this, uh, so obviously we're not going to go too deep on it. Just from the impression that I've gotten from the little bits of the oral arguments that I've heard or what I've read this morning, I don't think it's gone particularly well for the TikTok side of things. I do get a sense, as I alluded to in the opening, that the justices seem a bit confused. Um, and I will note that, literally the day after Christmas, we filed a brief in the case, and we were trying to argue specifically for the Supreme Court to understand the First Amendment. And I don't think they got the message. Uh, there were obviously lots of other briefs from other amici, as they are called, many of whom were arguing the same thing, or not the same thing. We, we had a slightly different argument, but many are arguing along these same lines about the importance of the First Amendment. And it does not appear that the message got through to very many of the justices. Uh, it's always a little difficult to sort of read the tea leaves from the justices' oral arguments. And they basically did what they often do, which is push back on everyone who was speaking. 'Cause there was TikTok's lawyer, there was the lawyer for the users who spoke, and then the lawyer for the U.S. Government. And they did push back on the U.S. Government,
the Solicitor General, in ways that suggest maybe they are skeptical of some of her arguments. But the real focus seemed to be on, there's always been this conflict between the data privacy concerns and the speech concerns, and the people pushing for the law have always done a really good job of conflating them. So that if you start complaining about, well, the data privacy stuff, you say something like, well, why don't you pass an actual data privacy law? Then they will immediately jump to, but the Chinese propaganda issue. And then you're like, but that's a free speech issue. And then they'll say, but the national security concerns about the data privacy. So there's like this weird dance where they're always kind of switching back and forth. And one of the hopes was that, at least at the Supreme Court, these nine justices would be able to separate out those two issues. And the early impression from the oral arguments was that they were having a really difficult time and they were falling for that trick where, if you push on one side, the data privacy or the speech, they jumped to the other. And that, to me, is really problematic, because it's like, okay, these are two separate issues and you can separate them out and look at each of them independently and then have arguments about each of them. But the trick where you talk about one and, when you begin to realize the argument is falling apart, you immediately jump to the other, feels like a really dangerous dodge. And it feels like it was working on the Supreme Court. And so that, that's a big concern for me.

Ben Whitelaw:

Interesting. And so you're going to go deeper on this next week, because this is all coming to a head pretty quickly. Just remind us of the timings of this, Mike.

Mike Masnick:

Yeah. So the ban is supposed to go into effect on the 19th, which I guess is Sunday of the following week. And so the Supreme Court effectively needs to rule in some way, or not, before that. There is the slight possibility, which was raised during the oral arguments, that they could abide by what Donald Trump asked them to do, which is a whole nother issue, uh, which was to just sort of put the whole thing on hold until he was in charge, which is one more day after the deadline. And so that is a possible resolution, but the more likely thing is that sometime next week, probably just as we sit down to record, just to mess with us,

Ben Whitelaw:

It's always the way, isn't it?

Mike Masnick:

the Supreme Court may come out with its ruling on this, in terms of, you know, what happens. And if they rule that the law is valid, which now seems like a decent possibility, TikTok could effectively be turned off. Uh, there's, there are all sorts of questions about what does that actually look like, because the real legal mechanisms for how that works are actually much more complicated. And it actually depends on, like, Apple and Google no longer allowing you to download new versions of it, but people who still have the app might still have it. And there's a question of whether or not ISPs have to block it in the interim, which is not entirely clear, and I've heard arguments going both ways on that. So, you know, during the oral arguments, TikTok's lawyer effectively said, like, they would turn it off, that's the way this goes, but it's not clear that they actually have to turn it off, and he wasn't totally committing to it. So, you know, not entirely clear, but TikTok could go away in 10 days from now, or nine days from now. We don't know for sure. There's a lot, a lot up in the air right now. And, you know, my biggest concern about this, and this is why we filed an amicus brief in the case, is that because the ruling is so sort of mixing these issues of the speech and the data protection and China and propaganda, the actual First Amendment concerns get lost in that. And the ruling that comes out of this could really, really undermine the First Amendment in very, very significant ways, no matter what you think of TikTok and ByteDance and its connection to China. And the oral arguments this morning did not give me any reason to feel better about that. We'll see what the final ruling is, but I'm deeply concerned about the larger impact of the ruling, not specifically the impact on this one particular app.

Ben Whitelaw:

Yeah. Okay. That's a helpful summary of what is a very live story. You mentioned the U.S. government setting a kind of dangerous precedent, Mike. That is a very helpful segue into our first story, which is something that everyone listening to this podcast will no doubt have heard a little bit about this week: the Meta announcement, Mark Zuckerberg's now famous five-minute video.

Mike Masnick:

did they say something?

Ben Whitelaw:

They, I don't know if you heard. Yeah, yeah, yeah. Yeah. He's, he's got a new watch. Did you

Mike Masnick:

Oh, oh, oh, I hope it's an expensive watch.

Ben Whitelaw:

yeah. I can see you've got your watch on there. I'm guessing that's not $900,000 worth.

Mike Masnick:

No, no, this, this, this was a free watch, to be honest. It was definitely not $900,000,

Ben Whitelaw:

Well, Mark Zuckerberg wore his special watch for his big announcement this week. The kind of summary, the headline, was "more speech, fewer mistakes." And he set out a new vision, Mike, as you would have seen, for Facebook, going back to what he described as its kind of core principles of free expression and speech, and he laid out a kind of five-point plan for how he planned to do that. So for people who maybe were hiding under a rock and didn't necessarily hear the announcement, I'm going to quickly kind of run through those five points, six points if you include what I think is one of the most insane aspects of it. And rather than doing it in order, Mike, I'm going to suggest I do it on a, a sliding scale of insanity, if you'll, if you'll allow me to do that,

Mike Masnick:

Yes, please, please.

Ben Whitelaw:

for all intents and purposes. It was an announcement that, you know, riled people up for lots of reasons, got an awful amount of news coverage. And I don't necessarily think we should take it all seriously. So I'm going to kind of do it in order of what I think is the kind of least insane to the most insane. And then I want to get your thoughts on this. Listeners, if you're listening to this and you have thoughts on the order, get in touch with us, um, podcast@ctrlaltspeech.com. We want to hear from you. We'll share back some of your thoughts next week. So in order, Mike, okay, the five things, six things were: replacing fact checkers, simplifying policies, reducing mistakes, bringing back civic content, moving trust and safety to Texas, and working with Donald Trump to push back against governments. Okay, they're the six things that he did, in order. My order is thus, okay. The first one was bringing back civic content. And this is the idea that people all of a sudden want politics on the platform. They decided in 2021 that actually politics wasn't for them, that civic content, as they kind of termed it, actually was causing division and users were feeling stressed by it. All of a sudden, surprisingly, this is being brought

Mike Masnick:

no, no stress at all anymore about politics.

Ben Whitelaw:

Yeah, they, they, like me, have solved their meditation and, and, you know, state of mind issues, and they're bringing back civic content. This is probably the least insane, still a bit insane, but the least insane, because news and politics content is important for people to navigate their lives. And that was the big criticism back in 2021. You know, so there's a case for bringing this back. There's clearly a political element to this and, you know, as we'll see when we marry it with other parts of the announcement, actually, having particular political outlets and political speech on the platform is good for a certain president-elect. So that's why it's, it's insane, but the least insane. Okay. Stick with me.

Mike Masnick:

Okay.

Ben Whitelaw:

number two is replacing fact checkers and bringing in a community notes system to help fill the gap. Now, fact checking is debated widely for its efficacy. People criticize it. People have said that it's slow, that it doesn't necessarily do the job that you'd like it to, and since it was brought in around 2016, after the last Trump presidency, it has received a lot of flack. I personally think that you don't necessarily know the effects of fact checking until it's gone. And I'm interested to see how this will pan out. But actually, you know, the insane part of this for me is the idea that you can get a system like community notes to come in and do as good a job, because there's a whole raft of issues with X slash Twitter's community notes product. Again, it's very slow. There is some research that says it is effective in part, but it's not the kind of panacea that I think Zuckerberg is painting it out to be. So that's why it's my number two on the insanity spectrum. Number three is reducing mistakes. Okay. So this is Zuckerberg saying that he was going to essentially catch less bad stuff. That's literally how he put it. And by changing the filters and what the AI systems and the automated systems were going to catch, it's going to focus particularly on the most egregious stuff now. So less of the kind of lower-level harms that perhaps it did in the past. And, you know, suppression of speech via these automated systems has been something that has been in the news a lot. It affects particularly underrepresented groups. There was a huge report put out by BSR a few years ago about the Palestine-Israel conflict, where automated suppression of speech was a huge issue. Meta has been criticized for this significantly, and in and of itself, this isn't really an issue. However, when you combine this with the policy changes that we'll talk about in a second and the civic content changes as well, I think this is, this is going to be a really serious issue.
So that's why it's number three. Number four, Mike, is moving moderators to Texas, or specifically, if you really tune into what he says, moving trust and safety and moderation out of California, he doesn't say where, and moving content reviewers to Texas. Aside from the fact that there has been lots of content moderation done in Texas for a long time, and we know that because there was a class action brought by moderators in Texas against Meta, this is just a giant signaling move. And I don't know if you saw the Lawfare webinar with Kate Klonick, Daphne Keller and others, but they made the point that this is just the kind of giant anti-California hand-waving message, like anti the coastal areas. Sorry, sorry to cause offense. Um, and so again, kind of insane in its own right, just like a complete signaling move. Number five, and I'm getting into the kind of serious insane territory now, this is, we're talking batshit levels, was this point around working with Trump to push back on governments going after US companies and, quote, censoring more. Lots of people have quoted him accusing the EU of institutionalizing censorship, I can't even say it without laughing, and the, quote, secret courts in Latin America, which are a clear reference to the issues in Brazil that Elon Musk has faced. And, you know, again, insane. You know, whose idea was it to set up a system in which he's siding with the U.S. government in order to bring about more free speech? It doesn't make any sense. And we can talk more about that. And then lastly, but, you know, clearly most egregiously, is the simplifying policies element of this whole announcement. On the face of it, you know, simplifying policies is not a bad thing, but he calls out immigration and gender. He flags the fact that transgenderism is something that he's kind of looking to address. The language is very, very coded and very, very specific.
And since then, we've seen some of those policies start to be announced and leaked to the press. And some of the examples now in the policies are abhorrent. You know, trans people are now allowed to be deemed "unreal", you know, the worst kind of dehumanizing language you can come up with, you can now say on the platform, according to these leaked documents. And so, what I've tried to do there, Mike, is give a summary of all of those mini announcements and the order in which I think they're the most maddening.

Mike Masnick:

Yeah, this is, I mean, this is the problem with all sorts of things these days, which is that there is a lot of complexity and nuance in here, and so much of just the levels of bullshit that exist are really wrapped around some kernel of accuracy or truth. So that, like, if you attack it, people will say, yeah, but there's a real problem here. And yet, it's presented in a way that is so misleading and so twisted. So, I think that's true here. And I, so I think your order is more or less reasonable, but all of this is under the backdrop of: so much of this is done for completely nonsense reasons, and is all designed to do that. So, like, for years I've called out their suppression of civic content or political content. And so the reversal of that, sure, you can say, like, yeah, that makes sense. They never should have done that in the first place. That was always a mistake. But you look at, like, you don't even have to go that deep. Just look at the dates of when they started this policy and when they ended it. They started it right after Biden won. They ended it right after Trump won again. Right. So it's like, so obviously political. And, you know, the thing that really gets me, this comes after the backdrop of, earlier, in the summer of 2024, you know, Zuckerberg sent this, like, groveling message, which we talked about, to Jim Jordan. And then there was this New York Times article with the headline, uh, "Mark Zuckerberg Is Done With Politics." And it's like, what all of this makes clear is, like, no, of course he's not done with politics. He's done with Democratic politics, but he's happy to suck up to Republican politics. So when you look at it in the backdrop that way, it reminds me of, like, there, this is not exactly the same thing, but, like, in the copyright fights, going back in, like, the earlier part of the 2000s, there's a wonderful congressional representative, Zoe Lofgren from California. She's not my representative, but nearby.
Um, and she was always very good on copyright issues. And based on the way seniority worked, at one point in the 2000s she was lined up to head the IP subcommittee for the Judiciary Committee. But because she's actually good on copyright issues, or in agreement with me, so I will, I will, you know, subjectively say that she's good on it, they killed that subcommittee as soon as she was up to head it. And then, as soon as the next person was up to head it, they brought it back. And this strikes me as the same thing. It's like, you look at the timing of when you kill a program or when you start the program, and if it is clearly designed to, like, stop a certain thing from happening, there's this bad reason behind it. So even though I think the policy was dumb, changing it for this reason is still insane. So even if that's your least insane, same thing with the fact checkers. I've, you know, from the beginning, I've always said, like, I think fact checking as a concept is important, but the setup of the way that social media companies have done fact checking, I think, has been pretty much ineffective for a variety of reasons. And we don't need to go into the deeper reasons for why it exists. I don't think it's bad that it exists. I just don't think it's all that effective. And it sort of created this weird vector where everyone got so focused on the fact checking that the attention driven to it and the hatred towards it generated a lot more heat than was useful for anyone. And so, you know, so I, again, like, I don't think it's that big of a deal that they're, like, moving away from the fact checking program, other than as a signal against the backdrop of everything else that they did. And this is the one that seems to have gotten people the most worked up.
And I saw somebody, I've actually now seen it twice, where people have referred to it as an existential threat to truth. And it's like, no, it's not an existential threat to truth. If you don't have a fact checker, like, other people exist who can fact check it, just because you don't have this sort of official fact check. So I don't think it's that big of a deal, but I also feel like the fact checking one in particular was sort of used as a bit of misdirection, because Meta and Zuckerberg knew that everyone was going to focus on that. So let's throw that out there, everyone's going to get mad about that, and then we're going to do, as you noted, like, a whole bunch of much more crazy shit in the background that is way worse and way more concerning in the long run. And so, yeah, a signal, but, like, as an effective tool, I don't think the fact checking has been all that big of a deal. Then we move on to where it starts to get really, really crazy. Right. And so the mistakes thing, we've discussed this just recently, like, you know, on the podcast. We've had these examples of the really stupid stuff: any mention of Hitler, even to say, like, Hitler's bad, was getting blocked, or the, the whole, like, Cracker Jack story that came up, where just saying Cracker Jack or Cracker was getting banned. And my one take on this, which nobody else has really picked up on, is that, for all the talk of how great the AI is and their automation systems are, like, this seems to be an admission that, no, we're not that good at this. Meta has always had problems with content moderation at scale, even though they're the biggest and they've had the most experience with it. They've always made silly mistakes like this all the time. So there is this element of, like, if this was just a recognition, like, yeah, our automated systems are bad and we're not really good at this, again, that would be interesting.
And that would be an interesting admission. But against the backdrop of everything else, it is still crazy. And so it is still this kind of, like, yeah, good for them to admit that, but they didn't admit it in a way that was thoughtful or transparent or useful to the world. It was done in a way to say, we're going to allow a lot more really horrible shit on the platform.

Ben Whitelaw:

Yeah.

Mike Masnick:

so, that's where we start to get into the really crazy stuff. The moderators-to-Texas thing. Just even the way that Zuckerberg phrased it was basically, like, we're moving people away from California to Texas to stop bias. In what world do you think that people in Texas are less biased? Or, like, you know, there's not this sense that, like, people in Texas are neutral

Ben Whitelaw:

Well, yeah,

Mike Masnick:

biased, like, what the, like, no, no one believes that. And

Ben Whitelaw:

I was listening to this Lawfare webinar, and people in Austin apparently are, like, are not like your kind of, you know, typical Republican, you know, like

Mike Masnick:

Austin is famously not like that, yeah.

Ben Whitelaw:

Right. So it's, like, it's a completely arbitrary distinction between California and Texas. And clearly it doesn't mean anything, but that's the level that he was working at.

Mike Masnick:

Yeah, I mean, it's funny because another thing that Zuckerberg wrote on Threads, in response to some people talking about this, he was like, oh, I forget exactly the way he phrased the first part of it, but it was something to the effect of: we honestly think that this will make the platform better and that will make more people use us. Yes, some people might leave the platform due to virtue signaling, but blah, blah, blah, blah, blah. And, like, I responded to Zuckerberg directly on Threads, I don't know if he saw it, but I was like, look, this is a fucking tell. Like, you're, you're admitting, like, just using the phrase "virtue signaling", first of all, almost everyone, I won't say everyone, but almost everyone who uses that phrase is using it to be an asshole. And take a step back: everything that this announcement did and everything that Meta has done this week has been signaling. I wouldn't call it virtue signaling, it's perhaps the opposite of that, but to use that to sort of dismiss the people who might be concerned about these changes is just an absolute insult on top of all of the other things that he's doing

Ben Whitelaw:

Yeah. And be put in danger by them as well. 'Cause, you know, this is going to have serious consequences, you know, particularly the policy, the simplification of policies, which, to call it simplification is mad, because it's not just a simplification. It's like a degradation of, it's a, it's a kind of dismantling of, policy that has been created over years.

Mike Masnick:

it's not. And so there is an argument, you can make an argument, that, like, yes, Meta's policies probably are way too complex. There's this really fantastic, if people haven't listened to it, it goes back a few years, though they did an updated version, episode of Radiolab. The podcast Radiolab did this amazing one where they sort of embedded a little bit, I think Kate Klonick is in it a lot, embedded with the Meta, I think at the time Facebook, moderation policy team. And they walk through, in such a good way, I mean, just the challenges and the nuances, and, like, oh, we created this rule, but now we have this exception, and then, oh, but there's that exception. And they talk about the rule book and how they have to sort of keep adding in clauses, like, yes, this, but not in this case, but if this, and you just have to keep writing the rules in different and more involved and complex ways. And they talk about this process, how the rule book just grows and grows and grows, because you have these exceptions and edge cases and all this stuff. And it's fascinating just to think through these issues, because people on the outside never think about how many of these things involve edge cases and stuff. But you can see how over time that collects a debt of just complexity and problems that lead to other kinds of problems in terms of actually how you enforce the rules. And so there is this element, again, where you can take this back into a serious realm and say, like, yes, I am sure that the rule book
at Meta is way too complex and could benefit from some simplicity, other than the fact that there are reasons why all of those exceptions come into play and there are all these issues involved. But the reality is, because we're starting to see, so first there's the public policies that are available for people to see, and some people called out, Wired was the first one to call out, some of the changes in there. And now what we've seen is what I believe are very angry people within Meta releasing the internal version of the rule book and sending those to various reporters, who are all rushing to publish them. So we're seeing all sorts of stuff about what's happening inside. What is happening is not what I would call a simplification of the rules. What is really happening is very clear exceptions written for specific culture war issues that the MAGA world believes are really important, for them to be able to say things that are insulting and harmful and targeting specifically marginalized people. And what Meta is doing is not a simplification of the rules, which would be an interesting project to talk about, but rather a: we are writing in exceptions for the people who are mad at us.

Ben Whitelaw:

And, and, in that kind of analysis we've done there, Mike, we've tried to engage relatively in good faith with what was said, 'cause this is a, you know, online speech podcast after all. You've got to engage with what the platform is doing. But I think you're right. You know, there's a lot of this which is not worthy of being engaged with in good faith and, like, represents the wider company values, I think. And we, we don't want to go into those too much, but we should talk a bit about them, because I think that's, that's really what you're saying there, is that these changes, the way that Zuckerberg did a five-minute video that sat on the top of a blog post that was authored by Joel Kaplan, the new head of global policy,

Mike Masnick:

new Nick Clegg.

Ben Whitelaw:

The new Nick Clegg, God rest his soul. I thought this podcast was going to be about Nick Clegg last week when he resigned; little did I know what was going to happen in the subsequent days. Anyway. Yeah. So, the video with the blog post, and obviously Kaplan's connections to the Republican party... we have to really talk about it as a wider, not-just-about-speech kind of announcement, don't we? You can't really do it in any other way.

Mike Masnick:

Yeah, the context, as with almost everything that we talk about, and I try to, the context matters. The context always matters. And it's very easy to sort of simplify a bunch of these things down, but the larger context really matters. And it's not just the switch from Nick Clegg to Joel Kaplan, but also the new appointments to the board of Meta, which also came out last week, including Dana White, who's the head of UFC, is a close personal friend of Donald Trump, and has been really engaged in policy issues for the sort of MAGA movement. There's this clear declaration that we're now going MAGA. And obviously, like, the moving people to Texas stuff, like, all of this nonsense. But the reality is, again, when you look at the changes, for all the talk of, oh, the rules are biased against conservatives, which has never really been true, you know, there's all sorts of research on this and we've talked about it, and all this kind of evidence and stuff, for all that talk, what they've done now is bias the rules specifically in favor of MAGA culture war talking points. The changes to the rules are not simplifying. They're not clarifying. They are, we are creating explicit exemptions for the kind of awful speech that you want to use to target certain communities. And that needs to be called out, and it needs to be really clear, because for all the talk of, like, oh, all of this has been working the refs, is kind of the framing that comes up a lot, all of the complaints about the way that different platforms moderated, saying, like, oh, you're biased against conservatives, which has never actually been true, now what they're doing is they are biasing the way their moderation policies work, explicitly. Like, the language is so clear, I don't even want to repeat it, because they have this horrifying language of: this is what is allowed now. And it is clear, biased, bigoted speech towards certain marginalized communities.
That will lead to harm and will lead to problems. And a lot of this is legal speech, and there are arguments for, you know, Meta can do what it wants, but the signaling here,

Ben Whitelaw:

Yeah.

Mike Masnick:

for all of Zuckerberg's sort of talk of virtue signaling, he is signaling with this, loud and clear, saying: we want the MAGA community to be here and to use our platforms to spread their hatred.

Ben Whitelaw:

Yeah. I'm a fan of saying that moderation is political and politics is moderation. And this is the kind of week that has, I think, summarized that better than any other. How do you think, Mike, the virtue signaling, as Zuckerberg would call it, is going to play out? Do you think people are going to kind of vote with their feet and stop using the platforms, or do you think the network effects are so big... what ramifications are there likely to be?

Mike Masnick:

Well, I mean, who knows, right? And this is the big unknown, right? There is the argument that, like, this is the playbook that Elon Musk tried. And it may have been successful in other ways, in terms of, like, electing a U.S. president and being close to him, but it has not been successful for the platform. X in particular has lost users. It has lost a ton of advertising. It has been very unsuccessful as a business strategy in that realm. And in fact, it felt like Zuckerberg recognized that, because, after all, he launched Threads as, like, a sanely run competitor to Twitter slash X. And so there was a moment where he recognized that what Elon was doing was driving away users. And yet now he's kind of doing the same thing. And so it will be interesting. Somebody pointed out, which I thought was interesting, and I forget who, and I apologize if you are a listener and I am ignoring your contribution to this. Um, there's been so much this week that I don't remember exactly who said what. For all of this new policy and big changes to the system, it was all done through Mark Zuckerberg's post and the blog post and the Joel Kaplan announcement, but there was no notification for users. If you logged into Facebook or Instagram, there was no pop-up saying our policies have changed.

Ben Whitelaw:

so true. Yeah.

Mike Masnick:

And so that's kind of interesting and a little bit problematic. And so you do wonder, for people who don't follow all this stuff, how many of them even realize this is happening. But I think, in the longer run, if this does lead to what it seems likely to lead to, which is a lot more just angry, hateful, garbage kinds of speech, I feel like people will start to look for alternatives. And so it strikes me, as was the case with Elon taking over Twitter, as an opportunity for third parties to come in and sort of try and take that audience.

Ben Whitelaw:

Yeah, and we'll talk a bit about a piece that Renée DiResta has written about some of that. But it's a reminder, I think, for all of us, particularly for me, about how companies such as Meta are really just vessels. They claim to have values that they hold, but actually they can be filled up with whatever values are around at that time. And for a while that was certainly more democratically inclined values, ones that, you know, cared about speech and emphasized fact checking, and now that is a very different set of values. And trust and safety is a way that those values are manifested. You know, Alice wrote a really interesting piece about this for EiM a few months back, which I'll link to in the show notes. If your values change, then naturally your trust and safety and your content moderation and your speech policies are going to change with it. And I think that's what we're seeing here.

Mike Masnick:

Yeah, I do want to raise one issue, which that just reminded me of. It's a little bit different, which is there is this framing in all this which really frustrates me. I mean, a lot of this is obviously frustrating me, but a lot of the framing of this was Zuckerberg and Joel Kaplan saying, like, this is bringing Facebook, Meta, back to being about free speech. And that is absolute nonsense on multiple levels. One, as we said, the policies are not really about free speech. They're specifically exceptions to allow for really problematic speech. But the bigger thing is that Facebook was never a free speech platform. From its earliest days, it had pretty heavy moderation and pretty specific rules that didn't allow certain kinds of behavior and certain kinds of speech. They never allowed anonymous accounts. They always wanted you to use real names. They've always had, like, the no-nudity policy. They have always been pretty restrictive from the beginning. And this idea that Facebook was ever involved in the free speech project strikes me as complete nonsense.

Ben Whitelaw:

And why was that, Mike? Just kind of journey back through history. Like, what was the reason why that happened in the first place? Do you remember?

Mike Masnick:

I mean, I think it was just sort of, like, Zuckerberg wasn't in this for free speech. It was never about that. I mean, he was trying to build a business, and to him, there was no underlying, like, moral imperative to try and help speech. I don't think that was true. I think the Twitter people, the original Twitter people, did believe in this kind of ethos of free speech and using the internet to enable more speech, but Zuckerberg never seemed to express that kind of view. He was trying to build the biggest business that he could. And as we've discussed, one of the ways that you build a big business is by having a platform that is safe for brands, for example, and others. And so that was really the focus of what he was doing. So this idea that they're suddenly, like, we're going back to our roots as a free speech platform is definitely, uh, a historical revision of reality.

Ben Whitelaw:

Yeah. Okay. Historical revision of reality feels like a neat way to summarize that, thanks, Mike. And there are other stories that happened this week; none is quite as big as the Meta announcement, but we'll do a bit of a review of those other ones. And the next one also looks at CEOs of social media platforms, Mike, that are working in cahoots with Republican government officials. So I'll hand over to you for this, because you, for some reason, were looking at government documents on New Year's Eve. Explain that for us, first of all,

Mike Masnick:

Yeah, I was writing an amicus brief on Christmas and on New Year's Eve, I was looking at congressional documents. My life is so exciting, Ben.

Ben Whitelaw:

we're grateful for it.

Mike Masnick:

yeah, you know, honestly, I think this is kind of a continuation of the same story in some way, which is that on New Year's Eve, Representative Jerry Nadler, who's the ranking member of the House Judiciary Committee, that is, the top Democrat on the Judiciary Committee, released a report, which was basically the Democrats on the Judiciary Committee releasing this report called "The Delusion of Collusion: the Republican Effort to Weaponize Antitrust and Undermine Free Speech." And it's a really great report that, for no good reason, was released on December 31st, to guarantee that it would get the least attention possible. There's been no news coverage of this document, as far as I can tell, other than a Techdirt post that I published this morning, right before we started recording,

Ben Whitelaw:

Go and read it. Go and read it.

Mike Masnick:

uh, and it is a systematic and thoughtful breakdown, specifically, of how Jim Jordan, who runs the Judiciary Committee,

Ben Whitelaw:

Good friend of the podcast.

Mike Masnick:

yes, has weaponized the government specifically to help Elon Musk go after advertisers who pulled their advertising from Twitter X. And this is a story that I've been telling for a long time, and I felt like I was the only one, I was sort of screaming into the wind, and we've obviously discussed it here, about, you know, everything that happened specifically with GARM, which is the, you know, small nonprofit that was trying to work with platforms and advertisers to figure out how to keep brands safe if they were going to advertise on these platforms. Which also, you know, Twitter slash X had excitedly rejoined a week before Jim Jordan came out with this report calling it, like, an antitrust violation because they were organizing a boycott of Twitter, which was never actually true in any real sense. And finally, the Democrats come out with this report basically calling bullshit on everything that Jim Jordan said, which since turned into a lawsuit that Elon Musk filed against GARM and a bunch of advertisers, and which led to GARM being shut down by the World Federation of Advertisers. But here's this report from the Democrats, which got no attention, which calls out all of this: that there were legitimate reasons, there were legitimate brand safety concerns, that Elon Musk did a whole bunch of things that were really bad for brands on Twitter. None of this is surprising to you or I, or anyone listening to this, I'm sure. There were perfectly legitimate reasons. GARM was just there trying to sort of help everyone, but had no real oversight, you know, over where people put advertising, had no control over that. Advertisers were making all of their own independent decisions. There was no collusion. There was no coordination effort. There was no official boycott or anything. There were just a bunch of advertisers who realized that, like, having your ads next to Nazi content is probably not good for your business. Probably.
I mean, Mark Zuckerberg seems to be betting otherwise, but, you know. And so there was a reason why they did this. And then calling out Jim Jordan specifically for cherry-picking quotes, for quoting things out of context, for making arguments that were clearly untrue, in order to suppress the speech of these advertisers, of GARM, of others, and basically trying to force them to act in the service of helping Elon Musk, the wealthiest man in the world and a big funder of Republican causes, to be able to go after advertisers who choose not to advertise on the platform.

Ben Whitelaw:

yeah. And there's this great line in the report as well, which is definitely one to go and read, about how Jim Jordan's interim report was, like, designed for an audience of one, AKA Elon Musk. And there's something, like you say, about the similarities between the first story, Meta's announcement, and this one, which is that you have Republicans and CEOs essentially writing love letters to each other in the form of blog posts and reports and letters. And it's kind of what I felt when I was watching the Zuckerberg video, and he was kind of awkwardly explaining, you know, what he was going to do. I was like, this is cringeworthy. Like, why don't you just FaceTime Donald Trump and tell him yourself, um, you know, keep it private. And I think, you know, this report lays out a similar thing, which is that you had Jordan and Musk essentially working together on this, to the ends that we saw in the election.

Mike Masnick:

Yeah, and there's a lot in there. It's, it's a 53 page report, I think, and it's worth reading. And again, it got no attention because the Democrats are totally incompetent at how they promote this kind of stuff.

Ben Whitelaw:

Right. Right. Yeah. And so, uh, we're doing our best to bring it some readership, Mike. We'll go on now to other stories that we've noted, and we'll stick in the realms of kind of government regulation to begin with, and just note, as part of our kind of quick story roundup, that actually Elon Musk has been in the news, but just not quite so much as his counterpart over at Meta. So you might've seen over Christmas, between filing your amicus brief and, you know, checking the government website for new reports, that Elon Musk was tweeting furiously about a lot of things, including and particularly about the AfD, the far-right party in Germany, and giving his support for it. That, according to Bloomberg, has triggered a new surge of activity around the European Commission's investigation of X Twitter, which was announced, I think, over 12 months ago and is still running in the background. Bloomberg reported this week that Henna Virkkunen, who is your friend Thierry Breton's replacement in the European Commission and is kind of heading up a lot of the DSA work, and justice chief Michael McGrath have sent a letter to European election officials saying that they were moving forward "energetically" on the investigation. I thought energetically was a weird word to use, Mike. I know it's a small point, but would you have used a different word than that?

Mike Masnick:

don't know. I don't know. I mean, they're trying... this is all signaling in some way or another, I guess, this is the point of the podcast, and so they're signaling they're going to do something. I would note that this came out a day after Le Monde in France had an article which sort of claimed that the Europeans were actually backing off of their investigations, and that the EU Commission president, uh, Ursula von der Leyen, was putting on hold all of these investigations and refusing to start new ones. It was very weakly sourced, and done in a way that really appeared like someone was trying to shake things up, maybe to get a response like, we are energetically pursuing this. Uh, so I do wonder if the Bloomberg piece is sort of a response to the Le Monde piece.

Ben Whitelaw:

yeah,

Mike Masnick:

And so there's something going on behind the scenes where some people are saying, like, maybe we should hold off. And the argument that was made in the Le Monde piece was that certain European leaders are more supportive of Musk. And so you have, like, Viktor Orbán, obviously, and Giorgia Meloni in Italy, who are sort of Musk supporters, so maybe they're sort of pushing back on these investigations. And then you have another wing of EU folks who are obviously keen, eager, and, I guess, energetic about, uh, going after these platforms. And so I think this is a statement that the EU is, like, look, okay, the U.S. project is in trouble right now, uh, and we are going to continue with our regulations. Which does raise one point, which we have left out so far about the Meta story, which is kind of important, which is that Meta very quickly clarified that these new policy changes do not apply to the EU. Don't worry, in the EU, we're not doing any of this.

Ben Whitelaw:

well, yeah, to join the dots a little bit, the part of the announcement around working with Donald Trump and the US government felt to me like a fear of regulation, particularly in the EU. And you're right, you know, there was a clear explanation of how this was US only; it might be rolled out elsewhere in the future, but I got the sense that there was a slight fear of the kind of, you know, EU regulatory regime. Did you get that sense as well?

Mike Masnick:

It didn't strike me as fear so much as, like, opportunistic, right? I mean, there's this recognition that Donald Trump is very much the bull-in-a-china-shop kind of politician who just sort of screams about what he wants. Right? I mean, we're here about to take over Canada and Greenland and all that. Uh, and so I think it was kind of like, oh, here's an opportunity to do what was politically impossible before. Like, the Biden administration was never going to go argue about the excesses of the DSA. And obviously I've been a critic of elements of the DSA, and I wish that the U.S. government was actually more vocal in sort of criticizing some aspects of the DSA and how it's problematic. And I think that Zuckerberg sees this as an opportunity, where it's like, he knows that Donald Trump's not going to care about that. So here's a chance for him to maybe go out and say, like, oh, the DSA is this horrible thing, and, like, if you don't change the DSA, we're going to cut off all trade to Europe. I don't know what he's going to do, right? I mean, it's, like, either fix the DSA or we're going to invade Iceland. I don't know. Like, none of this matters anymore. Like, nothing makes sense. So I think it's just an opportunity for him to try and get Trump to push back on the DSA.

Ben Whitelaw:

Yeah. Okay. And I mean, the idea of speech as a form of trade is something that I'd love to dig back into, because I think that's, that's a

Mike Masnick:

There's a big history there that, yeah, yeah. We're not going to do that in the last few minutes of this podcast.

Ben Whitelaw:

Okay, cool. But, you know, Elon Musk is also in the news for other reasons. You wrote a Techdirt post about some contradictory behavior that he exhibited this week,

Mike Masnick:

Yeah,

Ben Whitelaw:

unlike him.

Mike Masnick:

yeah, it's like, how do we do this story so quickly? But, so, there's been this story for a long time that there's this guy, Adrian Dittman, who people believe is an Elon Musk alt account. And he's shown up in Spaces, there was, like, a Twitter X Spaces with Elon Musk, and they sound identical. This Adrian Dittman person sounds exactly like Musk. He's been a huge Musk fan. He's always supporting him. He talks about what a great father he is. At one point, I think, he talks about, like, how much sex Elon Musk... I mean, it was, like, ridiculously fawning fan behavior. But because his voice sounds just like Musk, everyone's like, this is clearly just Musk. And, like, you know, there've been all these attempts to sort of prove it, and a lot of people are totally convinced of it. And The Spectator came out with this article that's basically like, no, there is this real dude named Adrian Dittman who has this weird global history that kind of explains why he would have a similar accent, and kind of explains why he would be, like, hugely supportive of Elon Musk. And it's a real guy. And there was joking about it, because Elon and Adrian have both, always assuming they're different people, sort of played coy about this question of whether or not they're the same person. I think they both sort of get off on the fact that a lot of people think they're the same

Ben Whitelaw:

Yeah. It's a funny thing, isn't it?

Mike Masnick:

yeah. And so, then, when The Spectator article came out, even Elon posted, like, all right, it's time to admit it, I am Adrian Dittman, even though the article sort of proves that he's not. But then Twitter slash X banned the article, banned the authors of the article, banned the authors of a study that was used as the basis of the article,

Ben Whitelaw:

Yeah,

Mike Masnick:

and did all that. And it reminded me, because I'm the only one who remembers history, that Elon Musk was furious that Twitter banned the New York Post for posting the story about the Hunter Biden laptop and blocked that link for 24 hours, before they admitted that was probably a mistake and went back on it. And in fact, Elon Musk has said the former people at Twitter probably deserve to go to jail for blocking the New York Post story, saying that the free speech platform should never block news stories. And yet here he is, blocking a story from The Spectator.

Ben Whitelaw:

that was 2024, Mike. This is 2025. You forgot. The

Mike Masnick:

in 2024, he did the same thing with the revelation of the JD Vance dossier by Ken

Ben Whitelaw:

you're right. Yeah.

Mike Masnick:

There's just this level of hypocrisy here that I think is worth calling out, even though nobody else seems to care about it. He's doing exactly the same thing, worse, in a more extreme manner. The reasoning for it is they're claiming that it violated the doxxing policy, which is nonsense. It is not doxxing to say that this person who calls himself Adrian Dittman is actually Adrian Dittman. That is not doxxing. That is, like, this guy is who he says he

Ben Whitelaw:

Identification.

Mike Masnick:

Yeah. Um, but I thought it was worth calling out, because, again, like, I'm not saying nobody, I'm being a little, you know, hyperbolic here, but most people were not calling out the hypocrisy there, and I thought it was worth mentioning.

Ben Whitelaw:

Yeah. Okay. That is worth mentioning. And, I think, you know, maybe something that the European Commission are interested in as they do their investigation, who knows. Um, the only other story, Mike, I'll flag just before we round up today is a really great piece by Renée DiResta, like I said, which is published in the online magazine Noema. It's titled "The Great Decentralization." Renée has written lots of great essays like this in the past, but it's a really nice look at the history of the march from one-size-fits-all platforms to decentralized and federated spaces. And she rounds up really nicely what were the drivers that led us from the big platforms like Facebook and Twitter to the, you know, Mastodons and Blueskys that we're seeing. Disclosure: Mike is on the board of Bluesky, um, just to do that quickly. Um, so yeah, it's a really nice look through the history books as to how that happened. She makes some really great references to kind of working the refs, and referees generally, which I think you mentioned as well when we were talking about Facebook, and it's a really helpful lens, I think, through which to see everything that's happening right now. And it reminded me, actually, of a Michael Lewis podcast, Against the Rules, which talks about referees; again, lots of great stuff. I don't know if you've listened to it. Um, big fan of that. She also covers the potential downsides of this, you know, of moving to smaller, potentially less moderated spaces in some cases, and what that could mean for polarization, for society as a whole. And we don't know the full extent of that, but Renée summarizes it nicely.

Mike Masnick:

Yeah, it's a really good piece, especially if you haven't been paying as much attention to sort of the alternative spaces and kind of how we got here, why they've been successful. And I think it also does a really good job, frankly, of raising the question of trade-offs in terms of how trust and safety is handled, both on the centralized platforms and the decentralized platforms, and how there are pros and cons to these approaches. I mean, I think most of the rest of our podcast today has been about some of the cons of centralized moderation, when it gets into the hands of people who have different viewpoints on things. So that will be my diplomatic version of it. Um, but there are also real challenges with the decentralized systems. And a lot of people sort of view them as, like, oh, it's just the same thing, you know, just a new version, someone trying to do it differently. But the underlying frameworks and the whole, like, protocol concept of a decentralized system create different affordances, some of which I personally think are really, really beneficial, and that's why I've been a huge fan of them. That's why I ended up on the board of Bluesky. But some of them also have, like, different challenges, and I think the piece that Renée wrote really lays them out very clearly and does a great job of it. And so I think it's just a useful piece for everyone to kind of understand this moment that we're in, and what may be possible, and what may be the challenges of trust and safety on these more decentralized platforms going forward.

Ben Whitelaw:

Indeed, a really nice kind of weekend read, I'd say, to have with your coffee or, you know, whatever your drink of choice is this weekend. That takes us to the end of our episode this week, Mike. We spent a lot of time talking about Meta. We touched on TikTok. We touched on decentralization at the end there. I hope our listeners feel like we covered the full gamut of stories that have emerged this week. You look tired.

Mike Masnick:

Ha ha ha ha

Ben Whitelaw:

You look like you need a rest and it's only, it's only January the 10th. So

Mike Masnick:

gosh.

Ben Whitelaw:

buckle up. Um, it's going to be a wild year, I think. But thanks, everyone, for listening, and appreciate you tuning in. If you have any feedback about today's episode, drop us a line at podcast@ctrlaltspeech.com. We'd love to hear from you. Give us your thoughts on the kind of additional analysis around Meta. Was it worthwhile? Would you like us to do that again? We respond to all of the emails that get sent in. And that rounds us up for this week. Thanks very much for listening. Take care. I'll see you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.