Ctrl-Alt-Speech

Backdoors and Backsteps

Mike Masnick & Ben Whitelaw Season 1 Episode 47

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben are joined by a group of students from the Media Law and Policy class at the American University School of Communication. Together they cover the UK's secret order demanding Apple backdoor its encryption, the Paris AI Action Summit and the launch of ROOST, new Section 230 rulings, and Shopify's handling of Kanye West's swastika t-shirt.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Mike, it's very important for me not to further worsen English-French relations. It's important that the podcast doesn't do that. I'm a big fan of the French, but I wanted to start today's podcast using Mistral AI's prompt. Mistral is one of the big open language models that you might have used. It is a French startup that's got a lot of traction over the last couple of years. And it has a prompt on its homepage that says "Talk to Le Chat." So that's not, that's not me being pejorative. You know, it's not me being a kind of uncultured Frenchman. That's literally what it says. Talk to Le Chat. What, what would you say to the chat?

Mike Masnick:

I would question... my, my high school French wonders how proper that is. Uh, but I will say that, uh, rather than talking to Le Chat, today we are talking to the class, and we will explain what that means in a moment. But, uh, Ben, what about you? If you were to talk to Le Chat, what would you be saying?

Ben Whitelaw:

Well, as well as asking the chat why I haven't received any Valentine's cards, uh, I would ask it, you know, why it took one particular e-commerce platform so long to respond to Nazi content on its platform this week. And it's not necessarily one of the ones you'd think. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's February the 14th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. This week, we're talking about the big encryption versus safety story involving Apple, the giant AI summit in Paris, and Shopify's handling of swastika content on its platform. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm not only joined by Mike Masnick from Techdirt, but we're joined by the Media Law and Policy class at the American University School of Communication. Thank you very much for joining us. Uh, this is the first experiment of its kind. And Mike, this is something you organized, isn't it?

Mike Masnick:

Uh, yes, this is something of an experiment that we are doing today. In the past, I have been a guest speaker at this class and we were trying to schedule that again. And the timing worked out that it was when we were recording the podcast. And we suggested, what if we just brought the entire class to the podcast? So they are sitting here, and if they want to say hello, uh, very quickly, they can.

Guest:

Hello.

Ben Whitelaw:

Welcome. Welcome.

Mike Masnick:

And so, uh, hopefully, our plan today is that they were able to sit through our planning meeting as we got into this and, uh, hear the stories that we're going to talk about and raise some important questions. And we're going to, as part of the experiment here, try and bring them in to maybe discuss and ask some questions, be a part of the discussion. We will see how this goes. This is a little bit messy, but it should be fun to have their thoughtful insights included in the discussion.

Ben Whitelaw:

We've done a podcast recording before in person, at TrustCon last year, but this is the first we've done in kind of virtual form. And it's a really, really nice format; the students have asked some really smart questions actually before the call. So I'm looking forward to bringing them in, and if it goes well, hopefully we'll have other guests and other groups of people like it in future. We have a very kind of, you know... the podcast this week has got quite a strong European flavor, Mike, I would say, with a sprinkling of our usual US platforms background. So maybe we'll dive straight in there. Before we do, as usual, we'd love our listeners to rate and review the podcast if they enjoy it. It's something that really helps us get discovered across all of the platforms. It only takes a couple of minutes and it's a massive, massive help to us. So, without further ado, let's get started. You're going to lead us off, Mike, on a story that, as always happens, there's always a big story that drops right after we finish recording, and last week was absolutely no exception. We were talking with the students earlier about how this is a story that hasn't got a huge amount of coverage, perhaps because it dropped so late last week and is somewhat technical. So help us unpack why you picked this story this week.

Mike Masnick:

And this is a big story that's over in your part of the world, and it was reported by the Washington Post, which is that apparently the UK has ordered Apple to effectively put a back door into its encryption product. And Apple, you know, somewhat famously has added encryption in terms of both messaging and then also some of its cloud storage. And so they're asking... not asking, they're demanding, that Apple put in a back door, which would allow authorities within the UK, but also potentially elsewhere, to have access to what had formerly been encrypted materials from Apple users. Now, this was a leak, because the order is secret and the details about it are not fully known. And there are a bunch of problematic aspects to this, one of them being that the request is not just for users in the UK, but a global setup, so that the UK could then request access to what people might have otherwise thought was private and encrypted information from users around the globe. Due to the nature of security practices right now, there's something called the Five Eyes program, which is between the US, Canada, the UK, Australia, and New Zealand, where they share intelligence communications, meaning that if the UK can spy on these people, so can all of those other countries, which raises a whole bunch of other concerns as well. So this is a very big deal. There have been battles about encryption and encrypted communications going back decades. Somewhat famously, there was the Clipper chip in the 1990s in the U.S., which was trying to put a backdoor into encryption. There have been all sorts of other battles for many, many years. And much of the battle has always been focused on, you would have law enforcement folks coming out and saying, well, we need access to this content. Bad people are doing bad things. Terrorists and child predators and all this stuff are able to hide behind encryption, and therefore we need access to it in the same way that we might wiretap a phone or something of that nature. The flip side to that argument has always been that, one, law enforcement and intelligence services have more access to more information than ever before in history. In the past, people could disappear and hide and could communicate in ways that were not easily trackable. That is less and less the case these days. The areas where encryption is used are somewhat minimal. There's so much other information, location information, where you are, other stuff that is unencrypted. It is very hard to actually hide from law enforcement or intelligence agencies these days; they have much more visibility into things. So the idea that encryption is in any way sort of hiding bad behavior in a way that law enforcement cannot deal with does not seem to be true. On top of that, there's the flip side, which is that encryption is incredibly important for people's safety and protection. And the argument is often presented that it's this fight between privacy versus security. But the reality is that that is not true. For most people, encryption and privacy protect their own safety and security. And there are all sorts of reasons why; I don't have to go into great detail on that. But being able to protect your own information is often a way to keep yourself safe. And so the cases where law enforcement is upset tend to be more edge cases, whereas breaking encryption would put tons and tons of people at risk. Now, there are lots of concerns about the way that this came down, the fact that it is being done in secret.
There was a big fight last year, mainly around the UK and the Online Safety Act, which we have talked about on the podcast, which is sort of focused on social media safety and things like that. There was a concern that there was an element in the Online Safety Act that would be used to attack encryption. And you had Apple speak out about that, but you also had, even more forcefully, Meredith Whittaker from Signal making the rounds and being very clear that if the Online Safety Act passed in that manner, Signal would probably not be able to do business in the UK, because it would lead to this breaking of encryption. So there was a slight sort of backstep on that, and the way the Online Safety Act came about, it didn't have all of the language in as troubling a way, though there were concerns that it could still be used to crack encryption and sort of force backdoors into encryption. According to the reporting, and again, a lot of this is because of leaks and investigative reporting, this was done under the Investigatory Powers Act, which is a different law that passed a few years ago. It's also been referred to as the Snoopers' Charter, and this is an example of why. And it was basically an order given under that law telling Apple that they have to build in a backdoor, and that they can't talk about it. And so Apple has not been public about it. And effectively, they can't really appeal the order either. From what I understand, I believe they can only appeal on the basis that it costs too much, but that is not a societal-level cost they can make the argument on, only that it would cost too much monetarily for Apple to implement this. And so there is now a lot of concern and people are raising concerns. Certainly there have been, uh, a bunch of human rights and civil society groups that have spoken out. We have people in Congress in the US... we just, uh, appointed our new Director of National Intelligence, and, uh, a bipartisan, surprisingly bipartisan, group from Congress sent a letter to her asking her to push back on the UK's attempt to backdoor encryption.

Ben Whitelaw:

It is, uh, somewhat ironic that us Brits are bringing the American political system together, you know,

Mike Masnick:

Yes. Thank you, Ben.

Ben Whitelaw:

through this issue. But yeah, the comments from senators are really striking. I just want to read one of them out: "If Apple is forced to build a backdoor to its products, that backdoor will end up in Americans' phones, tablets, and computers, undermining the security of Americans' data, as well as of the countless federal, state and local government agencies that entrust data to Apple products." It's very, very stark. And the statements from Amnesty International and Human Rights Watch are equally so. This is a massive, massive story, Mike. Listeners might be thinking, you know, encryption feels a bit far away from kind of where I'm at as somebody who's interested in safety. Explain again the trade-offs that we're constantly making, and that we keep coming back to on the podcast, if that's okay.

Mike Masnick:

Yeah. I mean, as I sort of was alluding to earlier, the safety and security questions are intricately linked, and protecting your users is important. And one way to do that, it depends on the product and the tool and what it is that you're trying to do, but for many services, having encryption and protecting the user's safety go hand in hand. And this is why there have been debates for a long time and questions about, should direct messaging be end-to-end encrypted? And, you know, you had Meta that spent many years on it; it's a difficult thing to do right, because part of the issue is if you do it wrong, you expose a lot of private information and you put people at risk. And I think that is really important to understand. Meta took a lot of time and then eventually did implement end-to-end encryption for their various messaging products. Twitter for many years tried to and found it very difficult to get it right, and so they never fully implemented it. When Elon Musk took over, he insisted that that was one of his priorities. It was like the one priority that I agreed with him on. And then they did this sort of half-assed implementation that wasn't really encrypted. And they sort of, in typical Elon fashion, pretended it was encrypted and promised that they would do a more complete encryption job later. We've never heard about it since; that was like two years ago, and they never mentioned it again. But, you know, being able to protect your users and protect their privacy is a huge, important thing. And this is a huge attack on the ability of companies to keep their users safe. And if services think that it's going to stop with just the UK going after Apple, I think that's wrong. I think that if this works, and if Apple does do this, and if the UK doesn't back down, it's going to open up a lot of stuff around the globe of governments demanding access to all sorts of information that will put users at risk. And so I think this is a hugely important issue for safety and trust and safety and online user speech, and having people feel comfortable on any particular service.

Ben Whitelaw:

Yeah, and it also speaks to, I guess, a shift in UK politics and its approach to safety as well. You know, the Labour government was elected last year. Keir Starmer as Prime Minister has included safety as one of the five pillars of his government, one of the areas that it's seeking to make ground on. And he's framed safety mainly from a kind of physical, you know, real-world, on-the-streets perspective, but has also talked about it in relation to the online space as well. There are a number of very influential civil society groups and charities in the UK who do a lot of policy work, including the NSPCC, which you might have heard of, and they have been advocating for changes to the way that encryption works to allow for child safety improvements, so to allow platforms to give governments data to be able to address what they see as harms, specifically for children. The child safety groups in the UK, as far as I understand, are very, very vocal and very, very powerful. So again, we're kind of seeing, I guess, probably a year or 18 months' worth of work, from the point that you mentioned, where Meredith Whittaker was talking about encryption in the Online Safety Act, through to the Labour government being elected, and almost, you know, this being a kind of a milestone in that process. Tom, one of the students from American University, was also thinking about why this might have happened. Tom, you had some thoughts about, again, why now, and what this might mean from a Labour government perspective?

Guest:

Yeah. Um, so I was just kind of wondering if this perhaps had to do with, was kind of in reaction to, the far-right riots, and this kind of gave them more of a sense of urgency to perhaps kind of at least do something about it. Um, particularly with the pressure on the new government coming in. I was just kind of curious about your thoughts on that.

Ben Whitelaw:

Yeah, I think that's fair. I think there's an ongoing storyline that Mike and I often refer to in the podcast, which is kind of Elon Musk's views on politics in the UK, and people connected to Downing Street and the political scene in London are very aware of Musk and what he's saying. It's a kind of constant backdrop to policy work being done across a number of different parts of the government, as far as I understand. And so, yeah, I think being able to address some of the underlying issues related to the riots is part of this. You know, it doesn't necessarily excuse it; as Mike says, it does feel like a big step. The comments from the senators and from the civil society organizations, I think, back that up. One of the Human Rights Watch researchers said this is an alarming overreach by the UK authorities, seeking to access private data not just of people in the UK, but worldwide. You know, if you're concerned about rights in your country, maybe address the underlying issues related to that rather than trying to create these kinds of backdoors. So I don't want to accuse the Labour government too much or make it too kind of party political, but it's definitely probably a factor.

Mike Masnick:

The one thing I would add to that is that it does feel like the timing of this, beyond obviously the new Labour government, is them, I think, trying to take advantage of kind of the chaos in the US and the sort of distracted nature of everybody in the US, to sort of see if they could sneak this through while everyone is focused on other issues. And, you know, I think it's perhaps a cynical take, but I think it is worth pointing out that the timing of this is certainly interesting.

Ben Whitelaw:

So you mentioned, Mike, about the compliance with this order. Is there any risk that Apple wouldn't exist in its current form in the UK, that it would stop offering particular services or products?

Mike Masnick:

Yeah. I mean, Apple has made it clear, they made it clear during the Online Safety Act debate, that if they were forced to remove or backdoor encryption in the UK, they would try to no longer offer those services in the UK. There is a question of whether or not that matters under the order, because it is a global order with access to global communications, and then I think it just becomes some sort of legal fight. But yeah, I mean, Apple has made it pretty clear that they might seek to at least stop certain services being available in the UK, and then it just becomes an escalation, and who knows where it ends up after that.

Ben Whitelaw:

Okay. Well, again, you know, a story that's somewhat passed me by until we started talking about it earlier, and something I know that, as the likes of Human Rights Watch and Amnesty start talking about it and blowing the whistle more on this, I'm sure we'll return to. We'll turn now to Paris, not a billion miles away from where I am in London, and the AI Action Summit that took place this week, where over a thousand thinkers and doers related to AI, people working on policy and regulation, got together to talk through how AI will evolve on a global scale. This is not just folks from Europe, but folks from the States and elsewhere as well. This follows on the back of two other summits like it: one in London two years ago, which was famously organized by Rishi Sunak, very much trumpeted by the Conservative government at the time, and hosted at Bletchley Park just outside of London, and then one in Seoul last year. And on both occasions, there was huge agreement around what the future of AI should look like; a number of agreements were signed. A lot of kind of patting on the back of not only companies, but also elected officials, around how they were going to take the future of AI forward, and everyone was praised for bringing these summits to bear. This week was very different, Mike. I don't know if you came across some of the stories, but, you know, things have changed, and probably, you know, we could put it down to Donald Trump and the new presidential state of things. I'm going to leave you to talk about JD Vance and what he talked through, maybe trying to read the tea leaves on what he said, but the broad swathe of what came out of the summit was that, essentially, France and other countries that have been closely involved are advocating for a move away from very strict regulation on AI. A lot of the wording around the summit was about innovation and competition and economic growth, and the general thrust was that if we regulate AI too heavily, you know, we reduce the ability of companies to grow and to kind of generate wealth and for people to benefit. In these ways, as we know, and as we talked about a couple of weeks ago on the episode on DeepSeek, when we think purely about innovation, safety and security are often offset as a result; they're the thing that gets lost. And we somewhat see that in one of the big announcements made this week, which was a public-private partnership called Current AI, with France putting a lot of money in alongside a number of other companies. Essentially, this is going to be focusing on kind of large projects in the public interest. They've got 400 million euros to spend, and they're going to try and get 2.5 billion euros over the next five years. And this is all about creating big projects and datasets, focusing on open source technology, and then involving communities in the growth of AI. But within the wording of the Current AI website, there are some very, very clear indications that actually this is a shift. So I'll just read a quote from one of those involved: "We need to do things in the affirmative sense, not just think about regulation and things you want to forbid." You know, Emmanuel Macron, the French president, also got up and said something similar: this is not just about regulating what is bad, it's about trying to capture the benefits, the economic benefits, of what is good about AI as well. So that's a very, very interesting shift.
We're going to see a lot of companies take their lead from, you know, the key messages from this summit. Mike, what did you make of the summit more broadly? What do you make of these kind of big, highfalutin networking dos that everyone goes to, but we never get invited to?

Mike Masnick:

Yeah, I mean, I definitely thought it was interesting, and I think the point that you raised, of governments talking less about regulation and more about innovation, is perhaps not surprising, but interesting in a way. And obviously anyone who listens to this podcast regularly knows that I am often skeptical of regulatory proposals and do believe strongly in enabling more innovation and things like that. You know, so I should be thrilled about this in some sense. Um, but I feel that a lot of this was very cynical and that it wasn't a thoughtful approach, right? Part of my argument is always that there are ways to deal with safety and security that are important, but they're important in some ways to the bottom line itself, and that companies should be thinking about these things. But it feels like the approach that many people are taking is like, bottom line over all, like just, you know, forget everything else, forget safety, go as fast as you possibly can. And I think that leads to really bad results, and that will lead to backlash, and I think that that is bad in the long run. I mean, the fact that France is now presenting this, I think that ties back to our opening prompt today, which is Mistral, right, which is a French company and has been a successful player in the AI space. So it's not surprising to me, but again, it's perhaps a slightly cynical take, where it's like the rest of Europe is all in on the AI Act and we've got to regulate heavily, and then France is like, wait, we have this successful company here, let's pump the brakes a little bit on the regulatory side and let's actually support this company. And so there's always this sort of push and pull and trade-offs of, you know, regulation versus kind of pure, unregulated markets and how that plays into innovation. But, you know, watching all of these speeches and announcements, I came out of it very cynical. I don't think that these companies are doing these things, or the countries and their approaches to regulation are being done, thoughtfully, right? I think this is being pushed by a strain of technologist and entrepreneur and investor that is, you know, as they refer to it, accelerationist, which is basically let's ignore any of the safety concerns. It ties back to the thing that we have talked about in the past, the Marc Andreessen techno-optimist manifesto, where he declared trust and safety to be the enemy of the people or whatever, the enemy of innovation, which I think is wrong. I think, if you want innovation to not lead to backlash, and if you want people to embrace these technologies and figure out the best ways to integrate them into their lives, people have to feel safe using them as well. So I think there's this overcorrection that was very clear in the way that the announcements and the discussion around the summit worked out.

Ben Whitelaw:

Yeah. I mean, yeah. The thing to know is that Hugging Face, which is an AI platform you'll know, also has a French CEO. Um, so.

Mike Masnick:

right.

Ben Whitelaw:

Yeah. So there are a number of big AI players who have kind of French personalities, who are reportedly very close to Macron and have kind of really shaped his thinking over the last few years. I also agree with the kind of mood music around the summit, and I was reading very carefully the reactions of a number of trust and safety experts who were there. Theodora Skeadas, who's a public policy lead at DoorDash, was there. She's somebody who I have a lot of respect for. She's very, very thoughtful about some of the safety concerns whilst also not doing it for the sake of it, but, you know, very, very careful with some of how the regulatory regime should work. She said that there was a gloomy feeling to the summit at the end of it, and people felt very afraid about how some of the participants were thinking about safety as low down on the list of priorities. And that was demonstrated by the fact that, unlike the other summits I mentioned, there was no kind of signed communique at the end of it, at least not one signed by the US and the UK. So some of the big names that were traditionally in agreement at the end of these summits are not singing off the same hymn sheet this time. So yeah, it's very, very different to Seoul and very, very different to London, and it does open up a question about who will end up governing AI and what happens next.

Mike Masnick:

Yeah. I mean, you know, one of the things that I saw too was, rather than safety, a lot of the language was around security, and I think maybe that is a sort of a statement also on the state of the world and kind of where we are, where the traditional orders of the world that we've grown used to over the last few decades are potentially breaking down. And we've definitely seen a lot of the positioning, certainly in the US: whenever there's talk of regulatory regimes around AI in particular, there's always talk of, like, well, you don't want to let China beat you. And so it becomes this sort of nationalistic thing, in which security, and sort of this idea of security at the state level, becomes more important than safety at the individual level in terms of what people are thinking about. And I think that was really reflected in a lot of the language and discussions that came out of this particular summit.

Ben Whitelaw:

Yeah. Did you have any take on JD Vance's comments? 'Cause he was there, you know,

Mike Masnick:

Yeah.

Ben Whitelaw:

he said some stuff. It was reported.

Mike Masnick:

He said some stuff. Yeah. I mean, you know, if I can go through my week without having to deal with JD Vance, it's a better week. But, um, yeah, I mean, it was a lot of sort of, you know, nationalistic, jingoistic stuff. I mean, he used the opportunity; he attacked the DSA, he attacked European regulations. You know, there was a lot of culture-war-related stuff. It's very JD Vance; there's not much substance there. But it's very much driven by the people that he has surrounded himself with, which are the sort of, you know, Silicon Valley accelerationist, techno-optimist investor class that just wants to wipe out anything, like, we don't even want to think about safety, we just want to be able to do whatever we can. And yeah, sure, if we destroy a few lives or a few thousand lives or a few million lives, well, you know, that's the price of innovation. And so it was frustrating, but obviously extraordinarily different than, you know, the way the Biden administration approached these things. And I was critical of the Biden administration's approach to things; I thought their approach to dealing with AI was fairly short-sighted as well. But this is swinging the pendulum very much in the other direction, and without any thought towards how this plays out. And this is my concern with so much of, well, what is going on around the world, but certainly with the Trump administration: this inability or unwillingness, really, to think through the implications of what it is that they're positioning, and assuming that, like, well, if we just wipe away any concern about safety and just allow everything to move forward, everything will work out. And if the car's going to crash in some way, then people are going to get hurt, and not preparing for that, and not realizing that there's going to be a backlash, and it could be a really damaging backlash that has wide-ranging implications, and not being willing to even think through those things, is incredibly short-sighted and I think will lead to really bad situations in the long run.

Ben Whitelaw:

Yeah, I agree. And actually, before we started recording, I went back and looked at one of the big announcements from the London summit, which, I don't know if you remember, was the AI Safety Institute. This is a kind of body that was created so that researchers, government officials, and people working in roles in companies could come together and figure some of these knotty questions out together. Literally this morning, Mike, they've changed the name to the AI Security Institute.

Mike Masnick:

Well, there, there you go.

Ben Whitelaw:

So, uh,

Mike Masnick:

moving from safety to security

Ben Whitelaw:

Yeah, exactly. We might have to change the opening of the podcast, uh, to reference security in some way, uh, so as to be in line with everyone else. Um, one good thing that did come out was the announcement of Roost, the Robust Open Online Safety Tools. I think I've got that right. This is a big initiative that we've kind of been tracking a little bit. We know a number of the people involved and have been seeing this develop, but essentially it is a large-scale tooling initiative, supported by a bunch of companies, that is making tools more accessible for smaller and medium-sized platforms that can't build them themselves, that can't do it internally. You can talk a bit about the background, Mike, but this is actually kind of, I think, a positive thing to have come out of the summit. Again, you know, it's worth noting that we know some of the people involved, so we are biased in that way. But this is focusing on three very specific cases initially: building tools to address CSAM, child sexual abuse material; producing classifiers to reduce harm; and also creating tools for moderators to be able to do that with a greater degree of wellbeing. So what did you make of the announcement in the context of the wider summit, Mike? Like, how do you see the broader geopolitical shifts happening towards AI regulation and this initiative?

Mike Masnick:

Yeah. I mean, I do think this is an important initiative, and regular listeners of the podcast will remember that last July, as one of the bonus chats, we had Camille François and Juliet Shen on the podcast talking about the open source work that they were doing at Columbia. That is basically what has been turned into Roost, with a lot of support from various philanthropies and companies as well. And so I actually do think it's exciting, because they're sort of stepping into a space that is important, where there are tools that are sort of necessary table stakes to do trust and safety and that are really fragmented all over the map. Lots of companies have had to rebuild them from scratch, even though they're effectively the same tool. Yoel Roth, former guest co-host on the podcast, you know, has talked about how, when he was at Twitter many years ago, Twitter bought this company called Smyte. That was a classifier engine that almost everybody used for trust and safety, and the day that Twitter bought it, they also shut it down for everybody, and sort of left everybody high and dry and scrambling to rebuild a classifier engine, which is just really important for analyzing different content on different networks. And so being able to have those tools and have them be available to everybody and be open source, I think, is important, especially as hopefully we're moving to a world where there are not just four or five giant companies, but we're seeing smaller companies being set up that need to do trust and safety, or not even companies, like Mastodon instances, being able to handle sort of trust and safety stuff if there are open source tools that they can plug in and make use of. I think that's actually a really powerful thing, and I think it's a really important thing for building a more robust world for online communications and online speech.

Ben Whitelaw:

Yeah, I agree. I mean, the only thing I would say, and I haven't spoken to any of the team involved about this, but the absence of Meta and Reddit and a couple of other big platforms, who have invested a lot in tooling over the last 15, 20 years and have some of the most advanced tooling as far as I understand it, is a bit of a blow. Do you think that Roost is worse off as a result of that?

Mike Masnick:

Yeah, I don't know. I mean, we'll have to see. My guess... who knows, I don't know the reason why; I haven't spoken to anyone involved either on this in particular, so I have no idea. I mean, you always have this sense that when you have companies that are sort of above and beyond what other people are doing, they might be a little bit more proprietary about what they've built, and so not as willing to sort of share. But does that last over time? I don't know. I think that the team that is working on Roost, and the folks who are involved, know how to think about these things. I think they can build very, very strong tools. They have built super strong tools in the past. They know what goes into this, and into building tools that are open source and for a wider audience to use and embrace and then build on and contribute back to, you know, the nature of sort of open source tools in general. I'm not as worried that maybe this company or that company is not a part of this. Obviously it'd be nice, and maybe as Roost proves itself, they'll end up deciding that they do want to join in, and I think that would be good. But overall, I actually do think it is a really positive development and hopefully leads to just better tools for more platforms. Which also means less need to rely on certain giant companies whose interests might not be the best. I mean, we're talking all about companies moving away from safety and security. We've talked about Meta cutting back on its content moderation stuff. Having tools that allow others to come in, and not being totally reliant on Meta making the decisions for what is okay to see on the internet, to me, seems to be moving in a better direction.

Ben Whitelaw:

Yeah, definitely a positive take from the summit. Aram, who leads the Media Law and Policy class at American University, also had a thought about how this tooling initiative might play into some of the regulatory stuff we talked about. Aram?

Guest:

Yeah, well, I had really two concerns. One is, you know, as an open source platform, the notion that individual adopters of Roost might be able to use it as a tool of oppression rather than a tool of safety for its user base. But to speak to your question, I think there's a long history of new tech being kind of integrated into the regulatory apparatus. And so I could easily imagine, for instance, in the U.S., Section 230 protections being contingent on platforms adopting Roost, which would leave other alternatives, and the platforms that wanted to adopt them, out in the cold and potentially put them on the wrong side of the law.

Mike Masnick:

Yeah. I mean, it is a legitimate concern. You know, we've certainly seen that, especially in the copyright space, where people created tools, you know, Content ID or similar tools, or now we're seeing it in the age verification space, where companies have built these age verification tools, and then that is being used as justification for various laws around copyright or age verification, often lobbied heavily for by these very companies that then would sort of profit from it. And so I think there is always a risk of that, and it is worth watching. I am less concerned about it in this situation for a few reasons. One, this is a nonprofit, this is an open source thing. I don't think they're going to be, in the same sense, lobbying for their own profit because of that. But more importantly, it's just sort of the nature of what these tools do and why they're useful, whereas the other examples around copyright and age verification tend to be things that go against what users are trying to do. Often what these tools are for is to make a platform overall safer and better for the majority of users. So there's actually a good sort of positive reason to make use of them no matter what, and therefore less reason to sort of require them by law, and less having to deal with sort of special interest reasons for why there might be a regulatory approach around these things. I do think it is a concern. It is something worth watching. We are already seeing lots of countries, outside the U.S. mainly, sort of requiring these kinds of things anyways. And in that world, I think it is still actually better to have kind of an open source, nonprofit approach that is available rather than pushing everyone into a for-profit corporate approach, which, you know, I think is a worse end result. So it is worth watching. I share your concern. I think it's a really good question to think about, like, will the existence of this lead to a different regulatory approach, like undermining 230 or something like that? It is worth watching, but I don't think this alone is going to do it. I'd be more concerned if it was completely corporate driven.

Ben Whitelaw:

I think there's also a practical element to this as well. If you're working at a platform and you're looking to bring on a vendor or a piece of technology to address one of these issues, it's really difficult to find out what's best in class, to go through the rigmarole of demos and calls with vendors, and to figure out, okay, do I need this? Will it integrate with the rest of my tech stack in terms of safety classifiers? It's really, really slow, as far as I understand. So actually having something that sits above and kind of demonstrates best in class, something that people can go to first, I think is a good thing. That should help the hundreds of thousands of smaller and medium-sized platforms, forums, and instances that are going to be obliged through regulation to figure this stuff out. So, yeah, thanks for the question, Aram. And if you're listening to Ctrl-Alt-Speech and you work for one of the vendors or technology companies that have classifiers, and you want to take part in this initiative, you know, you're interested in contributing to the open source project that's being spun up, get in touch with the guys. They're really, really thoughtful and will be really interested to hear from you. I think that would be much, much recommended. Great. Awesome, Mike. So we are two stories down. We've addressed the UK and France, and now we're onto much more familiar ground, uh, in the US, uh,

Mike Masnick:

We're not going to, we're not going to get through an episode without talking about a US-based story.

Ben Whitelaw:

no, no, and it leads neatly on from the conversations and the question that Aram had around section 230, and picks up a thread that we touched on a few weeks back on the podcast.

Mike Masnick:

Yeah. So, obviously there've been a whole bunch of cases in the US sort of attacking 230 and sort of seeing where they're going. There was a case last year that we spoke about, which is the Anderson versus TikTok appeals court ruling in the Sixth Circuit, which, I think we talked about at the time, made no sense to me. It was a ridiculous ruling, was very problematic, basically saying that if you have an algorithm on your platform, it effectively wipes out your 230 defense, that recommending stuff is no longer protected by 230. I believe that's wrong on multiple levels, which I don't need to get into all the details of exactly why. There were two things that happened this past week that were interesting on that level. One is that everyone assumed that TikTok was going to appeal that case to the Supreme Court, and they put in a notice saying they are not going to do that. Um, and they did that a couple of weeks before they had to; I think they actually had till next Friday to decide to put in the cert petition for the Supreme Court, and they chose not to do that at all, meaning that they're accepting the ruling in the Sixth Circuit, which went against them and was very problematic. I'm sorry, not Sixth, I keep saying Sixth, it was the Third Circuit. Got to keep those circuits straight, uh, so I apologize. And I think this is bad for anyone who is in the Third Circuit, because now 230 does not really apply to anything with algorithms in the Third Circuit, and I think that is bad, and we're going to see a whole bunch of really dumb lawsuits filed in there, and it's going to be bad for a lot of companies. But also interesting, perhaps more interesting, is in the Fourth Circuit. Again, I've got to keep my circuits straight now. In the Fourth Circuit, there was another case, which is the M.P. versus Meta case, which goes back to Dylann Roof, who was one of these horrible people who went in and shot up a church, you know, sort of a famous, awful story. And again, there was an attempt to hold Meta liable for certain things based on the algorithm and, you know, people having an account and all of those kinds of things. And here, the Fourth Circuit said, of course 230 applies, and of course Meta is immune from that. And there were a few interesting things about it. The Fourth Circuit is where the very first case that challenged Section 230, where Section 230 was an issue, was ruled; it's called the Zeran case. I'm not going to get into the history of it, though it is very, very interesting. And so that has been sort of the standard across the country, and there have been people who have been critics of Zeran, and a few years ago, again in that same circuit, in the Fourth Circuit, in a case called Henderson, they seemed to maybe reject Zeran, not explicitly, but came out with a ruling that sort of was different in terms of setting up a different standard for where you could get around 230, and there were worries that that would lead to a whole bunch of cases that found their way to get around 230. But here, in the M.P. versus Meta case, the appeals court, well, the majority of the appeals court, because there is a dissent, but the majority of the appeals court said no. The way Zeran works, and this is important to understand conceptually without getting too deep into the weeds, is it says 230 is designed to protect companies doing traditional publishing activities from being held liable as if they were the publisher. And this gets really confusing to a lot of people.
Often people who should know better. It is the traditional publishing activities that protect you from being held liable as a publisher. There's this belief, like the whole thing where it's like, if you're acting as a publisher, you shouldn't get liability protections. But no, 230 actually stands for the exact opposite argument. It is saying that because it's user-generated content, and you as a platform don't have the ability to review and understand the legal implications of each and every piece of content, for the activities you do that are just like any regular publisher, you don't get held liable as if you are a regular publisher, because you have no way of knowing all the specifics of that content. And that's the point of the Zeran case and other cases that followed. And so what the M.P. versus Meta case is saying, which is important, is algorithmic recommendation is a traditional publisher activity, and therefore 230 applies and you get those protections.

Ben Whitelaw:

Mm. Okay.

Mike Masnick:

And so it is sort of setting up a circuit conflict now between the Third Circuit and maybe a couple of other circuits as well, but mainly between the Anderson ruling around TikTok in the Third Circuit and now the M.P. versus Meta ruling in the Fourth Circuit. What will be interesting is if the plaintiffs in this case try to appeal this one to the Supreme Court. And I think, just from a purely emotional standpoint, this is probably a better case to go to the Supreme Court than Anderson versus TikTok, and maybe that's why TikTok chose not to request cert from the Supreme Court: that they felt, whoever the people are, that this is a better case to get in front of it. It's somewhat more reminiscent of the Twitter versus Taamneh and Gonzalez versus Google cases from a few years back, which were about sort of terrorist acts and whether or not Twitter and YouTube, in that case, could be held liable. And the Supreme Court sort of... they didn't address 230 specifically, but they were kind of like, wait, you know, you can't really trace terrorist actions to the fact that those groups also had social media accounts. And so there is this element of that that might play through. So I think this is a better case if it does go to the Supreme Court, and it could be a way to get the Third Circuit back into line with how 230 actually works. So it's an important case, and we'll sort of see how it plays out.

Ben Whitelaw:

What are the timelines for that, Mike? how quickly might this go to the Supreme Court?

Mike Masnick:

I mean, assuming the plaintiffs decide to ask for cert, they have, I forget the exact time, but they basically would have to file a cert petition fairly soon, within the next few months, I believe. And then there's a question of whether or not the Supreme Court takes it up. And if they decide to take it up in the next term, then they would probably hear the case, I would guess, late fall or probably early 2026, and then you would get a ruling probably in the summer of 2026, which would be the most likely. It could get pushed further than that for a variety of reasons, but, generally speaking, I think it would be a summer 2026 ruling.

Ben Whitelaw:

Interesting. And when you say the kind of emotional aspect, you mean because it's not related to TikTok and there isn't this kind of Chinese national security fury around it, it's more likely to be judged on its merits, is that right?

Mike Masnick:

It's, it's partly that. So there's two things to it. One is that there's all these other questions around TikTok and who's going to own it, and, you know, obviously the Supreme Court made it clear that they're distrustful of TikTok in general, which I think frames it in a poor light for the Supreme Court. But also the nature of the case itself: the TikTok one was about a child who did one of these challenges that they found on TikTok and ended up dying. And it's a very emotionally wrought, sympathetic one. Obviously, the Dylann Roof situation is also emotionally wrought, but in a slightly different way, in that it's more a terrorist-style action as opposed to a kid succumbing to a viral challenge. And so I think it's a little easier to separate out the actual legal issues and the implications of it than in the Anderson case.

Ben Whitelaw:

Okay. Interesting. Okay. That's helpful. I mean, I'm still very much learning about what the different circuits are, so this is good for me. You know, Third, Fourth Circuit, I'm getting there. Um, and Eric Goldman's blog on this issue, which we'll share in the show notes, is very good. We'll also try to include some of the other cases you mentioned there, Mike, if people really want to go into the weeds on some of them.

Mike Masnick:

I'll give you another little thing, which is, I don't know if this has been talked about, and I'm not sure if I'm even supposed to talk about it, but I'm going to do it because, what the hell. Um, very soon I am releasing, in partnership with some other folks, so I won't name them yet, that'll be what I'll say, a podcast all about the history of Section 230, which will go deep into the weeds on a whole bunch of this stuff. I've been interviewing people who were there, the authors of 230, lawyers, all sorts of people all around 230. Uh, it's going to be, I think, a six or seven part series. It should be out, I think it's supposed to come out, next month. Um, and so I might as well start promoting it now, but it'll go pretty deep into the weeds on a whole bunch of this stuff. I'm really excited about it.

Ben Whitelaw:

nice. And there'll be a, a Netflix version, I presume as well.

Mike Masnick:

We'll see. We'll see. Come, come talk to me, Netflix. We'll see.

Ben Whitelaw:

Okay, great. Well, we've got time for one more story. And this is a story that the students who've been joining us today have been very interested in, so I want to get their thoughts on it. And this is the big platform story this week, or certainly the biggest kind of drama related to a platform this week. And it's got everything: it's got the Super Bowl, it's got Kanye West, it's got, you know, failing to deal with Nazi content. It's like back in the days of Twitter, not too long ago, Mike. If people haven't seen it, I'll briefly unpack what happened. Kanye West, also known as Ye, Ye... I've never really known how to say that. Um, he bought a Super Bowl ad, local, not national, I think that's important to note, pushing people towards his, uh, website, where he was selling a bunch of products. At the time of the ad going out, the products were regular t-shirts, I think; it was basically, you know, nothing untoward. Very quickly, though, those products turned into a single t-shirt with a Nazi swastika on it. And, you know, it was very much like in the middle of the Super Bowl, a lot of traffic, a lot of people, a lot of interest, and all of a sudden, you know, you've got a platform policy issue at play. Shopify is the underlying e-commerce platform for Kanye West's website. And it took them the best part of a day and a half; it was only on Tuesday that they kind of de-platformed Kanye and took down his personal site. And the justification for doing so is what's really fascinating here, Mike. So you would have thought that actually this falls under a hateful conduct policy or some sort of hate speech policy. Actually, as was reported by a number of different media outlets this week, it's not. Shopify got rid of that policy last year in July. And so actually Kanye West's content got taken down on the basis of fraud. The Logic reports that the justification was that the merchant, Kanye West in this case, did not engage in authentic commerce practices. Can you believe that? That's what posting a t-shirt with a Nazi emblem on it constitutes now. And basically that's the reasoning for his de-platforming. So we have a situation where a major, major e-commerce platform is at the center of a massive, massive story like this. I wanted to bring in some of the students here. I think, Willa, you were really interested in this story and how it was kind of shaping out. The engagement around this story and around Kanye West generally is obviously massive. What did you make of it?

Guest:

Yes, so I had kind of a comment about whether divorcing profit, for the social media companies and these companies that sell offensive material like this, from endorsing hate speech or hateful content, hateful merchandise, would help push regulation more towards safety and promoting a safer, more equitable environment in media and communication for everyone. Or if the pattern of deregulation is now just kind of so ingrained in world politics, and in national politics in the U.S., that, you know, we're on the train ride and it's not going to slow down.

Ben Whitelaw:

The train ride that doesn't slow down might be the title of this podcast, Willa, I will say that. Um, just to note before you come in, Mike, it is worth noting that Shopify had a very, very good set of results. Um, and I'm, I'm not saying this is connected to them getting rid of their hateful conduct policy last year, but their profits were up 31 percent year on year in the latest filing. So Willa's got a point, Mike.

Mike Masnick:

Correlation is not causation, Ben. It's,

Ben Whitelaw:

I certainly hope so.

Mike Masnick:

Yeah, I, you know, I mean, right. There is this push, certainly among sort of the Silicon Valley investor class, that seems to believe that more hateful speech leads to more profits, which I think is nonsense, and I think that they will find that it is nonsense long term. I do think it is a good question to understand. One of the points that we've made for years, well, we've only been doing the podcast for a year, but we've made it on the podcast for a year and I've made it for years as well, is that actual safety actually is good for the bottom line, and you can't just dismiss this. It's the Nazi bar issue: if you present yourself, in this case literally, as being willing to host Nazi content, people begin to assume that you are willing to support Nazis, and therefore they might not be willing to support you. And that hopefully leads to less business in the long run.

Ben Whitelaw:

Yeah. Talking of Nazis, I might bring Aisha into this, because she noted very cleverly that actually this wasn't the only Kanye West drama, Kanye West-related drama, this week, and he had some ongoing beef in the background on X. Aisha, maybe you could explain kind of how you feel this plugs in.

Guest:

Yeah. So I saw that on X, he made some, again, hateful comments, which were called out. Um, and he credited Elon Musk, who's the owner of X, for allowing him to spew this hate speech and use the platform basically as a soundboard to, um, get this communication out. And in our class, we've been talking a lot about the First Amendment and free speech and what constitutes essentially good free speech and not-so-good free speech, and what are the barriers in between. And I think Kanye West really speaks to this right now.

Ben Whitelaw:

Yeah, I would agree with that. And, you know, I think the irony is that he took himself off X after this all happened, presumably doing a job for all of us, you know, he's had enough.

Mike Masnick:

Yeah, but I do think there's a larger point there, and it was a really good and thoughtful question, which is, you know, one of the things that we've seen over the last two-plus years since Elon took over Twitter and turned it into X is that he has been pretty explicit that he is fine with hate speech on the platform, and with encouraging that kind of speech to be platformed and to be promoted and to be shared, claiming that this is about free speech. When the reality is, as we've seen with X and its dwindling user numbers, it's still a lot of people, but it has gone down, is that that drives some people away. It drives a lot of people away, because a lot of people don't want to be in that space. And I think that wraps around to the topic that we've been talking about the whole day. And I think it opens up the discussion that we're having now, which is: is free speech really about allowing hateful speech, or is it about figuring out the way to allow the most people to feel comfortable on your platform to speak? And this is a perfect example of that. Yeah.

Ben Whitelaw:

The Shopify statement was also talking about kind of objectivity and subjectivity, because one of the things that was in the statement that was leaked as part of this story was the fact that, you know, the platform didn't want to be seen as being overly subjective. Um, it wanted to remove as much subjectivity as possible, is what they said. And I know the communication students at American University have been thinking a lot about this. So where do you think this sits in a broader thread of trying to be more objective and less subjective, Gabriel?

Guest:

Yeah, so if you'll let me get into the theoretical weeds here, because what are we as graduate students good for, if not getting into the theoretical weeds, and I promise I'll bring it back around in the end. This sort of reminded me of a conversation we were having in my methodology class about how we can conceive of knowledge on sort of a spectrum between two, uh, epistemological frameworks. One of those being positivism, which is, uh, to say that there is an actually existing truth and a shared common knowledge outside of each of ourselves. And on the other end of that spectrum we have constructivism, which is more relative, thinking about how we each create knowledge in and of ourselves, and we don't necessarily have this shared sort of experience. Now, negotiating these two is a framework called post-positivism, which tries to amend positivism to acknowledge the subjectivity of each and every one of us. And so it's not relativist per se, but it is really focused on acknowledging each of our subjectivities and trying to, uh, negotiate reality, in a way similar to how cultural studies theorists like Stuart Hall were talking about in the latter 20th century. So, in that way, we can think about how the Shopify issue is hewing towards positivism and trying to establish some kind of real, actually existing objectivity, and they're basing their arguments in that sort of sense, and maybe that gives them a stronger legal framework when their general counsel Jess Hertz is saying that it's not a good faith attempt to make money and that factored into their decision. They're trying to remove as much subjectivity as possible. They are really reifying this concept of objectivity, which post-positivists understand is sort of a fallacy, or at least more complicated than it would seem from a positivist perspective. I think I got that right; uh, if not, retroactively deduct points from my grade.

Ben Whitelaw:

That was great. I mean, again, really, really helpful. I think we could get you on to talk about that for a whole episode. But, you know, the kind of post-positivism of platforms is something that we're seeing all the time in a way, and the push and pull between positivism and post-positivism, as Gabe has explained it, Mike, is a really fascinating one that we're seeing each week when we talk about these issues.

Mike Masnick:

And I think that was really important, and I was glad that you called out the quote from Shopify's general counsel trying to explain this and basically arguing that this whole decision of taking down the thing, you know, wasn't about the speech, but about their failure to be authentic and that it might lead to fraud, and him saying that opinion doesn't factor in here. It's like, opinion always factors in, right? And this is an important point that I think everyone in trust and safety sort of recognizes: you do want to set rules, and you do want to set rules that can be repeatable and understandable and explainable, because otherwise it's just chaos. You can't do trust and safety by vibes. But at the end of the day, when you have to make the final decision, there are a lot of cases that have a really subjective component to them, and so you have to understand that. And Shopify's general counsel said they're trying to remove as much subjectivity as possible, and to me, that's a sign of somebody who either doesn't understand what they're talking about or who is misrepresenting the situation. There has to be an element of subjectivity in all of this, and people who work in trust and safety know that. And as much as they want to try to remove as much subjectivity as possible, make it as objective as possible, have clear rules, have clear ways to apply them, the actual application of those rules always involves some element of subjectivity. And I think it's good when people can actually admit that rather than trying to deny it, because otherwise you come up with these really complex, silly arguments for why you pulled down a t-shirt that was just a swastika.

Ben Whitelaw:

Yeah, and you're met with the full force of Gabriel's explanation of post-positivism, which I love; it might be one of the smartest things that anybody's ever said on this podcast.

Mike Masnick:

Yeah, that's, that's the high point. We are never getting back there again.

Ben Whitelaw:

Awesome. That is probably an appropriate point to wrap up today. We've touched on a range of different stories, as ever. Big thanks to the students in the Media Law and Policy class at American University; massive, massive thanks to them for their inputs and for their thoughtful questions, to Aram for setting that up, and to Mike for being the bridge there. We'll do this all again next week. We won't have any students with us, but...

Mike Masnick:

We will not be doing this next week. We are off next week, but we will be back in two weeks.

Ben Whitelaw:

Correct. He's right. We're on holiday. But we'll see you in two weeks and we'll do this all again. Thanks very much for listening. Take care, everyone. Appreciate you listening. Bye bye.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
