Ctrl-Alt-Speech

The Comedy of Errors

Mike Masnick & Ben Whitelaw Season 1 Episode 48

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Mike, I don't really subscribe to the idea that Elon Musk is some kind of, you know, motivational genius. I mean, I certainly wouldn't, you know, survive at one of his companies, but I'm still gonna borrow the email that he sent to federal workers this week as the start of our podcast today. I'd like you to please reply with approximately five bullets of what you've accomplished this week.

Mike Masnick:

I, I feel threatened then.

Ben Whitelaw:

You may not be back on the podcast next week if you can't.

Mike Masnick:

I feel like my job is on the line. Um, well, I was going to say that five of the things that I did in the last week was write about all of the nonsense that Elon Musk is currently doing in our government. At least five times. I was about to count, but, but I'm gonna assume at least each day this week I have written about Elon Musk, and I think we're gonna try to avoid talking about Elon this week on the podcast.

Ben Whitelaw:

Yeah. The, this is gonna be the extent of it.

Mike Masnick:

Okay. Well, can, can you then reply to me with approximately five bullets of what you accomplished last week?

Ben Whitelaw:

Well, yeah, I was obviously not here last week. Um, neither of us were. I was on a break, and the five things I did on my break, Mike, were going for a walk in the rain in Cornwall, sitting in a hot tub, eating a pasty, eating delicious fish and seeing a whale, uh, all of which, all of which would get me sacked by Elon Musk for lack of productivity.

Mike Masnick:

I'm, I'm sorry, Ben. This is, I'm, I'm gonna feed this to the AI and the AI is going to say you were not very productive last week.

Ben Whitelaw:

It was nice knowing you. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's February 27th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. This week, we're talking about regulatory fines, content moderation lows, and one woman's fight to get non-consensual imagery of her taken down from the web. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm joined by a somewhat ailing Mike Masnick, who has dragged himself from his deathbed. You've been a bit ill, Mike,

Mike Masnick:

a little, a little exaggerated, but if, if my voice does not sound quite up to my, usual levels, I'm, you know, I'm a little, a little under the weather, you might say.

Ben Whitelaw:

Man flu. A touch of man flu.

Mike Masnick:

I just, I, I, I,

Ben Whitelaw:

that.

Mike Masnick:

I just, just, just a little sick. I'm all right. I'm all

Ben Whitelaw:

Okay. Okay. Good to see you again. We've got a lot to get on with today. We were talking just before we started, Mike, about how much we enjoyed our last episode with the students from the American University. And we got some great feedback from listeners about how perceptive they were and how much better they were than us and how, how everyone wants to listen to them again. So we are thinking about doing a bit of a Q&A episode, right? We're gonna get, we're asking our listeners for some questions and some input so we can, we can do something similar in the future.

Mike Masnick:

Yeah, it, it's, this is an experimental idea. It's literally an idea we had about 10 minutes before we started recording, so this has not been particularly well thought out. But if you have questions that you think would be interesting for us to answer and discuss, it might be a good thing to send in, and we'll collect them, and at some future date, um, maybe in a few weeks, we'll maybe do, whether it's a segment or an entire episode, I don't know. We'll see. I think it would be a fun thing to do, kind of a Q&A sort of episode.

Ben Whitelaw:

We're coming up to our 50th episode of Ctrl-Alt-Speech as well, so it'd be nice to change the format up a little bit, and we always love it when listeners get in touch. Um, so yeah, whatever questions you have, it might be something very current about some big regulatory story or something you've been wondering but haven't been able to ask anybody. Now's your time to get in touch and, and ask one of us. So, moving on to today's stories, Mike, we've got actually a slightly different tone this week, because the last few weeks have been a bit depressing. We've talked a lot about,

Mike Masnick:

Months, last few months, Ben.

Ben Whitelaw:

is that how long it's been? Um,

Mike Masnick:

Years, decades, centuries.

Ben Whitelaw:

Yeah. It's fair to say that there's been a fair bit of, uh, you know, we've been a bit down in the dumps. There's everything from the kind of Mark Zuckerberg speech in January to some of the Trump announcement stuff. And DOGE. It's, it's been a bit of a bummer. So we're gonna start with, I guess, what's been a bit of a highlight this week, right? Which is seeing content moderation writ large on one of the biggest TV shows in, in the US, one of the biggest kind of comedy shows, and content moderation got the John Oliver treatment. And I will say, I'm excited to hear what you think about this, Mike. I have not seen it

Mike Masnick:

Okay.

Ben Whitelaw:

I myself have been prevented from, from watching it, 'cause I'm based in the UK. John Oliver, despite him being a Brit, does not allow us Brits to, to watch his

Mike Masnick:

He's an American citizen now, man. So he's, he has left you behind.

Ben Whitelaw:

yeah, we are long gone. We are long gone. Uh, and don't I feel it, um.

Mike Masnick:

Does, is it, is it licensed? I'm sure it's licensed somewhere in the UK. Does it just show up later or what's the,

Ben Whitelaw:

I have a feeling it's on Apple TV. I haven't seen if it's live yet, and, uh, other streaming channels are available. So I will plan to watch it at some point, I'll, I'll boot up my VPN. But I've seen a lot of reaction to it, and people have been saying that it was a pretty good, pretty nuanced take on content moderation, particularly with regards to Meta. What did he say? How did you feel about it? Talk us through it.

Mike Masnick:

Yeah, I mean, the simple fact is whenever we see any sort of mainstream entity doing a story on content moderation, I sort of hold my breath, clench my teeth, worry about how they're going to miss the important nuances and just get stuff wrong and sort of present an unfair or unrealistic picture of a very complex, nuanced subject. Obviously, we talk about it and we try and break it down every week, and it's very, very rare that a mainstream news organization in any sense does it well. And yet, you know, John Oliver and his show are sort of famous for doing deep research. It is a comedy show, and yet they have like real reporters and fact checkers on staff. A few years ago, for a totally different story, they had interviewed me, and it was one of the most thorough interviews I've had, actually ended up being multiple interviews. They were trying to understand a particular story. I ended up having nothing to do with that story, I wasn't, you know, it wasn't mentioned in that eventual story, but they spend a lot of time talking to actual experts, then figuring out how to make it funny. And so, when that happens, you hope that you'll get a good result. And this was an amazing result. It is like the most accurate, thoughtful, mainstream portrayal of the challenges and nuances around content moderation that I can recall ever seeing. Plus, absolutely hilarious. Right. So, I sort of talked about it.

Ben Whitelaw:

Sounds like an episode of Ctrl-Alt-Speech.

Mike Masnick:

Yeah, exactly. Yeah. If, if we were funny, Ben, I mean, it has the British accent, so there's that side of it. It has deep, thoughtful nuance about content moderation, so there's that side of it.

Ben Whitelaw:

yeah. Just not, just not together.

Mike Masnick:

He's, he's, you know, good production quality and really, really funny. And like, I can think, you know, like, hey, sometimes I can say something that's kind of amusing, but like, not on this level.

Ben Whitelaw:

Yeah. Okay. Okay, so we've got, we've got a bit to go, but so you were pleased with, the kind of way he unpacked some of this stuff.

Mike Masnick:

I mean, honestly, a lot of it really felt like, very much in the style of the things that I write, and it, it felt like, if John Oliver were doing Techdirt, it was kind of the way, way I thought about it or, you know, the equivalent of if John Oliver were doing Ctrl-Alt-Speech.

Ben Whitelaw:

And John, if you're listening, we'd love you to come on the podcast.

Mike Masnick:

Yeah. Guest host. Guest host. I think that'll work. And so, you know, clearly they spoke to a lot of the right people. We've heard from some of the people that they had spoken to, I'm not gonna reveal any, any names or stuff, but like really thoughtful, knowledgeable people in the space were the ones that they consulted, because they actually do the research and they wanna do a thorough job. And my understanding was they actually started this a while ago, and they were actually thinking of doing an episode on this in a previous season, so years ago, or a year ago at least, and had done a whole bunch of research, and then really finally only got it to come into place once Mark Zuckerberg made his decision to upend, get rid of fact checking and lower the standards on content moderation. And that was kind of the hook that they used. And so throughout the episode, that's kind of the through line, is what Mark Zuckerberg is doing. But then that leads to a discussion about content moderation, why it's important, Section 230, what happens if you don't have content moderation, why everybody actually needs it, why it's good, why it's not censorship. It mocks the idea that has been expressed by a bunch of people that content moderation is just censorship of conservative voices. It has a clip of Evelyn Douek, who many listeners will know, talking about how if you don't do content moderation, your entire website gets filled up with porn and diet pills. Uh, it has a clip of Katie Harbath, who lots of our listeners will also know, who's another, you know, wonderful person in the space, talking about Meta's policies around, in that case, the Nancy Pelosi video. This was a, you know, a famous one from a few years ago where somebody put up a shallow fake of Nancy Pelosi, where they basically slowed down her speech to make it sound like she was slurring and possibly under the influence of something. And Nancy Pelosi got really mad at Facebook for not taking down that video. And I had written about this at the time, where I was like, that's not a reason to take down the video. And Katie basically made that point in the clip, I forget which, I don't know what show it was actually from, where they got the clip from, saying like, yeah, you know, Nancy Pelosi was really mad. And a really clear example of where John sort of got the nuance of this was when he came back from that clip. You know, it's presented in this way where, like, you should feel mad at Meta for not taking down this obviously misleading video of Nancy Pelosi. But John comes back from the clip and he says, you know what, that's not a good standard. Like, "Nancy Pelosi is mad at it" is not a reason to remove content from the internet. Because then, where does that stop? Then it's like any politician who doesn't like the way they're presented, should we remove it? No. And there are other clips in there that I actually thought were really good, like they have some clips of, like, Meta videos from back when Meta was actually trying to show the complexities and difficulties of content moderation and the nuances of content moderation. There's this great clip, I don't even know what it's from, I had never seen it before, of someone within Meta writing on a whiteboard about the intent to mislead versus the truthfulness of content and creating this four quadrant grid where, sometimes, it's not truthful, but there's no intent to mislead. That's just somebody being wrong on the internet.
That happens, you know. Sometimes it's where there is intent to mislead, but it is still truthful, which is sort of cherry-picking data, which is like a really complex area, and what do you do about that? And then there are cases where there is high intent to mislead and it's not very truthful, and those are hoaxes, and, you know, what do you do there? And it's sort of explaining all of the thoughtfulness and nuances that go into this. And so it was great to have that displayed on a mainstream television show.
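To make that whiteboard grid concrete, here's a minimal sketch of the four quadrants Mike describes. The quadrant labels and suggested responses are assumptions made for the sake of the example, not Meta's actual policy or code.

```python
# Illustrative only: the two-axis grid described above (truthfulness vs.
# intent to mislead). Labels and responses are hypothetical.

def quadrant(truthful: bool, intends_to_mislead: bool) -> str:
    if truthful and not intends_to_mislead:
        return "ordinary speech -> leave up"
    if not truthful and not intends_to_mislead:
        return "honest mistake -> someone is just wrong on the internet"
    if truthful and intends_to_mislead:
        return "cherry-picked data -> hard case, maybe add context or labels"
    return "hoax -> high intent to mislead, low truthfulness"

if __name__ == "__main__":
    # Walk every combination of the two axes and print the quadrant it lands in.
    for truthful in (True, False):
        for misleading in (True, False):
            print(f"truthful={truthful}, intends_to_mislead={misleading}: "
                  f"{quadrant(truthful, misleading)}")
```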

Ben Whitelaw:

Yeah. And how, how did it feel as somebody who's been like covering this topic for such a long time to see that in that way? Because, and I, and I'm sure lots of listeners have seen it, unlike me, they have spent a lot of time thinking about this stuff. They spend a lot of time doing this stuff. How did you feel like seeing this be explained in that way to a much bigger audience than it's used to?

Mike Masnick:

And I, I should note, like, to, to reveal some, some level of personal bias, it does mention me. He doesn't mention me, but there is this screenshot. He said, some people say, uh, that content moderation is impossible to do well at scale, which is my impossibility theorem. And he has a picture of Techdirt and the headline, Masnick's Impossibility Theorem. So I heard from a few people who saw that, who were like, hey, Mike, you're on John Oliver. It's like three seconds. It goes by so fast. Somebody else I know, I had told them, like, oh, hey, I was on, and they watched it, and they looked away at that moment, and they were like, I didn't see you. It's really, really quick.

Ben Whitelaw:

Did you see a big spike in Techdirt

Mike Masnick:

No, no, that, that, that didn't drive Techdirt users. I, I'll tell you that much. But, um, I mean, honestly, what I thought was, this is such a great way to explain content moderation and trust and safety to people who don't understand it. Right? It's well done. It's funny, it's entertaining. It's something that people will watch. For years, sort of my go-to for, like, if I wanted people to listen to something to understand content moderation, besides Ctrl-Alt-Speech, of course, I would tell them to listen to, there was a Radiolab episode, and then there was an updated version of it, I can't remember, like six or seven years ago, where they did this really thoughtful sort of deep dive on trust and safety content moderation at Facebook. It's just like really well done in terms of laying out, oh, you think this is easy? Well, here's an edge case. Oh, okay, now you have this edge case and you make this rule around the edge case. Here's the edge case on that edge case. And sort of like walking you through how nuanced and difficult it is and how impossible it is. But that's still asking people to listen to a podcast for an hour to understand it. This is about half an hour, but it feels much faster. It's funny, everybody loves watching John Oliver. Maybe not everybody, but, you know, most people sort of get it. It's funny, it's entertaining, and it clearly explains the trust and safety challenge. It doesn't go too deep. There's all sorts of nuances and all sorts of things that it can go deeper on, but just as like a general overview of like, look, here are the real issues that are going on. Here are the challenges, here are the nuances. This is not as easy as anyone makes it out to be. On, like, the regulatory front, it doesn't go very deep. It does mention Section 230 twice. It shows the cover of Jeff Kosseff's book, which is very, very cool. And towards the end he says, you know, like, well, what do we do about this? And some people wanna reform or repeal Section 230. And he says, I've yet to see a proposal for that that wouldn't make it worse and wouldn't allow for more government-enabled censorship, which is exactly the point that I keep trying to make. And so that was like really encouraging and exciting, because it's so easy for so many people to just say, well, this is bad, the answer has to be we change 230 and we'll sort of fix content moderation, which is never actually the case. And so, as a, like, here's half an hour of something fun to watch to understand it, I think people within the field hopefully can use this to tell people around them, like, this is my job, this is what I do, this is why it's important. And I think it's, it's a really useful tool just to get people to better understand these issues.

Ben Whitelaw:

Yeah. And Jeff Kosseff, for those who don't know, is a very eminent cybersecurity professor. The book you are referring to is The Twenty-Six Words That Created the Internet. Right. He's written a few, yeah.

Mike Masnick:

He, he's, he's written a few books, but that's the one that they show on screen when he is talking about Section 230. So,

Ben Whitelaw:

Okay, cool. So, it's a win for us in the sense that the issues are kind of presented in a way that we would, as people who know a bit more about the space than others, feel comfortable with. Is there anything that he doesn't quite get right or that you feel like you could go a bit more into?

Mike Masnick:

I mean, there's always stuff, right? I mean, look, you know me, right? I can spend hours going deep on any one of these points, right? And so there's always things where it's like, you could go deeper on Section 230 and why it's so important, and there's all this other stuff. But, like, every time I see any sort of mainstream approach to these stories, they always get something big and important wrong, whether it's an error of omission or something like that, and I didn't find that with this. There's no glaring error. There's no glaring thing that's missing. He could have gone deeper on some stuff, some of the jokes, perhaps, like cheap jokes that could have been replaced by, like, a more thoughtful analysis of stuff, but really, tonally and thematically and in the overall breadth of what he was covering for a half an hour segment, it's, it's about as close to perfect as you could imagine. I mean, it's really good for people who are in this space. Like, I don't know, they should try and get him to be, like, the keynote of the next TrustCon.

Ben Whitelaw:

Yeah, yeah. Yeah. Excellent, and we'll, we'll come back to TrustCon a bit later on in the podcast. One thing I think I really like about crossovers like this, I guess, Mike, where our kind of niche topic breaks out beyond the sphere that we are used to being in, is that it's likely to engage people in new ways. And it's not just TV where I think that this has potential, but I've been seeing an increase in content moderation being the plot line in a lot of different other mediums. So for example, you know, fiction. We've seen a whole bunch of stories over the last couple of years focus on the work of content moderation and people reviewing stories. There's one that I read called We Had to Remove This Post, by, I think, a Dutch author called Hanna. Might have got that pronunciation wrong. There's been a whole suite of theater productions as well, including one that's just about to start in London, which I've got tickets for. I previously had a chat with a guy called Ken Urban, who's a director who put a play on in New York about two or three years ago called The Moderate. And there's a subsequent play that's been created by somebody else that I know has been, I think, on in New York as well. So we're seeing kind of content moderation as an idea and as, as a narrative break out into these other areas.

Mike Masnick:

There's another one too, which I haven't seen yet, but people were just telling me, like, in the last week, that Mythic Quest, which is a TV show on Apple TV, has a plot line, I think they're on their second season, I've never seen the show, but there's a plot line that's all about content moderation. And people were yelling at me, like, I gotta watch it. And, and they seem to suggest that it's actually, like, fairly well done. But, but I haven't seen it, so I can't speak to it. But that's another example. Like, it's a very popular show from what I understand.

Ben Whitelaw:

Yeah, and I, I, I think those, you know, much like the games that you've created to try and help people understand nuances, you know, these are other ways that help the general public do that.

Mike Masnick:

that's the one thing that John Oliver could have done. He could have shown our game one, one of our games at least.

Ben Whitelaw:

He would've said, some people have made this game.

Mike Masnick:

Yeah. Yeah,

Ben Whitelaw:

Maybe you must have written, maybe you've written something about it in the past, and he now just hates Techdirt. Who knows?

Mike Masnick:

Yeah. Yeah. It must be, must be. I will say the one show, he did a show, I don't know if it was last year or two years ago, about KOSA, about the Kids Online Safety Act, that I criticized because I thought that was, like, the one case where I'd seen him do an episode where he didn't take into account these nuances. And it was a very sort of pro-KOSA episode, without recognizing the potential harms of KOSA. And that was, like, the one episode of his show where I was like, oh man, I'm so used to you doing this careful, nuanced, thoughtful research that this feels like it came up short. So maybe he's mad at me because of that.

Ben Whitelaw:

Yeah, sounds about right. Well, if listeners saw the segment from John Oliver, if they've taken a look and they're interested, you know, they, they've reacted to it, I'd love to hear what people thought. You know, get in touch with us at podcast@ctrlaltspeech.com. That's C-T-R-L-A-L-T speech dot com. Yeah, it'd be lovely to hear what other people thought about it as well. Slight change from the, the doom and gloom. We'll get back to kind of slightly more serious matters now for our next story, Mike, which is a story that I've been really interested in and, and noted this week, which is an eSafety Commission story based out of Australia that is essentially a big, big fine for the platform Telegram for a delay in submitting some information as part of a regulatory request. I'm gonna take you back to, a few years ago now, Mike, where, in 2024, this is the time when Trump was selling Bibles, P Diddy was having his, his house, uh,

Mike Masnick:

Ancient history.

Ben Whitelaw:

History. You know, Kate Middleton was potentially sick, but we weren't sure. A long time ago. Right. And back then, the eSafety Commission sought to get some information from six platforms about how they were dealing with terror content and violent content on their platforms. And they also made specific requests to two of those platforms for how they were dealing with child sexual abuse material. Now, this kind of request for information was part of the Online Safety Act that they passed several years before that, and it's very similar to the kind of requests for information that big regulators do all the time, including under the Digital Services Act, which we talk about a lot on the podcast. Now, there was a bit of a deadline for that information in May of last year, and one platform failed to get in touch with the eSafety Commission by that time. And you won't be surprised to hear that it was Telegram. Um, we've talked about Telegram a lot. I'm gonna bring you in on some of the timeline here, Mike, but Telegram only responded with the information requested by October, some 160 days after the fact, and it has subsequently been hit by a big fine of $957,000 this week, which is the biggest fine of its kind as levied by the eSafety Commission. So it's interesting for a number of reasons, and I'm, I'm gonna kind of bring you in. I think this is Australia in an election year, okay. This is important because everyone's going to the polls in Australia in May, and as we've talked about in recent episodes over the last three or four months, there is a big push in Australia to essentially be seen as the politician or the, the party bringing big tech to heel. That's something that's been happening in Australia for a long time, but we saw that particularly at the end of last year when the law was passed for limiting social media for under-16s. You know, this, this is an ongoing issue where social media platforms are, are not being responsible for the harms on their platforms. And we almost saw it in a, a storyline as well that took place just before Christmas, where fines were almost introduced in Parliament for mis- and disinformation in Australia; that never made it through. But it's part of a pattern of essentially trying to kind of bring big tech to heel, and this is the latest part of it. It's a big amount of money, and Telegram have already said that they are going to appeal it, and we may see that happen over the next kind of 12 to 18 months or so. But big amount of money, Telegram under the spotlight again, and essentially a, a kind of win for Australia in terms of trying to enforce the Online Safety Act that has been in place now for a couple of years. What did you make of it, Mike? What's your reading of this, and where do you think Pavel Durov's arrest comes into it?

Mike Masnick:

Yeah, I mean, there's a few things here. So one, like, it isn't that big of a fine, really. I mean, I, I don't know, for a company the size of Telegram, it doesn't strike me as that big of a fine. But yeah, I mean, I do think the timing of this is the issue, right? So they were supposed to respond by May and they ignored it, and that was Telegram's thing, right? Telegram always ignored all of this stuff. They didn't report anything to NCMEC, they tried to avoid reporting stuff to the EU, all of these things. And then at the end of August, suddenly Pavel Durov is arrested in France, and then suddenly they're much more transparent, they're suddenly talking about stuff, and what, six weeks later, they're suddenly filing the thing in Australia that they had ignored for four months before that. Gee, I wonder why. What possible motivating factor could have entered into the, the calculus at that point? Right. I mean, it's so clearly driven by his arrest that they were finally responding, and now, now they're sort of trying to pretend that they were acting in good faith all along, you know, saying, oh, of course we did it, we just did it late. I don't know that they ever even gave an explanation for why they did it so late. The reality was that they were thumbing their nose at governments around the world, across the globe, all of this time, until he got arrested. And then suddenly they started to take some of this stuff seriously and respond to governments in a more serious manner. And so, you know, I mean, I guess they're gonna appeal it because they feel that they have to appeal it. But I'm pretty sure that they can afford the million dollar, less than a million dollar, fine and move on if they have to. The real question is kind of what happens after this. Like, do they actually change their policies in any meaningful way? Is it just that they start to be more, you know, continue to be more transparent about these things? I think that's gonna be the more interesting thing. Because they were clearly just trying to ignore governments and basically saying, you know, governments, come get us if you want. And now some of the governments are. And, and, you know, I think the arrest really is the motivating factor.

Ben Whitelaw:

Yeah, I guess you're right in saying that the question is what happens next, and what do we gain from levying that fine, or, you know, what does the eSafety Commission get from doing so. I mean, it's interesting that the information that the platform submitted as part of this transparency process has appeared in a report, and this report came out just this month. It's called Behind the Screen, and it's subtitled the reality of age assurance and social media access for young Australians. Now, this isn't only the information that has been sought from the platforms, it's also a mixture of surveys from young Australians about their use of social media, and it's kind of setting up, I guess, some evidence and trying to explain the thinking that will inform the eSafety Commission's plan to police the under-16 social media law that's gonna come into place next year. So that report doesn't have any information about Telegram, which I guess is, is an issue. It's a bit of a black spot. But it also doesn't have any information from X slash Twitter, because in a strange foreshadowing of this situation, Twitter slash X decided that they weren't going to reply to a similar request two years ago. And its claim was actually, the eSafety Commission wanted information from a company called Twitter; we're now called X Corp, and so we don't have to reply. So this is like the second, you

Mike Masnick:

And now they don't have to reply because otherwise the US will impose tariffs on 'em.

Ben Whitelaw:

Yeah, exactly. Yeah, because then I'll get, you know,

Mike Masnick:

Wait, wait. This is a musk free show. We're not, we're not talking about Musk.

Ben Whitelaw:

Stop it. Stop it. We promised. Um, so, so this has happened before. That legal process has taken a long time to resolve itself. It's still ongoing in the background. I don't think it will be as problematic with Telegram 'cause they can't make that claim. But it, but it's that jostling between platforms and regulators, isn't it? The kind of battle for dominance as to, you know, what information is provided and to whom and when.

Mike Masnick:

Yes. And we're seeing that play out globally, right? I mean, this has been the big story for the past, you know, decade, effectively, of different regulatory regimes talking about trying to get big tech under control, for both good and bad reasons, right? And so that's where the concern is. This is why I've always been a little bit critical of a lot of these regulations, because it is also justifying problematic demands across borders. And that's, that's where it gets tricky. There's another story which I had mentioned, which we weren't going to do, but I'm just gonna bring it up really quickly because I think it relates to this, which is that this week also, Brazil tried to order US companies to take down content on Bolsonaro, who's been charged with trying to facilitate a coup in Brazil. And so there was, like, a court battle where the US court said that the American companies don't have to take it down. But we're constantly seeing these kinds of jurisdictional battles over who gets to control global information. I understand being able to control information within your own borders, but when it gets to global information, it begins to get a lot trickier. And yet we're seeing that around the globe as all of these different countries and their regulatory regimes try and sort of battle it out. And the companies, you know, have to figure out who they're actually going to deal with and how they're gonna deal with it when those things conflict.

Ben Whitelaw:

Yeah, it definitely, you know, will be claimed as a win by the eSafety Commission. You know, how it pans out between now and May and the election remains to be seen. I expect we'll, we'll have more stories related to Australia and, and how they're trying to deal with big tech before then, because, you know, Australians want some of the big tech platforms to be brought to heel, but it's not as simple, simple as that.

Mike Masnick:

And it's good messaging, right? From a political standpoint, you understand it. It's good messaging that we're taking on big tech. It's sort of a big amorphous enemy to yell at, which I think actually leads a little bit to the next story that,

Ben Whitelaw:

Yeah. Let's go into that. Let

Mike Masnick:

Which is, you know, this is also true in the US, right? I mean, there's been this effort to yell about big tech for the last decade, and this is on both sides of the aisle, uh, the political aisle, in the US, though for totally different reasons. And so in this past week, the FTC, and we talked about the new FTC Commissioner, Andrew Ferguson, who had said, like, when he was lobbying, like very publicly lobbying, for the job, more or less put out this idea that as FTC Commissioner he could go after social media companies, big tech, for their moderation practices, which is nonsense. I mean, it just is a complete and total misreading of the way the First Amendment works and the way the FTC's authority works. And yet he opened up this comment period, an inquiry into tech censorship. And the thing that strikes me about it, you were talking about how there's all these efforts, especially in election years, to bring big tech to heel. And my question here, and we don't need to go too deep into this, I think we can explore it much later in the year once these comments are all in and sort of the FTC decides what the hell it's gonna try and do, but the US government has already successfully brought big tech to heel. Like, who is he going after? That's the question that comes out. Like, there's this element of this request for comments that is, to me, them fighting the last war. The GOP, the MAGA world, has won, right? They beat, obviously, like, Elon Musk, we're not supposed to mention him, but Elon Musk is totally on their team. He's running their team, like, let's be clear about that. Jeff Bezos is completely on board now, there's a whole bunch of stuff going on with him. Mark Zuckerberg, completely on board with this. Like, who is the big tech that is anti-conservative now that they're going after? They, they don't exist anymore. They have all been brought to heel. They've all kissed the ring. They're all doing exactly what Trump and Musk now are demanding. So what does Ferguson think he's doing? It's, it's so much like they've built up this totally false, you know, whatever, made-up monster of big tech in terms of what it was doing, attacking conservative voices, which was always nonsense.

Ben Whitelaw:

Yeah.

Mike Masnick:

And that was all part of working the refs, which worked, because none of them will do anything now. They're all listening to what Trump is saying and following him and, and his lackeys in terms of how they treat certain kinds of content. What is he investigating now?

Ben Whitelaw:

Yeah. Yeah. I mean, it's, it's true, isn't it. The picture of Donald Trump being inaugurated, surrounded by all of the big tech CEOs, suggests that the co-opting has worked. You know, they're on side, they're there. I mean, it makes me, makes me kind of think back to our, you know, the good old days of talking about Thierry Breton and the kind of theater, the theater that surrounded the back and forth between him and Musk, trying to make Musk pay attention to misinformation during the US presidential election. You know, there's a kind of theater about it that, again, probably ties the eSafety Commission story to the FTC one, and, and that goes back a long way.

Mike Masnick:

I mean, it's just, people want to be able to say, you know, we're tough on big tech. It's just like, it's a marketing message, right? It's a political message. We told you that these people were bad, and now we're, you know, we're gonna do something about it. Even though it's, like, it's never been realistic and it's especially not realistic now, but it's motivating to the base of low-information people.

Ben Whitelaw:

but then what? What do we do? You know, because.

Mike Masnick:

About which part?

Ben Whitelaw:

About the kind of, because this is regulators doing their best, you know. The fines aren't huge, as you say, they're not gonna make a massive dent in revenues for these platforms. But I dunno about the idea that it's done and dusted, that big tech and MAGA have sided with each other and there's no hope, like.

Mike Masnick:

Uh, I mean, look, let's be clear, right. There, there are good faith attempts at regulation and there are bad faith ones, right? And so we shouldn't combine the two, and we shouldn't be blind to the fact that, like, in some cases it's, it's more of a gray area, right? Some cases it's this mix of, like, we just wanna appear tough on big tech because it's an election year, but then there are, like, legitimate things to be concerned about, like Telegram not responding to these things. Like, that's legitimate for Australia to call out and potentially fine them for. I'm not saying that's not true. But it's like, it just opens things up. The willingness of many people to ignore the difference between the good faith efforts and the bad faith efforts, I think, is worth calling out and being very clear on. And so I'm not saying do nothing and that governments shouldn't be thinking about this, but I do think, like, we have to be aware of how much this effort of, like, being tough on big tech is being co-opted by authoritarians as well. And, like, when you do that, you have to be really thoughtful about how you do it, because other people are going to use the same thing to justify their own actions. And so that's more the point that I'm making. And we're seeing that with the current FTC. There's no evil here that he's actually going after. He's just sending a message, which is, like, nobody else do this, nobody else try to block whatever conspiracy theory we're gonna put forward, because I can use the power of the government to come after you. And we've justified that in some of the ways that everybody has talked about big tech for the last decade.

Ben Whitelaw:

Yeah. And so just to, be clear on what happens next, the inquiry's open until May, what can we expect to happen then?

Mike Masnick:

I mean, who knows, right? I mean, so then, you know, the FTC could try to come out, I mean, this is assuming that the government still exists, that the FTC isn't shut down in a month, we don't know anything right now, right? Like, in theory, then they could come out with some sort of rulemaking, and that's probably what will happen. And, and they will sort of try to argue, effectively, that their authority allows them to go after companies for content moderation that the FTC doesn't like, as an unfair and deceptive practice, or a consumer protection practice. And then if they actually do take action against any company, which maybe will happen because Ferguson will wanna appear like he's tough, then it will go to court. And eventually you hope that a court recognizes that this is absolute bullshit and tosses it outta court. But again, that's all theater, and it could go on for years, and it could really harm whoever becomes the target of whatever the FTC decides to do.

Ben Whitelaw:

Okay. So that, uh, keeps us in some business then, we'll have that to talk about.

Mike Masnick:

there will be that to talk about.

Ben Whitelaw:

Okay. So let's go to a bit of a roundup of other stories now, Mike. We've got a few platform stories that we have identified. We'll start with the one you picked out from 404 Media, which is just out, very fresh. It's about Instagram and the latest moderation error.

Mike Masnick:

Yeah, I mean, we, we've been reporting on really obviously poor errors on the part of Meta and its various platforms and its content moderation services, and it seemed very clear that a lot of their systems were not very well done. And like, again, content moderation at scale is impossible to do well. Like, errors happen, mistakes happen,

Ben Whitelaw:

Who, who said that by the way?

Mike Masnick:

Somebody. Some people say, um. But this story is kind of nuts, because, among the things we know, Meta has said that they're sort of rolling back a lot of their content moderation things, and, you know, you're going to see more awful content. They've made it clear, like, they think that is acceptable collateral damage. This is something different. So the story is, and it's in 404 Media, that Instagram, which has this product called Reels, which is sort of like their competitor to TikTok, these short videos that you scroll by and they sort of try and recommend them to you, the algorithm for that started recommending all of the stuff that they normally block, like the worst content in the world: murders and self-harm and gore and violence. And so there was just this example in the 404 article, which is worth going through. They opened up Reels and they saw an elephant repeatedly stepping on and flattening a man, a man attacking a pig with a wrench, a closeup video of someone who had just been shot in the head, a woman crying while lying on top of a loved one who had just been shot to death. It goes on, there's like 10 more of these. Basically every video was the stuff that you would normally block. It feels like this was almost certainly someone flipping a bit, right? I mean, there was a setting that said never show this to people, and someone accidentally put it into the always show this to people kind of thing. It's the kind of coding error that happens, but it is the worst of the worst content, the traumatizing, horrific content, that it was recommending to everybody.
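As a purely hypothetical illustration of the "flipped bit" Mike is describing, here's a minimal sketch of how one inverted condition in a recommendation filter could surface only the content that was supposed to be blocked. The field names and structure are invented for the example; 404 Media's reporting doesn't say this is what Instagram's code actually looks like.

```python
# Hypothetical sketch only: a filter meant to exclude content flagged
# "never recommend" that, with one inverted condition, returns nothing
# except that content. Names and structure are assumptions.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    never_recommend: bool  # flag set upstream by an integrity classifier

CANDIDATES = [
    Video("cooking clip", never_recommend=False),
    Video("graphic violence", never_recommend=True),
    Video("travel vlog", never_recommend=False),
]

def filter_correct(videos):
    # intended behavior: drop anything flagged "never recommend"
    return [v for v in videos if not v.never_recommend]

def filter_buggy(videos):
    # the same line with the condition flipped: the feed becomes only blocked content
    return [v for v in videos if v.never_recommend]

print([v.title for v in filter_correct(CANDIDATES)])  # ['cooking clip', 'travel vlog']
print([v.title for v in filter_buggy(CANDIDATES)])    # ['graphic violence']
```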

Ben Whitelaw:

So users saw this or was there some sort of like sensitive screen that

Mike Masnick:

It, it did have the sensitive content warning, so you did have to click through to see it. So the Reels feed, which is just, you know, this scrolling, TikTok-like feed, was just sensitive content, sensitive content. There's a video of it in the story, and, like, every single one is sensitive content, so you would have to click through to see it. But if every video you're being recommended is that, I'm sure a lot of people were, were exposed to this and then wondering why the algorithm suddenly thought the thing that you wanted to see most was, like, literal death and gore.

Ben Whitelaw:

Yeah, Meta have come out and said that this was an error, as they always do, as, as every platform always does. And I always wonder if there's something in being able to force platforms to give more information than just saying it's an error. You know, we're in this age of transparency now, and in different regulatory regimes you are, you know, supposed to give information about why a decision has been made. What is there to stop, you know, a regulator saying, when something as egregious as this happens, when users start spotting something as big as this, a platform has to give a certain degree of information about what happened and what they've done to, to address it?

Mike Masnick:

Yeah, I mean, it's, it's trickier than that, right? Like, that always sounds nice in theory. And in this case, maybe they could come out with a more clear explanation, because it's almost certainly a kind of thing where it's just like the code, the algorithm, got messed up and was recommending the stuff that it said to never recommend. But in a lot of cases it's trickier, for a variety of reasons. One may be that there isn't a good answer, right? These algorithms, especially as the AI tools become more and more sophisticated, you often don't know the real reason. It's just, like, the AI said so, so it's not easy to get a real answer out of it. And then the other thing too is that revealing some of that stuff can also, in some cases, I don't think so in this case, but in certain cases, revealing that kind of information can help bad actors, because you're explaining certain details about how the algorithm works. In a world where everyone is trying to game the algorithm to their own advantage, revealing some of that information helps spammers and others figure out, like, well, what signals is the algorithm relying on? So when you reveal those things, you could be revealing that kind of information. There are a number of reasons why it's not as simple as just saying, like, you have to explain this. In this case, it feels like Instagram should be able to explain how this one happened, because it seems like such an obvious thing of what really happened here. But, you know, I think the policy generally is not to, because you don't want to get into that habit when there are things that would reveal sensitive information or information that could later be gamed by bad actors.

Ben Whitelaw:

Right. Okay. Yeah. A lot of my ideas sound good, but are difficult, and this is, this is the latest one,

Mike Masnick:

This is, I know it's, I mean, it's not, it's not a bad idea in, in theory, right? It's just like, this is the complexity and nuance of these things, which is like, there are always more trade offs to every decision and, thinking through all those trade offs is, is very important.

Ben Whitelaw:

Yeah. Okay. We'll come back to that. I mean, sticking with Meta, Mike, the story I wanted to flag to listeners is related to Mark Zuckerberg's January announcement, in which he said that there'd be fewer mistakes, ironically. And, uh, this is a piece from the FT, which has reported that the Meta Oversight Board, the independent but Meta-funded oversight board in which lots of lawyers and legal professors and civil society experts kind of mull over big content moderation decisions, that a number of the board members were not consulted on and were unhappy with the decision that Zuckerberg took in January. The report says that they were not given any heads up and have been somewhat kind of spinning since then. Um, this isn't a super surprise in and of itself. You know, a lot of the members are really thoughtful kind of experts in their own spheres, in politics and in media and law, as I said. But what's interesting is that, we talked a bit about it on the podcast at the time, at the point where the announcement was made, on the same day the Oversight Board put out an announcement supporting the sentiment of Zuckerberg's speech, to reduce the number of mistakes and to try and minimize, obviously, the false positives that I guess he was alluding to. It later updated that press announcement to say, we're digging into some of the other elements, including the policy changes, which a number of different civil society

Mike Masnick:

The trade offs,

Ben Whitelaw:

Yeah, the trade-offs part. Yeah. Yeah. Um, and so that got a bit of heat for the fact that it seemed like the Oversight Board had been given some idea of what was happening, but not given the full details, which somewhat left them with a bit of egg on their face, and this kind of continues that sense. I mean, in the broader scheme of the Oversight Board, we know that they're gonna get funding until 2027. And, for those who don't know, you know, the Oversight Board is a fascinating experiment, often called the Supreme Court of content moderation. It puts together these judgments on interesting, nuanced cases, lots of detail, and a lot of other platforms often follow suit in terms of what Meta do. So it is interesting to see the Oversight Board somewhat split in this way. Did you have any kind of reflection on, on.

Mike Masnick:

Yeah. I mean, you know, I think Zuckerberg's position is a clear threat to the existence of the Oversight Board and its future. And, and he's basically saying, like, look, we're done with this, right? You know, the Oversight Board came out of everyone yelling at Mark Zuckerberg for many years, and, and he was sort of trying to figure out how to deal with it. And he had some conversations with some academics who sort of suggested this high and mighty idea of, like, well, why don't you have this sort of Supreme Court judicial body that could review these things? And he sort of bought into it. But that was all part of his, like, we're trying to be a responsible steward in this world. And what his announcements from last month were saying is, we are no longer doing that, the responsibility is on you, you users. Like, forget it, we're just gonna be the platform. And that is an existential threat to the future of the Oversight Board. And so I think they're coming to grips with that. And I had written, you know, when he made his announcement originally, sort of similar to what the Oversight Board said, there are things in what he said in the announcement about limiting the mistakes and, like, dialing back some of the things, that actually make sense in a vacuum, if you take it out of the context of everything that's happening and the obvious reasons why he was doing it, which were political reasons. And I was able to see that; the Oversight Board people should have been able to see that from the beginning. So they got played a little bit in the way that they handled this, and I think they've since realized that. And I think that they're now realizing there's a good chance that Meta might continue to fund them, but might not, and they need to figure out a different way to survive if they're gonna go forward, and they had to call this out in some form or another.

Ben Whitelaw:

getting played is a good way to describe it. I wonder if there are some oversight board members who are thinking about moving on. There's not a lot of change amidst that group, um, at least to my knowledge. So again, one for us to think about, and to keep

Mike Masnick:

And it'll be interesting, right, because, as we've mentioned in the past, like Meta also has now set up this, dispute resolution board in the EU under the DSA, which is, you know, part of the requirement, which is an example of the oversight board trying to branch out and be more than just a sort of meta specific property. And I, my guess is that they will, put a lot of emphasis on that and basically try not to be so reliant on meta in particular right now.

Ben Whitelaw:

Yeah. That Appeals Centre Europe is a really interesting experiment. Again, it's, it's about users being able to take individual cases, uh, that they are not happy with, where Meta did not make a decision, and to have that appealed by a group of legal experts in the country where that decision was taken. So, again, fascinating kind of behind-the-scenes plays there. One more story we'll touch on, Mike, which is a really kind of powerful story, a really, really impactful one that you read from Wired. It concerns Microsoft, another platform that we don't often talk about. You know, we should say there's, there's a lot of discussion here of kind of, you know, non-consensual imagery. So a bit of a warning there. But talk us through this and what made it interesting to you?

Mike Masnick:

Yeah, it's a, a very interesting story about a founder named Breeze Liu, who had been a venture capitalist and an AI company founder, who had some non-consensual intimate imagery that was posted online, and she spent years effectively trying to get it down. And the, the article sort of details the difficulty that she had in trying to get it down. Now, non-consensual intimate imagery is an issue that has been around for a while, and there have been increasing efforts to make it easier for people to remove it, and StopNCII is a big one. And this just walks through how much trouble she had. She worked with a French NGO that tried to help her. She sort of looked at her possible legal responses, which turned out to be fairly limited. And the issue was that a lot of companies, it sounds like most companies, when alerted to the imagery and videos, were willing to take them down, but a lot of the images and videos were hosted on Microsoft's cloud, Azure, and they asked for more information, but potentially were not clear on what more information they needed. And you can sort of understand, right? You don't want to just have, like, if someone can send a spreadsheet of content and say, you have to take this down, that could be abused, right? So there has to be some sort of careful review of this content. But something broke down in the process of communication, and so Microsoft left this content online for months, many months, and the only way it was eventually taken down was, apparently, she and someone else went to TrustCon, where we were, and found someone from Microsoft in the bar. They were not attendees of TrustCon, they just went to the hotel where TrustCon was, found the Microsoft person at the bar, confronted her, which is not necessarily the way to do this, but in this case it worked. Um, and they were able to get her to escalate the issue and look into it, and then finally got Microsoft to take this content down.

Ben Whitelaw:

Yeah, these are remarkable, like, lengths that this woman has gone to. You know, TrustCon is the major content moderation and online safety conference held in San Francisco every July. As Mike said, we have been there the last few years, and we did a live podcast there last year. But, you know, the extent to which this woman had to go, and the, the organizations that she spoke to, she spoke to, you know, NCMEC, she spoke to the Revenge Porn Helpline in the UK. It was only this kind of very small French victim support charity that actually was able to help her out and took her case on. So it's a fascinating example of, I guess, the, the energy and, and the kind of strength you need to have as somebody who's affected by this harm in order to really follow through on it.

Mike Masnick:

Yeah, I mean, you know, one of the things was that I'm a little confused by it. She said she didn't go to StopNCII, which was, like, set up specifically for this, because she felt that it wouldn't be a good fit. But it's not entirely clear what that means, because that's really what StopNCII is supposed to be about. So I'm, I'm a little curious about that. You know, there's a couple other things in here too, where it talks about how she was very supportive of the Take It Down Act, which is something that we will need to talk about at some point. It was passed out of the Senate recently and is an issue. It's targeted at this issue, but it has some real problems with it. And so, again, with all these things, right, there are trade-offs and nuances that make things that seem really easy less so. You read this story and you're like, yes, obviously there ought to be a law. Then you start to think about, like, how that law could be abused and the problematic aspects of it. And the fact that, like, Microsoft was trying to confirm stuff, or, like, NCMEC, she had trouble with NCMEC, 'cause NCMEC said, well, it has to be underage content, and it's not, you can't prove that you were underage here. But those restrictions are in place for good reasons as well. And we sort of need to understand that this is a case where some things fell through the cracks, and that's a problem that needs to be solved. And Microsoft, you know, they express that they're, they're very concerned about how that fell through the cracks. Um, it is a worthwhile story just to understand, like, where things go wrong. But, you know, I don't think it should necessarily lead to, like, well, the answer to this has to be this legal change.

Ben Whitelaw:

No, but it, it's an example of, of an individual and a user kind of, fighting back in a way. and, you know, it's, it is a really worthwhile read for those who,

Mike Masnick:

Yes. And going to remarkable lengths that no one should have to go through to deal with this kind of situation. And I think that's the underlying point.

Ben Whitelaw:

Yeah, totally. Talking of, you know, remarkable lengths, that brings us to the end of this week's podcast, Mike. I think we've done a stunning job of summing up today's stories. Thanks as ever for your thoughts and for bringing those stories to us. Thanks to our listeners for tuning in. If you enjoyed today's episode, don't forget to rate and review the podcast wherever you get it. We'll be back next week. I won't be whale watching, and Mike will hopefully be a lot better than he is now. We'll see you all next week. Take care. Bye.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.Com. That's C T R L Alt Speech. com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
