Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Between a Rock and a Hard Policy
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom's Hardware)
- Tech firms must tame toxic algorithms to protect children online (Ofcom)
- Reddit Lays Out Content Policy While Seeking More Licensing Deals (Bloomberg)
- Extremist Militias Are Coordinating in More Than 100 Facebook Groups (Wired)
- Politicians Scapegoat Social Media While Ignoring Real Solutions (Techdirt)
- ‘Facebook Tries to Combat Russian Disinformation in Ukraine’ – FB Public Policy Manager (Kyiv Post)
- TikTok Sues U.S. Government Over Law Forcing Sale or Ban (New York Times)
- Swiss public broadcasters withdraw from X/Twitter (Swissinfo)
- Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights (Techdirt)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
So Mike, this week I'm drawing inspiration from the curtain twitchers' platform of choice. As they say over on Nextdoor, what's on your mind, neighbor?
Mike Masnick:Well, first of all, I love the term curtain twitchers. I don't think I've ever heard that before. Uh, but, uh, I'm glad you're back this week. We had fun last week with Alex sitting in, but with you away, I had to run this part of the show and sort of guide things. And that is, uh, there's a mental stress there. So I am glad you are back in this seat and driving this part of the show. Uh,
Ben Whitelaw:Carrying the mental stress. Yeah. Yeah.
Mike Masnick:that is the main thing I need you for. So what,
Ben Whitelaw:Well, glad to be back.
Mike Masnick:yes, what, what, what is on your mind?
Ben Whitelaw:Well, I'm actually wondering if any of my, uh, Nextdoor connections or neighbors are on Facebook as well as on Nextdoor, arming up as part of the increasing number of militias that we're going to talk about later, which are, are back on the platform. I mean, I've got some reservations about some people on the street. But I don't know them well enough to know if they are, uh, gathering arms.
Mike Masnick:And that, that is called foreshadowing. So,
Ben Whitelaw:Exactly. Yeah. Glad to be back. Glad to be back. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw, and I'm back in the chair, as Mike says. I'm the editor and founder of Everything in Moderation, if you are new to Ctrl-Alt-Speech, and I'm very glad to be back in the seat alongside Mike Masnick, without the mental load that you had last week. I'm glad I was missed to some degree. That's to say that,
Mike Masnick:yes, yeah, the, uh, just the, the soothing British accent, uh, I think really pulls the show together. So, yeah,
Ben Whitelaw:Said no one ever, said no one ever. Um, but no, you and, uh, Alex did a great job with things last week. I listened on the plane home from my work trip away and, uh, you did a fantastic job. I caught up very succinctly on the stories I missed during the week. So great job to you. And thanks, Alex.
Mike Masnick:It was, it was a fun discussion and, I think Alex did a great job stepping in as well.
Ben Whitelaw:Yeah, brilliant. Last week we saw in the stats for the podcast a really interesting little kind of group of listeners that we've been tracking since we launched the podcast. Do you want to tell people about our burgeoning listeners in Germany?
Mike Masnick:Yeah. It seems that apparently we have a bunch of listeners in Germany. It is not surprising that the majority of our listeners are in the US, and then also various other English-speaking countries, the UK there, those are your fans, Ben, uh, and Canada as well, Australia, uh, but we have apparently a large following in Germany. And according to our stats, and there may be questions about how accurate these are, but according to our stats, we have a decently large following in Frankfurt, Germany. And in fact, if you look over cities, the largest listenership in any city in the world is in Frankfurt,
Ben Whitelaw:Who'd have thought it?
Mike Masnick:Yeah, so I don't know if you're listening to this from Frankfurt, from Germany, anywhere in Germany, I guess. Let us know. We're curious. We're, we're happy to have you. We're happy to have you listening. Um, you know, we do have big listenerships in other cities and, and I think if we added up all the different Bay Area cities, that is probably our largest sort of metropolitan market, but because the Bay Area is so divided among different cities, it's not as clear, but we, we have a, a very large listenership apparently coming from Frankfurt. Yes.
Ben Whitelaw:vocabulary as well. Mike, this is perfect. You know, I can say to our German listeners, Wie geht's? Willkommen Ctrl-Alt-Speech. Um, and it also allows us to say, you know, Bewerten und rezensieren Sie den heutigen Podcast. Which, if you're a non-German speaker, and I did have to Google Translate that, unfortunately, but it means please rate and review today's podcast if you enjoy it. And that goes without saying if you're German or otherwise. So, um, there we go. There we go. Those, those five, six years of learning German, finally paying off, which is
Mike Masnick:Wonderful. I'm glad we can put you to good use.
Ben Whitelaw:Awesome. So let's crack on. We've got a lot to talk about this week, and we're going to start off with a platform that we don't often mention. It doesn't really take up a lot of airtime here on Ctrl-Alt-Speech, but you've identified a, a fascinating story that kind of has a number of different threads, AI and policy among them, that you want to talk through today.
Mike Masnick:this was a story that I first saw on Tom's Hardware. And it was really building off of a Mastodon thread that someone had posted. And the, the original story was that Stack Overflow had cut a deal with OpenAI. Like many of the AI deals, it's a little unclear what the actual deal is. There is some element of licensing. There's some element of tool exchange. There is some element of API access, and sending traffic and whatnot. But it was interesting and, in some ways, a little bit reminiscent of deals that Reddit has cut with different AI platforms. And we'll have a little bit more on that in a moment. But the thing that struck me was that there is this kind of backlash against AI from a lot of people. And so some of the users from Stack Overflow looked at this and were upset, because Stack Overflow as a community, if you're not aware, is where people go to basically ask technical questions and other people respond and share stuff. So it is a huge knowledge base of really useful technology and coding answers to questions. So anyone who does anything in coding uses Stack Overflow. If you're having a problem, if you're trying to figure out the best way to do anything, there's a Stack Overflow bit of information for you.
Ben Whitelaw:it's a great community, right? You ask a question, you often get responses. It's not like you're posting into the void. People are, you know, it's very, very well run, by design, and, and also because of the nature of the community. Right.
Mike Masnick:Yeah. And you know, it's been around for a while. It was originally created by a couple of guys who were like really well known in the coding space, and it has since been sold, but it is a very, very important and large and successful community and very, you know, just incredibly useful. And so there was a bit of a backlash to this deal because, you know, it is free to use, people are using the service for free and they're contributing their knowledge for free. And this felt to some users as if it was then Stack Overflow selling the things that they had contributed for free to an AI service that was going to maybe sell it back to them, was sort of the, the framing. And so some users were upset, and the Tom's Hardware article focused on one particular user who went in and tried to delete what he had put up because he wanted to protest this deal. And one of the things, the way Stack Overflow works, is that if you've asked a question and then there's an answer that has been accepted as sort of like the top answer or whatever, you can no longer delete it. And there are obvious reasons why Stack Overflow would be set up that way. If you're creating this knowledge base, you don't want people to go in and delete stuff, but apparently there is still some way that you can edit answers. So he went in and edited some of his content, which I think was from like over 10 years ago, but he edited it as a sign of protest. And what he discovered very quickly was that Stack Overflow then went in and edited it back, or reverted the edit.
Ben Whitelaw:Whoa.
Mike Masnick:Uh, and then suspended him for a week. Um, which is a really interesting content moderation, trust and safety scenario
Ben Whitelaw:Right. Yeah.
Mike Masnick:on a lot of different levels, because this is an interesting case study in a lot of different ways, right? There's been a whole bunch of discussion, obviously, over the last few years with the questions around training AI and how you feed information and how these AI systems will get enough information for training their new models. And you have all of these services online that have collected all sorts of user generated content for years, Reddit, again, being another example of this, that have these huge corpuses of knowledge. These companies in some cases have struggled to find a really lucrative business model. Usually their business models have been around advertising, and they've been, these are successful companies that have done decently well, but now they have all this data, and maybe some of them are saying there's a business model in licensing that data to the AI companies. But the problem is, like, whose data is that really?
Ben Whitelaw:They forgot to ask their users. Right.
Mike Masnick:how do you contribute to it? And then you have this question of, like, if somebody then wants to edit their content. And I believe the individual in question has said that he's now trying to use the GDPR, and the, the right of erasure within the GDPR, to demand that his content be deleted from Stack Overflow, which then raises a whole bunch of other questions. And so it's just a unique scenario that is now created by the nature of the AI world that we live in and the insatiable thirst for information that those models need.
Ben Whitelaw:Yeah. Super interesting. So, so just tell us what the implications of this ban are. Is this guy still, is he still off the platform? Do we, do we know if anybody else has followed suit? Is he a one-man band at this stage?
Mike Masnick:so I believe that he is not a one-man band. The impression that I've gotten is that other folks, I mean, there, there were a bunch of people who were very clearly angry about this deal and felt that it was unfair and it was, you know, a selling of their information in an unfair and unexpected way. And this gets to, like, some of the other just general privacy questions that have always been around regarding any free internet service, which is, you know, the exchange of data for access to a free service. You get access to the service, you're giving up some element of data, which historically has mostly been then used for advertising. And a lot of people are not entirely comfortable with that, but they have sort of come to terms with, like, this is the exchange. Like, I'm giving up some data that will be used to target me with advertising. I might not like it, but it's okay, I'm getting this service for free. But this is now a huge new element within there. And just the, the nature of the AI backlash that is going on right now, which I think is a little bit overblown in some cases, but I understand where it's coming from, has led to a lot of people being very, very concerned about these deals. And so a lot of people were very, very mad. But this was the one case that we know of where he explicitly said that he tried to do this and Stack Overflow suspended him and reverted his edit. I believe others are trying to do the same and that there have been some similar reactions from Stack Overflow, but we don't know how large it is. In terms of whether or not he's still suspended from the platform, I don't know. I, I checked his Mastodon feed and I don't see any direct update from there, other than some more just general anger, uh, about AI and, and these tools. But you know, it raises this other, the, the implications question, which is that when you run a platform that is all user generated content, if you are making moves that really piss off a lot of the people who have generated that content and built all the value for you, uh, does that harm your future? You know, will people still be willing to contribute to a Stack Overflow if they think this kind of thing is going to happen going forward? And have they killed the golden goose, effectively?
Ben Whitelaw:Yeah, this story reminds me of the Substack Casey Newton story of,
Mike Masnick:A little bit. Yeah.
Ben Whitelaw:months. Right? So if you think about it, you know, the licensing deal is a bit like a kind of policy decision in many respects. And this user, you know, not quite the same as Casey, but is an engaged user on the platform, is contributing content, is providing value in the same way Casey and other Substack writers were. You know, you can see how suddenly these engaged users have power, you know, to make decisions that might influence policy and how the, the direction of the platform evolves, right? So this is a really interesting dynamic, I think, that we're seeing in lots of different ways. And if we're kind of trying to draw a thread between this and other things we've seen recently, it's that, you know, there are some users who moderate content or who play a role, an outsized role, in the creation of a community, who are suddenly becoming kind of more influential in the way that platforms are created and in the way that their policies are managed. I think that's probably the thing that I'm taking away from this. And, you're probably right, they're not going to see like a mass exodus from Stack Overflow. But, you know, with Substack, we saw over a hundred writers write a letter to Substack leadership, you know, there were some people who left, they moved to Ghost. There probably isn't the same equivalent platform as Stack Overflow, it's like one of the best out there, but you were saying previously that there are some kind of small versions of, like, Stack Overflow equivalents that are cropping up.
Mike Masnick:Yeah. And, and, you know, it's, it's interesting. There have been others that have been around, like, ever since Stack Overflow first came around. And in fact, this is like the long forgotten history of Techdirt. At one point we actually made use of an open source Stack Overflow clone, and we had created a community around business models, media and entertainment business models, to have discussions on it. It didn't end up working that well, but it was kind of a fun experiment. But there are tools out there to build your own Stack Overflow. And you know, what we've seen over the last few years is that with the rise of, like, decentralized platforms, people are recreating a bunch of these different sites. So, you know, when there was the sort of rebellion against Reddit last year, a couple of services like Lemmy and Kbin showed up on ActivityPub, which were basically Reddit clones in a decentralized manner. And I've heard, but I haven't yet seen, but I've heard that people are creating decentralized Stack Overflow services. I heard of people trying to do it on ActivityPub. I've heard of people trying to do it on Nostr, but I haven't seen them yet, but it'll be interesting to see if any of these can actually catch on. You know, the big platforms, you know, the Stack Overflows, the Reddits, they have a kind of gravity to them and a momentum that is hard to change. It will be interesting to see. And I think you're right, that there is this sort of recognition that these marketplaces, these communities are not, you know, it's not just the company and their business partners, it is the users in those communities, and they have to be considered. And there's a long history of communities that falter and die because they're not well managed. And doing something that pisses off a large portion of the community can have real long term impacts.
Ben Whitelaw:right, which is why Reddit has announced these policy terms, right? You mentioned the OpenAI deal that it signed; it's now come out with more details about what content is included in that deal and the extent to which it will be used to train its LLMs. And that's quite interesting to talk about as well, isn't it?
Mike Masnick:Yeah. I mean, the timing of this with the Stack Overflow deal was really interesting because, you know, Reddit just went public a few months ago and they came out with their first earnings, which were better than expected, but with it, they laid out what their content policy was on licensing content to AI. And it seemed to try to strike a more respectful-of-the-community approach than what Stack Overflow appears to be doing, in that it explicitly said, like, if a user deletes content on Reddit, that content is no longer available to an AI service. Um, and the feeling you get from looking over the content policy that Reddit laid out is that it's a bit of a moving target, but they are trying at least to figure out where this works in a way that is respectful of the actual people who are creating the content on Reddit, but that is still useful in a manner for the AI services. And so I think it'll be interesting to see how that goes. It would not surprise me to see Stack Overflow kind of back down from the position that they took here, and maybe follow in Reddit's footsteps a little bit in terms of, like, setting up a clear policy saying, this is why we're doing it, if you really want to opt out, you can do so. That strikes me as where this is most likely to end. Though it, you know, it depends. You certainly could have, depending on the management, uh, who I, I don't know, but you could have management who says, like, you know, screw you, these are the terms of service, we can do this and we're going to because it makes us money. But I, I don't know. I don't know.
Ben Whitelaw:I, I wouldn't want to pitch myself against hundreds of thousands of developers and coders. There's, you know, there's doing it with an audience or a user base that isn't very tech savvy, and there's doing it with Stack Overflow's user base. And I, I would not want to go into that fight unarmed.
Mike Masnick:Yeah. Yeah. Yeah. It seems, seems unlikely to end well.
Ben Whitelaw:Agreed. Um, so yeah, really interesting story, some new platforms, some new considerations for platforms in relation to users and to policies that they're adopting. So that's really helpful, Mike. Thanks for talking through that. Let's move on to our second story that we'll go in depth on today, which is a regulatory story. It's Ofcom, who have this week announced a series of new child safety guidelines, some draft proposals that it's going to seek consultation on over the next couple of months. The deadline for feedback is in mid July. And this is interesting for a couple of reasons. It's the second consultation that Ofcom has launched on the back of the passing of the Online Safety Act, which is the UK's big piece of regulation trying to keep users safe online, with the government pitching the UK as the safest place to be online in the world, which is some claim, I will say that. And, uh, Ofcom did this big consultation late last year around protecting people from harm online, and this is the second big consultation on that. So what it does is outline the kind of things that platforms and services are going to have to do to protect children online, over and above what they're doing for adults. So there is a whole set of protections that it's outlining for kids, who are obviously more susceptible to harm. And, um, the first thing to say is that there's a whole bunch of, um, research and data that Ofcom have published. I won't claim that I've gone through all of it, Mike. There are five volumes of analysis and research, so I've gone through some of the big documents since it was launched on Wednesday. And my take so far is that there are 40 measures that they've put out for consultation, which fall into five areas, and I would kind of class them into three buckets, which I'll talk through briefly and get your thoughts: unsurprising, unhelpful, and odd. It's, it's like the good, the bad, and the ugly, but,
Mike Masnick:They sound like they're all ugly.
Ben Whitelaw:kind of, I, I had to be clever with the wording. And so, so these are the kind of things that platforms, both the user-to-user services, kind of mostly social, and the search services, are going to have to kind of implement if they are to be compliant with the Online Safety Act. And so in the unsurprising bucket, we have a couple of things. We've got age checks, right? So a big thing for, for keeping kids safe online is making sure that they are of an age to use the platform. And Ofcom has gone quite big on what it calls highly effective age checks. So it, it kind of specifies that it's not just any old age check; it's an age check, so either a verification or an age estimation, that meets a couple of things: it's accurate, it's robust, it's reliable, and it's fair. That's what it classes as highly effective. Also in the unsurprising bucket, things that we kind of expected Ofcom to talk through on the basis of previous communications around the Act, is the need for safer algorithms. So for algorithms and recommender systems on platforms to not recommend content that is not suitable for children. So that's a range of content, you know, pornography, eating disorder content, that kind of stuff, as well as hate speech, violence. Having a means of assessing what content is being recommended is, is a big part of that. Again, not so super surprising. We kind of expected that. You might not agree that they are the best things for keeping kids safe, but they're not surprising from a policy perspective. And then there is the kind of unhelpful bucket. So there's a few things in here. It calls for effective moderation, which basically says, you know, every platform should have services, processes and teams that are moderating content in an effective way, which I put as an unhelpful addition because it's so vague as to be basically meaningless, um,
Mike Masnick:As opposed to, like, you know, nobody's going to recommend ineffective moderation.
Ben Whitelaw:Exactly. And there's a few things we could go into detail on there. And then the somewhat odd safety measures that it recommends are a couple of things. So it's calling for stronger governance and accountability within platforms. So having a named person responsible for children's safety, which is something we've seen in other kinds of regulation, but also an annual senior body review of all risk management activities related to child safety. Again, a strange thing to be calling for explicitly. And then, even more odd than that, I would say, an employee code of conduct that sets a standard for employees for children's safety. So, trying to, I guess, set internal standards and processes that mean that there is an element of liability among teams and staff. And then finally, it's calling for simplicity of terms of service, for them to be child-readable. So basically to make it so that if you're a child who's signing up for a service, and you've done all of the above, you can know what you're signing up for, which, again, is absolutely critical. I'd be surprised if there's any kind of real evidence or research that validates that; again, that might be in, in the five volumes that we haven't gone through yet. But yes, overall, a really interesting package of suggestions that fall into these kind of five areas, which I've bucketed in those three ways. There's going to be a consultation now; there'll be lots of input from platforms, civil society, other organizations with an agenda and a view in the space. Did you manage to go through it? What's your take on things?
Mike Masnick:Yeah. I mean, I, I haven't gone through all of the research either, but I had, you know, I looked over the release and the proposed guidelines and, you know, I, I thought it was interesting. I mean, I agree with some of what you said, I don't fully agree with all of it. I mean, some of the odd decisions, like, I'll start there, like, I, I think that those seem more around the idea of, like, creating norms where, as an organization, the organization is constantly thinking about child safety. And I, I think there is some value in that. I think that what we have seen is that, in general, like, that is honestly the major factor in having platforms actually be safer for kids and taking these things into account, which is that it is a cultural thing, that the organization itself and the people who are working there have to be thinking about this. So when things come up, they just have that in mind, like, we do not want to be the platform that is in some ways harmful to kids. So as a sort of cultural approach, I guess, but I don't know that you can fake that or force that on people.
Ben Whitelaw:I mean, that's kind of my feeling, right? Is that you could really pretend that you cared by, by having an annual meeting and by having a piece of paper that said all of our staff have, uh, agreed to this kind of code of conduct, they've even done some training, um,
Mike Masnick:that's right. It kind of, like, it feels like all sorts of other compliance situations, where it's like, there are all sorts of things that people have training sessions on that they have to do for compliance factors. And there are lots of questions out there on how effective any of those are, or how much of it is just kind of going through the motions. Um, and so, yeah, I agree that if it's just sort of adding this compliance burden without actually impacting culture, that's not going to make that much of a difference. Um,
Ben Whitelaw:And just on that point, we hear a lot of the time, we've spoken about it ourselves, about how people in platforms do actually typically want the best for users. And if that is the case, like, to what extent is this moving the needle on from
Mike Masnick:yeah, right. So there is, this is like the biggest thing that is kind of this implicit assumption in almost all of the, like, protect-the-children regulations that go around, which is that the companies don't care. And that is just generally false and, and that's why so many of these regulations come out in such a weird way. You know, there is a narrative out there, and, like, I had an article this week about the wife of the governor of California insisting that companies, you know, social media companies, simply don't care about kids. And like, that's not true, right? I mean, you, we, you and I, both of us, we talk to people at these companies all the time and they take these things really seriously and they are thinking about it and they don't want to be in the position of doing harm to kids. And so, like, there are trade-offs and really difficult decisions that are made in these processes, but so many of these laws and regulatory proposals sort of come at it from the assumption from the outside that that is not true, and that these companies need to be forced into taking kid safety seriously.
Ben Whitelaw:Right. And what's interesting about this Ofcom series of proposals is that there are a lot of people at Ofcom who presumably had a role to play in writing this, who came from
Mike Masnick:Yes. Like a lot of them. Yeah. they
Ben Whitelaw:There was a whole tranche of, of, of, you know, moves over the last couple of years. So again, oddly framed, I
Mike Masnick:I mean, so I wonder how much of that is really political, right? Because also, this is, this is being presented to people outside of these realms, and people will look over this, and there'll be, you know, politics people and, and people in the media who don't have that experience. And so some of this feels like you have to include it, otherwise you're not going to be taken seriously. Even if it's kind of useless in impact, but
Ben Whitelaw:yeah, regulatory theater,
Mike Masnick:yeah. I mean, that's, that is really what I think some of it is. But I did want to move on to some of the other elements of the proposal that sort of hand-wave away some of the real tradeoffs and difficulties here. And that's the part that bothers me. I mean, I was reading through it and a lot of it really reminded me of the age-appropriate design code in California, which has since been ruled unconstitutional, though is currently being appealed.
Ben Whitelaw:Just, just outline for listeners what that involved, Mike, just in case people are catching
Mike Masnick:so I mean, it's funny too, because, like, when it was presented here in California, it was presented as, we are modeling this off of the UK's law, the age-appropriate design code, even though they're really different. If you looked at the two laws, they were actually really different. The California law was actually much closer to the Online Safety Act, which was then passed later, and now this proposal is sort of, you know, part of the Online Safety Act regulations. But in California, it required companies to do a, a product assessment, a DPIA, I forget what the letters stand for, that looked at every feature on your site in terms of how it might impact children. Um, and you have to have those done and available to the attorney general. There was a bunch of other stuff around, like, getting things set up to prevent harm to children. And you know, what came out during the court hearing on this was where the judge asked the state of California directly, like, what if there is content that is perfectly relevant for adults, but that might be harmful to children, do they have to block that? Because that raises free expression, First Amendment issues in the US, certainly. And the state of California sort of tried to tap dance around it and said, well, you know, it's okay as long as they do the age estimation, which, again, the Ofcom proposal has age estimation. And the problem there is that, like, we know that technology is not that good, and, you know, people will claim it's getting better, but also that it has privacy implications. And we've seen that there are all sorts of concerns about, if you're doing age verification, verification certainly has real privacy questions. Just last week, also, a different story was down in Australia, where they have approved a pilot program for age verification. They also had a similar program for bars and clubs in Australia, where you have to age verify. There's, like, this online age verification system for people going to bars and clubs. But that system got breached and details were leaked, right? So you have this very clear issue of potential privacy violations and privacy concerns, which could be harmful to children if children are, you know, uh, providing private information for verification purposes. The estimation is, you know, a little bit less of a privacy concern, but a little bit more of an accuracy concern, and there are problems associated with that. These technologies are still very much not, they don't always work. And, and there are real concerns about how they're applied here. And then that leads to the question around harmful to children, right? And this was also a problem with the AADC in California. Like, there is content that everyone can look at and say, this is harmful to children. Some of it is sort of, like, mentioned in the Ofcom stuff, around eating disorders, self-harm. The issue is that we do not know how to deal with that content in an effective way. And it depends on different children, different situations, where it seems easy from the outside to say, eating disorder content, well, that's bad and it should be blocked. But what multiple studies have found is that it doesn't work, that, you know, if kids want to find eating disorder information, they are going to find it whether or not the company allows it. You know, there was a study done with Instagram, where Instagram had blocked all these keywords, and what happened was the kids just came up with new terms and got around those blocks.
And so it was just this whack-a-mole game, and Instagram tried to stop it. And then what some of these studies found, which I still think is absolutely fascinating and was really eye-opening to me when I was reading them, was that as this game of whack-a-mole continued, kids began to move off of these major platforms into smaller, harder-to-follow communities, where they tended to be more extreme. When the conversations were happening on Instagram or TikTok or some other platforms, people were coming in and showing up, oftentimes people who had had eating disorders in the past and were trying to help people, and were providing them with either people to talk to or resources to help guide them towards recovery. When those conversations moved off platform and moved into these sort of darker corners, that was happening less. And so there is an argument that telling the major platforms that you need to block this content actually leads to even greater harm by pushing kids into more dangerous scenarios. And those are the kinds of really careful trade-offs that are not being thought about. People are just looking and saying, this is bad content, it has to be stopped. If that actually leads to more harm, that's a real problem.
Ben Whitelaw:Yeah. I mean, there's a lot in there. And I think the, the point I'll pick out is the fact that, you know, you basically can categorize the content in however you want, right? And Ofcom have tried to do this in three tiers. You can categorize the content in kind of however you want, it doesn't make it any easier to spot.
Mike Masnick:right,
Ben Whitelaw:You know, you can call it whatever kind of term you'd like, and Ofcom call it, the most egregious, primary priority content, which is pornography, self-harm, suicide, eating disorders. It doesn't matter, it's super hard to do. I guess it's almost like a stick to beat the platforms with when they fail to be able to spot that, because there are no real ways of doing
Mike Masnick:it, it, it really feels like a lot of this is setting up a way to punish the platforms after the fact because something bad happens. Something bad is going to happen; there's no way around the fact that sometimes bad things are going to happen. Sometimes those bad things are going to involve kids. It feels like this is setting up a thing to then punish the platforms afterwards for not magically spotting it. And, and so just, like, one other example, just to drive this point home: there was a report last year, maybe it was two years ago, I forget exactly when it was, looking at eating disorder content. And it amazed me going through the report, where one of the examples used of eating disorder content was somebody had posted a photograph of a pack of gum, and they said, this is eating disorder content. I don't know that it was actually eating disorder content, or it was these researchers saying it was eating disorder content. You would need so much more context to figure out, is a pack of gum actually eating disorder content? How do you deal with that as a platform? It, it, it is way more challenging than I think a lot of policy makers and the media make it out to be
Ben Whitelaw:Yeah. And how do you not block the content of users who just want to post pictures of their gum?
Mike Masnick:Exactly, I mean, it, it,
Ben Whitelaw:which isn't something I've done a lot of, but maybe there are people out there. Maybe there are, there may be, there are Facebook groups for that.
Mike Masnick:it's, it's, I'm just, like, it's really important to just get across how challenging and difficult this is. You know, it feels like if you haven't gone through this and haven't thought through it, or haven't talked to the experts and the people who are actually making these decisions, like you and I have spent a lot of time doing, it may be hard to realize that. At first, it just feels like, you know, self-harm content: bad, like you've got to stop it. But if that actually leads to an increase in self-harm, that's something we should know.
Ben Whitelaw:Yeah, we should move on. We'll definitely come back to this as a story. It's interesting to note that there were a number of parents whose children committed suicide or have suffered from self-harm via social media platforms, they claim, who, who got together this week and were actually quite critical of Ofcom's response. I don't know if you saw that story, but they basically argued that they weren't consulted sufficiently about their experiences of living through this kind of trauma through their children. And so it's an interesting side note that the regulation is one thing, the kind of proposals is one thing, but actually how you arrive at those proposals is another thing, and making sure that the people who should be involved are involved throughout that process. Because otherwise, you know, the story that was quite prominent in the UK media this week was the fact that the father of Molly Russell and the parents of a number of other children, who have been very vocal about online safety over the last few years, were not happy, and so that became the narrative. So lots of play here, both on the proposals and how effective they'll be, how hard they will be for platforms to adhere to, but also how do you make sure that the actual kind of work you've done over months and months and months lands in the right way. Because that's almost as big a part of how regulation is successful, I'd say. So good, good to kind of put a pin in that, I think, Mike. So let's move into the, the quick roundup stories. Because I've been away for a week, I'm going to try and be a bit stricter on us. Uh, I've actually got no idea how long we've been running for, because I can't see a clock on the screen. So I'm,
Mike Masnick:We're, we're, we're going a little slow. So we, let's pick it up.
Ben Whitelaw:let's do it. So, let's start with your first story of the best of the rest, then.
Mike Masnick:Yeah. So, this is really quick. I mean, in part, we're mentioning it because it feels like we need to mention it and there isn't that much to go on, which is that TikTok has finally sued Merrick Garland, the attorney general, over the law to ban TikTok. This is the challenge that everybody expected. Took about two weeks, a little bit longer than I think some people expected, but they put it in. There's not much to comment on regarding the actual case at this point. It's pretty much exactly what everyone expected it to be. It's a constitutional challenge. There are concerns about the First Amendment, about equal protection issues. There's a claim that it's a bill of attainder, which is, you know, a law that is directly targeting a single party. And we'll see, this is going to be a long process. It goes straight to the DC Circuit, which is the appeals court. That's because it was written into the law that it had to go there. So that jumps over some of the earlier hurdles, which maybe makes this a little bit quicker than other cases might be, but that's going to be an ongoing story for a while, and I'm sure we'll, we'll mention it as we go. But the sort of related story that I just wrote about, that just happened in the last couple of days, was that, uh, NetChoice, which is a trade group that has been challenging a bunch of these laws, I was just talking about the California AADC, that was a NetChoice challenge, they've challenged, like, so many laws. They challenged the Florida and the Texas laws that the Supreme Court is about to weigh in on, they've challenged laws in Ohio and, and Arkansas and all over the place. On Wednesday, TikTok was listed as a member of NetChoice. On Thursday, they were no longer listed as a member of NetChoice. And the story that came out and was revealed in Politico was that members of Congress effectively threatened NetChoice that if you do not kick TikTok out of your organization, we will investigate you. And that is basically what happened. Like, there's no two ways around it. And it struck me as really eye-opening, considering, like, the Murthy case that we had discussed previously on this podcast, around some of these very same Congress people saying the government should never interfere with speech in any way whatsoever. And here they are literally threatening organizations for their speech and for their advocacy and for defending the rights of an American company.
Ben Whitelaw:Not even waiting for the ban to happen. Like, not even waiting for it to play out and for the challenges to, to kind of be brought forward, right? It's like, they've already
Mike Masnick:And the thing that I've heard, and this is still, like, grapevine rumor level, but I do think is important, because we have seen that clearly this appeared to have an impact on NetChoice in terms of no longer working with, uh, TikTok, TikTok no longer being a member, but I'm hearing that there's also pressure being applied to other organizations that are working with TikTok, trying to defend their rights. And that includes PR firms, lobbying firms and law firms. And that is not right, right? Whether or not you believe that the TikTok ban is legal, the idea that Congress would threaten to investigate organizations simply for trying to help TikTok defend its rights in the U.S. is something that should not happen. And so that was a really interesting thing that just came out, just sort of happened yesterday as we're recording this, and, um, I think it's a really scary turn, and hopefully it gets some attention.
Ben Whitelaw:Yeah, so, um, Mike bringing the, uh, exclusive slash gossip onto Ctrl-Alt-Speech, which is good, which is new. Not so good in terms of the time, Mike. So I'm going to be super
Mike Masnick:Yeah. Yeah. Let's go.
Ben Whitelaw:my story, um, so the story I'm interested in and read a little bit about was a Kyiv Post story, an interview with the Facebook public policy manager for Ukraine, who's given an interview to the Kyiv Post, which is an English-language Ukrainian publication. And what's interesting in this interview is, it's two things: the fact that she talks about the way that Meta is trying to avoid, mitigate mis- and disinformation by employing Ukrainian moderators with a knowledge of the context and of the conflict, to try and kind of pick their way through the Russian disinformation that obviously is part and parcel of this conflict and has been since the start. So using kind of local experts and moderators to kind of try and understand. The second interesting thing is that she noted the fact that Meta doesn't do a great job of helping people understand why they've been shadow banned
Mike Masnick:Yeah.
Ben Whitelaw:And shadow banned, I think, actually, she actually mentions the word shadow banned, I think. So this is the idea that there are some people whose content gets kind of flagged or removed, and Meta, she kind of admits, doesn't do a great job of understanding why that is, or how to resolve it. And they have introduced a tool recently called Account Status to give a bit more visibility to that. But it's interesting that there's an admission that that is not something Meta is great at. We don't often hear public policy managers admit that, specifically in a context as live and as current as the Ukraine-Russia war. So super interesting. I thought that they were being more proactive in communicating some of their policy decisions there.
Mike Masnick:yeah. I thought, I thought it was fascinating. I mean, for both of those reasons, I think just, you know, the focus on and talking about like having local, knowledgeable moderators who understand the context, I think is super important, sort of a recognition and something of an admission that Meta hasn't always been great at that. Um, you know, in different areas of the globe where there are conflicts, where they maybe haven't had people who understand the context and that has led to really problematic results. I thought that was great.
Ben Whitelaw:we also talked about how Meta has moderated content in Arabic, particularly related to the Palestinian-Israeli war. And again, very notable differences, I'd say, in terms of how they're approaching Ukraine-Russia. You know, numerous reports that we talked about on a previous episode of the podcast, which we'll link to in the show notes, about how Arabic is not well moderated and very common words like shaheed are banned, despite their very different meanings in different contexts. My expectation is that they have paid more attention to the Ukraine-Russia war, and are putting more emphasis and resources into that than they have done in the Middle East and in other contexts. So again, kind of asymmetry in the way that a platform is dealing with a conflict is something to note. And talking of Facebook and not dealing very well with a particular harm that we've seen before, this kind of leads us neatly onto your second small story of the day.
Mike Masnick:Yeah. And this was, it was originally a Wired story and then there was an MSNBC story about it as well, about how Facebook has officially said that they will take down far-right militias that are using Facebook to organize and talk. And the reporting is that that is failing, that there's been a return of a number of far-right militias, and it was not that hard for journalists to find examples of that, you know, and growing. And there were sort of these questions about why isn't Facebook actually dealing with it. And I had a slightly contrarian response to this, I think, which is that, yeah, exactly, uh, which is, like, I would guess that the reason that Meta is allowing these to return is that it is potentially very helpful to law enforcement to keep tabs on these folks. Uh, you know, it has been admitted by law enforcement in the U.S., and the FBI in particular, that far-right militias are considered a threat and a domestic security issue. And if they were doing all of their organizing on Telegram, it would be harder to keep tabs on them. And I imagine that they are using Telegram and other services quite, quite readily as well. But if they're using Meta platforms, mainly Facebook, because it is easier, that also means that it is probably easier for law enforcement to keep tabs on them. Law enforcement has a clear process in which they can, you know, get a warrant to get more information, and they can keep tabs on it. So my guess is that the reason that Facebook is maybe not cracking down so much is because it is helpful for making sure that we know what far-right militias are planning these days.
Ben Whitelaw:Is that something platforms do? Is that they allow, you know, I'm not sure if you know this, but, like, do they allow kind of certain groups to proliferate or to exist with a view to doing that? 'Cause that's, that's quite a kind of
Mike Masnick:I mean, I, I don't, I don't think that they're doing it. Like, I don't think they're, oh gosh, how do, how do
Ben Whitelaw:not a trap, is
Mike Masnick:yeah, I mean,
Ben Whitelaw:might need to get out of those groups pretty pronto.
Mike Masnick:are cases I know of where there have been, like, effectively honeypots, right? So, like, fake groups set up by law enforcement. And there has been, like, back and forth with the platforms, where I don't think the platforms actually like that; they don't want to be seen as working that closely with law enforcement in certain cases. So I, I don't know, it's not like these are, like, hand in hand, like Facebook is doing this deliberately. I don't know. It's possible, but I would doubt that. I think it's just sort of the nature of it. I would imagine that law enforcement maybe has expressed that, like, hey, these are kind of useful, like, if you were to shut down these groups, we would lose intel into what is going on. And that may play a role in all of this as well.
Ben Whitelaw:Yeah, particularly leading up to an election, I guess. Yeah, fair. Super interesting. Um, my final story to wrap up today's podcast is the newest, the latest publishers to leave Twitter slash X, which is, is
Mike Masnick:Is this going to be a running theme?
Ben Whitelaw:This, this, I think this might be my kind of, yeah, my, my story that I come back to. Obviously I have a, an interest particularly in publishing and media, and we've seen over the last year or so a bunch of, particularly public media, leave the platform, partly because it's not really doing anything for them strategically. We, we know that links and news have become less of a priority for Twitter since, since Musk took over. Last year NPR quit, PBS also quit, and now we've seen a couple of Swiss public broadcasters also do the same. They've either quit or reduced the number of accounts that they are kind of actively managing, specifically in this case because the values of Twitter slash X, they say, do not align with their journalistic values. So there's some quite punchy quotes about Twitter being full of trolls and bots and how it's not somewhere that they want to be engaging with, with an audience, which is really interesting. So, particularly for public media, I think there's a strategic decision to be made about whether X is the platform for you. And, and I'm wondering, obviously as a Brit, whether the BBC or other public broadcasters are potentially going to follow suit in the near future. As more publishers do so, it's more and more likely, and I wonder what that means for the platform and for its advertisers if you're not going to have accounts as big as BBC News and others constantly pumping out content. I think there's an interesting thread that we can pull there in future episodes of the podcast, Mike. But for now, you're going to have to get your, your Canadian French public broadcasting content directly from the website.
Mike Masnick:Yeah. I mean, it is, it is interesting, especially given just like, the major benefit I think of Twitter for many people for many years was as a tool to follow the news. And that is, that is clearly dying. And, as a publisher myself, like we stopped posting to Twitter a year ago, almost exactly a year ago, actually.
Ben Whitelaw:Happy anniversary.
Mike Masnick:Thank you. Thank you. Uh, and it's interesting to see much larger media organizations pulling out and recognizing that, one, it's just not as important and not as valuable, but also just the flood of nonsense and scams and bots and spam has really just diminished the value of that platform massively.
Ben Whitelaw:Yeah. Yeah. Yeah. I'm definitely spending less time there myself, to be honest. Um, but yeah, that goes without saying. You can follow us on well-moderated platforms. Um, we are, we are online, both of us, on Bluesky and LinkedIn and elsewhere. For now they're well moderated and, uh, you know, that probably wraps us up for today, Mike. So we should thank our listeners for joining us. Um, the German contingent and everyone else.
Mike Masnick:I was about to say, if you're, if you're in Frankfurt, let us know how you've convinced so many of your friends to all listen to us. That is great.
Ben Whitelaw:Definitely. Definitely. You can get in touch with us via the website, ctrlaltspeech.com. And if you're interested in sponsoring the podcast, you can reach out via the contact us form on that page as well. Do rate and review us on the podcast platform of your choice; it really, really helps us. It goes without saying that we couldn't do it without your support. And thank you for listening today. Speak to you all soon.
Mike Masnick:Thanks.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.