Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Sometimes You Have to Whack Some Moles
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- He was suicidal and needed help. A 15-year-old girl pushed him to kill himself on a live stream (Washington Post)
- Romania’s top court annuls presidential election result (CNN)
- Continuing to protect the integrity of TikTok during Romanian elections (TikTok)
- Covert Facebook Network Found Targeting Romanian Voters (Bloomberg)
- TikTok pushes far right candidate content in Romanian election, Global Witness investigation shows (Global Witness)
- Romania annulled its presidential election results amid alleged Russian interference. What happens next? (Atlantic Council)
- X’s Yaccarino Praises Child Safety Bill and Urges House Backing (Bloomberg)
- Elon Musk’s X comes out in favor of pro-censorship law (Mashable - January 2024)
- Kenya’s President Wades Into Meta Lawsuits (TIME)
- Attacker Has Techdirt Reclassified As Phishing Site, Proving Masnick’s Impossibility Law Once Again (Techdirt)
No actual moles were harmed in the making of this episode, which is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw: So Mike, there's a new app called Mosey, which is a kind of new private social network for helping connect people you know with other people you know in physical places. And it's been made by the founder of Twitter and Medium, Ev Williams. And, uh, there's a kind of interesting prompt in it, which is it gets people to make a plan. So for your travels, it's prompting people to, I guess, figure out where they're going to be. So I want you, Mike, to make a plan today.
Mike Masnick: Well, I will say that my plans last week, which we had talked about on the podcast, to celebrate my birthday by watching the Romanian election, got, uh, got destroyed by a Romanian court, which perhaps we can talk about. How about you, do you have a plan to make here?
Ben Whitelaw: My plan actually is a 2025 plan. I'm thinking full steam ahead for next year, and as part of that, Everything in Moderation has a survey out, which also covers Ctrl-Alt-Speech, asking readers and listeners about the topics that they care about and what formats they want us to bring to them. So this is a request for people to fill in the survey, to give us their thoughts on the podcast and the newsletter, and that will help shape my plan for next year, which may or may not have anything to do with the Romanian election. We'll find out soon. Hello, and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation and internet regulation. It's December the 13th, 2024, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with Mike Masnick, who has newly turned a year older.
Mike Masnick: Every day, every day I get older, Ben. I don't, I don't know about this. I don't like this system.
Ben Whitelaw: I know, we're talking about your age again. It feels very young. I mean, you know, I don't want to be dating you, Mike, but it just seems to come up naturally.
Mike Masnick: Yeah, yeah.
Ben Whitelaw: Did you have a good day, anyway?
Mike Masnick: Um, sure.
Ben Whitelaw: Yeah. No Romanian election. So it's kind of, it's hard to pick yourself up after that, isn't it?
Mike Masnick: Yeah, yeah. It was a real, a real letdown. I was really looking forward to the Romanian elections that didn't happen. But no, I, I obviously had a very nice birthday. Just, you know, low key, spent it with family. Um, went for a nice hike, really enjoyed that. Actually, uh, went down to the coast and hiked up a big hill and got a beautiful view of the ocean. Thought I might've seen a whale, though I'm not sure if it was really a whale. Um, so yeah, it was lots of fun.
Ben Whitelaw: For the purposes of this, it's a whale. Let's, let's,
Mike Masnick: let's just,
Ben Whitelaw: claim it's a whale. Um, great. That sounds really nice. That sounds fun. Did you get access to, uh, Mosey at that time? Wait, wait, are you on Mosey? Are you moseying? I think that's the verb, maybe.
Mike Masnick: Yeah. I mean, I saw the announcement from Mosey. Unfortunately, Mosey is currently limited to iOS devices and I am an Android user, so I am on the wait list for when there is an Android version of Mosey. But I actually think, I don't know if Mosey is a good business, but I actually think it's kind of an interesting idea. It feels like it may be a small market. I might be in that market of somebody who does travel somewhat frequently, where it's basically like, you know, you sort of get your address book and you can put in travel plans and you can recognize when people you know will be in town. Like, this is always a thing. If I'm traveling to New York or DC, which are probably my two most often visited places, every time I'm planning out the trip, I'm always like, well, I have whatever I'm there to do, and then it's like, how much other time do I have, and who else do I know who's in town who I might want to meet up with? And so that seems like kind of a cool use case, but it feels like a pretty small market of people. But then I kind of like the idea of using technology in a way that connects people more in real life. Like, we always complain about the sort of online world versus the offline world; I think there's a lot of overlap between the two. And we've seen other things like this, obviously. Meetup was built, in some sense, on the same concept. But I think Mosey is kind of an interesting tool. I'd love to test it out if they ever get around to releasing an Android version.
Ben Whitelaw: Hopefully somebody is listening from the Mosey team. I mean, it's also the kind of way that we ended up meeting, in a way. You know, we met online and then we met in person at a conference, and we could have met probably a bit earlier at another event had we had a way of having people connect us. So, you know, I like this idea too. It'll be interesting to see how it pans out, and, uh, I hope you get access to it before long. Um, before we start today's episode, before we dive into our stories, we have some good news in terms of how the podcast has gone this year, which we should share with our listeners.
Mike Masnick: Yeah.
Ben Whitelaw: This is, this is the first year we've kind of been running it, and, you know, it's nice to see some feedback from the listeners, right?
Mike Masnick: Yeah, yeah. And just this morning we received, uh, an email from Buzzsprout. Buzzsprout is who hosts our podcast. It is, uh, one of the largest, I think we were trying to look that up. It's in the top three, it's unclear if it's number two or number three, in terms of podcast hosting companies. But they sent a sort of "your year in Buzzsprout," uh, sort of the Spotify Wrapped that everybody is now trying to adopt for their own services. And they noted that we are in the top 5 percent of Buzzsprout-hosted podcasts in terms of downloads. So people are downloading, people are listening. And so thank you very much to all of our listeners. Uh, we really appreciate you listening to us every week and giving us feedback, and obviously rating, reviewing, you know, subscribing, all the stuff that everybody tells you to do. But the fact that we made it into the top 5 percent of downloaded podcasts, at least on Buzzsprout, which I think is a pretty good representative sample, is really exciting and encouraging for us.
Ben Whitelaw: Yeah, no, exactly. I think this is our 41st episode. Um, we had a bit of an intro episode, we've had a couple of minisodes as well throughout the year, but we've pretty much had a good consistent beat of weekly recordings, trying to bring listeners the must-listen stories from the week, the week's content moderation news. So it's really nice to get this kind of feedback, to get people's ratings and reviews as well. So if you haven't rated or reviewed us on your podcast platform of choice, this is another reminder from us to do that. It really does help us get discovered by new listeners. And, uh, obviously share it with colleagues and friends and family as well, where you think it's relevant to them. Cool. So that covers off a lot of the kind of admin this week, Mike. Do you want to jump into today's stories and see how we fare?
Mike Masnick: Yeah, yeah. It's, uh, you know, it may be a bit of a depressing start.
Ben Whitelaw: Yeah, no, it's a tough one. We should say to our listeners that the first story for today has mentions of suicide and self-harm. So I just want to flag that up front. But it's a really strong piece of reporting about a topic that we've covered in various forms over the course of the year, so we wanted to bring it to listeners and to talk about it today.
Mike Masnick: Yeah, so the story is from the Washington Post, and it's one of these big feature stories. It's a hard read, but I think it's a really interesting one. It sort of keys off of one story of someone who took his own life, but it involves, you know, uh, the online world. It's tough to even describe, but there was a group called 764, which was basically a bunch of angry teenagers, it appears for the most part, who were just sort of egging each other on to do more and more horrible things. And there's been some reporting on the group in the past. But in this particular story, that group sort of went around trying to encourage people to engage in self-harm, up to and including taking their own lives. And so it focuses on, the reporters were able to track down a young woman who, a few years ago as a teenager, had, depending on how you look at it, successfully encouraged at least two people to take their own life. She now regrets that, and recognizes that this is something she has to live with for the rest of her life. She admits that she was just, uh, you know, an angry young teen who had her own issues, um, that were not well dealt with, and just became sort of focused on being horrible to other people. And in part, you know, she got involved in this group and she was, it appears, groomed by one of the other members of the group, including to do horrible things to herself. And she was pressured to send photos of herself to others in the group, and then to encourage other people to do horrible things. There are a lot of horrible things in this story. Um, and at one point, basically, you know, as part of the inner circle of the group, she was sort of pushed to encourage more people to do bad things.
And that included the main story in this article, about this, uh, young man who ended up taking his own life, streamed online on Discord with some people cheering him on. It is horrifying in all sorts of ways. Um, there is, you know, sort of an online speech element to all of this, in that much of this happened via Discord, and there were different groups on Discord. There is some framing in the article suggesting that, you know, Discord should have stopped this, which, you know, it's a little unclear how they could have. It is clear from the article that Discord was taking down some of these groups, and they were just, you know, able to sort of reform and create new groups. But also there are stories in there of when the main person in the story, he set up his own group specifically because he said he wanted to take his own life, and it does talk about how a number of users urged him not to and were supportive, saying, like, you'll get through this, and things like that. Obviously, that was not true of everyone, and some people pushed him in the other direction. And then when he did take his own life, people did cheer it on, which is horrifying. And there's all sorts of things. You know, I think the larger story here, as is often the case, is a story about mental health, and how we handle mental health care, and what we do for people who are dealing with mental health issues, whether it's depression, anxiety, or whatever it might be. In this case, the young man who took his life, it does say he was diagnosed with mental health problems. He had spent time in a hospital. He was given medication; it was unclear if he was taking the medication. But it does feel like this is another story of how badly society is set up to deal with people who are having mental health episodes and dealing with mental health challenges.
And sometimes that comes out in very tragic ways, and this is a really tragic story. But I thought it was also a good, not a good, I mean, there's nothing good about this story, but another example of the complexity of human life. And it's a similar story to ones that we've talked about before around sextortion and pig butchering. All these stories involve online communication where there are some people involved who are just horrible and encouraging people to do bad things for a variety of reasons, whether it is to make money, which was the case in a lot of the sextortion and pig butchering stories, to here, where it's just, you know, kids being mean. Which is not something that's new, right? I mean, I think it is something that kids sort of go through as a phase to some extent. This is obviously much more extreme than most, but there is a point where, for a lot of kids, the world seems cruel, you know, the teenager stage that you go through, and some people lean into that and sort of explore cruelty. And this is definitely a situation where it's gone too far. But there is this question of, how do you deal with that at a societal level? And I still think there's a problem with just focusing on, well, Discord should have magically blocked this, when at the same time, I think there are a lot of examples of Discord groups and other groups that are really helping people, and people who are in distress and need human contact and communication are using it as a way to talk to people. And, you know, it would be great if we had all the resources in the world to help people who are having challenges, and it was easy for them to find them and go to them and get the support that they needed. But a lot of people are using support networks of other kinds, and online communication is a big part of that.
And so you can't just say, like, well, you know, Discord shouldn't allow teenagers to ever use the thing, because that would actually, I think, do a lot more harm. So this is one of these stories where it's like, gosh, it's a really horrifying story. It's kind of amazing that they got that young woman to speak about her regrets and the situation. Um, but, uh, you know, it's one of these things where I think there's, again, just a societal issue that we keep trying to sweep under the rug. Rather than deal with societal issues around mental health, we just point the finger and say, well, this internet company should have magically stopped this.
Ben Whitelaw: Yeah. I mean, there's two things I really appreciate about this report, Mike. One of them is the fact that the Washington Post have gone to the lengths of finding this now 18-year-old woman who was responsible for kind of asking this young man to commit suicide and pressuring him to do so. She's actually based somewhere in Eastern Europe. They don't use her name, she was a minor at the time, so they're very sensitive as to reporting her comments. But it's very rare you get to see that side of the story, right? You know, the sextortion stories we talk about, the Character AI story we talked about last week, obviously slightly different because it's a kind of LLM, but the kind of cause of the harm very rarely gets the right to reply, in many senses. You know, some of the gangs in Nigeria and Ghana and Ivory Coast, they're not necessarily sought for comment a lot of the time, or asked to understand what the causes are. And it's very clear, as you say, that this young woman was going through her own struggles at the same time. Not an excuse, but that comes out clearly from the piece. And then there's the fact that, yeah, this young man had a really tough time. You know, he dropped out of university, split from his partner, stopped taking his meds, got stuck in Kyrgyzstan on the way to India because of COVID travel restrictions. Like, everything that could have gone wrong basically went wrong. And the sense I think you get from this is that, you know, it would push a lot of people to have dark thoughts at the best of times, let alone if you were struggling with your own mental health issues. Do you think stories like this mark a change in the way that platform responsibility is reported on? And to what extent would you like to see more stories reported in this way, with this kind of nuance?
Mike Masnick: Yeah, I appreciate the depth and the nuance of the story, but, you know, that's difficult to do, especially with the budgets that newsrooms have these days. Um, I think this is a really important story. I mean, when we were talking about what stories we were going to cover today, this was the first one I went to, because I was like, we haven't seen stories like this, especially getting that point of view from the young woman and her side of it. Because you're right, like, that is never covered. And in fact, it's almost characterized, like, people talk about it as, oh, well, often it is scammers, and so it's like, well, clearly the motivation was money. But in this case, it's not. And we all know stories of, like, you know, and sometimes they're our own stories, right? Of kids who are cruel. Kids can be really cruel. And often it's because there's other stuff going on in their lives that we don't know about. So I really appreciate the story just because of that full picture that this one gives and that we so rarely see. I don't know if it represents a change in any of the discussions on this stuff, because most of the stories aren't like this. And it's so easy and so natural, this is not a blame thing, it's so natural to look at any story around this and say, this is the good person, this is the bad person. Um, I do appreciate that this one is a lot more complex and a lot more nuanced, in saying, like, we're all humans and we all go through our struggles, you know, and being able to lay that out. I really appreciate it. Though, even this story, I feel like maybe focuses a little bit too much on the internet component of it. Yes, these people were connected over the internet, and yes, that has an impact, but,
again, I feel like, to some extent, while it talks about the mental health issues, it underplays them, and overplays the, like, well, you know, why couldn't Discord have stopped this?
Ben Whitelaw: Yeah. I mean, there are critics of platforms who would say that the messages that this young man sent should have been flagged in some form. You know, he set up a server, he said, welcome to my suicide chat, I'm trying to host my own virtual funeral. Those are things that I guess the uninitiated might say, well, they should be flagged in some sort of system and addressed, and his account, I don't know, paused or halted or terminated, or whatever it might be. Is that kind of a fair critique, do you think?
Mike Masnick: Sure. Yeah. I mean, but again, you have to recognize the realities and the impossibility of trust and safety and content moderation at scale, which is: how many other things were flagged? How many of those were falsely flagged? How many of those needed human review? You know, it's easy to pull out any of these things in isolation, and I look at that and I say, yes, this should have been flagged, somebody should have looked at it, there should have been concern. But there's a question of, even if it was flagged, what do you do about it? Do you just shut it down? Someone is making a desperate cry for help and you're just shutting that down? That's not good. You know, every one of these is always super, super complicated, much more complicated than people say. So people look at that and say, yes, they should have flagged that. But then what should they have done? Should they have sent resources to where he was stranded? Like, you know, how do you find a social worker in, where was he, Kyrgyzstan? I forget now. You know, to help this individual. All of these things are really, really complicated, challenging problems. And it's easy to say, well, they should have caught that, but then the question is, then what? What do you do about it? And again, there are the counter-examples, where someone publicly expresses ideas around self-harm and that leads to people coming to help them, which is what you really would have hoped would have happened in this case. And it didn't, unfortunately. But I, you know, I don't know how you balance those things.
Ben Whitelaw: Yeah, that's true. These stories are kind of helpful in that they go deep and they make you understand the individual cases, but it's almost hard to comprehend that this story is probably happening dozens and dozens of times at this very point, and a platform is supposed to be able to address all of those things at once. Like, that's the reality that trust and safety teams face. But it's very much worth a read, even just to understand the kind of nuances. Um, and yeah, thanks for flagging it, Mike. I hadn't read this before you sent it to me, so thank you. On now to a story that we touched on last week, Mike, which is, uh, what was going to be the reason that you were celebrating your birthday, but ended up being, controversially, cancelled. So this is the Romanian election that Mike touched on last week. I'll just unpack a little bit of what you talked us through last week, Mike, and then we'll go into the bunch of new stuff that has happened this week, which makes this really interesting. So last week, Călin Georgescu, a kind of far-right populist, unlikely to poll very well, in the first round of the Romanian election actually outdid himself and got 23 percent of the vote, got into the final two for Sunday's second round, and surprised everyone against all the odds. But some declassified documents were released that showed that the election might not have been as it seemed. The intelligence agency in Romania found potentially pro-Russian interference in a number of different forms: reports of cyber attacks, reports of potential sabotage of people's accounts, a lot of coordinated networks of accounts that had sprung out of nowhere, and also around a hundred influencers who'd been paid to promote Georgescu's policies on TikTok and other platforms. 
And it's important to note that part isn't illegal under Romanian law, but it obviously isn't something that you necessarily want to happen in an election. Anyway, we finished the podcast, Saturday rolls around, and all of a sudden the Romanian Constitutional Court, the highest court in Romania, essentially annuls the election, cancels it, and orders that the election be repeated within 90 days. So that is the place we're at at the moment. The result has not stood, and we're going to get another election within 90 days. So you will get your chance to watch the election, Mike, though it won't be on your birthday. Um, so this has raised a whole bunch of really interesting questions. We have two groups of people, essentially: one claiming that this is the Romanian court upholding the constitutional right to a free and fair election, and another claiming that this is an anti-democratic approach, that the Romanian court is essentially kind of deciding what the election result is. And so there's already been a really interesting debate around this, and I want to get your thoughts on just that part, just the fact that this almost seismic event has happened, almost unparalleled.
Mike Masnick: I mean, the idea of a constitutional court annulling an election seems, just as a gut reaction, horrifying, right? Just, you know, sort of constitutional crisis, rule of law, all that kind of stuff where you're like, ooh, I don't know about that. But, you know, I'm not an expert in Romanian law, obviously, and especially not Romanian election law, so I don't know. But there was a really good Atlantic Council piece that you pointed out where people who are actually into this stuff said it is unprecedented. It feels like the thing that happens right before, like, a coup, or something dangerous happens. So that seems problematic. But if it is true that there was real fraud, or problematic things happening within the election, then maybe it makes sense. But, you know, the thing that struck me, and I mentioned this last week when we were talking about it, is that so much of the focus is on the internet stuff, the buying of influencers or questionable foreign content being promoted. It still doesn't explain the votes, right? I mean, there's this part of me that's like, I haven't seen any reporting on this that suggests any of the voting was fraudulent, just the sort of social media promotional stuff. And to me, that seems like, you know, you're sort of buying into the narrative that internet speech here is way more powerful than I think people have found in most other cases. And so I'm still a little unclear on that aspect of it.
Ben Whitelaw: Yeah. So let's get into that, because that's really where some of the developments this week have shaken things up, right? It's worth noting that TikTok came out and published a kind of long blog post on its site that said, we did everything we can.
Mike Masnick: It's the longest thing I've ever seen TikTok do.
Ben Whitelaw: Yeah, it's really comprehensive, isn't it? You know, normally there's kind of like 400, 600 words. I don't know how long this is, it's in the thousands, and it goes through exactly what it did in relation to the election. So it hired 120 experts to advise on different aspects of its response to the election. It's working with 20 fact-checking bodies. It mentions the number I gave last week, Mike, that it has 95 Romanian-speaking moderators, which is the highest in real terms and in relation to the number of TikTok users in the country. So it kind of has a pretty comprehensive argument that we did everything we can, and it reposts, in English, a blog post that it put out in Romanian to try and stand that up. So that's the first thing to say: it suggests that it did everything it could. Also, one of the things that we talked about last week was the fact that Georgescu maybe wasn't put on the list of what TikTok calls the GPPPA list, its list of government, politician and political party accounts. So there's a list that those people get added to, and there was some reporting that suggested he wasn't on it, which is why he got a larger reach on the platform. The blog post seems to suggest he was. So we're at a point now, you know, where it's maybe TikTok's word against the Romanian court's, essentially. The interesting thing that comes out afterwards, that you flagged, is a Bloomberg piece, based on a report by a couple of civil society organizations, that Meta is kind of implicated in this as well. There was seemingly a coordinated attempt across 24 pages on Facebook, which published 3,600 ads, which is an insane amount of ads.
Mike Masnick: Yeah.
Ben Whitelaw:and all these pages were owned by seemingly the same. Individual. We don't know who, but the email addresses and the kind of underlying infrastructure suggests that they were all owned by the same, person, same company, and those ads were promoting Georgescu and his policies and stance on various issues. And so. Again, political advertising, is not illegal, via social media in Romania, but again, you know, there's, questions here about the platform's responsibility in, in ensuring that the right processes were followed.
Mike Masnick: Yeah, and the other interesting thing that I saw, and this hasn't been talked about a lot, but it was mentioned in one of the articles, and I have all these tabs up and I can't remember which one, um, was that it is also believed that a lot of stuff was happening on Telegram as well. And, you know, this comes up all the time. There's a lot less visibility into exactly what's happening on Telegram and how they're handling it. And, I think we've discussed this before, Telegram is often used for coordinating behavior on other platforms. So it feels like perhaps some of the TikTok and the Meta efforts were coordinated on Telegram, in a less public manner. So, you know, it feels like this is a case where there is clearly evidence of some, probably foreign, influence campaigns. But, you know, still not as much detail for me in terms of, how does that translate to the votes? Right? He still got the votes. And where is that line between legitimate advertising and non-legitimate advertising? And there is a line, right? I mean, foreign influence in elections is a concern, and a legitimate one, and so I understand why people are upset about it, but I wish there were more details on it. There was also another one that you flagged, another group that had set up an account that just followed the candidates and did nothing else, and they were looking at, on TikTok, how the algorithm pushed stuff. And it notes that it pushed Georgescu's content way more than Lasconi's, who was the competing candidate in the second round that didn't happen, um, and by a large margin. I'm sorry, I'm trying to find the exact number, but it was a very large margin. What does that indicate, right?
That there, there could be all sorts of other factors as to why that is, which is like, perhaps he just was the more popular candidate on TikTok, you know, among the TikTok users in Romania. Yes, some of it may have been false, but you know, so it's, it's unclear again. Like there's all of this stuff where it's like, something was going on, we don't quite know what, but there's also no direct evidence of, like, what the impact was. And again, like there is some element here that feels a little bit like, we don't like the results of what happened, and we're going to blame the internet for it
Ben Whitelaw:Mm-hmm.
Mike Masnick:it's difficult to pick apart, like, how much of this was actually the internet. And then at the same time, it's like, this is probably not the case here, but you have to compare it to a situation where, what if there was, like, you know, a candidate who uses the internet well and people like what they're saying and that's how they build a constituency? Like we still kind of want that to happen. The idea that, like, people can break through traditional media, and maybe there is a better candidate who isn't getting coverage. And so it's really difficult to separate out those situations from what happened here. and then the fact that the court just comes in and it was like, do-over. it's kind of weird.
Ben Whitelaw:yeah. I mean, Marietje Schaake, who's a kind of former European politician, she wrote in the FT about how the subsequent investigation that's gonna happen is probably gonna give us the most information we've ever had about how these influence campaigns work, and actually maybe what the circumstances were that caused situations like this, and will give us a much clearer sense of, as you say, like, was this people who use TikTok regularly being influenced by content that they saw on the platform and voting accordingly? In which case, what do you do at that point? Ban all the platforms? Hard to say. the thing that I keep coming back to, Mike, is political advertising on these platforms. and actually the irony with this is that earlier this year, the European Union passed a piece of regulation about the transparency and targeting of political ads. I don't know if you remember, I think it was kind of February or March time. And basically saying that kind of political ads should be better labeled, they should have much stricter rules around targeting, so you can't kind of go after particular groups of people on platforms. but the, the kind of irony is that this is only going to come into play in autumn 2025. So we're in this, we're in this kind of interim period where, I guess, yeah, there are some elections and that doesn't apply, but the European Union is already thinking about political advertising
Mike Masnick:but that's, I mean, even that is like a really difficult thing. Like what, what counts as political advertising? Right. I mean, obviously if it's directly about a candidate, you know, you can understand that, but it's like, what if it's just about a policy? What if it, you know, and you're like, well, you want people to be able to talk about different policy points, but like, if those are labeled as political advertising, then that disadvantages people who have a particular issue. Same with the targeting thing. Like maybe, you know, it's important to be able to target a particular issue at a particular community, and if you remove that, you're sort of making it more difficult for issues that aren't as mainstream, or marginalized groups that want to get a message out. So I understand the reasoning behind it, and I hate to be, you know, again, like, I feel like I'm, I'm the guy who's like, this is more complicated than everyone makes it out to be, but that's, that's the way my brain works on these things. All of these issues tend to be a lot more complicated.
Ben Whitelaw:no, I agree. I mean, what I would say with the political advertising is that I just wonder to what extent it's better if platforms just don't do it.
Mike Masnick:and that's, you know, like that is the stance that platforms have taken. I mean, Twitter took that stance, but I raised the same points then when they did it, where again, it's like, but what is political? How do you, how do you define political advertising? If you're talking about a policy, and you, you want to do an ad about a particular regulation or the impact of a policy, is that political? Again, you just, there's judgment calls all the way down in terms of how you do this. But I, I do understand the reason why, and I think, like, it makes sense for a lot of people to say they want to do it, but it still raises a whole bunch of judgment calls. and so, you know, it was interesting when Twitter said, we're just banning all political advertising. Elon reversed that, obviously, um, but, um, yeah, I, I understand why platforms would want to do it, I just, it turns out to be a lot more challenging than I think people realize, you know,
Ben Whitelaw:Yeah. I'm sure it is. And I think, I wonder if, actually, political parties, if we started there and they didn't have the ability to, to do paid-for, you know,
Mike Masnick:But then again, like,
Ben Whitelaw:like what do we, what do we lose in that? It's a large chunk of change for the platforms, but actually the reputational risk and the potential harm
Mike Masnick:people figure out ways around that. Right. I mean, you set up a cutout. It was not directly connected with the campaign and then it's, you know, and like, yeah, you know, then maybe you can call that out and do stuff. But again, like it all becomes really, really challenging.
Ben Whitelaw:Yeah. No, it does. It does feel like whack-a-mole, but you know, you, you got to whack, got to whack some moles.
Mike Masnick:sometimes you have to whack some moles. Yeah.
Ben Whitelaw:okay. Great. So a lot has changed in the Romania election, story this week. So we want to give you an update on that. a couple of other stories, Mike, from around the world that we've talked through. Yeah, where do you want to start? What's your,
Mike Masnick:let's cover the, the sort of, back to the US briefly, uh, the, uh, KOSA, uh, quick update on KOSA, cause there was some news on that this week, which was, there was a new, new version sort of released. Not really, it was just posted by Marsha Blackburn, who is the coauthor in the Senate. Of course, uh, released a new version, and there was a big push to try and get the House to vote on it. Again, the Senate version was approved massively, only three votes against it. And the whole issue has been, will the House vote on it? And so there was a big effort. A bunch of senators, bipartisan senators, sent a letter to the House, a bunch of the activist groups that are supporting KOSA, like, kicked off this campaign. And the big thing that they did was they released an updated version in the Senate. Again, not officially, just posted this version to Marsha Blackburn's site saying, here's our new version of KOSA. And with it, when it came out, Linda Yaccarino, who's the CEO of X, posted a thing saying, like, we support this, we worked with the committee to draft a better version that is supportive of free speech. And then Elon responded to that post and said, you know, whatever, like, protecting children is the most important thing. Keen observers of these issues may remember that during a congressional hearing earlier this year on child safety with a bunch of the tech company leaders, Yaccarino had said that yes, X supported KOSA. Nobody really believed that. It was pretty obvious that she had never heard of KOSA, uh, and was sitting there and had a bunch of politicians yelling at her about child safety. And somebody said, Kids Online Safety Act, and she was like, yeah, we support that. And, and then the line at the time was like, we hope it accelerates, whatever that means, right? Like, this was, it was so clearly someone who didn't understand what she was talking about, but it seemed clear that, that what the politicians did was they said, oh, this is an opening. 
Like, you know, she's saying that she supports it. So let's get, like, a full-throated endorsement on this. And there were probably some people in the circle who were like, well, you know, KOSA has these, like, big problems with the First Amendment. And some politicians were like, we're going to add a little bit of language to fix that. And for the less sophisticated person, uh, the language that was added to KOSA may look like it solves the problems, but it doesn't. There were sort of two major changes, it's a little bit more than that, but the two major changes to the duty of care section, that is the main section that matters, and is the one that is most concerning to online speech activists. And one of them says, like, based on the views of a reasonable and prudent person. And so the argument is, like, oh, this won't be abused, because we'll have this reasonable person standard. But that standard was already in the bill, just in a slightly different phrase. It said, like, reasonable policies already, which means a reasonable person standard, which is a fairly common standard. But it's one that you still have to litigate. You know, were these actions reasonable? You have to go through the whole litigation process, which is a pain. The other part is this stupid clause that is basically like, nothing in this bill is meant to be used for the removal of First Amendment protected speech. and so then people are like, see, look, you know, this can't be used to remove First Amendment protected speech. But it's like, you don't put that clause into a bill unless you know that, you know, the, the, the way you read that bill, it will attack First Amendment protected speech. And already no bill can be used to take down First Amendment protected speech, because that's the point of the First Amendment. So, you know, this is, this is a useless clause and is basically an admission that the thing does target First Amendment protected speech. 
But now, I sort of feel that Linda and Elon were a little bit rolled by politicians in this, that this was an opportunity to get them to endorse it. And already you had Richard Blumenthal, who is Marsha Blackburn's partner, he's the Democratic coauthor of KOSA. He went on TV and was praising Elon Musk as a free speech champion, you know, one of our prime, you know, most free speech, First Amendment supporting people out there, which we all know is complete garbage. Um, and saying, you know, even he is supporting KOSA, and therefore we know that KOSA is free speech supporting, which... Yeah,
Ben Whitelaw:producer Lee pointed this out before we started, you know, to be a free speech advocate nowadays is to promote speech legislation. And like, that's, that's the kind of weird world we're living in right now, where Musk is being held up in this as both a free speech advocate and also a keen advocate of, you know, some quite stringent kids online safety bills.
Mike Masnick:yeah. And it's, you know, it's all for show. It's so cynical. it's all just, like, political posturing. and it's really frustrating. but the underlying thing is that it appears that this particular campaign is probably not going to work. there were a few comments from Mike Johnson, who's the Speaker of the House, who is really the key linchpin here, suggesting he was not convinced by these changes, and he's saying, look, we can look at this issue, but it's going to be next year and it's got to be done in a more thoughtful way. Um, whether or not I believe him, I don't, but like it doesn't appear that this particular campaign is going to work to get KOSA voted on this year.
Ben Whitelaw:Yeah. Okay. Interesting. So we'll probably return to that in early 2025. talking of politicians and platforms working with each other, and politicians being slightly convinced of platforms' roles in the speech ecosystem, the next story, from Kenya, talks a bit about this as well, right? So this is an interesting update to a couple of stories that we've touched on previously. These are really long, long-standing stories, two lawsuits: one by a group of 184 moderators who have accused Sama, an outsourcing company, of kind of malpractice, of, uh, not looking after moderators, of causing them kind of mental health issues and PTSD, among other things; and a second case where a smaller group of moderators have brought a case against Meta for perpetuating harm in relation to some of the civil war atrocities in Ethiopia. So these are two longstanding cases. They've been running side by side. yeah, there's been a few elements of it. They've gone up to the kind of higher courts in Kenya, been allowed to go ahead. Now Meta was trying to argue that it shouldn't be sued in Kenya. And some comments this week have really added some quite interesting elements to this. So on Monday, Kenya's president, William Ruto, waded into the discussion by saying that he's preparing a bill that prevents outsourcing companies and their clients from being sued in Kenya in future. So essentially kind of preventing what's happening at the moment from happening in the future. and his comments are really interesting, so I wanted to kind of just read a couple out. Um, he said that those people were taken to court, meaning Meta and Sama, and they had real trouble. They really bothered me. Now I can report to you that we have changed the law, so nobody will take you to court again on that matter. So, essentially Sama and Meta have made representations and, according to Ruto, had planned to relocate some of their operations elsewhere. Ruto's very concerned with the youth unemployment in Kenya. 
He's been elected on a kind of strong economic platform, and he wants to bring jobs and prosperity back to the kind of large Kenyan youth population that there is. And he feels like the, I guess, burgeoning data labeling, content moderation, outsourcing market that's evolving is one way to do that. And so, yeah, here we have again a politician being very clear on the benefits of content moderation economically and from a, from a labor perspective. but actually there are really quite large outstanding questions about whether this is a good thing for the people doing the work. Um, what did you think about this, Mike?
Mike Masnick:Yeah. I thought it was, it was really interesting, right? this goes back to an issue that has been raised many times in the past, which is that when you're doing content moderation, and you want to throw a bunch of humans at it, you are exposing a bunch of humans to a lot of really awful stuff, and what do you do about that? Right. You know, if you don't do that, then people complain that that bad content is not being reviewed and not being taken down. And that's a problem. But also if you have people in these jobs, they're going to be exposed to lots of awful content. And how do you deal with that? It's another one of these, like, no easy answers situations. And then there's a question of, like, who's liable for what. You know, if people are being exposed to horrible things, you hope that they're getting the support and help that they need. You want counseling efforts and, and other things to be in place, maybe rotating the types of content that people are dealing with, all these kinds of things. but then who is to blame for it? Is it the outsourcing company? This is a whole other issue, where it's like the difference between the, the platforms themselves and the outsourcing company. Who has the liability here? You know, this law in question that he was talking about, it appears that it basically is really focused on, like, not blaming the platforms for actions by the outsourcing company, which, you know, is, you can sort of understand. but there's always the question of, like, how involved is the original platform versus the outsourcing company? Some of that may depend, you know, there may be some cases where it is a hands-off approach and there may be some cases where it's not, and it's sort of like a fictional divide, you know, to try and, try and wipe their hands of the problems of it. And it's, it's a little unclear which, which is true in which case. 
but it is also, the issue here, as the president is making clear, is like the economic value in certain countries of building up, whether it's call centers or trust and safety moderators or all these things. It is an economic opportunity for some countries as well. And so you have again, like in all these stories we talk about, there's all these competing factors at play. And how do you balance all those things in a way that reduces harm? and that's, that's a really tricky, tricky one. And I don't know that there are good answers for it. Um, it does feel like everyone is again posturing, right? You know, everyone is sort of, like, making the most extreme case on their side. And I think the idea that, like, Meta and Sama were talking about, you know, moving out of Kenya probably spooked them. Um, and, and that led to, to this push for the law, which, you know, even his description of the law, I think, overstates what the law actually is. the Time article that we were looking at sort of makes that clear, sort of has his quotes. And then it's like, the law doesn't really do what he says it will do. Um, which also makes me wonder, like, what were his instructions about what this law should be versus what it actually came out to be? But I do think this, and the, the related situation in Ethiopia, which you mentioned briefly, I think these things are going to become really important. You know, it is the sort of hidden truth to platform moderation that a lot of people don't want to talk about, which is that the real brunt of the work falls on often underpaid, um, not well protected people in various countries that people don't think about.
Ben Whitelaw:Yeah.
Mike Masnick:that's a concern. And this is a chance to sort of shine some light on that, which I actually do think is very valuable.
Ben Whitelaw:Yeah. And I think there's a huge variation in the standard and quality of the care provided by,
Mike Masnick:Yeah.
Ben Whitelaw:And, and outsourcing companies. I think, you know, for a long time I was very critical, and I still, I still keep a watchful eye, but there are some outsourcing companies that I think have got on board with the idea of doing wellbeing, uh, in a really serious way. um, we've featured a few people on the podcast who I think take a much stronger approach. Sama, you know, Sama didn't seem like it was very hot on this at the beginning, and, and that is in part, you know, what caused some of these issues. and there's a separate Reuters story, which we'll also include in the show notes, which again hints at the kind of way that it thought about its team. So this Reuters story talks about how moderators were targeted by the Oromo Liberation Army. and these moderators were working on the Ethiopian queue, so they were moderating content related to Ethiopia. and they came to Sama and said, we're being targeted, you know, receiving threats. Sama turned around and said, you are making this up, these messages don't exist, and you're not to be listened to. and it took a kind of, uh, a bunch of time to get them to agree to investigate it. So within that story, it's a kind of side point to the main Ruto story that we're talking about, but it gives a sense of, like, which companies care about their moderators and perhaps which don't. And I think that's really where some of the reporting really brings this out.
Mike Masnick:Yeah, I mean, you know, definitely the issue of, like, threats to moderators is a big issue. And this is really interesting because these moderators claim they were receiving threats. And, and there's some examples in the article of, like, basically this group saying, like, you're targeting us and pulling down our content and we're going to target you. And they went to their management, and the management was like, nah, you're making that up, and we're not going to do anything about it. We're not even going to investigate these threats. and is it possible that the threats were made up? I guess it is. Um, and, it's pretty eye-opening. This article is, is pretty crazy.
Ben Whitelaw:Yeah. So these two longstanding cases in Kenya will, will likely, we'll hear more about them next year. They're likely to come to court finally. And, you know, I'm hoping to bring more people who really understand the Kenyan context onto the podcast, because I think it's a fascinating topic. so yeah, we'll go on to our last story of the day, Mike, which is a story that affects you and Techdirt, so you can talk, you can talk about it very authoritatively. you've, uh, been hit, in the latest of a long line of, of Techdirt being targeted,
Mike Masnick:yeah,
Ben Whitelaw:reclassified as a phishing site.
Mike Masnick:Yeah, yeah, I've been censored.
Ben Whitelaw:My God,
Mike Masnick:So this, it was funny, because this, this actually kind of started oddly. I had received an email from Palo Alto Networks, which is a, you know, security company, saying, we have processed... Well, first I think I received one saying, like, we've received your request. And I was like, I didn't request anything. And I'm like, okay, it's, you know, spam, scams, whatever. And I ignored it. And then like a day or two later, I received something saying, we've processed your request to reclassify Techdirt to phishing. And so like the, the email itself was really weird, where,
Ben Whitelaw:we should say that it's phishing with a P-H rather than an F, right? You're not, you're not, you're not veering from tech into,
Mike Masnick:yeah, yes. Yeah. Yeah. It's not, we're not, I'm not casting lines and seeking sea bass or whatever. Um, yeah, it was, like, basically, like, that we were scamming people. And so, the email said, you know, your previous category was computers and internet info, and you suggested it be reclassified as phishing, and the new category that we're putting you into is computers and internet info, which was the old category. So the email was weird, because they claimed that we were reclassified, even though it was really rejecting the fake request to reclassify us as phishing. So I was like, okay, this is weird, probably someone messing with us, but what a weird thing to try and do, and, like, pretend that they were me. And so I sort of, you know, I think I commented on it to my colleagues and then sort of, you know, forgot about it. And then a few days later, all of a sudden I got a message saying, I went to Techdirt and I got this big pop-up from Cloudflare, which is the, uh, CDN and security service that Techdirt uses, saying, warning, suspected phishing, this website has been reported for potential phishing. and it basically, it has an option to ignore and proceed, but it's a big warning that most people are probably not going to click through, because, you know, nobody wants that. And I was like, what the, what happened here?
Ben Whitelaw:This shouldn't be happening.
Mike Masnick:Yeah, I logged into our Cloudflare account and there was a notice there saying, you've been declared a phishing site, you can request that we review it. and I clicked that button and said, hey, like,
Ben Whitelaw:What are you doing?
Mike Masnick:going on here? Um, I will say, I also, I know some people who work at Cloudflare and I reached out to them. and they were very open about it and they escalated the issue. They got it fixed very quickly. I will say that the, the official request I made also got a response very soon after, so they would have taken it seriously as well. And they were very open about what happened, which was basically, they were batch processing a bunch of sites that were reported for phishing. Most of them actually are phishing, and they just missed that ours was included in the middle. And they said, you know, if we had looked at it, we would have realized that it was bogus. It was just somebody wrote something saying, this site is sending malware to people. And I said, that is obviously garbage, and they fixed it. But to me, it was just this example of yet another case of, trust and safety is impossible. Like, you know, mistakes are going to be made, especially when you have a whole bunch of spam and scams where most of them are, I want to say legit, but not, you know, like, they are legitimately spams and scams, and you're just going through and processing all this stuff and saying, delete this, delete this, block this, block this, block this, block this. When a legitimate one gets caught in there, when there's this false report, it's very easy to miss that. so I get why that happens, even if it was a little annoying that, like, we were caught up in it. But I really appreciated that Cloudflare responded very quickly and was very open about it. And in fact, Matthew Prince, who's Cloudflare's CEO, suggested, like, this might be a topic you want to write about, because it's kind of interesting to see these kinds of attacks. and the way that people will try and harm sites that they don't like, the fact that they'll report us as a phishing site and that sometimes it'll work, even if only briefly, is really kind of fascinating to me.
Ben Whitelaw:Yeah. Agreed. And like, yeah, props to Cloudflare for allowing you to talk about it in that way. People might remember that I had a similar thing happen to EiM, and I had to go via a number of different providers of data services and ended up in a Facebook Messenger conversation with somebody in Thailand. And, you know, I still don't know what happened there. So the fact that you were able to contact Cloudflare and resolve that in this way is a good thing. We could talk much more, Mike, about the number of times you've been targeted and Techdirt has been labeled and categorized as all kinds of things on the internet. but, I think that kind of sums up things nicely, and actually is a nice note to end on, I think, after today's kind of difficult start. but, thanks as ever, Mike, for joining today. Thanks for bringing your stories to the listeners. I hope we've sufficiently rounded up this week's content moderation news and analysis. If you enjoyed today's episode, don't forget to rate and review us. And, uh, we'll look forward to speaking to you next week in our last episode of 2024. Take care. See you soon.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.