Ctrl-Alt-Speech

The Most Moderated Word on Meta

March 29, 2024 · Season 1 Episode 3
Mike Masnick & Ben Whitelaw

In this week's round-up of the latest news in online speech, content moderation, and internet regulation, Mike and Ben cover: 

  • Users shocked to find Instagram limits political content by default (ArsTechnica)
  • Oversight Board Publishes Policy Advisory Opinion on Referring to Designated Dangerous Individuals as “Shaheed” (Oversight Board)
  • It’s not a glitch: how Meta systematically censors Palestinian voices (Access Now)
  • High Court orders temporary suspension of Telegram's services in Spain (Reuters)
  • Imagining a Roadmap to User-Led Governance of Meta (Tech Policy Press)
  • Why Bluesky Remains The Most Interesting Experiment In Social Media, By Far (Techdirt)
  • Meta’s most banned word, ad targeting vs moderation and new civility research (Everything in Moderation)

The episode is brought to you with financial support from the Future of Online Trust & Safety Fund. We weren't able to schedule our usual Bonus Chat at the end of the episode, so Mike and Ben talk about their thinking on podcast sponsorship, why advertising content doesn't have to be all bad, and how to get in touch if you're a company or organization looking to reach Ctrl-Alt-Speech's growing and global audience.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Transcript

Ben Whitelaw:

In the words of Bluesky's very chilled, very laid-back skeet box: what's up, Mike?

Mike Masnick:

Well, Ben, uh, today I'm, I'm a little sick, but I am so dedicated to getting Ctrl-Alt-Speech out into the world that I am still here, still recording and still talking about online speech. What's up with you, Ben?

Ben Whitelaw:

Appreciate your resilience against the virus. Um, what's up with me this week is Meta. There's a whole lot of Meta news this week. I kind of want to apologize to our listeners, but it's inevitable in an election year with a platform that has 3 billion users. So buckle up, listeners. It's a Meta episode, I'm afraid.

Mike Masnick:

Yeah, I was going to say it's, it's kind of impossible to avoid Meta too consistently. They're going to pop up pretty frequently in this podcast, no matter what we do.

Ben Whitelaw:

Yeah. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, a weekly newsletter for trust and safety professionals, hopefully opened and read by some of you guys as well. And I'm joined by Ctrl-Alt-Speech's loyal and hopefully soon-healthy co-host, Mike Masnick from Techdirt. Sorry to hear you're not very well, Mike.

Mike Masnick:

I'm doing all right. I, you know, I've, I've been worse, but I've also been better, so

Ben Whitelaw:

Yeah. Touch of man flu? Yeah,

Mike Masnick:

Uh, coughing up occasionally, too much, but, uh, hopefully we will avoid any coughing on the podcast. Thanks, everyone, for joining us here on Ctrl-Alt-Speech, our third episode. How exciting is that? And I'll also give a quick shout-out: we have been posting a couple of snippets from this podcast on the Techdirt podcast feed, and we are seeing that as leading to more people subscribing to Ctrl-Alt-Speech. So welcome to the listeners of the Techdirt podcast. I think you will find the two podcasts go well together. They're different, but they fit well together.

Ben Whitelaw:

like a kind of fine wine and some cheese, I think.

Mike Masnick:

There we go. There we go. Which one? I'm not gonna choose which one's the fine wine and which one's the cheese. So, as we're going to discuss later, every week we have a bonus chat. Uh, as you've heard in the first two episodes, we had our sponsors, including our launch sponsor, Modulate, and then in the second week we had Block Party as our sponsor. This week we were unable to record a sponsor conversation in time. Uh, so we are going to still have an interesting bonus chat, which is just going to be Ben and I talking a little bit about the philosophy of what we're doing here and the philosophy of the sponsorship setup. And if you would like to sponsor Ctrl-Alt-Speech: one, you should listen to that bonus chat, 'cause I think it'll be really interesting for you. But, uh, send us an email at sponsorship@ctrlaltspeech.com. That's C-T-R-L alt speech dot com. Or you can go to the Ctrl-Alt-Speech website and click the link there to contact us.

Ben Whitelaw:

Indeed. Before then, we've got a number of really interesting stories this week. We're taking our usual approach of looking in depth at a couple of the biggest ones, and then rounding up our most interesting best of the rest towards the end of today's episode. So, you've got a whole lot in store today. And, uh, we'll get started with probably a story that we're going to come back to a lot over the next six or nine months. We are in the midst of an election year, Mike, and, uh, you picked out an ongoing story about platforms' approach to political speech.

Mike Masnick:

I mean, in particular this is a Meta story, and so Instagram and Threads. They've been signaling for months now that their intent was to not be platforms for political speech. They talked about it last summer, Adam Mosseri talked about it on Threads in particular, and then in the fall, and then in February they put out a blog post more or less saying, like, hey, we're serious about this. And yet what's happened now is that any sort of political speech is really downranked on both Instagram and Threads. And there is a setting with which you can unwind that, but I think what happened was a lot of users didn't realize that they were defaulted into not seeing any sort of political speech. And that's had some wider impact, in that a lot of people were really surprised and, in some cases, kind of annoyed by it, because a lot of people, believe it or not, Ben, actually do use social media to talk about politics and potentially elections.

Ben Whitelaw:

Yeah, yeah, that makes sense. And this is, you know, there's been a lot of consternation about this, right? This has been really widely reported this week. Basically, Instagram didn't notify users, right? So it feels a little bit duplicitous, um, certainly from the users' point of view, right?

Mike Masnick:

It is a really sort of weird decision, right? Because, you know, there is this option and people can change it, but, like, why not inform the users? The fact that they didn't seems really weird, and users are sort of discovering it by accident. You know, some people obviously looking at their settings, but, more to the point, I've been seeing a whole bunch of people talking about how if they post any kind of link now, to Threads in particular, it gets no interaction. Basically, it appears that the Threads algorithm is really downweighting links, uh, as part of the, like, no-political-speech thing. And in fact, I saw somebody say that the proper way to post, if you want to link to a news article or something on Threads, is to post a screenshot and then, in a reply, post the link, because the replies are, you know, not what gets promoted in the algorithm. But if you're just posting a screenshot, then Threads thinks, oh, they're just posting a picture, therefore we can promote it; maybe this is not political content. Which to some extent gets at the nature of how silly all of this is, and how unsophisticated the various algorithms are, that they're basically interpreting links as political content and images as not political content. And then, on top of that, how users pretty quickly figure out the ways around whatever system you've set in place, because you have these unsophisticated algorithms. And so the response is like, well, we're going to figure out how to game that system.
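
To make that crude heuristic concrete, here is a minimal sketch, entirely hypothetical and not Meta's actual ranking code: a toy feed scorer that treats "contains a link" as a proxy for political content, and shows why the screenshot-plus-link-in-reply workaround beats it. The `Post` class, `feed_score` function and the weights are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    has_link: bool = False
    has_image: bool = False
    replies: list = field(default_factory=list)  # replies are never scored here

def feed_score(post: Post, base: float = 1.0) -> float:
    """Score a top-level post for feed promotion (toy heuristic)."""
    score = base
    if post.has_link:
        score *= 0.1   # link -> presumed political -> heavily downweighted
    if post.has_image:
        score *= 1.2   # image -> presumed benign -> promoted
    return score

# The workaround users discovered: screenshot in the post, link in a reply.
direct = Post("Read this article", has_link=True)
gamed = Post("Read this article", has_image=True,
             replies=[Post("link in reply", has_link=True)])

print(feed_score(direct))  # 0.1 -> buried
print(feed_score(gamed))   # 1.2 -> promoted; the reply never gets scored
```

The point of the sketch is that the scorer only ever sees the top-level post, so moving the link one level down defeats the whole classification.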

Ben Whitelaw:

And there's a broader piece here, right, which is Meta and the platforms that it owns kind of winding back from not just political content, but also news more generally, right? You know, in my day job, I work with publishers. I'm a former journalist based in newsrooms, and I now work with publishers as a consultant, and publishers are saying, like, we've seen massive, massive dips in Facebook traffic over the last six to 12 months, leaving huge gaps in our kind of revenue models, and we can't reach audiences in the way that we used to. So this political content piece is part of a wider shift by Meta away from more kind of current affairs and news, which has caused them issues in the past, PR disasters; it's, you know, put pressure on them as a company. But, you know, doing it in advance of an election year feels like a bit of a move in itself, right?

Mike Masnick:

Yeah, well, I think it sort of gets at a few different challenges that are all happening at the same time. And so one is certainly, as you know, the media business has been struggling of late. And some of that was that the media business has been struggling for a long time, but Facebook had sort of promised to be the solution to the problem of the news industry, and a very desperate news industry had really sort of embraced Facebook. And it turned out that, you know, it was all sort of based on a lie. Um, and so they're discovering that, and discovering that when so many media properties bet so heavily on Facebook and Facebook traffic, then as Meta has moved away from that, they're suffering from having put all their eggs into the Meta basket, to some extent. Now, the other aspect of this that's really interesting is that at the same time that all of this is happening, we have governments around the world that are trying to put in place these news bargaining codes or link taxes, depending on how you want to refer to them, that are based on the idea that Meta and Google in particular are somehow stealing from the news publishers by posting links and maybe a short snippet or summary of the articles, and therefore they have to pay for that. And so, you know, the most well-known one is in Australia. There were a few in Europe beforehand. And now there's one in Canada, and people have been debating about one in the US. And there's this belief out there, which I think is misguided, that Meta and Google must pay these news publishers. And I think Meta realized, quite rightly, that it doesn't really need to be a place for news. And if these countries are going to put in place these laws that make them pay for something that is not particularly valuable to Facebook (it wasn't keeping people on the platform, it wasn't making them use it regularly; in fact, it was only creating PR nightmares in all sorts of ways), then why not cut back on it? And so I do wonder how much of this is even, you know, in response to these new laws, and Meta realizing, like, if we keep letting people link to news, it's just going to cost us more and more money for very little benefit.

Ben Whitelaw:

Yeah. I mean, that is true. And in many respects, I agree that there's no downside to this in a way, right? There's no downside in Meta pulling back, for itself.

Mike Masnick:

For

Ben Whitelaw:

There might be for its users.

Mike Masnick:

Yes, I was going to say, for an informed populace there might be, and that's potentially serious.

Ben Whitelaw:

And we do care about the informed populace, let me tell you. Um, that is true. So I understand the strategic decision behind the company doing that. What do you foresee as the implications going forward? In terms of the elections this year, in terms of the kind of knock-on effects, how do you see this playing out in this year of all years?

Mike Masnick:

Yeah,

Ben Whitelaw:

You know, I mean, my only thought here was, maybe we're going to have more kind of political parties, and publishers to a degree, trying to reach audiences around the election who previously got a free audience, right? Who had a whole bunch of followers and were able to reach that audience with content, whether that is political content about the party itself or stories about the election. They're going to have to kind of find new ways of doing that. And we have seen in the past, where those audiences are kind of cut off, publishers and other organizations having to then pay. So this might actually increase Facebook's revenue from this particular segment of customers. But do you see any other kind of wider implications than that?

Mike Masnick:

Yeah. I mean, I think there could be a lot of things that begin to shift in terms of how and where people find their news, right? Already, there have been stories, especially among young people, that they're using TikTok to find news. And we could see that expand elsewhere. If you're not getting your news from various Meta properties, and that's where you spend a lot of your time, but you do want access to news, as much as you previously liked news stories being mixed in with baby photos or whatever it was that you would see in your feed, uh, I think people are going to start to search elsewhere for where they're going to get their news. Because I think the reality is that people still want news, in some form or another, and where and how they get that is going to shift. And so that is potentially an opportunity for someone. And I know that there are a lot of companies that are trying to enter the space and sort of be like news aggregator kinds of services. Um, and we'll see how well any of those go. I mean, Artifact, which was one of those companies, announced that they were shutting down recently, but then sort of walked that back this week and said it's really cheap just to keep the servers on, so they might just keep it going. And, interestingly, tying this all together, that was founded by Instagram's co-founders. Um, you know, I think it's just going to be this shift in how people consume the news, because I don't think the desire of people to know what's going on is going to go away, but the systems that many people have gotten used to over the past decade, especially, are going to change as a source for news. And I think, you know, that's a big deal, but it's not clear how that plays out.

Ben Whitelaw:

Do you see any potential impact on election results as a result of this decision, whether it's in India or in the EU elections, or in the US or in the UK? Can you foresee that?

Mike Masnick:

I think that's a lot harder to predict. Certainly some of the reason, you know, if you go back as to why we're in this position in the first place, is because a lot of people did get upset about Facebook's potential role, or claimed role, in various elections. And, you know, you can go back to the Brexit vote and then Donald Trump being elected in the US, and a number of other things. Because those were seen as surprising and unexpected results and people wanted to find a cause for it, they often jumped to the idea that it must have been Facebook. I think the data supporting that is not very strong and is actually incredibly weak. So I don't know exactly what the impact will be. Some of it, I think, will depend on how it develops over the next few months, in terms of where it is that people are finding news and whether or not those are trustworthy sources, but also whether or not different organizations figure out how to game those systems, right? This is part of the constant battle. I think part of what's going on here is Meta saying they don't want to have to keep fighting this battle of trying to deal with bad actors who are really trying to game the algorithms and promote whatever content they're promoting. And so the easiest way to do that is just suppress all of this content. And so does that lead to a less informed populace, or maybe a more informed populace? I don't know. There's an argument that if there's less nonsense being fed to people, maybe that is better; but if they're also getting less useful content, maybe that is a problem.

Ben Whitelaw:

Yeah, yeah. We will see. We will see. Um, let's move on now to another story, Mike. We will definitely come back to elections again in future episodes. I was looking this week at an interesting story that was put out by the Oversight Board, Facebook's kind of quasi-judicial advisory organization. They've issued a policy advisory, which I think is really interesting and which I want to dig into a bit. I know you've read it as well. Before we kind of dig into the actual advisory itself, for those people listening who don't really know what the Oversight Board is: it is this kind of Supreme Court-esque organization that Facebook has funded, but which is independent from the company itself, set up in 2020. And it's a kind of radical experiment to have a wide range of experts input into its content moderation policy. It was basically given 20 initial experts who were brought on board to help figure out some of the key policy decisions that Meta itself was struggling to deal with. These were legal scholars, Nobel laureates, digital rights advocates, etc. And they've played a massive role in the last three years in making some really big decisions on the platform, most notably the kind of banning and readmission of Donald Trump onto the platform. So it's a really interesting body in itself to look at. People have quite varying views on its success so far, but this week it issued this really interesting policy advisory on what is basically the most banned word on the platform, which was a fascinating kind of nugget in the advisory. And it refers to this word Shaheed, an Arabic word which is basically banned under one of Meta's policies, the Dangerous Organizations and Individuals policy. Shaheed kind of doesn't really translate to English, as the policy advisory explains, but it's most like the word martyr or martyrdom, and is often used in posts where people are praising acts of violent terrorism. So Meta decided at some point in time that this is a word that it blanket bans. And there has been this kind of process over the past three, four years, which we're going to unpack now, about whether this should be unbanned. And the recommendation that the Board has made this week is basically that this ban should cease to exist: that Meta should stop thinking that Shaheed alone violates its policy, and should only consider it a violation if it's accompanied by some sort of visual indication of terrorism, or some statement of intent around terrorism, you know, basically kind of tighter conditions. I just thought it's really fascinating. This is the most banned word on a platform with three billion users, and no one's ever heard of it.

Mike Masnick:

Yeah, I was going to say, it's sort of fascinating to learn that that is the most moderated word on the platform. Um, and, you know, it does again sort of get to the idea of how global a platform Meta is, and that with all of the debates and arguments that we have in the US and in Western Europe, we may not be seeing some of the biggest controversies; and apparently there is a very big one around this particular word and the fact that Meta has a blanket ban on it. And it struck me also as a really clear indication, again, of some of the impossibility of doing content moderation, and how many of the questions around content moderation always go back to context. How do you have enough context to determine if someone is using the word Shaheed in a way that is connected to terrorism? If it is, then you can make a strong argument that it makes sense to have that as violating a policy. But, you know, a blanket ban is such a crude tool. And you can, you know, probably understand why it happened, and I know we'll discuss in a minute sort of how long this process has taken to get to this point. But there were all sorts of stories about how terrorist organizations were making use of Meta and Meta's properties for a variety of reasons, and Meta was getting slammed in the media for it, for allowing terrorist content on their platforms. And so you can see how, sort of in desperation, you would go for overblocking. And one of the ways to do that would be an outright ban of this word, which is often connected to praising terrorist attacks or terrorist-related content. But as the Oversight Board ruling makes really, really clear, there are plenty of cases when that is not true. In fact, that word is often someone's name, either someone's first or last name; it is a relatively common name as well. And there are lots of other contexts; again, it doesn't translate perfectly to martyr. And so there are other stories of where it is used just for someone who dies, well, um, on the job, whatever it might be. And so there are all sorts of cases where it's clearly not violating a wider rule about terrorist content, but was still being blocked under Meta's rules.
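
To make the policy shift concrete, here is a minimal sketch, my illustration rather than Meta's implementation or the Board's wording, of the difference between a blanket keyword ban and the context-gated rule the Oversight Board recommends, where the word alone no longer violates and only does so alongside another signal of terrorism. The signal names in `TERROR_SIGNALS` are made up for the example.

```python
TERROR_SIGNALS = {"weapon_imagery", "statement_of_intent", "praise_of_attack"}

def blanket_ban(text: str) -> bool:
    """The old rule: the word alone violates the policy."""
    return "shaheed" in text.lower()

def context_gated(text: str, detected_signals: set) -> bool:
    """The recommended rule: word plus at least one other terrorism signal."""
    return "shaheed" in text.lower() and bool(detected_signals & TERROR_SIGNALS)

post = "Remembering my grandfather Shaheed, who died on the job."
print(blanket_ban(post))                        # True  -> removed under the old rule
print(context_gated(post, set()))               # False -> allowed under the new rule
print(context_gated(post, {"weapon_imagery"}))  # True  -> still removed
```

The second rule is harder to run at scale, since the extra signals have to come from classifiers or human review, which is presumably part of why the crude blanket ban survived so long.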

Ben Whitelaw:

Yeah, and the policy advisory makes very clear that removing the ban allows the word, you know, to be used in these other contexts, right: reporting by the media, commentary, academic discussion, human rights debates, all of these ways which are currently, astonishingly to me, actually not possible at the moment. And yeah, it speaks to, I guess, the importance of having more visibility of what is in these policies, and the role of bodies like the Oversight Board, who can interrogate the policies and make recommendations, albeit not binding ones. Facebook, Meta, now have to respond to the policy advisory and make clear to what degree it is taking its recommendations on board. And so it will be interesting to see how that works. I think it's worth, as you say, Mike, talking about the timings here, because

Mike Masnick:

Yeah.

Ben Whitelaw:

they're really fascinating. We obviously have a war taking place in the Middle East between Israel and Palestine, and so I just want to kind of talk about the timings here. The Oversight Board gets a lot of stick for being very slow. It faces criticism for not taking enough cases on, and the advisory notes that it was asked to look at this particular issue back in February 2023. Now, obviously, war broke out in October, and at that point, the advisory says, it was preparing to publish a judgment, but it decided to go back and do further research to see if what it was recommending worked in the context of what was happening. Which is, I think, commendable. Obviously, coming out now, that's over a year since Meta asked the Oversight Board to advise on it. And that's a long time. However, what I think is really interesting, which, you know, critics of the Oversight Board might find particularly interesting, is that Meta has been thinking about this issue for even longer than that. It started a process of looking at the word Shaheed in 2020,

Mike Masnick:

Yeah.

Ben Whitelaw:

three years before it asked the Oversight Board to advise, and nothing had been agreed internally since then. In that time it had an impact assessment done. I don't know if you remember, there was a big eruption of tensions in Jerusalem in 2021, the kind of Sheikh Jarrah controversy, where some people were evicted from East Jerusalem and it caused massive tensions online. There was a lot of content posted in the Israel-Palestine kind of online space, and a lot of the content was actually shadow-banned, suppressed. And this report was published in 2022 that said: this isn't right, this isn't on. And even at that point, Facebook struggled to take the recommendations of the report and do anything with it, which is why, and I'll come around to it, it then submitted a request to the Oversight Board. So for me, this is a really clear indication that the Oversight Board is doing good work that Facebook couldn't do on its own, which I think is an important point to note. And you're right that we don't see enough of the kind of context to why policy decisions are made in platforms, and this is a really helpful, clear indication why.

Mike Masnick:

Yeah. And I think it is interesting, right? There is some level of transparency here that is added by the Oversight Board in terms of some of the thinking on this thing. But it also does show, as you said, that these are complicated decisions with lots of trade-offs, and they're not really easy to make. And this is one of the things that certainly comes up in so many of these debates and discussions about content moderation and trust and safety, where it's like, when things look bad, and look obviously bad, there's a tendency and an instinct to say, well, that is obviously bad; that never should have happened. Any human being who looked at it would have known that's bad. And yet the reality is always that there are a lot more trade-offs and a lot more tension in figuring out how to make these decisions. And you can argue that, in this case, you know, all of this took too long: the initial investigation starting in 2020, the request in 2023. It certainly is interesting that the Oversight Board admits very clearly that they were ready to publish this policy advisory much earlier, but because of the Israel-Palestine situation right now, they kind of held back to see if that scenario changed what they were thinking about regarding this policy, and then eventually decided that, you know, it was still worth talking about. So it is interesting to see. These aren't easy decisions. None of these are easy decisions. And I think people immediately jump to, like, this is the obvious thing, when it's always a lot more complicated than that.

Ben Whitelaw:

Yeah. Do you think, Mike, that Meta has the right people, based upon what we know about the company, who have context about what's happening around the world, to make these kinds of policies, these blanket policies? If we're being critical, if we're being kind of harsh on Meta, I don't think it would take a huge amount to create a policy that was maybe not such a blanket ban on a word that is used very widely. Yes, it does mean martyr, but yes, it also means other things. It feels like if you had the right people in the room, the right expertise from the right countries, regions, you would probably not end up at a blanket ban in the first place. Is that

Mike Masnick:

Yeah. No, I mean, I was kind of shocked that they had a blanket ban on this word. I mean, you know, blanket bans on a particular word are like the kinds of things that small startups do when they're freaking out about their first content moderation decisions, and not the sorts of things you expect from sophisticated companies that have, like, highly detailed rules and policies and large teams of people in place. And so you could maybe understand it if it was in, like, a small country that speaks a language that very few people speak, and they don't have the necessary expertise. But just the fact that this is the most banned word on the platform, and it's not just an Arabic word; it's used in a number of different languages, sort of spread beyond that. The fact that at no point in this process did Meta get people in the room to say, hey, maybe a blanket ban of this word is a problem and we should have slightly more nuanced policies, I was kind of shocked about that simple fact. I mean, you know, Facebook, in theory, or Meta, in theory, has a very sophisticated trust and safety team. Um, you know, they've been working on this for a while. And to find out that they would have a blanket policy that bans a particular word without any consideration of wider context was really surprising to me, especially in this day and age. If you had said they had that, you know, 10, 12 years ago, it wouldn't have surprised me. But today I was definitely pretty shocked.

Ben Whitelaw:

Yeah. And again, I don't want to hammer home the timings too much, but the fact that so little was done, or was able to be done, off the back of the kind of human rights report that came out in 2022, following the eruption of violence and the online hate and the suppression of Palestinian voices in 2021, is an issue for me. You know, I think what we've seen as a result of that inability to do something about this policy, and the other recommendations that report made, is the same thing happening to Palestinian voices and Arabic content in the most recent war, the war that started in October and continues to happen. And just last month there was a report by Access Now, which we'll include in the show notes, about how Palestinian voices have been kind of routinely censored in this most recent war, and how that is a direct result of Meta's inability to do something about it after the last set of violence. Like, I'm currently really interested in the way that the company has failed to see that this might happen and to try to do something about it. But maybe it's more complex internally than I'm giving them credit for.

Mike Masnick:

Yeah. I mean, I lean towards the "this is more complex" side of it. Um, again, I recognize that from the outside you can see some of this stuff and say it looks bad, but also recognize that on the ground these are rapidly changing situations. And the actual context, you know, I mean, there are much wider debates around the world right now about what counts as praise for genocide and what doesn't. And it is, you know, not clear. And some of these things are much more challenging than I think a lot of people make them out to be. So I have some sympathy for the challenges here in terms of, like, Palestinian versus Israeli content and how you handle that moderation. I think it is a lot more complicated than most people are making it out to be. And I also think, and I'm not saying this is true of you in particular, but I feel like some of the discussion on these stories comes down to, you know, whether or not people support one side or the other in this particular war. And that sort of makes them more naturally predisposed to saying, well, this content is obviously bad and this content is fine. When the reality is that, with each of these, this is a very, very complicated situation, and the speech associated with it is also super complicated. Um, and so I think it is a bigger challenge. And so on that, I'm willing to cut Meta some slack.

Ben Whitelaw:

Yeah, and give some credit to the Oversight Board for doing something in less than a year that Meta failed to do in three.

Mike Masnick:

Yes,

Ben Whitelaw:

Yeah, interesting story. Thanks for that, Mike. We should turn now to our best of the rest stories, our roundup of other stories that we have read this week, that we've delved into, that have taken our fancy. And, uh, you've got one from Spain, Mike, that you've kind of been following for a fair old while, in terms of the broad storyline. And this is a kind of new update.

Mike Masnick:

Yeah, so this was a court in Spain that ordered that Telegram, all of Telegram, be blocked in Spain. And that was kind of shocking. When I originally saw the headline, I assumed it was because of some sort of, you know, terrorist content or something horrible on Telegram, because Telegram is sort of famous for being the place where lots of horrible content happens. But no, it is a copyright case. And so, uh, you know, it immediately gives me nostalgia for the good old days, where every discussion about online speech was really a discussion about copyright law. How much the world has changed. But this was a case where a bunch of studios complained that people were uploading infringing content and sharing it via Telegram. And so the Spanish court basically said, well, we figured this out: Telegram has to be blocked entirely in Spain. Which is crazy for a variety of reasons. Just the idea that, before we decide anything on the merits, we're going to say an entire app has to be blocked, and that ISPs have to block access to this app, seems an extreme and extremely crazy response to potential infringement. Um, and so, you know, I have a problem with that. But it does go back to the thing that I've been following for years, which was that in the sort of late 2000s, the 2008, 2009, 2010 timeframe, Spain actually had put in place some pretty reasonable copyright laws, which is kind of crazy, 'cause most copyright laws are terrible in my opinion, and very strongly weighted towards large industries. Spain put in place copyright laws that were much more heavily weighted towards users and, you know, user freedom and the ability to share stuff, basically saying, you know, except in extreme cases, users sharing content is not doing massive damage to various industries. And the entire entertainment industry freaked out. And then with it, the US government sort of kicked into gear and really put tremendous pressure on Spain to change those laws. And there's this whole process called the Special 301 Report, which is a complete joke, I think, but I'm not going to go into all the details of it, where the US basically names and shames different markets that it feels are not doing enough to protect Hollywood. That's the shortest version of it. And so they really went after Spain hard, and it became like a big diplomatic thing. And so, about a decade ago, Spain flipped and went the other direction and started passing very extreme, you know, very problematic copyright laws in response to the US, pushed by Hollywood. Uh, and so I think this is just kind of the latest result of that. Having a court order that blocks an entire app, because a small percentage of people are maybe using that app to share infringing works, just seems absolutely ridiculous, you know, disconnected from the size of the problem, if there is a problem at all.

Ben Whitelaw:

Is it likely that this will not last very long? You know, how do you kind of see this panning out? Will Telegram have to do something in order to be allowed back in the country?

Mike Masnick:

It's a little unclear. Um, and so I don't know what the current status is. I imagine that Telegram is going to fight it in some form or another, though. You know, I don't know what Telegram's legal presence in Spain is, and sort of how they're dealing with it, but it's a weird setup. And, you know, at a time when other countries are looking at banning apps, you have, you know, the US and India with TikTok and all of this stuff, it just feels like another example, though, from the copyright world, of apps being banned. And I think it's just really problematic and not a good sign for the open internet.

Ben Whitelaw:

Yeah. Okay. Um, thanks, Mike. The kind of story that I was interested in this week is an op-ed think piece published on Tech Policy Press by an AI fellow at an organization called the Institute for Law and AI. Um, Kevin Frazier basically writes this quite idyllic, quite interesting piece about how Facebook, Meta, should change its whole policy on moderation and allow people to form what he calls kind of federated groups, to allow users to moderate content in a completely different way. It really caught my eye as something a bit different to what I'm typically reading and digesting for the newsletter each week. Basically, he unpacks how Facebook, in the very early days, in the early 2010s, allowed its users, which was a much smaller user base than it is nowadays, to decide on kind of sets of policies, often the terms of service that the company was thinking about changing. And so people would comment, and if a policy got a certain number of comments, it would then be put to a vote. And if the vote reached a high enough threshold of its users at the time, then it would be passed. Um, so Kevin kind of makes the point that Facebook should go back to its old roots and create a kind of new way of doing this. It's not dissimilar to a kind of citizens' assembly, if you're familiar with that, where you have a representative body of users from different parts of the world, different demographics, different user types, who can input into the kind of policies that Facebook has on the platform at large. It's a lovely idea. I was really interested to read the piece, and Kevin makes a really nice case for why we should be spending our time and energy on reforming Facebook, because it's obviously not going away anytime soon and it's going to be a major platform for the foreseeable future. So, you know, we shouldn't just leave it there to rot. I'd love your thoughts, Mike. And obviously, if you're reading the piece afterwards and you want to drop us a note with your thoughts, listeners, then do get in touch as well. Um, we'll share the email address in the show notes. What are your thoughts, Mike?

Mike Masnick:

Yeah. I mean, it's an interesting concept. And, you know, I've followed the work of Aviv Ovadya, who has been at the Harvard Berkman Klein Center and who's been really pushing for a similar sort of thing, which he refers to as platform democracy. And it's just a way of, you know, enabling basically a group of users to make decisions. And so, you know, a sort of randomly chosen group of users, but rather than just selecting them and having them vote on different policies, actually running them through a larger process, again sort of taking from this concept of citizens' assemblies: educating them, bringing in a bunch of experts on the topic that is being talked about, discussing it with them, sort of going over the pros and cons, and then having them make a vote and a decision. And it's a very idealistic, interesting concept. There's a question, though, of how well that actually works in practice. And certainly there are examples of it working within citizens' assemblies, but citizens' assemblies are about, like, governmental decisions. And when it comes to decisions for a single tech platform, I think it becomes a lot more challenging. And there are certainly big questions about, you know, would Meta ever allow this? Because what if the, you know, citizen users of Meta decide on a policy that would really limit Meta's ability to make money, for example? Um, and so, you know, I think there's something interesting there, and it would be cool if Meta were willing to experiment with it, in the same way that they were willing to experiment and set up the Oversight Board. But it sort of feels like the Oversight Board is Meta's big experiment along these lines, and I'm not sure it's really open to opening things up to more democratic, you know, values and setups.
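
As a concrete footnote to the comment-then-vote mechanism Ben describes, here is a minimal sketch of that two-stage logic. The thresholds reportedly used in Facebook's real 2009-2012 governance experiment were 7,000 comments to trigger a vote and 30 percent turnout for the result to bind, but treat the numbers and function names here as illustrative rather than a faithful reconstruction.

```python
COMMENT_TRIGGER = 7_000    # comments needed to force a vote
TURNOUT_THRESHOLD = 0.30   # share of all users needed for the vote to bind

def governance_outcome(comments: int, votes_for: int, votes_against: int,
                       total_users: int) -> str:
    if comments < COMMENT_TRIGGER:
        return "no vote: comment trigger not met"
    turnout = (votes_for + votes_against) / total_users
    if turnout < TURNOUT_THRESHOLD:
        return "advisory only: turnout threshold not met"
    return "passed" if votes_for > votes_against else "rejected"

# With a Facebook-scale user base, the turnout bar was effectively unreachable:
print(governance_outcome(9_000, 600_000, 60_000, total_users=900_000_000))
# -> 'advisory only: turnout threshold not met'
```

The example also hints at why the original experiment quietly ended: a fixed turnout percentage that was plausible for a small user base becomes impossible once the platform grows by orders of magnitude.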

Ben Whitelaw:

Yeah. Okay. Well, that kind of experimentation leads us neatly onto the third and probably the last story we should cover in today's best of the rest, this idea of Bluesky being a really interesting experiment right now. We talked a bit about it a few weeks ago, but you wrote a piece on Techdirt about it again. Um, why do you think that? Why should our listeners be interested in Bluesky, really?

Mike Masnick:

So, I've been playing around with the way Bluesky has now implemented what they call stackable moderation, which I think is really, really interesting. And we're going from, you know, the largest platform in the world in Meta to one of the smallest right now in Bluesky. But I think there are really interesting lessons to be learned, especially as we were just talking about different models of governance and models of how to handle things, and in fact much of our conversation today has been about these challenges with content moderation. And so Bluesky has just recently, you know, within the last few weeks, implemented this model where they've open-sourced their tools for content moderation and then allowed anyone to set themselves up as what they refer to as a labeler. And this struck me as really interesting as we're starting to see the first labelers show up. Effectively, what it is is third-party content moderation services, where users can choose to subscribe to that moderation service or not. And so there are some really interesting examples already, and I'm sure that we'll see more as they go. But, you know, there's one that's like a co-op, community-led content moderation service specifically focused on the kinds of content that many platforms are less willing to moderate themselves. And so it could be more extreme speech, more sort of, you know, leaning towards hate speech kinds of things, bigoted speech. Um, you know, this is a big complaint that often comes across. So in this case, this third-party moderation service has been set up, and people can subscribe to it. So they say, I don't ever want to see anything that even gets anywhere near the more extremist speech. You can subscribe, and then you have the options yourself in terms of how you handle that kind of content. Do you hide it entirely? Do you minimize it? Do you put a warning on it? There are all sorts of things. And I think this starts to open up some really, really interesting possibilities. So, you know, one of the other labelers that has been set up is a labeler that simply designates content that has posted an image that doesn't have alt text, right? So on most social media, if you post an image, you can put in alt text, which says what the image is, which is really useful, especially for blind users who use screen readers and want to know what the image is. Some people use alt text and some people don't. And if you don't use alt text, it becomes problematic for people who rely on those screen readers. And so here's a labeler that will find that kind of content and label it, and so you could subscribe to that and make sure that none of those posts show up in your feed. Obviously it would be better if more people used alt text, but it's a really interesting model. And so then I was beginning to think more about it, and you could start to do some really interesting things. So an example, going back to, you know, our first story of the week around political content: well, you could have a labeler show up that designates which content is political, or you could even get more granular and designate, you know, is it conservative or is it liberal, or whatever, however you want to set it up, and then you could put different rules around it yourself. So, you know, one example is maybe you don't want to see political speech on weekdays, but you're okay with seeing it on weekends, you know, who knows, whatever your reason is, but with this kind of

Ben Whitelaw:

to do that?

Mike Masnick:

well, who knows,

Ben Whitelaw:

A political weekend sounds like a nightmare.

Mike Masnick:

It depends on what your hobbies are, I guess. Um, but you could set up a system like that. And unlike, you know, the Instagram and Threads situation, you're not relying on the decision that Mark Zuckerberg or Adam Mosseri makes; you're now setting it up yourself and setting the rules yourself. And so, you know, another example of this is, like, oh, maybe you've had a rough day and you only want to see happy content. You can, you know, choose a labeler that designates which content is happy and which content is not, and, you know, make sure that you only see happy content for a day or two or whatever. There's a whole bunch of possibilities. And by opening up that system and allowing it to be, you know, distributed in a way, it just creates all of these really, really interesting possibilities. And it'd be really interesting to me to see other social media sites adopt that kind of service.

Ben Whitelaw:

Yeah. And these labelers are as simple as basically following an account, right? It kind of looks like you're following a user, but you're following a kind of moderation label. It's really interesting. The actual UI of it might not be for everyone, but I think it's fascinating to see how they've made it as simple as following another user on a platform. It's great.

Mike Masnick:

Yeah, you just subscribe to the labeler as if you were following a user and then it will appear in your moderation tab and you can select different options based on that. So yeah, it's really interesting.
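
For the curious, here is a toy sketch of the stackable-moderation flow described above. It is a deliberate simplification that does not use the real AT Protocol or Bluesky APIs; the labeler function, label values, and preference actions are all hypothetical. The core idea is that labelers only emit labels, and each subscriber decides what every label means for their own feed.

```python
def no_alt_text_labeler(post: dict) -> list:
    """Hypothetical labeler: flag posts containing images without alt text."""
    for image in post.get("images", []):
        if not image.get("alt"):
            return ["no-alt-text"]
    return []

# Client-side: the subscriber, not the platform, decides what each label means.
my_prefs = {"no-alt-text": "hide", "political": "warn"}  # per-label actions

def apply_moderation(post: dict, labelers: list, prefs: dict) -> str:
    """Run all subscribed labelers, then apply the strictest user preference."""
    labels = [label for labeler in labelers for label in labeler(post)]
    actions = {prefs.get(label, "show") for label in labels}
    if "hide" in actions:
        return "hide"
    return "warn" if "warn" in actions else "show"

post = {"text": "Look at this chart", "images": [{"alt": ""}]}
print(apply_moderation(post, [no_alt_text_labeler], my_prefs))  # -> 'hide'
```

The separation matters: labelers are stackable precisely because they never enforce anything themselves, so two users subscribed to the same labeler can see completely different feeds.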

Ben Whitelaw:

Yeah, great. Um, super, thanks, Mike. Thanks for unpacking this week's online speech and content moderation news, and thanks to all the listeners for following along. I hope you enjoyed today's episode. If you have enjoyed today or any of the previous episodes, please do rate and review us on all the major platforms. It really does help us be found by new listeners like yourself. And, uh, tell your friends and your family as well; word of mouth is always great. Um, please stick around as well to hear a bit more about how we think about sponsorship on Ctrl-Alt-Speech. Mike and I are going to have a quick conversation about that now in our bonus chat. And, uh, thank you again for listening. We'll speak to you all soon. Thanks to everyone who stuck around for this bonus chat with Mike and myself, where we're going to unpack a little bit about how we're thinking about sponsorship and sustaining Ctrl-Alt-Speech as a podcast, and ensuring it's the best possible podcast for you, the listeners. Mike, we are obviously on our third episode now. We have had a couple of great sponsors so far, including our launch sponsor, Modulate, whose CEO Mike Pappas we spoke to about how he's thinking about gaming and speech and moderation. Last week we had Tracy Chou from Block Party, another great sponsor. Unpack for people who are still listening why we're thinking about sponsorship in this way and why we've decided not to, you know, have ads or any other kind of funding model at this point in time.

Mike Masnick:

Yeah, and this goes back to stuff that I've been talking about on Techdirt going back a really long way, which is that good advertising and good sponsorship should be valuable content in its own right. And I think this is something that a lot of people across a variety of industries always get wrong, in that they focus on the idea that advertising and sponsorship is, you know, annoying and intrusive, and that that's the only way it can be. And so you have to force an ad for a mattress into the middle of your podcast, or, you know, you have to try and get an audience who cannot leave and force your ads on them. And I find that model really problematic. And so I've always been focused on: where can you find sponsorship models where the content itself of the sponsorship or the advertisement is actually valuable and relevant in its own way? And there are lots of examples of this. You know, I've written for years about this whole concept of advertising being content and content being advertising, which sound like the same thing but are two slightly different, related concepts. And it's just recognizing that any kind of advertising can be useful content that people want to see. And, to some extent, we see that every year when people watch the Super Bowl to see the ads, because the ads are the most entertaining bit of the Super Bowl. Um, but, you know, it is possible to create and have advertising and sponsored content that is actually reasonable and relevant to an audience. And so, to some extent, our sponsorship model here, where each week we have a 10-minute interview with a sponsor at the end, on a subject relevant to listeners of Ctrl-Alt-Speech, seemed like a way of, you know, proving this out. Hopefully. We'll see; it is an experiment in many ways, but it's about seeing if we could create a model that is actually relevant and interesting. And we've already heard from a few people who were so happy with the sponsored content, which is not something that most people hear when creating any sort of media property. But because, you know, our first two sponsored bonus chats were so relevant to the discussion that we were having (we've talked about toxicity and about privacy and things like that, which are actually very much relevant to the world of online speech), they were actually valuable content in their own right.

Ben Whitelaw:

Yeah, and it's worth noting as well that those sponsors were kind of very much hand-picked by us. You know, we reached out to companies and people that we really felt added to the idea of the podcast when we were kind of practicing and coming up with it in the first place. And so I think that idea of going to people who complement the podcast, the topic, and what we're doing, and who really know specific things about aspects of the industry, for example gaming or privacy, as we've heard in the last couple of weeks, is really important to me. You know,

Mike Masnick:

Yeah, and

Ben Whitelaw:

the key is that we don't have just anybody come on the podcast. These are folks that we know and trust, and who are additive to the space and doing good work as well.

Mike Masnick:

Yeah. And that's not to say that if we don't know you, you wouldn't make a good sponsor. We're happy to hear from people. But, you know, the setup of it, I think, is also important, in that it's an interview. Um, you know, if both of us are available, it'll be an interview by both of us; if only one of us is, then by one of us. But it allows us to interview the sponsor about a topic that is relevant, and we work with the sponsors to find a relevant topic and to make it, you know, both useful for our audience and for the sponsor at the same time. And I think that's really a key aspect of all of this: that, you know, we can create sponsored content that is actually valuable across the board, that it's not something that is annoying or not useful to the listeners, but is also, you know, useful to the sponsor. So it's, you know, a cliché to have, like, win-win-win solutions, but it really feels like this is the kind of setup where it is actually useful content. It's not content that people fast-forward through.

Ben Whitelaw:

Right, and I think the wider, zoomed-out view of why we have sponsorship in the first place is also pretty important to touch on. We are lucky to have financial support from the Future of Online Trust and Safety Fund, who have given us basically a bit of startup funding to get the podcast going and to prove that it has an audience. Our theory is that there is a group of professionals and practitioners who want this news about content moderation and online safety delivered to them in podcast format every week. There are some great podcasts out there that we listen to ourselves, but there's a space there, I think, for news as well, alongside some of the great interviews and analysis that other podcasts do. And we want to be able to kind of sustain this podcast and that model. And for us to do that, we need to find, you know, an audience, and also to find sponsors who want to connect with our audience. So, you know, we can't do that unless we do that sponsorship model. And, you know, Mike is obviously working full-time on Techdirt; I have a full-time job and run Everything in Moderation kind of on the side. So, you know, it's really important that we have sponsorship as well. It's not something that we're doing just for the sake of it. I think that's key to point out.

Mike Masnick:

Yeah. And, you know, I think that it is really important for us to be able to prove out that this is a model that works and that this is something that is sustainable.

Ben Whitelaw:

Yeah. Okay, great. Well, thanks very much, Mike. We should wrap it up there. Thanks to the listeners for staying on a little bit longer to hear us talk about the podcast and do a bit of navel-gazing. Really appreciate you sticking by us and for listening again this week. We'll see you next week.

Mike Masnick:

And if you are interested in sponsoring, uh,

Ben Whitelaw:

good

Mike Masnick:

us, if you want to be interviewed by us as part of the bonus chat, send us an email. And, just, if you go to ctrlaltspeech.com, there's a form there that you can use to send us a note.

Ben Whitelaw:

Brilliant. Thanks, Mike. Take care. See you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.