Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Nothing to FCC Here
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host David Sullivan, the Executive Director of the Digital Trust & Safety Partnership. They cover:
- Trump's FCC Pick Wants to Be the Speech Police. That's Not His Job (Wired)
- Sauce for the Goose: The FCC Lacks Authority to Interpret Section 230 Post-Loper Bright (The Federalist Society)
- Roblox gives parents more power to protect the safety of young gamers (NBC)
- Meta should allow third party imagery of terrorist attacks, with a warning (Oversight Board)
- As Bluesky soars, Threads rolls out custom feeds globally (TechCrunch)
- Threads’ algorithm will focus more on the people you follow (The Verge)
- The communications minister cited a study in support of a teen social media ban. Its co-author disagrees (Crikey)
- Meta says it has removed 2 million accounts linked to pig butchering scams (The Record)
- You Too Can Hire an ‘Etsy Witch’ to Curse Elon Musk (Wired)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Mike Masnick:So David, I don't know about you, but I have been using Canva a lot lately to create presentations. I actually think it's pretty nice software, better than the, uh, old standards for presentations. But I noticed that it has a wonderful prompt, and I'm going to ask you the prompt from Canva, which is: what will you design today?
David Sullivan:Thanks, Mike. Well, um, while I do enjoy using Canva for all sorts of things, I am not going to be designing anything with Canva today. I'm going to be designing a spreadsheet, because next week is the unveiling of public reports from the very large online platforms and search engines about their EU Digital Services Act risk assessments and audits. So I'm going to be designing a spreadsheet to keep track of all those reports, and starting to take a look at what they say, what they don't say, and, uh, what they mean for online regulation going forward.
Mike Masnick:How exciting.
David Sullivan:Thrilling stuff.
Mike Masnick:That was great. Uh, on my front, I will note that, um, if you don't know, this week I launched a Kickstarter project for a game about social media. And I am still designing; I'm actually designing the rule book for the game. If you haven't heard, please go check out our game on Kickstarter, One Billion Users. You can find it by just going to onebillionusers.com. But that is my design for this week.
David Sullivan:That is much more exciting than mine.
Mike Masnick:Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It is November 22nd, 2024, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. As you may have noticed, Ben is off this week. I believe he is on a train somewhere, and we decided it would not be a good idea to try to get him to record the podcast while traveling from Edinburgh to London. So we have an excellent guest host with us: David Sullivan, the Executive Director of the Digital Trust & Safety Partnership. So, uh, welcome, and thank you for joining us. Before we get started, I will note that next week is Thanksgiving in the US. I know it is not Thanksgiving in the UK, but we are planning to be off. And I say that we are planning to be off because the last two times that we planned to be off, both times we ended up doing a podcast. I really don't think that is going to happen next week. But, uh, I will again use this moment to ask people to check out our game on Kickstarter. It is really fun, and it actually has to do with online speech, because you're running a social media network, and part of the point of the game is that you have to figure out how to deal with toxicity on your platform. I swear, it's fun. My kids love it. Which is bizarre,
Mike Masnick:but it is a really fun game. Anyways, we have lots of interesting stories to get into that are less fun than playing a card game. So, David, I'm going to start by throwing it to you. You had picked out this article from Wired this week about our incoming FCC chair, and maybe some thoughts about where internet regulation may be heading in the new administration.
David Sullivan:I did, and I should say, as a quick preface, that I had the pleasure of seeing your co-host Ben while I was in London last week. I mentioned to him that I had been enjoying being outside of the United States just to take a little bit of a break from the news, uh, and that while I was keen to guest host whenever the need might arise, I was enjoying not reading the news for a little bit. So I'm thrust back into it, right into the heart of what's next for the internet with the incoming Trump administration. So, um, I thought that, you know, this is an unavoidable topic at the moment. Uh, and there's a lot of talk, and we will see what kind of action actually unfolds. With Brendan Carr as the FCC chair, he has, you know, talked quite a lot, including in very recent days and weeks, about different actions that the FCC might take with regard to things it has historically not regulated, um, like internet companies and Section 230.
Mike Masnick:And arguably does not have the authority to regulate, but that's just a little aside.
David Sullivan:Indeed. Indeed. What I am wondering about is how much there's going to be talk and how much there might be action. Given that Republicans will have all branches of government going in, the question is whether they are going to really try to implement some changes, depending on what authority they might have, claim to have, or a court might say that they have or not. And so it will be an interesting moment to see what happens, because there's been a lot of talk about Section 230. But I think, as you've said, circumstances have changed a little bit since these were the mantras of the Trump administration, or of Trump administration-related officials, a few years ago.
Mike Masnick:Yeah. I mean, it's interesting to me. You know, I've known Brendan in some sense or another for years, and there's an element where I wonder if his thinking is sort of stuck in the final months of the first Trump administration, and is assuming that the second Trump administration will pick up right where it left off. Because if you recall, if I send you back to those wonderful days when we were all locked in our houses, you may recall that over the summer Trump wanted to ban TikTok and he wanted to repeal Section 230, and he convinced the Commerce Department to issue this request for the FCC to review its own authority on Section 230, which it has no authority over. And so they began this process, but it was sort of this lame duck process. And ever since then, Carr has continued to attack Section 230, continued to attack TikTok and said it should be banned, and all of these things. But in the interim, Trump has completely flipped on TikTok, on the TikTok ban. And we can debate whether or not that had anything to do with the fact that he got an awful lot of money from the person who owns 17 percent of ByteDance. Or maybe not. And, you know, the Section 230 questions become a little bit trickier in this administration, because you have Donald Trump, who now owns a social media platform called Truth Social, which relies on Section 230 and actually has literally copied and pasted lines from Section 230 into its terms of service. And you have the first buddy, Elon Musk, who owns the platform X, formerly known as Twitter, which not only heavily relies on Section 230, but literally this week filed a lawsuit against a California law, pointing to Section 230 as one of the reasons why that law could not come into force.
Mike Masnick:And so I'm sort of wondering if the Brendan Carr view of the world is outdated, and kind of what happens. I mean, there's been all this talk of, like, you know, the Silicon Valley tech bros got themselves a president, and a lot of them do like Section 230. And so I'm sort of wondering where this goes.
David Sullivan:I think the other thing is that politicians like being able to, uh, beat up on companies for all kinds of different issues. Uh, and so, um, it can be helpful to keep that issue alive by talking about it, as opposed to trying to take some sort of actual action. And so I think we'll have a lot of talk. And there are a few different strands of this, right? There's the Section 230, should-something-be-done-about-that piece of it, and then there's also other child safety-related, uh, legislation and regulation. And so the question will be, you know, will this continue to just be a useful point for getting media hits and viral social media posts and all of the things that politicians love, or is it something that they will actually try to move something on? And I'm not convinced of that just yet.
Mike Masnick:Yeah. Yeah. And I mean, there is the other backdrop to all of this, beyond just the Silicon Valley connections to the incoming administration, which is what the Supreme Court has done in the last few months, effectively stripping much of the administrative state of its power to do things. And in fact, I was just seeing this morning, I mentioned to you that there was a piece that the Federalist Society, of all places, published, arguing that the FCC lacks the authority to interpret Section 230 as a result of the recent Loper Bright decision, which basically strips the administrative state of much of its authority. And there's something, I don't know if it's ironic or just, you know, farcical, that the Federalist Society side of things has been pushing to strip the administrative state of all this power just as Donald Trump comes back into power and taps Brendan Carr, who seems excited to use this administrative state power that he might no longer have, because his team made sure that the Supreme Court ripped it all away.
David Sullivan:Which will require kind of new levels of Calvinball, uh, in order to...
Mike Masnick:Which may...
David Sullivan:Uh, yes, let's not underestimate the ability to improvise a solution to all of this, but there's certainly some contradictions there.
Mike Masnick:Yeah. But one of the things that you brought up, about sort of using the threat of regulation, is, I think, important to talk about. I'm pretty sure I've brought this up before on the podcast, and we were talking a little bit about it before we started recording. I often think back to a piece that Ed Felten, a professor at Princeton and a very thoughtful guy on internet policy, had written about the net neutrality fight a long, long time ago. His argument was effectively that we need net neutrality, but that the actual laws and how they were written were very tricky to get right, and they would probably get it wrong. And therefore what he liked best was sort of the sword of Damocles concept: continue to talk about implementing regulations on net neutrality, but never actually do it, because while the threat is there, companies will sort of act better to avoid it ever turning into bad legislation. And I've talked about that before and mentioned that it was Ed Felten who inspired me on that, but I could never find the piece that actually had that in there. And you, like a superhero out of nowhere, were like, oh, I found that piece. You found this paper that Ed Felten wrote on net neutrality from 2006 that basically makes that argument. And I think this kind of thinking might be important in the next administration, where there'll be a lot of, sort of, regulation by bluster, I think might be the way to think about it. It'll be a lot of talk, and "you better do this or else," rather than actual regulations.
David Sullivan:Indeed, and there's already been a fair amount of that. I mean, even looking at the area around child safety, I think the specter of the Kids Online Safety Act might be a good example of this, in that we've seen companies enact a bunch of stronger measures around protections for younger users. And, you know, there are some issues there, particularly because all of these child safety types of bills will ultimately hinge on how you can tell whether someone is a child, and so get into the challenges around age verification and age assurance, where I think it is helpful to have companies experimenting and thinking about different ways to do that, but it's a very blunt instrument if you try to enact legal mandates. So this may be, I think, an issue where that kind of threat of legislation is actually bearing some fruit. We'll see what happens going forward, certainly. The tone may change, and you no longer have divided government, so someone could move something, but we'll see if they have the appetite to do that.
Mike Masnick:Yeah. And in terms of reacting to this stuff, to the rhetoric around it, ahead of time: we also saw this week that Roblox announced a whole set of what they refer to as significant updates to their parental controls and other safety features, targeting kids on their platform. And I think this had been sort of previewed about a month ago; I think they had sent out emails to users of the platform saying this was coming. But the process officially started earlier this week. I think some of the changes started on Monday, and some of them will go into effect over the next few months. But we're starting to see this reaction, and I think there's a good argument that it is a reaction to all of the negative stories and news, and there was a report specifically targeting Roblox about their child safety efforts. And so, without any law officially in place, they're putting in place much more stringent parental controls and safety features for kids.
David Sullivan:Yeah. And perhaps these are some measures that should have been in place, uh, a while ago. Um, but thinking about this from the perspective of trust and safety practitioners inside companies, I do think that these kinds of external environments actually help strengthen the hand of folks who are looking to prioritize these issues inside companies, in arguments where, in the absence of this kind of pressure, it may be that the folks working on growth and engagement win arguments over safety features and friction and things like this. So I think these kinds of developments can ultimately be helpful for folks who are trying to make these changes on the inside, which often take longer than it seems like they should, but then come out and seem reactive to things that have been going on in the press.
Mike Masnick:Yeah. Yeah. I mean, it's interesting, you know, in talking to trust and safety people, where I feel like they're sort of pushed and pulled in different directions. I've heard from some who say they recognize that bad regulations will just make things a total mess and often make things more difficult, but at the same time, having regulations allows them to go to senior management and be like, look, you can't just dismiss us,
David Sullivan:Yeah.
Mike Masnick:like, now there's a legal thing. So, you know, some people feel more strongly than others about how useful the regulations are. And so this is, I wouldn't say it's the best of everything, but it is a tool that is useful for trust and safety people to say, like, look, we've got to do this, otherwise we're going to get attacked, and if we do continue to fall down on this, then really problematic regulations could come in as well. So yeah, if you're a trust and safety or child safety person listening to this, hopefully this empowers you to be able to convince management to...
David Sullivan:And to make that preventive argument, yeah, that it's cheaper to prevent than to clean up a mess.
Mike Masnick:Absolutely. All right, well, let's move on to our second story this week. The Oversight Board came out with a new ruling that I thought was really interesting, and that sort of demonstrates a few different things about trust and safety, content moderation, speech, and the Oversight Board itself. This was a question around the terrorist attack in Moscow in March of this year, which a lot of people had posted video and imagery of, and Meta had removed a bunch of it, saying it violated their rules. Some people protested, and the Oversight Board took a few of these cases and decided that Meta should allow third-party imagery of terrorist attacks, but with some conditions: with a warning, so that people aren't just suddenly exposed to people being killed or, in some cases, injured, and only if it was posted with content that condemns or shares information about these attacks. So basically: is it newsworthy, and is it not encouraging more terrorist attacks? My first thought on this was that it gets back to something I've talked about for over a decade, specifically around terrorist content, which is that it's so contextually based. You can have the exact same videos where, in one version, someone is using the video to hype up more people to do more attacks, and in another, it can be documentation of a war crime. How do you separate out those things without looking at the context? And there's this sense, especially among some in the media and some policymakers, that you could just say we have to ban terrorist content and get rid of it all. This ruling, to me, begins to dive into the nuance and the trade-offs around that, and recognizes that if you look at the context, there are contextual areas where it probably does make sense to allow someone to post this, maybe with a warning, where it is clearly not encouraging more behavior like this, but documenting this very newsworthy and important event. And having that public record, I think, is actually really important. It's interesting to see that that's where the Oversight Board came down, saying that Meta made a mistake in pulling down this content. What was your reaction to seeing this ruling?
David Sullivan:Yeah, I thought this ruling was really interesting. For one, it actually shows what the process looks like inside Meta around these kinds of events and this kind of content in 2024. There's the description of what happened, as well as the analysis and the decision to overturn, and I think the summary of the case provides a real level of detail in a concise way. It shows how these things are looked at in terms of different versions of essentially similar video being shared by different people with different comments attached, how that was reviewed by both automated and human means, and the decisions that were taken. I think the initial decision to remove was because the video was depicting the moment of designated attacks on visible victims, which is, of course, a very significant issue: if you're, say, the family of someone who was a victim, you'd have reasons to not want that out there. But on the other hand, the newsworthiness element of this is really important. It's one that I've also worked on at different times. I remember back when I was at the Global Network Initiative, before I joined the Digital Trust & Safety Partnership, in 2015-2016, we worked on a policy brief about extremist content, where we were working with Meta and a few other internet companies, and with human rights organizations like the Committee to Protect Journalists, who were really focused on this issue of not wanting content that is reporting about terrorist events to get sucked into this pressure from policymakers to take all of it down. So I think it really does show the level of nuance, that there are gradations in what you can do. And so I think the idea of putting a warning here is great. It doesn't get around the fact that you're going to continue to have these edge cases; you're going to continue to have false positives and false negatives when it comes to how this gets implemented in the future. But it's a very well-reasoned case. You can see both sides of the issue. And I think this, to me, is actually one of the most important things about what the Oversight Board does. It isn't so much the decision to overturn or not overturn Meta in any given case; it is creating a library of really great analysis of content moderation decisions from a human rights perspective. And that's something that is valuable whether you are an academic researcher, whether you're inside a company thinking about your own policies or the enforcement of them, or whether you are working for an advocacy organization. There's a wealth of analysis of what these decisions look like from a freedom of expression perspective, from an international human rights law perspective, that is very concrete. It's not abstract; this is really brass tacks. And I think that's a very useful resource for all of us.
Mike Masnick:Yeah. You know, I don't know if it was ever the intended purpose of the Oversight Board, but I think there's real value there. And I had mentioned this to you before we started recording: I had heard from someone, and I have forgotten who, and I apologize to whoever told me this, that an interesting side effect, maybe the greatest impact, of the Oversight Board was teaching human rights experts about the difficulty and difficult trade-offs of trust and safety. But you're right that there's this wealth of information and analysis, and a really thoughtful display of the trade-offs in trust and safety work and in content moderation questions, which I think is really important. And obviously I've spent many years trying to figure out ways to get people to understand those trade-offs, because as you know, and as I'm sure a lot of our listeners know, a lot of people come to trust and safety questions, if they have no experience in the space, thinking: well, you stop the bad stuff and you leave the good stuff, what's the problem, why do you guys keep messing it up? And the reality is that it's way more difficult and way more nuanced. There are trade-offs with all of these decisions, and edge cases have edge cases, right? The deeper you go on all of these things, the more difficult it becomes. You can write any rules that you want, and someone will find the exception to that rule where suddenly you're asking: wait, does this fall into this category, or does it fall into that category? And there's a ton of subjectiveness involved. As soon as you're dealing with anything that is subjective, you're also dealing with the fact that everybody is going to weigh the different factors in that subjective decision differently. And in fact, that came out in this particular ruling: it's noted that there wasn't full agreement on the board, and there were some members who felt that, no, Meta was right to pull this content down and should keep it down, because it's violent terrorist attacks showing some horrible things. But how do you balance that with the newsworthiness aspect of it, and the use of it to condemn these attacks, for example? It's a really, really difficult question, and anyone who thinks there's an easy answer, I hope, will read through this decision, get a better sense of the different thinking and the different trade-offs, and come out of it saying: well, there isn't an easy answer, and you can feel one way or the other. I think I agree with the way the board came out on this, personally, but I totally respect and understand people who feel the other side of it, because it's not like I think this is 100% obviously the answer. I'm probably at, like, 60% that this feels like the right decision, given these different trade-offs and the different context. And so I think it's a really valuable tool in that sense,
David Sullivan:Absolutely.
Mike Masnick:And then the one other aspect of it that I think is kind of interesting is that this also gives you a sense of the difficulty of doing this at scale, right? This is a thoughtful, interesting, weighed-out pros and cons discussion, which is great for these three cases. But that doesn't take into account the fact that there are three billion people on Facebook Blue, as it's called these days, and countless numbers of other people on these other platforms.
Mike Masnick:And, you know, not all of them are posting terrorist material, but there are so many of these cases, and the people who are doing the trust and safety work and the content moderation don't have the time. This is part of the reason why there is this Oversight Board, but that process takes a whole bunch of time. So I think it is also good, while we can talk about this all we want, to recognize at the same time that the scale of the issues these companies face means that the people making the day-to-day decisions aren't able to do this deep analysis and bring in a bunch of human rights experts from around the world to weigh the pros and cons of these things. So I think that's really important to call out too, because sometimes people will lose sight of that and say, well, okay, now the Oversight Board has done this, now we've solved the terrorist content problem.
David Sullivan:Yeah. This one case required this much really, really careful thought. It's such a great case study for teaching these kinds of issues and things like that. But yeah, when it's at scale, at an industrial level, it's a whole other matter.
Mike Masnick:Yeah. Yeah. All right. So yeah, it's a really, really interesting decision, and we'll of course have it in the show notes, so I definitely recommend people check it out if they have a chance. All right, let us move on to our semi-lightning round. We shouldn't call it that anymore, because we never make it quite to lightning speed. Um, I will use this moment to insert my disclaimer about my role on the board of Bluesky. We mentioned the exodus last week from X to other platforms, including Threads and also Bluesky. I am on the board of Bluesky, so you can consider anything I say related to Bluesky to be potentially horribly biased, and feel free to discount it that way. I don't think this next story is directly about Bluesky, but it is tangentially related, and that's why I feel the need to raise the disclaimer regarding my association with Bluesky. This is actually a story about Threads and a couple of announcements that Threads made this week. David, do you want to summarize what Threads did?
David Sullivan:Yeah, so they rolled out a custom feeds feature, where you can sort of create new feeds in Threads based on, I think, searching for different accounts or keywords. So it is not the same as the types of custom feeds that have been and can be created on Bluesky; we can talk more about that. But it is a step towards curating your own feeds, as opposed to the kind of "our algorithm will just find the stuff you like and serve it up to you" approach. And then also, I believe there was an announcement that they would prioritize more content from accounts that you follow, as opposed to accounts that you do not follow, which, again, may seem like it should have been how they were doing things from the beginning. But in any case, some interesting responses from Threads, perhaps as Bluesky has really gained some momentum.
Mike Masnick:It definitely felt, and a lot of people certainly commented, that both of these moves appeared to be a response to Bluesky. Who knows if that's true or not? There was also a story this week, which I think was potentially a little misleading, that suggested that Bluesky's daily average users had surpassed Threads'. I don't actually think that's true. This is not from any inside knowledge of either platform: Bluesky's data is all public, so Bluesky's user base, all that information, is very, very public; Threads' is not. So we're relying somewhat on what we hear from people and what the press puts out there. The report was based on a study from SimilarWeb, which has, I think I mentioned this last week, somewhat sneaky ways that it tracks web visits to a platform. So I don't think it's really tracking app visits, which could be very, very different. But the general sense was that this is Threads responding to competition from Bluesky. And we talked a little bit about this last week also, but I think it's a really interesting time: we're seeing competition in this market for social media, and companies are trying different things and really trying to see what works. That's always been the most exciting part to me. It felt like this market had really stagnated, and now suddenly we're seeing these competitive elements come into it, which I think is exciting and probably good for users of these platforms, and for the future of internet speech generally.
David Sullivan:Absolutely. And I could say some things about Bluesky too, so that you don't have to. I would just say I've been really excited about some of the developments in terms of new features and aspects of Bluesky that seem to be created both by the team there and by users, in a kind of very productive back and forth. I think about it in contrast to the early days of Twitter, when users of Twitter invented things like the hashtag, if I'm not mistaken, and retweets, and then the company eventually said, okay, we'll turn this into a feature. In the case of Bluesky at the moment, it feels like a much more collaborative environment, in which things like starter packs and different types of feeds are being developed both by folks who have access to the platform, and to the protocol, I should say, because of its openness, and by the company seeing how it can use those things to take advantage of this moment and really scale up. But I think, yeah, the more competition, the better. And I do think one difference is that the feeds on Threads so far seem pretty limited, in terms of pick some accounts you like, pick some topics, versus some of the best feeds on Bluesky, which are things like Quiet Posters, the feed that just surfaces things you might have missed because people don't post that frequently. It sort of reveals the fact that people who obsess over a reverse chronological timeline in social media are missing the fact that it's going to be dominated by high-volume accounts, and you're going to miss things from the quiet posters. So it's not about a particular account, it's not about a particular topic that you're interested in, but it's a way of surfacing stuff you might miss, and that's really kind of fun. So I'm enjoying the way Bluesky is iterating on some of these things, and I like the fact that it may be spurring some similar innovation from Threads.
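(To make the Quiet Posters idea concrete: a feed like that simply inverts the usual volume bias, keeping posts from accounts that post rarely instead of letting high-volume accounts drown them out. Below is a minimal, hypothetical sketch in Python of that kind of logic, assuming a simple per-author frequency threshold. The Post type, the quiet_posters_feed function, and the threshold are all illustrative; this is not how Bluesky's actual Quiet Posters feed generator is implemented.)

```python
# Hypothetical sketch: surface posts from accounts that post infrequently,
# so they aren't buried by high-volume accounts in a reverse-chronological
# timeline. Names and the threshold are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int  # seconds since epoch

def quiet_posters_feed(timeline: list[Post], max_posts: int = 3) -> list[Post]:
    """Keep only posts from authors at or below a posting-frequency
    threshold within this window of posts, newest first."""
    counts = Counter(post.author for post in timeline)
    quiet = [p for p in timeline if counts[p.author] <= max_posts]
    return sorted(quiet, key=lambda p: p.timestamp, reverse=True)

# Usage: feed in a window of timeline posts; only the quiet account survives.
timeline = [Post("loud_account", f"hot take #{i}", 1000 + i) for i in range(20)]
timeline.append(Post("quiet_friend", "rare life update", 1015))
for post in quiet_posters_feed(timeline):
    print(post.author, "-", post.text)  # prints only quiet_friend's post
```

(Roughly speaking, on the real network a feed generator service would apply this sort of filter over its own index of posts and return references to them, which the client app then renders.)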
Mike Masnick:Yeah, and I think it's great. I mean, when custom feeds launched on Bluesky, my first reaction was: why doesn't every social media platform have this? It's such a fantastic idea. The Threads implementation, as you said, is not the same as the Bluesky implementation. The Threads one is basically: you can do a search, and then you're still relying on our algorithm to give you results sort of based on that search. And in terms of user simplicity, I think there's value in that. On Bluesky, it's a little more complicated to create a custom feed.
David Sullivan:I definitely lack the skills to create a custom feed. Like, I created a starter pack for trust and safety that people seem happy with; I do not have the skills to do a custom feed for trust and safety just yet. But there are users who are trying to fill some of the gaps on the Bluesky side. So you see things like, I think it's Hunter Walker from Talking Points Memo, who's been going around and verifying politicians and journalists, sort of filling the gap where people are not able to verify based on their domains in some cases. So there's a group effort going on there that, to me, is very interesting.
Mike Masnick:Yeah. And, you know, that is part of the Bluesky ethos, and the fact that it is open source; all the code is open source. So people outside the company submit requests and suggestions and features and code, and when Bluesky rolls out new versions, they will often be thanking people who are not employees of the company, but just independent open source developers who wanted to see a feature and created it, and the team is able to plug it in. So there is a lot of excitement there. But in general, I like this idea of all these alternative platforms pushing each other and coming up with interesting features. We've seen that ActivityPub and Mastodon have been implementing a few different features that also sort of originated elsewhere, and I think that's exciting. There are other platforms, too. I wrote something about this a few weeks ago: some of these smaller decentralized platforms, like Farcaster, have a really neat concept of frames, where basically you can build apps into whatever their equivalent of a tweet is. Which is both horrifying from a cybersecurity standpoint and potentially really interesting: if you could tweet something that is an app itself? Oh, that's really interesting. And I think we'll start to see other people experimenting with things on that front as well. And so this period of experimentation, of figuring out what really works for users, because right now we're in this sort of battle for who's going to get the users, I think is really, really interesting and really exciting. And it's something that we lacked in the space for about a decade, when it had been sort of stagnant and focused on this one thing. So people were joking about Threads copying Bluesky, but I actually think it's a good sign of the competition in the space, and of the attempts to make things better for users, at a time when, you know, we spent years talking about surveillance capitalism and extractive efforts, where the goal was to take as much from the users as possible. Getting back to a world where companies are actually trying to provide better services for their users seems like a good thing. I don't know what the term for that would be, but I should come up with something. Next up on our list: the past two weeks, we have talked about attempts in both Australia and the UK to potentially ban kids from social media, or to ban phones in schools; those things are sort of related. There has been a little bit of movement and some interesting stories on that front this week. Do you want to summarize the ones that you found?
David Sullivan:So I think the main thing is in Australia, where I believe they have now introduced legislation to ban social media for folks under 16, if I'm not mistaken. And this has been a real moral panic in overdrive, I'm going to say, a little bit here. It's been interesting to see how they have tried to use research to bolster their argument that doesn't necessarily substantiate the argument.
Mike Masnick:Yes, that was the big one. This is a story in Crikey, where part of the push for the teen social media ban was that the communications minister in Australia, Michelle Rowland, talked about this report that she said backed the idea that kids under 16 should be blocked. The research in question was somewhat well-known and widely cited research from the Oxford professor Andrew Przybylski, who has been a guest on the Techdirt podcast in the past, and someone I've talked to a lot about this; he is a very, very thoughtful and detailed researcher on this stuff. And also his colleague, last name Orben, I'm suddenly forgetting her first name, which I feel bad about. But Professor Przybylski came out and said, this is mis-citing my research; my research does not support this finding. He said, I do not agree that it provides the justification for this policy, and I think they have misunderstood the purpose and findings of our research. And he said he hoped they would maybe reach out to him to talk about what the research actually said before regulating based on their misreading of it. Which is quite something.
David Sullivan:It's an extreme case of something we've seen elsewhere: taking pretty nuanced findings about the relationship between young people's mental health and well-being and social media, and trying to use them to enact very blunt measures. And I think his research, if I'm not mistaken, backs up this notion of heterogeneity: that people have very different responses. Sometimes social media can be very bad for them; sometimes it's a really important outlet. And I think what's telling in the case of Australia, and we will see what happens because this appears poised to move rather quickly, is that Australia already has online safety regulations that have been enacted. It has an independent regulator, the eSafety Commissioner, Julie Inman Grant. Julie is a very vocal advocate for online safety regulation, and it was telling that her office had been fairly quiet about weighing in on this ban and had not endorsed it. I think there's now a statement from eSafety saying that they have welcomed this proposal, where they had previously just acknowledged it, which may be a sign of the amount of political pressure here to do this. But Julie has actually spoken pretty eloquently about the importance of not limiting access, because of the positive potential of the internet for young people. And so hopefully the actual research and the actual evidence will help to inform some of this.
Mike Masnick:Yeah, and that is definitely the core of the research: it's not that everyone is wonderful and everything is wonderful online. I think almost everybody acknowledges that there's a certain group of kids who do have trouble with it, and for whom it is problematic. There are questions about that; there's been other research suggesting that it is often kids who are not getting help elsewhere who are effectively turning to the internet, which then exacerbates existing problems. But a lot of that research points to more targeted, more specific interventions, designed to identify and respond to those who are having difficulty, without removing access for those that it is helpful for. And so it's unfortunate and problematic that they're using this research in this way. I'm glad that Andrew was willing to speak out on it, and it will be interesting to see if the policymakers in Australia are willing to recognize that. And, you know, obviously there's a lot of research on this that flies back and forth, and people claim that certain research says this or doesn't say that, and often it is misleading; lots of people trot out studies and exaggerate their findings and impact. But I do think it is problematic when you have a high-ranking official who is pointing to a study and saying, this gives us the reason why we have to do this, and the researchers themselves are saying: no, you haven't even spoken to us, and you're wrong, it doesn't support that at all. So I think we'll have to see where this goes, but so much of the way it's progressed in Australia has felt like a runaway train. There's no way to stop it. It's coming...
David Sullivan:Coming whether people like it or not. Indeed, and it could easily, as you mentioned before, spread to other jurisdictions, as this is starting to be something they're talking about in the UK as well.
Mike Masnick:Yep. Yep. All right. Moving on, our next story is sort of a follow-up, in some sense, on some other stories we had covered, though not for a little while. We'd spoken about the pig butchering scams, which have gotten a lot of attention, where the scammers are often based in Southeast Asia. And there are some horrible stories of people being tricked, with effectively slavery involved: people were tricked into going to different places in Southeast Asia and then locked in buildings and told they have to scam people to get out of it. But the gist of the pig butchering scams is that the scammers befriend people online, and it is a long-term scam where they spend days, weeks, months in some cases, befriending someone and building up trust, and then eventually getting them to put money, often cryptocurrency, into fake cryptocurrency accounts. They lie to them about their money going up and up and up, and then eventually they just take all the money. There had been lots of concerns about this; it's a very difficult kind of scam to fight, for a variety of reasons. We had talked on the podcast about an effort by a bunch of companies coming together to try to work on this. The group was called Tech Against Scams, and we are starting to see some results from that: Meta announced this week that it had removed two million accounts that were linked to these pig butchering scams. Part of that seemed to come from this collaboration between different companies, including Meta, obviously, Match Group, and then a bunch of the cryptocurrency platforms as well, which is actually important, since so many of the scams involve cryptocurrency. So seeing some results from this, I think, is a positive first step. What do you think?
David Sullivan:Yeah, absolutely. And I should say, Meta and Match Group are both partner companies in the Digital Trust & Safety Partnership, where, among the best practices in our framework, as part of the enforcement of...
Mike Masnick:Talk about disclaimers today.
David Sullivan:Yeah. One of those best practices is working with other industry partners to address different types of specific risks. And so these kinds of partnerships are what you want to see happen. I would also say they are something you get from our current constitutional and statutory framework for internet regulation in the United States. We have the First Amendment and we have Section 230, and one of the things Section 230 does is enable companies to actually work together on issues where they might have knowledge of content that is not stuff they want on their sites. And so I think these kinds of coalitions to address specific acute issues are an important tool in the toolkit when it comes to these issues.
Mike Masnick:So it would be bad if Brendan Carr comes in and takes away Section 230. Is that what you're saying?
David Sullivan:I mean, a lot of the things that companies do are rooted in Section 230, so let's bear that in mind. Because I do think this is an issue, pig butchering, where everyone gets these messages, right? Everyone has gotten messages from someone saying, oh, maybe I had the wrong number, and trying to make your acquaintance. And it's an issue that crosses partisan lines in the United States; nobody wants to see senior citizens getting their retirement funds fleeced by somebody. The fact is that these scams are operating at an industrial level. They started in Southeast Asia, but they are spreading, and these kinds of institutional scam centers are being stood up in other parts of the world as well. It's a cross-platform challenge, where it's not something you can easily diagnose just on, say, Meta; the scammers will move to private channels, and they will use cryptocurrency as part of the payment side of this. So you need this kind of collaboration. It also requires, and is going to keep requiring, collaboration with law enforcement around the world, which is complicated. So I think this is just the start. What I would say is that oftentimes, when there is an acute crisis around a particular type of abuse online, and a recognition that something needs to be done about it, the first thing companies will do is announce that they are going to do something. Then they will announce that they have done something, which is usually a large number: we have removed X, in this case two million accounts linked to these scams. In the past it was, I think I remember, Twitter and 150 million pieces of ISIS content, back in 2015 or whenever that was, without a denominator: okay, well, out of how much? But I suspect there's a lot going on inside the companies when it comes to this, and the trend of transparency from companies around these things is that you start with a big number to try to get a PR win, but ultimately what comes out is what you saw from companies in the cases of a lot of the election interference or coordinated inauthentic behavior investigations, where you start to see more systematic reporting about what's going on, and a lot more information comes out.
Mike Masnick:Yeah. I think it'll be interesting to see; this is very early. And as the article in The Record that covered this notes, scams are still prevalent, they're still going on, and it's unclear how much of a dent this has made. Also, as you mentioned, the scams are moving elsewhere, but they're also becoming more sophisticated, and they're using AI tools and other things that make it an even bigger challenge. But it's good to see that these companies are at least trying to tackle the problem and are making some level of progress on it.
David Sullivan:Indeed. And the always-outstanding question is Telegram.
Mike Masnick:Yeah.
David Sullivan:How much of this stuff is ultimately taking place on Telegram, and is there any will to do anything about it there?
Mike Masnick:Well, Telegram is suddenly so, so compliant on some stuff, or so they claim. But yes, that is another discussion altogether. All right, let's close out on a quick one; it's a little more fun. You found this one in Wired, about how you can hire an "Etsy witch" to curse Elon Musk. Do you want to talk about that real quick?
David Sullivan:Yeah. So apparently there was a viral TikTok where someone had hired an Etsy witch to cast a hex on Elon, and there was a lot of collective interest in doing this. I thought it was interesting just because I know that at Etsy, whether to allow the sale of what they call, I think, metaphysical services is something they've had to wrestle with, and have actually prohibited, in large part because in some cases this can tip over into scams. But in this case, it seems a little bit more like a kind of catharsis, spiritual catharsis, delivered via Etsy and TikTok.
Mike Masnick:Yeah. I mean, there was some funny stuff in the article, including that they spoke to the Etsy witch, "222," who apparently performed this hex, and who noted she doesn't necessarily believe in hexing or cursing, but instead encourages participants to focus on manifesting what they want to see in the world. Which is basically like: of course I'm scamming people, just give me money, and if it makes you feel good to think bad thoughts about Elon Musk, go for it.
David Sullivan:I also, yeah, just love the quote: "I really just love the idea of supporting a small business and sending ill will to someone I hate."
Mike Masnick:I had to say a statement of our times.
Mike Masnick:What the internet enables: a collective wishing of ill will on someone else, with a monetary transaction associated with it.
Mike Masnick:All right. Well, I think we will leave that there; I think that's about as much as we can do on that story. But David, thank you so much for joining us, and for having this discussion and talking through all these different issues related to online speech this week. It's been a fascinating discussion, as always. So, thank you very much.
David Sullivan:It's been a pleasure.
Mike Masnick:And again, just a reminder: we will be off next week for Thanksgiving in the US. And please check out my Kickstarter for One Billion Users, a really fun game all about social media. If you work in the space of trust and safety and you want to explain to your family and friends what you do, this game is absolutely perfect for that. Again, it's actually fun, my kids like it, but it also gives you an opportunity to tell people what trust and safety and content moderation are all about, and how to deal with toxicity online. All right, that's it for us. We will be back in two weeks. Thank you for joining us.