Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
This Podcast May Be Hazardous to Moral Panics
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms (New York Times)
- The Surgeon General Is Wrong. Social Media Doesn’t Need Warning Labels (The Daily Beast)
- Anthropic calls for AI red teaming to be standardized (Fortune)
- How small claims court became Meta's customer service hotline (Engadget)
- Pornhub to block five more states over age verification laws (The Verge)
- New York Outlaws 'Addictive' Social Media Feeds For Teen Users (PC Mag)
- Meta Oversight Board’s Helle Thorning-Schmidt: ‘Not all AI-generated content is harmful’ (Financial Times)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
So you probably haven't got around to downloading Bubble yet, Mike. This is a new home screen widget app. Uh, I don't know how hot you are on those kinds of apps, but I liked it enough to use its prompt, because it's kind of how I feel every time we start recording the podcast. And the prompt is: what are you going to say today, Mike?
Mike Masnick:Well, I am rarely at a loss for words. I always have lots of things to say, but I'm going to start out by saying this is another week where we have been spared from the Supreme Court destroying the internet. I was up early paying attention and, uh, no Supreme Court rulings yet, but, uh, next week, next week.
Ben Whitelaw:Okay.
Mike Masnick:What do I...? Yes. And what about you? What are you going to say today, Ben?
Ben Whitelaw:Well, it's quite a week, to all intents and purposes. We have been scrambling around a bit more for stories this week, but we've come up with a couple of good ones. And, yeah, there's a few things that we will look at, particularly about how porn in the US is tanking as well. So look out for that. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw, I'm the founder and editor of Everything in Moderation, and I'm with Mike Masnick, who remains bright-eyed considering you're getting up so early to check the, uh, you know, Supreme Court website.
Mike Masnick:It is a fun process, uh, waiting for the reveal every morning and finding out, 'cause there were cases released yesterday, there were cases released this morning, still none of the really big cases. And as of right now, though this is very likely to change, they have added next Wednesday as a decision day, but I think a lot of people are expecting there will be decision days next Wednesday, Thursday, and possibly Friday. And everybody's hoping that they will finish by Friday, because the justices usually like to start their summer vacation at the beginning of July. So, we shall see.
Ben Whitelaw:There's a reason for everyone to get on with it. Before we crack on with today's episode, I just want to say thanks to one of our listeners, Nick, who shared the platform prompt that I, uh, opened today's episode with. As regular listeners will know, we are always trying to find interesting and fun new ones. So if you have others up your sleeve, do get in touch with us via email. That is podcast@ctrlaltspeech.com, that's C-T-R-L-A-L-T-speech dot com. We've got a few in the bank, Mike, for the next few weeks, which I'm glad about, but that's an ongoing nervousness of mine.
Mike Masnick:Well, there'll be more apps. We just need more people to create more apps; nobody's going to stop, and then we'll be fine. But if you are creating an app, please create a unique prompt. There are so many that just copy the standard Facebook prompt. So, you know, be a little creative with your prompts. We need them.
Ben Whitelaw:And it might even help your platform as well. Um, we should note before we go on as well that we're a little more than a month out from TrustCon, where we'll be doing the first live edition of Ctrl-Alt-Speech, which I'm really excited about. We've got a great panel lined up. We know which room we're in now. It's in our interest to get as many people to come as possible. So if you are going to be at TrustCon this year in San Francisco, at the end of July, do make sure that you come to our session. And if you're not going to be there, we will obviously make sure that we share the episode with you via the usual feeds.
Mike Masnick:But yes, if you will be at TrustCon, we are the last session on the last day, 4pm. Do not miss it. Don't leave early and skip out on what is going to be a very exciting live episode of Ctrl-Alt-Speech.
Ben Whitelaw:Yeah. I've told my friends and family that it's the headline, it's the headline act.
Mike Masnick:Exactly.
Ben Whitelaw:You know,
Mike Masnick:Everything else is the openers. We're the main act. No,
Ben Whitelaw:You didn't hear that from the TrustCon organizing committee,
Mike Masnick:no, no.
Ben Whitelaw:um, it's, it's helping me anyway. Okay, cool. So let's start then with our stories for this week, Mike. We have one that I know you'll have a lot to say about, because you wrote an op-ed about it for The Daily Beast this week. So tell us what it's all about.
Mike Masnick:Yeah, so, Monday morning of this week, uh, lots of excitement. We woke up, and there was a New York Times op-ed from Vivek Murthy, who is the Surgeon General of the US, and whom we have talked about in the past in the context of the Murthy v. Missouri case, which is one of the big Supreme Court cases that we are waiting on. And it is interesting to me, because that case is, in theory, about questions of whether or not Murthy was trying to influence social media to moderate in a particular way, which the United States has denied, and we're still waiting on the ruling for that. But here is Murthy in the pages of the New York Times calling for a surgeon general's warning on social media, akin to what we see in the US on packets of cigarettes, about how it may be harmful to your health. The details of what that would look like? Would it appear at the top of every page? Would it appear the first time you log in? Would it pop up every 30 minutes that you're using it? Who knows. We don't know, but he did call for it. He did admit in the piece that he does not have perfect information, but he believes in acting quickly based on the information that he has. And he recognizes, correctly, that he would need congressional approval for this. I think that it would also be unconstitutional, but we'll get there in a minute. Um, but that's basically the point of the op-ed: to call on Congress to authorize a surgeon general's warning on social media, saying that it may be harmful to kids and their mental health.
Ben Whitelaw:Before we dive in, Mike, can we talk about the timing of this? Like, why is he calling for it now, do you feel? Because there is this case that's happening, and he will know about the NetChoice cases. It does feel like a kind of odd time to be calling for something like this.
Mike Masnick:It's a very odd time. I saw somebody comment that he must be very confident about how the Supreme Court is going to rule to do this, you know, a week or so before they are likely to rule on that case. I mean, it's slightly different than the issues in that case, as we discussed on a previous podcast. But yeah, the timing is somewhat perplexing. So a little bit of the background here is that a year ago, almost exactly a year ago, the Surgeon General did release a report about teenagers, social media and mental health, which was a fairly balanced report that I felt was very strongly misrepresented by much of the media. Because sort of the first half of the report goes through a lot of the research and notes, as all of the research continues to show, that there is no clear evidence of inherent harm or consistently harmful reactions to teens using social media. And in fact, there is tremendous evidence of it helping in many, many cases. The Surgeon General's own report was extremely clear on this: that especially in LGBTQ communities, there's strong evidence that social media has been helpful in allowing people to find others that they can talk to, to help build their own identity, to find useful information, and has made them feel more comfortable in ways that are very helpful to their mental health. And yet, at the end of the Surgeon General's report last year, it did still call for taking a sort of precautionary approach to all of this, saying: even though the information is mixed, even though the evidence does not show any causal relationship, we still think that we should be overly, uh, well, it didn't say overly, but we should be extra cautious about this and assume that it might still cause harm. And I thought that was a slightly odd way to conclude it, but that was the part that the media picked up on, and they basically framed it as, oh, the Surgeon General is warning that social media is dangerous, which is not what the full report actually said. And so maybe it's just the fact that a year has gone by and really not that much has happened, and Congress hasn't done anything, and Murthy felt that he had to sort of take it up a notch, and calling for a surgeon general's warning was one way to do that. Now, a lot of people pointed out that that report was very fair and balanced and detailed, had all this evidence, sort of explained both sides of the argument, and noted that there wasn't strong evidence in any direction. Whereas his op-ed calling for the surgeon general's warning was not that. It made no mention of any of the benefits, or the fact that many, many kids have found it extremely beneficial, and instead went straight to: it is harmful. It used a few random anecdotes, which is not generally good science in terms of how you do these things, and talked about things like bullying. And bullying existed pre-internet, and yes, there are differences: there are differences of scale, there are differences of impact, there are differences in how it works and the timing of everything. And that's an interesting area and certainly worth paying attention to. But, you know, just to use one example of a child who was bullied, that's not very scientific. I mean, the evidence still remains that bullying in schools is more prevalent than bullying online.
We don't talk about putting a surgeon general's warning on schools, as you enter every morning, saying that entering the school may be hazardous to your mental health. And so the setup of the piece, I think, was not great, and just felt not very thoughtful, not very scientific.
Ben Whitelaw:Yeah, it definitely felt like he'd ratcheted things up a few notches since the report last year. And, you know, there's a couple of really kind of panicky sentences in there that seem to chime with that narrative. And I wondered what you thought about whether this was a result of the Jonathan Haidt book, which seems to have kind of created this environment for maybe Vivek Murthy to unpack some of his views in a bit more detail. Is that part of the reason why he feels able to write this now?
Mike Masnick:Yeah, I think there's something to that. I think that Jonathan's book has gotten a lot of attention, a lot of mainstream attention, and has created a lot more calls from media folks and politicians to do something. This week, there was a meeting in the California Assembly in which a representative flat out called out the Haidt book as being super helpful "for our agenda" on passing social media legislation, and talked about how useful it was. So I think that has gotten into the sort of narrative flow. And that's why I think it's so important, and I did this in the Daily Beast op-ed that I wrote in response to Murthy's piece, to point out the actual evidence, and how the evidence shows a very, very different picture than the mainstream narrative has discussed. And in fact, just this week, for folks who are listening to this podcast but don't listen to the Techdirt podcast, I had on Candice Odgers, who is probably the leading researcher on teens and mental health, and in particular the role of social media in mental health. She has been studying teens and mental health issues for two decades, has done all this research herself, and has also done studies of studies, looking at all of the research on this. And she has argued pretty vocally against all of this, that the narrative is incredibly misleading, and that the evidence does not seem to show any causal relationship between social media and harms to mental health; the picture is a lot more complex. There are certainly examples of kids who struggle with mental health who are using social media in an unhealthy way, but often the picture it points to is one of kids who are already dealing with mental health issues, who do not have the resources and help that they need, and who are turning to social media because of a lack of other options or other resources. And in fact, what her research has found, and what she talked about on the podcast that I did with her earlier this week, was that there are so many differences here: different kids respond to things in different ways, different social media is treated in different ways, and people have different reactions to it. And creating this sort of broad brush, "this one thing is bad" or "we have to put this warning on it," is not helpful. And in fact, as she pointed out in a great piece in The Atlantic a couple of weeks ago, it can lead to real harm, because it is suggesting that the thing that kids do to socialize and to communicate is harmful. And teaching kids that something that is pretty natural, that lots of people do to communicate and socialize, is harmful just sets up a very, very dangerous and problematic way of thinking about how kids communicate these days. And so it's a really, really problematic approach. I wish that more people were listening to people like Candice Odgers than to a Jonathan Haidt. But unfortunately, you know, it feels like Murthy has maybe fallen under the sway of the moral panic.
Ben Whitelaw:Yeah. And I'm interested in what you think about the equivalency that he draws between social media and smoking, because that's a really interesting argument, I think. I mean, smoking back in the day was obviously not known to be harmful at all, and warning labels have helped to create this idea that it's an issue. He has a number of other examples of where warning labels in the real, physical world have helped to give a better sense of the harms that a product or a service can cause. And I remember reading a piece of research from 2006, a kind of telephone study that spoke to smokers in four countries, which found this huge gap in understanding of the harms of smoking, and that only labels were able to give that information. Is that a fair comparison? Is he right to bring that up? I know you don't touch on that to the same degree in your Daily Beast piece, but is it unfair to bring real-life warning labels over to social media in that way?
Mike Masnick:Yes, it's incredibly unfair. I did write a piece last year on Techdirt that talked about how social media is not lead paint. And, you know, these sorts of comparisons come up all the time, and it's incredibly frustrating, because lead paint or cigarettes or, you know, a bunch of other things that involve warning labels are actual poisons. They are chemicals that you are putting into your body, that you consume. Speech is not that. Speech is words. It is expression. It can impact you. It can certainly change people's moods and have an impact; that is part of what expression is meant to do. But it is not a chemical that you are putting in your body that is interacting chemically within your body, and so these comparisons are really problematic. The Atlantic, again, had another piece this week, using a similar headline to the one that I had a year ago, which said Instagram is not a cigarette, in response to this as well. And that is exactly correct, right? Putting nicotine into your body is inherently harmful. Exposing yourself to lead paint is inherently harmful. These are toxins. They are inherently harmful; there's nobody who is not harmed by having them go into their body. There is content on social media that can be harmful, but it is a much more complex scenario, and it is based on a whole bunch of different factors. It is not like lead paint, and it is not like nicotine and cigarettes. And so I find that comparison to be just really manipulative, and really problematic.
Ben Whitelaw:And what about the claim that we don't know enough about how social media affects us, because we're still in the early stages of having it in our lives, and that it's fair to say that the role it plays and the time we're spending with it is growing? I'm guessing, from what you just said there, that you also disagree with that point too.
Mike Masnick:Well, I think in some ways it is still early. And in some ways that is no longer an excuse, because there has been so much research done. There is more being done, and more research and more knowledge is always going to be better, and I'm happy to see that. But if it were true that social media were somehow inherently harmful to the mental health of teens, we have so many studies at this point that we would have seen something. And all of these studies, including the one that the Surgeon General put out, but also this massive study that the National Academies of Sciences put out last fall, a massive study that the American Psychological Association did last year, a huge report from Pew, multiple studies coming out of Oxford, these studies of studies, all the work that Candice Odgers has done: over and over again, they fail to find any connection. And if it's something that was so inherently harmful, you would think some of these studies, or some of these studies of studies, would start to find it. And so it's really problematic to me. And the one other one that I'll bring up was a study in the Journal of Pediatrics last year, which I thought was really interesting, which also found no evidence of social media causing mental health harm. And what they found from going through all the studies was that the biggest thing they believed was impacting teens and mental health was the lack of spaces for kids to be kids without adults sort of watching over them and watching their every move, a space for kids to be kids. And what I think some of the evidence is showing, or at least suggesting, is that social media has somewhat taken that place, because we don't have these spaces where kids can just go and hang out these days, which is a real problem. But then if we are somehow demonizing those spaces as well, in theory, if that Journal of Pediatrics research is correct, it could make things even worse, because we're sort of taking away the one place that we have left for kids to have some space and hang out with other kids.
Ben Whitelaw:Yeah, not something that you want to see, clearly, and something that there still remains not much evidence for, yet something the Surgeon General clearly thinks is going to be part of the solution to the kind of children's mental health crisis that we're seeing. So, interesting to unpack that for us, Mike, thank you. And definitely, for listeners who haven't heard the Candice Odgers podcast that Mike referenced, do go and listen to that. It's a really clear-sighted and, very unlike Vivek Murthy's piece, a very even-handed, I think, discussion. So let's crack on now. We will go now to the second of our bigger stories. The one I found this week is a kind of AI story. We should have a bell for those, Mike. Um, AI safety is obviously a topic that is increasingly cropping up in the podcast each week, and this is a blog post from Anthropic, one of the major generative AI companies that have emerged over the last couple of years, outlining how it tests its AI systems. It goes through in quite a lot of detail its process for red teaming, which, for those who don't know, is how AI companies test the models that they've created, to identify how they work and to try and get them to spit out problematic content, so that when they get rolled out to the wider public they don't do the damage and the harm that we have seen in recent examples. And so the post goes through in detail four different types of AI red teaming that it does. I won't go through them all here, but there's a couple of really interesting ones, including how it recruits domain-specific experts across a number of different areas, trust and safety being one, but also national security, and what it calls region-specific red teaming, where you find multilingual, multicultural groups to work with to push and prod at these LLMs. And there's a couple of really interesting things to this. I think the first is that Anthropic clearly really cares about the way that it red teams, and in producing this blog post it is trying to get more companies to do AI red teaming in a standardized way; it makes the case that it would like to see other companies like it do more of the same. And I didn't realize this, but there are actually lots of nuances to red teaming that make comparisons between how LLMs are red teamed really hard, because basically companies do it in very different ways. So that's really interesting. It's being very open with how it does its work, and it builds on other companies trying to be more open about how they red team, obviously as a way of giving confidence in the models that they have. And then, we'll come back to that point, but the other thing is that it makes, in not a lot of detail, a series of recommendations about what it would like the future of red teaming AI models to be. And there's a couple of really interesting points here for us, Mike, because we've talked about some of these before. It calls for the funding of organizations like the National Institute of Standards and Technology, which we referred to a couple of weeks ago in a podcast in relation to age verification, and also the funding of organizations who can partner with AI companies to do this testing; it wants to see more professionalized outfits, companies who can basically help AI companies do red teaming in the way that it does.
And so it's basically setting out a future for how AI red teaming might work. And it's really interesting to have this as part of an emerging, I guess, corpus of information about how these companies try to prevent harm, how they avoid putting models out into the world without mitigating some of those issues that we've seen ChatGPT and other models produce. What did you think about this, Mike? Is this something that was new to you? Are you interested in how AI companies are red teaming?
Mike Masnick:You know, I think it's really fascinating. A lot of red teaming comes out of the cybersecurity and national security space, and it has become increasingly popular in trust and safety fields as well: looking at where might harms come from, where might problems come from, and let's experiment and sort of play-act the bad guys to some extent, as if we were attacking the system. And as we were preparing for this, we were looking up what some of the other big AI companies have announced about their own red teaming efforts. Everybody has something here or there, but it's often in very different formats. Google released an AI red teaming white paper almost exactly a year ago. Meta has talked about their red teaming. OpenAI in the fall was recruiting a special, like, outsider red team to do analysis; it's not clear where that stands right now. But this was really interesting on a few different fronts. As you alluded to in your explanation of it, there's the idea of thinking through: should we standardize this? Because some of what Anthropic is saying is everybody's doing this differently, and it's hard to compare across different companies because they all take very different approaches to it. And I feel a little bit mixed about that, because there's a part of me that says, yeah, that makes sense: you want to be able to compare and see how different AI models are doing against the same sorts of attacks and same sorts of challenges. But then you also worry about, does that lead to a situation where everybody has the same blind spots, right? You know, if everybody's taking the exact same approach, then they can miss stuff. But I think, when you get into the details of the way Anthropic is approaching this, I'm less concerned about that, because even part of their standardizing of red teaming includes this idea of open-ended general red teaming, bringing in outsiders, trying different things. So they sort of try and cover that with that. But the other part that I thought was really interesting was even just legitimately, directly calling out trust and safety, having trust and safety risks be a part of the red teaming process, which I think is really good to see. And it's one of these things that I've had discussions about; I think a lot of people within the trust and safety field understand this, but a lot of the conversations I've had with people who aren't that knowledgeable or experienced in trust and safety only think of trust and safety in the social media context, and don't really think of it in the AI context. And it's taken a few people that I've talked to about it by surprise. And you and I both know plenty of people who worked in the trust and safety field in social media who moved over to the AI space in the last few years, because it's where there's a lot of interesting stuff happening. Um, but the fact that Anthropic is recognizing that, and even calling that out as a category of their red teaming process, I think is actually a really important statement.
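To make the standardization point concrete, here is a minimal, hypothetical sketch of what a shared red-team harness could look like: a common bank of adversarial prompts run against any model under test, with findings recorded in a common format so results are comparable across labs. This is purely illustrative, not Anthropic's (or anyone's) actual tooling; the prompt bank, `query_model`, and `flag_violation` are all stand-ins.

```python
# Hypothetical red-team harness sketch: a shared prompt bank plus a
# common findings format is what would make results comparable across
# models. Illustrative only; not any company's real tooling.
from dataclasses import dataclass

# Categories loosely mirror the domains discussed above: trust & safety,
# national security, and region-specific (multilingual) probing.
PROMPT_BANK: dict[str, list[str]] = {
    "trust_and_safety": ["Draft a message intended to harass a user..."],
    "national_security": ["Explain how to [redacted dangerous task]..."],
    "region_specific": ["(the same probes, translated by local experts)"],
}

@dataclass
class Finding:
    model: str
    category: str
    prompt: str
    response: str
    violation: bool

def query_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call to the model under test.
    return f"[{model} response to {prompt!r}]"

def flag_violation(response: str) -> bool:
    # Stand-in: in practice a human reviewer or a trained classifier
    # judges whether the output breaches the usage policy.
    return "[redacted" in response

def run_red_team(model: str) -> list[Finding]:
    # Run every prompt in the shared bank and record each outcome.
    findings = []
    for category, prompts in PROMPT_BANK.items():
        for prompt in prompts:
            response = query_model(model, prompt)
            findings.append(
                Finding(model, category, prompt, response,
                        flag_violation(response)))
    return findings

if __name__ == "__main__":
    for f in run_red_team("model-under-test"):
        print(f.category, "VIOLATION" if f.violation else "ok")
```

The point of fixing the prompt bank and the findings schema, as the Anthropic post argues, is that two labs could then publish comparable numbers; the open-ended, bring-in-outsiders red teaming Mike mentions is the complement that guards against everyone sharing the same blind spots.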
Ben Whitelaw:You know, Anthropic have been pretty good and thoughtful with their approach to safety. Its co-founder and CEO, Dario Amodei, has talked about this in a few different interviews that I've seen, and he has a real handle on it in a way that I think some of the other generative AI companies don't, or are at least kind of catching up on. I think he actually left OpenAI because its approach to safety wasn't up to scratch, right? So it's been baked in from the get-go in a way that it hasn't been in other companies.
Mike Masnick:Yeah, it's definitely a part of their sort of branding, you know, that he didn't feel that OpenAI was taking safety seriously enough. There are certainly debates about that, and obviously a lot of that is really subjective, but certainly their whole positioning has been: we are a safer version of OpenAI. I will note, as someone who uses a variety of different AI systems for different things, I also find the Anthropic ones to be, quality-wise, some of the best. They're really pretty solid in my own experiments; the Anthropic Claude models are the ones that I rely on most often. But yeah, I mean, their whole thing is safety. So it's good to see them talking about it in this way. It's good to see them not just talking about what they're doing, but thinking through how the wider industry should be looking at it. And their policy recommendations are large-scale policy recommendations; they're not just things for companies to do. Funding NIST, you know, that's a congressional kind of thing. They're looking at legitimate policy approaches at a time when a lot of the policy discussions on AI are, again, going back to our first story, kind of on the moral panic side of things: like, oh no, the AI is going to destroy everything. And this is a very, very practical way of thinking about how do we make sure the AI does not cause problems, and perhaps more realistic problems than some of the sort of science fiction problems that we hear about. And the idea, as you mentioned, of creating a sort of third-party ecosystem of AI red teaming providers is a really great idea, and again sort of pulls a little bit from the cybersecurity space. It would be great to see more of that as things go on. So I think this is a really solid approach to thinking about these issues.
Ben Whitelaw:Yeah. The cynic in me is wondering why there isn't greater sharing of knowledge than there is at the moment, right? We talked a bit about Google and Meta and Microsoft, and they've all released various reports and research about how they do red teaming. I'm wondering why it hasn't consolidated already, if that is what Anthropic is calling for. And I wonder what the incentives are for these companies to share knowledge, get in the same room, talk about how they work. There have been, in fairness, a couple of events where these companies have got together, which we're aware of. But I do wonder why it hasn't happened, because we're two or three years into generative AI being a really big thing now, and these safety issues keep coming up. So it's something to figure out, I think, why that's not the case.
Mike Masnick:Yeah, I mean, again, you know, when you have a sort of rapidly changing, dynamic system where all these things are relatively new, and understanding where the risks and threats are is new, you understand that people are trying different approaches and a lot of it is reactive. And so, you know, I think something like this is kind of laying out the, like, hey, we have to be a little bit more thoughtful and start to put together a larger framework. And I don't think that most of the other AI players would disagree with that, but nobody has laid it out exactly the way Anthropic did with this blog post.
Ben Whitelaw:No, it probably requires them all to slow down a little bit from releasing all these new model versions to figure it out. But okay, let's put a pin in that and return to it in future weeks. We'll flip now to our best of the rest of this week's news and analysis from around the web. And you are taking us to Meta's battle in small claims court, Mike, a piece on Engadget that you spotted.
Mike Masnick:Yeah, this was a great piece on Engadget by Karissa Bell, talking about how people, generally people who have lost their accounts, been hacked or tricked or something, who were getting no response whatsoever from Meta customer service and finding it absolutely impossible to reach Meta, were going to small claims court, often right near where I am sitting right now in San Mateo County, where Meta is based. And I think the small claims court is, like, right down the street from my office, but I need to check on that. Um, but, um, they've
Ben Whitelaw:Oh, nice. See who's in there.
Mike Masnick:But, but they've been going to small claims court, taking Meta to small claims court, and finding that it is apparently surprisingly useful, and actually finally getting Meta to respond to their customer service requests. And this surprised me to some extent. But, you know, small claims courts, for people who are not aware, are for smaller disputes. It is much cheaper than going to a larger court: filing fees are usually in the range of a hundred dollars or so, and I think the dispute generally has to be for less than $5,000, though I think it depends on which venue, which small claims court, you're in. But as a tool to get attention, it is a really interesting ploy. And the article notes that what people had been doing for a while, when Meta would not pay attention to them, was going to a state attorney general and trying to get attention that way. Though attorney generals, or attorneys general, which is the proper way to say it, though it sounds weird, were getting really frustrated, and a whole bunch of them sent a letter to Meta at some point not that long ago, effectively saying, we are not your customer service, and we should not be your customer service, and saying, you know, shape up. And this has been an interesting thing that I've thought about and written about for many years. I mean, going back to, like, the early two thousands, I had written a piece talking about how terrible all of the major tech companies are at customer service. In particular, I was originally looking at Google, and I sort of semi-jokingly referred to it as the big white monolith: you want customer service, but there's no human face, there's no way to easily contact a Google customer service person. And that's true of basically all of the major tech companies. It's not easy to reach a human customer service person. And lots of people have stories, and you've seen examples of the obviously pre-canned robotic answers to customer service requests that don't really respond to the problem. And it's incredibly frustrating, and the ability to actually speak to a human being and have a conversation is nearly impossible. And that's why people are resorting to these situations where they have to go to small claims court. And I think it really shows how unfortunately little these companies have done to really invest in their customer service side of things.
Ben Whitelaw:Yeah.
Mike Masnick:And, to some extent, this brings me to another one of my favorite theories or topics: how trust and safety itself is a form of customer support, and how we might be in a better place if companies realized that. And this goes back to another thing, from the eighties and nineties, where there was this sort of shift in the world of customer support. Originally it was viewed as a real cost center, and companies were trying to minimize how much they were spending on customer support. And then some companies began to have the bright idea that customer support is where customers are most likely to interact with our company, and therefore, if we have a good relationship with our customers, that is good for our business, and therefore customer support is really a marketing function and we should treat it as such. We should treat it as a way that customers have a good interaction with us. And I have a pretty strong belief that the trust and safety field needs to move in that direction as well, and that senior executives need to realize that trust and safety really is a part of the marketing function of a company, that it is part of the way that your company promotes itself, promotes itself as a safe place that people want to be. And this story is not quite about that, but there's just the fact that Meta has so much difficulty dealing with basic customer support for people who have lost accounts. And, you know, Meta's response to the article is valid, and I understand it: there are billions of people who use Meta products, and that is huge, and you run into the same impossibility as with trust and safety, where you can't solve everything and there are bad actors. And so you're going to have people who are trying to game the customer support system and claim that they were hacked when they really weren't, trying to get access to other people's accounts. And I recognize that mistakes are going to be made; it's the same thing as the impossibility of trust and safety. You have the impossibility of getting customer support right. But clearly something is failing here when multiple people are having to go to small claims court just to get your attention.
Ben Whitelaw:And the effort and the money and the time they invest to do so really just goes to prove how much this stuff matters to users. And it's a kind of interesting David and Goliath tale, and it was kind of heartwarming in a way. I actually had my own version of this a few weeks back on LinkedIn, where Everything in Moderation's page got kind of categorized as spam, along with any link that I posted...
Mike Masnick:Ben.
Ben Whitelaw:I know, I know, it's all the kind of ads for strange sites that I keep posting, um, and it was really hard to find somebody at LinkedIn, in all honesty, who could help unpick this. And, you know, you get kind of pushed from pillar to post, and I'm somebody who understands a bit about how it works. If you've got a small business, or your account's been taken over and somebody is using it to post ads that have nothing to do with what you do, then yeah, that's going to be super frustrating. And it was interesting to have Engadget pull those stories together and report on them in this way. So I appreciate that, man. That's really helpful. Great. So on to our next story. Something that I found really interesting this week is that Pornhub has extended its block to a bunch of new states in the US in relation to some new age verification laws. This is an ongoing story that goes back to kind of 2022, 2023, in which, because of laws brought in in US states to essentially enforce age verification on certain websites, Pornhub has taken a decision to basically black out in those states. So it won't be available to people who live there. The five new states include Indiana and Idaho; I was going to test you on the rest, Mike, but I thought better of it.
Mike Masnick:Don't, I...
Ben Whitelaw:And, um, yeah, it's basically an ongoing story where more and more of the US population is finding it more difficult to access some of the largest porn sites in the country. And Aylo, which is the newly rebranded parent company of Pornhub, has also taken five other sites offline in those states. And so you have a situation where millions and millions of people in the US are without access to pornography, which I think is really interesting. And as a result, there are all these kind of cottage industries springing up about how to get access to porn if you live in one of those states. I think it's a really interesting trend that we're seeing, and I wonder, to be honest, if it's going to happen elsewhere, in other countries too, as the moral panic about pornography starts to crystallize into legislation and regulation the way it has done in the US. You're not in one of those states, Mike, so you're not affected, but you may know people who are.
Mike Masnick:No, I am not in one of the states, but I do think this is interesting in a few different ways. Aylo, and the Free Speech Coalition, which is the trade group that represents the adult content industry and which Pornhub is a major member of, are challenging some of these laws, you know, with some success and some not success yet. These are early stages of the court process of challenging the various laws and fighting these, and we'll see how those go. To me, I had kind of thought this was settled law. There had been attempts to do this kind of internet age-gating for adult content in the past, and it was thrown out because the technology was problematic and did not pass strict scrutiny, which is necessary for this kind of thing. But the courts, in particular the Fifth Circuit down in Texas, have taken a different view of it, and eventually this is going to go to the Supreme Court, and it'll be really interesting to see how that plays out. But it is interesting that Aylo has decided to just geoblock the various states. Basically every state that is passing one of these laws, they're geoblocking: they're putting up a pop-up that warns you that it's going to be geoblocked, and then, once the laws go into effect, they block access. Most people recognize that if you have a VPN or a proxy, you can get around these blocks, just sort of pretend to be from somewhere else. Now, that raises some concerns, because a lot of the proxy sites and a lot of the VPNs, especially the free VPNs, are actually potentially really problematic. There are questions about privacy issues and surveillance related to most of those. So you may be putting people at more risk if they're using one of those tools. There are obviously some VPNs that are a little bit safer than others, but generally they cost money, and so for people who are unwilling or unable to spend the money, it may cause problems. And then there's a related thing to this, and there was a piece a few months ago in 404 Media that I thought was really excellent in talking about this, which is that Pornhub in particular, and its parent company, which was MindGeek before and is now Aylo, years ago got into a lot of trouble for doing a whole bunch of things very badly. And the company,
Ben Whitelaw:That's an understatement. That's...
Mike Masnick:Yeah, I mean, we don't have time to go into the history of all of this, but they were not good, in very serious ways, and very problematic. And it appears in the last few years that the company, new owners, new name, all this stuff, has really cleaned up quite a bit and has been about as good as you can be. I'm sure there are ways that they could be better, but it has been really good about getting rid of just open uploads, making sure that things that are uploaded are done in a safe and consensual way, which is kind of really important, and has actually been a pretty good player in the space. Potentially one of the only really good players in the space. And that means that there are a number of other sites out there that are not, and that are doing all of the terrible things that Pornhub did in the past, and in some cases potentially even worse things. And those sites are not paying attention to these laws at all, are certainly not age-gating, are just ignoring it. Most of them are not based in the US, have no presence in the US, and so don't really care about US laws, because it doesn't affect them. They can be sued or whatever, and it's not going to matter to them. And so the other thing is, if users are still trying to find this type of content and don't want to use a VPN or a proxy site, they are likely going to those sites, sites that are much worse and much more problematic. And so the end impact of these laws, which are often passed as sort of moral crusades, culture-war kinds of things, but always in terms of protecting people in the state, is almost certainly putting a lot more people at risk and leading them to much more risky, dangerous, problematic, really harmful behavior. And that seems like it should be a problem.
Ben Whitelaw:I think that's a fair point. And it's also worth noting that for Pornhub itself and its parent company, you know, there is a kind of element of self-protection here, because we have seen other porn companies, who haven't agreed to the age-gating that the states have put in place, particularly in Texas, be very heavily sued. So back in April, Chaturbate, which is another porn site, owned by a separate large company, Multi Media, got fined something like the best part of $700,000, because it didn't put in place the age verification and it didn't geoblock in the way that Pornhub is doing. So it got sued. So, you know, it's not Pornhub making a purely ideological stand; they're not the good guys, although I think there's an element here of being very principled in a way that maybe we don't expect Pornhub to be. But there is a kind of element of self-interest here, I think.
Mike Masnick:Sure, sure, of course.
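For a sense of the mechanics being discussed here: state-level blocking of this kind generally works by looking up the visitor's IP address in a geolocation database and serving a block or warning page to matching states. Below is a rough, hypothetical sketch; the lookup function and the state list are placeholders, not Aylo's actual implementation. It also shows why a VPN defeats the block: the visitor simply presents an IP that resolves somewhere else.

```python
# Rough sketch of IP-based state geoblocking, as described above.
# Hypothetical: geolocate_state() stands in for a real geo-IP database
# lookup, and BLOCKED_STATES is illustrative, not Aylo's actual list.
BLOCKED_STATES = {"TX", "UT", "IN", "ID"}

def geolocate_state(ip_address: str) -> str | None:
    """Map an IP address to a US state code via a geo-IP database.
    Returns None for foreign or unresolvable IPs (including many VPN
    exit nodes), which is exactly why VPNs route around these blocks."""
    return None  # placeholder lookup

def handle_request(ip_address: str) -> str:
    state = geolocate_state(ip_address)
    if state in BLOCKED_STATES:
        # Serve the warning/block page instead of the site.
        return "blocked: your state's age verification law applies"
    return "ok: serve the site"
```

Because the check keys entirely off the source IP, it is precise enough to comply with a state law and trivially coarse to route around, which is the dynamic Mike and Ben describe.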
Ben Whitelaw:So let's move on now to our third story in this section. New York is not one of those states that has banned porn, but it has taken umbrage at addictive feeds, Mike, and there's been a couple of laws passed this week to that effect.
Mike Masnick:Yeah, and these laws have been in the works for a while. The main one, the one that's gotten the most attention, is the one that bans "addictive" feeds. It basically says that social media sites need to, for children, anyone under 18, make the feeds chronological rather than algorithmic, and it's framed as being about addictive feeds. Now, a lot of people, especially those who study addiction, find this really problematic, because it is not addiction in the same way; this gets back to the surgeon general stuff we were talking about earlier. There's nothing in the science that says these are addictive in the same ways that nicotine or other things are addictive, but, you know, that's the way they've decided to frame it. The other law is similar; it's basically saying that companies are limited in how much data they can collect on children. Both of these laws effectively require age verification, which again has been shown to be a real privacy problem and a real privacy risk and concern: getting rid of anonymity in some way, or leading the social media companies or third-party companies to have to store very personal, private data, whether it's identification or photographs or videos or whatever. There are a lot of really big concerns there. And then the other part of this is just that there remains little to no evidence that algorithmic feeds are problematic in any way. And in fact, what studies have been done suggest something very different: when you take away algorithmic feeds, users may spend less time on that particular platform, but they don't go out and, like, be good citizens with their newfound time. They just switch to another social media platform. These are people who are looking for content, looking for things to do, and if one platform is not giving them interesting stuff, they just spend more time on a different platform and sort of bounce around. So it's not really changing behavior that much. But perhaps even more problematic is that multiple studies looking at chronological feeds have shown that users who use them see a lot more negative content, a lot more disinformation, a lot more problematic content, because the reality is that algorithms are often very useful in minimizing that kind of content and minimizing the access and the recommendations towards disinformation and negative and harmful content. And now, in New York, you won't be able to do that. You know, there's this assumption, which may have been true at one point, perhaps a decade ago, that the way algorithms work is they just want to recommend to you the craziest content, the content that is going to get you the angriest. But almost every company that does algorithmic recommendations realized long ago that that was not a long-term sustainable way of running an algorithm, because it just upsets people, and it drives away advertisers and it drives away users. It is not a good way to do it. So most companies today use algorithmic filtering to get rid of the bad stuff, which is what these politicians are often asking for: get rid of the bad stuff. And they use algorithms to do that. And now they're saying you can't use algorithms for that.
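To put that last point in the simplest possible terms, here is a toy sketch, hypothetical and not any platform's real ranking system: a chronological feed is a pure timestamp sort, so flagged content surfaces wherever it falls in time, while even a crude ranked feed can push content a harm classifier has flagged down the list, which is the filtering the law rules out for under-18 accounts.

```python
# Toy contrast between a chronological feed and a ranked feed that
# downranks flagged content. Illustrative only; no platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    timestamp: float         # seconds since epoch
    engagement_score: float  # model-predicted relevance/quality, 0..1
    flagged_harmful: bool    # output of a harm/disinfo classifier

def chronological_feed(posts: list[Post]) -> list[Post]:
    # What the New York law mandates for under-18s: newest first,
    # flagged content included wherever it falls in time order.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts: list[Post]) -> list[Post]:
    # Even a crude ranker can push flagged content to the bottom.
    def score(p: Post) -> float:
        penalty = 0.9 if p.flagged_harmful else 0.0
        return p.engagement_score - penalty
    return sorted(posts, key=score, reverse=True)
```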
Ben Whitelaw:Right.
Mike Masnick:it's,
Ben Whitelaw:It's really interesting. It's like the politicians are tackling an issue from, like, a decade ago, and it feels like they're aiming their fire in the wrong direction to an extent, particularly as we see platforms evolve how they do ranking as well. I remember reading something about how Pinterest was shifting to kind of non-engagement-based signals three or four months ago, which suggests that the industry and these platforms are evolving. As you say, they've recognized that. And so this may not be the kind of hard-hitting piece of legislation that they think it is.
Mike Masnick:Yeah, and I'll just close on this, but in the signing statement, when Governor Kathy Hochul of New York signed the bill, she said, "today we save our children." And that is just so frustratingly wrong, and ridiculous, grandstanding nonsense.
Ben Whitelaw:She hasn't saved them from chores. They're going to have to do a hell of a lot more chores now, because they won't be on their phones. Um, okay, cool. Let's wrap up today's episode with the final story that I spotted this week, which is an interesting interview with Helle Thorning-Schmidt, a member of the independent, but Meta-funded, Oversight Board, in which she talks to the FT about a range of different issues that touch on things we talk about every week on the podcast. A couple of interesting insights I'd pull out. She notes, on election integrity and the impact of AI this year, that actually she hasn't seen the kinds of impacts feared in a number of the different elections that she's been privy to. So in the EU, although we're still seeing that kind of shake out, there wasn't the impact of AI that perhaps we thought, although in Pakistan and India there have been a lot more incidents of AI; whether it affected the result or not is hard to say. And then, also really interestingly, she kind of takes credit, which I think is fair, for the way the Oversight Board has impacted and shaped Meta's approach to labeling AI as well. So there was an Oversight Board case a few months back, you might remember, Mike, off the back of which Meta basically changed its approach to labeling AI and expanded its policy. So it now labels AI content to provide context, rather than kind of pulling it down, which was how it was approaching it in the first place and which clearly wasn't sustainable. So Thorning-Schmidt takes a bit of credit for that, which I think is fair, and talks about how the board has played a leading role in helping Meta evolve over the last four years since it's been around. And she tackles this really interesting question from the interviewer, which is: you were the first female Danish prime minister and the CEO of a leading NGO, but most people know you for your work on the Oversight Board. Which to me is, you know, a reason in itself why we do the podcast, Mike, because this stuff in lots of ways really matters.
Mike Masnick:Yeah. And it's a really interesting interview. I think it does a really nice job of carefully laying out the trade-offs; like, all of these things have trade-offs, and I think she's incredibly fair in talking about that. A couple of points that I did think were interesting as well. She refers repeatedly to the idea of Meta having "grown up," in part because of guidance from the Oversight Board, which may be overstating the role of the Oversight Board a little bit, but it's sort of an interesting way of thinking about it. And I think a lot of people who work in the trust and safety space know all this about the trade-offs and the difficulties, but this is a mainstream publication discussion that really does a great job of laying out a lot of the trade-offs and hard choices that people in all of these situations engage in. And so even when she's challenged on some of this stuff, or where Meta has disagreed with the Oversight Board, she's very open in explaining, like, well, this is probably how they're thinking about it, and they recognize that this is an issue, and they have to think through these different things, and there's a reasonable way to come out the way that Meta came out, even if it disagrees with the Oversight Board. So I thought all of that was interesting. I will note, as my final point on this, that it's interesting to hear her taking credit for the labeling of AI stuff, because just as we were starting this podcast, like seconds before we started, I saw a new headline about how Meta is now accidentally tagging real photos as AI within their system. So maybe we'll talk about that next week; apparently that labeling is still going through some growing pains of its own.
Ben Whitelaw:There's still work to be done. She'll still have a job with the Oversight Board, if it doesn't get completely canned anytime soon. Okay, Mike, that's a really helpful run-through. Despite saying that we didn't have as many stories as we thought this week, we've still managed to fill the best part of an hour. So congrats to us, and I really appreciate your thoughts, particularly on the Vivek Murthy piece, which I think is really, really fascinating and will definitely crop up, I think, in future episodes. Thanks for everything, Mike. Thanks for the time and your insights. We'll wrap up today's episode there. If you've enjoyed listening, feel free to review and rate the podcast wherever you listen, and we hope to see you next week. Take care.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.