Ctrl-Alt-Speech

Blunder from Down Under

September 13, 2024 Mike Masnick & Riana Pfefferkorn Season 1 Episode 28

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Riana Pfefferkorn, a Policy Fellow at the Stanford Institute for Human Centered AI. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick:

Riana, each week we start the podcast with a prompt from social media, and it becomes more and more difficult to find one. So it turns out there are not as many social media things out there as we thought, but recently the FTC helped us out by coming to a settlement with one of these apps that the kids use that we old people don't really know about, called NGL. And, uh, NGL has a variety of different prompts. We could use this for weeks. So today I'm going to ask you to send me your confession.

Riana Pfefferkorn:

Oh boy, uh, my confession is, you were the first person to send me that bad Third Circuit decision about Section 230 that came out recently, but it was the day that I was about to like get in the car and drive to Burning Man, and I was just like, not today, Satan, and I still have not read that opinion. I've just read your blog and like other people's blog posts about it in lieu of, you know, harshing my mellow, basically. Yeah, so, uh, well, continuing in the NGL vein, Mike, tell me your never have I ever.

Mike Masnick:

Well, never have I ever, and we will discuss this shortly, but never have I ever pushed for a clearly unconstitutional decision, a clearly First Amendment violating decision that some politicians this week seem to think was just fine.

Riana Pfefferkorn:

Ooh, zing.

Mike Masnick:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories of online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. I am Mike Masnick, the founder and editor of Techdirt. Once again, Ben is off on vacation, having fun, and instead we're joined by Riana Pfefferkorn, a Policy Fellow at the Stanford Institute for Human Centered AI. And we have an awful lot to talk about this week, but welcome to the show, Riana.

Riana Pfefferkorn:

Thank you for having me.

Mike Masnick:

So I want to jump in, because we have a bunch of stories, and as we were planning out this week, we realized, you know, we always try and do a couple of sort of bigger stories, and this week we're combining a bunch of smaller stories into some bigger stories. But I want to start by going to Australia, because Australia, how do I put this? For many years I've always said that Australia is sort of, it's not only on the flip side of the world from the US, but in terms of internet policy, they seem to have taken the flip side of what I think of as sensible internet policy and tried to do the reverse, and just everything that they come up with is the opposite. They have views on intermediary liability where, if you do a Google search on something and it shows something you don't like, that is maybe a misleading result, you can sue Google for it. There are all sorts of weirdness in there. And this week, Australia seemed to just double down on a whole bunch of policy positions that seem really, really strange. And so we'll start with the first one, which is that they are pushing for this law that will issue fines for social media for enabling misinformation. And so, Riana, do you want to sort of describe what it is that the legislation is actually trying to do? Because there's this part of me that's just like, misinformation is something that exists in the world, and people are going to spread misinformation. How you define misinformation is difficult. And I know that lots of people are concerned about misinformation online, but this idea of putting massive fines, sort of, I forget, 5 percent of global revenue fines, for enabling misinformation seems like you're just asking for a 5 percent tax on every internet social media company.

Riana Pfefferkorn:

Yeah, kind of. I mean, this is all sort of the EU's fault for bringing up the 5 percent of global revenue, and there it was for privacy, right, with GDPR. But now everybody's taking that and running with it. It's like, oh, we could just get extraterritorial, like, earnings and domesticate those for our own, and always in the service of an impossible dream that nobody will ever actually be able to measure up to what is being demanded, especially when it comes to, like, preventing the spread of misinformation, even about a supposedly select number of categories. The news coverage of this talks about specific things, like elections and public health misinformation, which, okay, it's not like we've had any kind of disputes here in the United States, for example, about what the truth is around elections or public health issues. Um, so I'm sure that'll just be, that'll go really well. And yeah, I agree with you. Like, I've long sort of thought that when it comes to these, like, we'll impose a fine of a potentially huge percentage of your earnings here and elsewhere for failing to get it just right with whatever we're demanding of you, it's just a tax by another name. And rather than having this like, ha ha, gotcha, now pay up, you know, and continually trying to get platforms to be on the back foot, you could just have a tax and say, this is the cost of doing business here. And then put that tax to whatever you wanted, whether that was media literacy efforts or public health awareness campaigns or something that's totally unconnected to the alleged harms that you're targeting by what is basically rent-seeking behavior, as I see it, mostly by countries that don't have their own robust domestic internet sector themselves. You could put it towards road improvement, I don't care. You know, instead of putting all of this sort of window dressing on what is essentially a tax, just say, this is just a tax, this is the cost of operating here. It might backfire, as we've seen with, like, link taxes and news, but it seems a little more straightforward to me and a little more on the level.

Mike Masnick:

but the thing is, if you just do it as a tax, you don't get the big headlines of we are fining Meta for misinformation that is being spread on its platform.

Riana Pfefferkorn:

And you don't get the cudgel that you have to try and induce companies to do more around protecting privacy and, you know, data protection measures, or here, the spread of misinformation on a particular platform. It's interesting that they're going for this now, because if I recall correctly, and I think you've covered this before, there had been prior attempts by one of their regulators to try and force global takedowns off of X of particular content. It's like, well, that's just kind of a non-starter, and ultimately I think courts made them back down on that. But they're coming and taking another run at this, where it sounds like even for the kind of legislation they're thinking about here, there'd been prior concerns that it would concentrate too much power to say what misinformation is, like, in one of their regulators. And this is the difficulty of trying to, you know, you're nailing jello to a doorknob here, like, or whatever the saying is, I don't even

Mike Masnick:

That's better. Whatever it is, that one's better.

Riana Pfefferkorn:

That's better. Yeah, you heard it here first, folks. Um, you know, that, like, we can't even agree on what misinformation necessarily is. And anytime you see these kinds of, like, failing-to-prevent-the-spread-of-it obligations, it gets us directly into the kinds of core monitoring obligations, like general monitoring obligations, that also have been roundly rejected, and yet regulators keep trying to come back and demand them. Because if something has to get taken down once, and there are folks like my colleague Daphne who's talked about this a lot, like, how are you going to make sure that it doesn't come back and happen again? Well, that basically means having to monitor the entire platform for particular messages or content or images or whatever. And then, you know, you've taken this into another level where it sounds like, well, we're not asking for something crazy here. We're just saying these particular categories that are delineated in the proposed legislation, it's just about election integrity or, like, threats to a particular person. And then suddenly, like, okay, but how do you actually do the compliance to avoid a 5 percent global revenue fine? It may start to look like the implementation is actually much more intrusive and burdensome on speech, where again, we can't even decide necessarily, like, what speech would be covered. It suddenly grows much larger than what sounds like something that would go over pretty well with, from the sound of news coverage, a majority of Australians, like, oh, yeah, they're doing something about misinformation, that sounds like a good idea. Okay, what if we turn it into, and now we're going to carefully see what speech is allowed to be posted to any platform or not, in order to comply with that. Yeah,

Mike Masnick:

Yeah. I

Riana Pfefferkorn:

just, just tax them. Just tax them.

Mike Masnick:

Yeah. I mean, the thing that struck me, and this is in the Reuters article covering the bill, is that they say very specifically that this bill won't give the media regulator the power to force the takedown of individual pieces of content,

Riana Pfefferkorn:

Just impose the

Mike Masnick:

right,

Riana Pfefferkorn:

that was what I thought was the tell, was like, it's just a tax at that point. The cost of having this information online that is disfavored by the Australian government is that you pay money to have that online, because we're not going to force you to take it down

Mike Masnick:

Right, or it is a takedown, because if you want to avoid the fine, then you're going to just start taking down content. And the fact that they won't designate specific pieces of content, and again, you're right that they just tried to do exactly that two months ago or whatever, all of this just feels like clearly the intent is to try and get companies to take stuff down, but they don't want to say that. And that just creates these very vague boundaries, which means that companies, if they want to try and avoid the tax slash fine, are going to start taking down content. And it reminds me of the way that the Chinese Great Firewall worked in the early days, where the government would not tell companies you have to take down this or that content, they would just say, if bad stuff happens on your platform, we will fine you, and you will find out after the fact. And so the end result is that the companies were like, well, take down anything that might possibly be seen as bad, and you had massive over-censorship, which the Chinese government certainly doesn't care about. They don't mind, that's perfectly good for them. This Australian approach seems like the exact same thing. I mean, maybe the Australian government doesn't care, but this, you know, if a company doesn't want to pay the fine, has to lead to massive over-takedowns and over-censorship of certain types of content.

Riana Pfefferkorn:

It really, when you look at it straight on, you see that it's basically the same requiring-takedowns-of-lawful-but-awful-content approach that we've seen in various iterations of various online safety bills. I don't recall off the top of my head whether Australia's Online Safety Act has anything to that degree, but it was a big sticking point in negotiating the UK version, because there's basically this weird doublethink where regulators are saying, well, no, it's not illegal for people to say vaccines, uh, cause you to be able to have your own personal moving wifi, uh, you know, 5G wifi, like, you know, the home base, or what have you, but we're just going to say that you're going to be fined as a platform if you have that material that is perfectly legal to say. If that is hosted on your platform, we will fine you if it is there. And so it's not just, as you correctly note, basically a takedown requirement, because that's the only way to avoid being held monetarily liable for it. It's also just lawful-but-awful dressed up under another name, and saying, yeah, it's legal to say, but the main way by which people express themselves now is through online platforms, who will be penalized if they have it there. And that would be a complete non-starter if you had phrased it as, like, a tax on individual speech, right? Like, well, you can spew whatever nonsense you want, but if it is about vaccine misinformation, public health issues or elections, we're going to send you a tax bill in the mail for saying those things. People would be up in arms, right? It would sound ridiculous. But instead, if it's fining large tech companies, that may make it more palatable.

Mike Masnick:

Yeah. Well, that brings us around to the second of the weird Australian policy proposals this week, which is a sort of cut-off, a banning of people under the age of 16 from social media. And obviously there have been lots of conversations about age verification and age gating, and there are differences of opinions. There are different studies saying different things about whether or not this actually makes sense, whether or not any of this works. There are significant concerns, some of which we've talked about on the podcast before, about age verification creating privacy concerns. But here Australia has this proposal to allow the government to designate an age barrier under which platforms would have to stop kids from using certain platforms. And it's probably at the age of 16. So those under 16 could be banned from using, I was about to say Facebook, but I don't know if people under 16 use Facebook anymore, you know, using TikTok or whatever. And this is one of those things that, again, it feels like politicians love this because it sounds good and parents get excited about it. And they have these quotes. There's an article talking about this from the ABC, the Australian Broadcasting Corporation, where the prime minister is talking about it. And he has this ridiculous comment where he's saying parents are worried sick about this, and it says parents want their kids off their phones and on the footy field, and so do I. Which, you know, is ridiculous for a variety of reasons. There is this sort of built-in assumption that when they're using phones or screens, it is not part of real life, that they are not actually socializing with their friends or family and that they're, you know, necessarily spending less time outside. There have been a mix of studies on this, whether or not kids, you know, if they spend more time on their phones, are they spending less time outside? There are some studies showing one way or the other, but it's not a definitive thing that kids just sitting on their phones all day never go outside. In fact, there are some reports that things like social media and Instagram showing people in the great outdoors causes other people to go to the great outdoors, and a bunch of other stuff. But there's this feeling that, like, adults need to protect kids entirely from social media, when the research on that, as we've talked about many, many times, does not show that social media is inherently harmful. There are some kids who struggle with it, and that is something that would be worth focusing on, figuring out how to help those kids. But this idea that if we just ban kids entirely from social media, that magically they will go out to the footy field, seems unproven and just feels like one of these things that politicians do just to sound like they're doing something without actually doing something. I don't know. What, do you have any take

Riana Pfefferkorn:

I mean, my, my cynical take is, like, when it comes to American-style football, we had like three teenage American-style football, high school football players dying even before school started this year of, like, heat stroke. And, you know, like, I don't think being on Facebook causes traumatic brain injury in the long term, you know? So I don't know if American-style football is an apt comparison. But, you know, it's abstinence, but for social media. And it's like, I don't know that keeping people away entirely from the entire topic of existence is going to be a way to teach them to figure out, like, how it affects them personally, or how to engage in healthy habits, or to connect with all the people that they connect with. Like, it seems to be assuming that it's better for your socialization and your, you know, life with your friends, et cetera, to be outside, engaged in healthful outdoor activities. And that's probably true for a lot of people, but like, you know, I hate to be the person who's like, well, let's choose anecdata from my own life, but it was really important to me when I was a teenager to be able to have internet access to, like, online forums and communities to talk to other people. And you know, kids are also talking to their own actual real-life friends on their phones as well. And all of this just seems so, you know, 13 or 16, and there's a quote from, I think it's the prime minister, favoring the older side of that. It always feels very arbitrary. It's not set in stone, or it's not some magic threshold where every kid is magically able to deal with the challenges of being on social media, and the opportunities of it, at a particular cut-off age, you know. There are different maturity levels, and kids respond differently, and it's just such a weird across-the-board, like, this-will-fix-everything approach. And I'm sure we'll get around to this in talking about US proposals to just keep kids off of social media entirely, where there isn't even any basis, like, any scientific evidence here. They're like, well, first we'll run an age verification test to figure out where the right cutoff is. And I'm like, oh, thanks, I hate this even more now.

Mike Masnick:

Yeah, well, the thing is, too, some of the background here was that last year, the Australian government came out with a report that actually said that they could find no age verification solution that was secure and would keep private data private. And so you would think that having the government come out with such a report would lead them to say, maybe we shouldn't mandate age verification. And yet they seem to ignore that part of it.

Riana Pfefferkorn:

It's, let's make sure as many kids as possible are subjected to data protection and cybersecurity risks. Not to mention all of the adults whose rights are affected as well by this kind of technology. Yeah. No, nothing seems to slake regulators' thirst for age verification technologies, and I don't know whether they're just not coming out and saying the quiet part out loud, which is: we have goals of, whether it's looking good to our constituents or pretending like we're doing something about these difficult societal questions, and we're willing to sacrifice adults' online anonymity and the rights of kids of all ages to data protection and access to information in service of those goals. Like, it's trade-offs all the way down in any tech policy question, but I don't know that they're being super transparent about the trade-offs that are in here. Although, to their credit, in the ABC news coverage, they do quote folks from the Greens being like, uh, maybe there's some other ways we could go about this, media literacy, et cetera, rather than just a complete, blunt, abstinence-from-social-media-style ban.

Mike Masnick:

Yeah. I mean, the thing that strikes me about both of these stories is that these are really complex problems where there are huge trade-offs and nuances in any potential solution. And the Australian government is basically like, well, you know, let's just do it. Like, let's just go in and say, fix it. Rather than being willing to deal with any of those trade-offs or nuances. Which actually takes us to the third Australian story that we have, and this is one that you had called out. And so I will toss it over to you, but it involves the Australian government's view on encryption and encrypted chats.

Riana Pfefferkorn:

Yeah, so there's news now that the head of ASIO, which is the Australian Security Intelligence Organisation, which is basically their equivalent of our FBI, has said that he's planning to soon try and use legal powers that were granted back in 2018 to force tech companies to cooperate with warrants and unlock encrypted messaging conversations in national security investigations. And this was quite controversial at the time that Australia passed the Assistance and Access Act of 2018, which would require tech companies to cooperate with law enforcement agencies' requests for accessing even encrypted information. There were a lot of battles that went back and forth over some language saying, well, but you can't require companies to introduce a larger security flaw. Um, and there was a lot of talk about, well, what does that actually mean in the end? And it was very unpopular on sort of the world stage; there was a lot of pushback against it, obviously. The government at the time kind of said, we recognize this law isn't great, but we're just going to pass it anyway, and then we'll go back and fix it later. And then, according to, you know, there is some oversight built into it of, well, how often are these powers being used, it seemed in recent years that the Australian national security and intelligence authorities hadn't been using these powers to basically force companies to add a backdoor, or to repurpose an existing technical capability into a backdoor, essentially, to get access to otherwise inaccessible data. And it wasn't necessarily clear, like, okay, why are they not making use of these powers that had been around, again, since late 2018? Was it that they were finding other existing methods to be adequate? And that seems to be part of the story here, where the Risky Business podcast had an interview, actually, with the head of ASIO, Mike Burgess, where he had mentioned they have other means of getting access to information, but they're much more time- and effort-intensive, whether that is somebody, an actual person who is in the employ of ASIO, who can infiltrate, let's say, a Telegram messaging group or an encrypted messaging chat, or the ability to use a more tailored solution for, like, hacking into somebody's phone. And in that interview with Mr. Burgess, he kind of had mentioned the same messaging that I have heard in following the encryption debate for a number of years in various countries, which is that the messaging has shifted over time from we can't get access to encrypted data, to we can't do this expeditiously and sort of as a guarantee in all instances. And that is an admission. And I thought overall that the Risky Business interview was, he was actually quite measured and balanced in terms of saying, well, I've been put in this position where I'm going to use all of the powers available to me to try and tell companies that they need to cooperate with our need to access information where we have a warrant or other proper authorization. And we have these more tailored and more cumbersome means of getting our way, but we would like to not have to go that route, and just be able to serve a notice on companies saying you have to cooperate with us. And if the companies refuse to cooperate, what happens then? Several messaging companies like Signal, for example, have said
pretty flatly and bluntly, like, we would rather pull out of a market entirely if the only option is to backdoor messaging conversations so that they can get access to them, even on a one-by-one basis, because it's always like, well, it's not about everybody's messages, it's just about the people who we know have broken the law and therefore, according to this guy, don't have any rights anymore. Like, I'm not sure that's how that works. But if that's the option, some entities have said, we're just going to pull out of this market entirely. And, you know, I think this gets at something that is a repeated critique of Australia in particular. And this was something that a colleague of mine was just saying about the misinformation legislation, that they are not a major market in terms of how many people live there. Like, California is the same size or bigger in terms of population as the entire continent of Australia. And so their ability to have a lot of bargaining chips at the table is somewhat reduced by the fact that, yeah, you can kind of leave that market and still survive, whereas China has a lot more bargaining power, or India, or the entire European Union, because of how much larger of markets those places are. Also the fact that if an entity does pull out rather than comply, that just means fewer options for the people who live there. And just because there are fewer of them, you'd like to think that that doesn't mean that they're any less entitled than, you know, the half a billion people who live in the EU to have private and secure messaging. That particular sort of ultimate option of maybe-you-just-don't-get-to-have-encrypted-messaging-anymore is particularly galling, because reading the press quotes with the head of ASIO, part of what he's talking about is, look, we have a democracy in this country where we have elected officials who the public sends to say what the police's powers ought to be and what my authorities ought to be. And it's like, well, kind of, except when they admit that it's a terrible bill that they're about to pass, and then they just go and do it anyway and swear they're going to fix it later. And those elected officials are basically creating a situation where either somebody agrees to undermine security and privacy and various other values for Australians, or just deprives Australians of the ability to have those apps anymore. So we'll see what happens with this. He refused to name the companies in question, and so I don't necessarily know who in particular he's talking about. I do still think that the arrest of Pavel Durov, which you talked about in the last couple of weeks, has thrown kind of a wrench into a lot of the encrypted messenger debate, just because it's highlighted how weird and different Telegram is, as a relatively, not great-faith actor, but also one that does not really have, definitely not for group chats, like, truly secret encrypted messaging services. And so I'm also still kind of watching to see how are various anti-encryption, quote unquote lawful access, authorities going to shift their messaging in light of the changes that Telegram actually very quickly decided they were suddenly willing to make, you know, after their CEO got arrested. Or is what Australia is planning to do here going to be like, and then we're going to arrest some guys, because it turns out that works, France did it and it was fine? I don't really know.
But yeah, the stuff with the ASIO head, like, some of the quotes that he had been giving in these news stories, and this is from another ABC, Australian ABC, story: he said, I don't believe in our society you can accept there's portions of the internet that are not accessible by law enforcement or the security service. And that is really getting at something that I have always seen as a core proposition in thinking about the encryption debate writ large, which is that there's an underlying philosophical question of, do you believe that there are spaces in human society that should be beyond external policing? Internal policing, maybe, you can report if somebody is being abusive in an encrypted messaging conversation on a platform like WhatsApp, for example, but external policing? And there are different answers to that. Some people may say, yes, there need to be spaces that cannot be penetrated from the outside, we need to have that kind of impenetrable right to privacy. And if you're the police or the head of the national security service, that is not the view that you necessarily take. But I thought it was interesting to see him phrase that as something that I have long used, like, this is the ultimate philosophical question in the encryption debate, and that he just said that quite overtly. Those are basically what the stakes are, and it leads into, well, if your answer is no, there shouldn't be spaces that can't be accessed from the outside, that ultimately means you're okay with mind-reading warrants. But anyway, that may take us down a whole other little rabbit hole for

Mike Masnick:

There are all sorts of questions about, you know, private conversations on a park bench or whatever, and other things. And in the same sense, you know, one of the other comments that he made was this idea that if you break the law or you're a threat to security, you lose your right to privacy, which is like a very blunt statement for something that I don't think is entirely true, right? So, like, you lose some rights to some privacy, and, like, if you break the law, yes, you lose some rights in those cases. But, like, threat to security is also fairly broad, and putting this sort of blanket, like, you lose all rights to privacy, feels a little bit on the extreme. And it also feels, like with the previous two stories, like you're taking something where there is a lot of nuance and a lot of trade-offs and you're making these broad blanket statements that probably don't apply particularly well to that sort of trade-off situation and nuances.

Riana Pfefferkorn:

Yeah. And, you know, from the sound of it, on the technical side, he's not necessarily a lawyer or legislator. And so I kind of at first thought, well, maybe he just said that in a somewhat inapt way. And then he sort of said this several times throughout the Risky Business podcast episode. It's like, oh, okay, that actually does seem to be his view, that you just have no right to privacy if you have committed a crime. Not suspected of committing a crime, committed a crime. It's like, okay, well, this is kind of how we figure out whether somebody is guilty or not. But yeah, I don't know if that's just his view, if he's alone in having that view, if this is a more Australia-wide view, as you say. But, like, of course people still have rights to privacy and other rights even when they are accused of crimes. Without that, you end up with Guantanamo, you know, or you end up with the traditional notion of the outlaw, as somebody who is cast outside of the protection of the law entirely because of something that they did or some category that they belong to. It's like, no, we have rights specifically because we need to protect the accused and those under suspicion as well as those who have done nothing wrong. I forget if it was in a story we'll link in the show notes or in another source where it was kind of the same line of, well, but if you haven't done anything wrong, well, then your privacy is protected and you're not going to be affected by these powers that we're going to invoke under the 2018 law. And it's like, oh, oh, we're really going with the if-you-have-nothing-to-hide-you-have-nothing-to-fear messaging here. Okay.

Mike Masnick:

Yeah. All right. Well, there's a lot there, but I think we're going to move on. We're going to leave Australia and come back here to the US, where we had a couple of different stories that we're going to sort of bring together. The first is that a district court in Utah has rejected Utah's social media protect-the-kids law. This follows on a few other laws in a few other states that are similar, all of these attempts to create these laws that say kids cannot access certain kinds of content, and then there are questions around the age verification and sort of what gets moderated. I did find it vaguely amusing, I always find it vaguely amusing, when you have different laws where at the same time you're sort of trying to pass laws that say you can't moderate adults, and they're passing these laws that say you must moderate kids. But either way, this was a law that Utah's governor, Spencer Cox, sort of very boldly first signed while streaming on all the platforms that he declared were evil. He proudly announced that he was going to stream the signing of the bill, and then went after some First Amendment lawyers, including Ari Cohn, saying, like, I will see you in court, I'm confident that this law is perfectly constitutional. Turns out, according to the chief judge of the federal court in Utah, that it's not, in fact, constitutional to tell companies how they have to moderate content around kids. And so, I think the ruling is pretty straightforward. It sort of hit on all the points that other similar rulings in other states have done. And one of the things that I thought was interesting was, it gets eventually to the question of, does this pass strict scrutiny, which is the standard that, if you're trying to regulate First Amendment speech, it generally has to pass. And as part of that, is there a compelling state interest, and is this sort of narrowly targeted towards that? And one of the interesting things that came out in the ruling was that, even if you say that the mental health of children is an important state interest, you have to show some actual evidence that what you're doing is necessary for that. And the judge actually called out the Surgeon General's report, which was apparently the only evidence that the state of Utah submitted as proof for why this law was necessary. And this is one of those things that I think has come up in a lot of these legal proceedings, in states and the federal government trying to pass laws to protect the kids: they just sort of take it as given that everybody knows and everybody recognizes that social media is harmful for kids, which is not what the evidence actually shows. And so they make a very cursory attempt to prove their point, which was they submitted this Surgeon General's report, which many people reported on as saying that social media was dangerous to kids. But if you actually read the report, it doesn't actually say that. And thankfully, this judge read the report. And so he says, you know, the record contains only one report to that effect, and that is the Surgeon General's report. And then the judge actually read it and said it actually offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by the defendants, which is the state of Utah.
And it says, for example, the advisory affirms that there are ample indicators that the use of social media can have risk, but that independent safety analyses of the impact of social media on youth have not yet been conducted, and also that there is broad agreement among the scientific community that it has potential for benefits to children as well, which the state of Utah, again, kind of ignored. And so I thought this was nice to see, but it's one of a pattern of these rulings where the states are passing these laws and assuming that it is obvious to everyone, and then when the judges actually look at them, they say, like, no, this clearly violates the First Amendment, you know, and you don't have the evidence that you would need to pass strict scrutiny. So, did you have any thoughts on what happened in this Utah court ruling?

Riana Pfefferkorn:

Well, yeah, I agree that it was refreshing to see, well, A, that the judge actually read the materials that were submitted as supporting evidence and said, it actually doesn't say what you say it says, but also that this highlights why I hate comparisons between labeling for social media and alcohol and tobacco. Which is, there isn't a condition where smoking a pack a day makes you run real good and do way better at, like, you know, your athletic performance and your lung health. There's no ambiguous evidence there. I mean, maybe whatever the smoking industry suppressed for a number of years, uh, or bought as evidence that smoking makes you look really cool to girls. But that aside, it's not the same thing. It's speech. It has different effects. You know, it's not even, in terms of the court going into how this is clearly also a content-based restriction: what even is social media? There's all these carve-outs that are a mile long that aren't covered by this. And then one of the things that the court says is, well, how is that any different from somebody spending a bunch of time playing video games, something that has also had to go all the way up to the Supreme Court when it comes to this exact, you know, moral panic over the kids? Why is that any different from social media, where we don't even necessarily know what social media means as a category? But whatever it means, it's not the same as tobacco and alcohol or oil or whatever else you want to draw it to. It's speech, and so we cannot try and regulate it and think that it's going to be as straightforward as saying, well, even after all the long and hard-fought cases involving mandatory labeling, you know, the Surgeon General's warning on tobacco products, for example, it's not going to come out the same way, because as colleagues of mine at Stanford, such as Jeff Hancock and others, have shown, there are so many different impacts that being online in general, or having smartphones, or being on social media can have for young people. It doesn't all point in the same direction, and it's nice to see the judge sort of acknowledge that. I also enjoyed that he noted that, like, actually, kids also have First Amendment rights. Like, we've had Supreme Court cases about this as well! And it turns out that kids have their own speech rights, and people in general have rights not just to speak, but to access speech and information. And so to the degree that this was a case about the rights of users, not merely the rights of the platform itself, which, as the judge noted, seemed to be something that the state was a little confused about in terms of the arguments they were making, it's also important to ensure that minors' rights are protected as well. We remain one of the only countries not to have ratified the UN Convention on the Rights of the Child, which includes speech and free expression rights among the rights that even kids are entitled to. But it was nevertheless good to see the court say, this isn't a data protection measure, you're basically trying to ensure stifling of content that is fundamentally speech that somebody might choose to share on social media when they're still a minor.

Mike Masnick:

Yeah. And I thought this leads nicely into the next story that we had, which, we mentioned the Surgeon General's report in this, and, you know, the Surgeon General, who seems to have ignored his own report, earlier this year announced that he thinks that there should be warning labels, that Congress should mandate warning labels on social media. And this week, a coalition of 42 state and territorial attorneys general sent a letter telling Congress that they should pass a law mandating exactly these kinds of labels, which I think is pretty clearly unconstitutional for a variety of reasons. Also, for everything that we just talked about, that the judge in Utah seemed to recognize, that the evidence just isn't there for this, seems important. And yet all of these attorneys general are asking, including our Attorney General here in California, Rob Bonta, who just in the last, like, week and a half lost two cases about social media on First Amendment grounds, and you would think might take a step back and recognize that maybe there's a problem with this plan. But he was among the many attorneys general who signed this letter and are asking Congress to mandate warning labels on social media. What's your take on the, uh, warning label debate here?

Riana Pfefferkorn:

Like, between the Utah decision we were just talking about, the California decisions that you mentioned, the Texas case where Texas tried to add compulsory warning labels onto porn sites, because of supposedly the evils associated with it that will, you know, turn you into a depraved pervert from looking at porn, which even the Fifth Circuit was like, no, that is flatly unconstitutional, no. Um, of course it's not going to fly here. Like, it's ludicrous and ridiculous and disappointing to see all of these AGs sign onto a letter for something where they definitely ought to know better and be on notice for the reasons you just said. It's all so much just like, there's that Arrested Development scene where it's like, well, did it work for those people? And it's like, no, it never does. You know, they somehow delude themselves into thinking it might, but, but it might work for us. Like, what if we just keep trying the same discredited, unconstitutional idea over and over again? It's not going to work out the way that you hope it will. So, yeah, just spend your time doing things that are more useful to the people of the states that elected you to be their AG, rather than continuing to push this, as you said, stance of just, well, everybody knows social media is bad for young people, we're not going to cite any of the contrary, more ambiguous research out there, and then say that the solution to this thing everybody knows is this.

Mike Masnick:

Yeah, I mean, it strikes me, it hearkens back to the first story in terms of Australia, where there are political reasons and PR reasons to claim this and make it seem like you're doing something. I mean, the sort of famous joke about AGs is that AG really stands for Aspiring Governor. And the entire point of being an AG is not to take on crime in your state or take on things that harm the citizens of your state, but to get in as many headlines as possible so that when it's your turn to run for governor or possibly senator, people recognize your name. And so there is a part of me that feels like that's what this is. This is grandstanding messaging that lets them say, you know, we're out here, we're fighting to protect the kids and save you parents, knowing that even if the courts strike it down, who cares, that's not their problem.

Riana Pfefferkorn:

It's also interesting in terms of what it assumes about who goes to the voting booth, because, like, if I were a 16-year-old, the next time somebody was up for running for AG or for governor, I might very well remember, like, oh, yeah, that's the jerk who tried to keep me from being able to use social media at all, you know, just two years ago. I'm going to go and try and vote against that person. But, you know, maybe they're relying on future voters not being a problem for them in terms of proposing speech-restrictive measures on today's kids.

Mike Masnick:

Yeah. All right. Well, let's move on to something different. I'm going to jump to a story in the Washington Post this week, which is actually about a report that was put out by Claire Wardle, who's a communications professor who has done some really interesting work in the past; she's done some really good research and thought-provoking stuff on disinformation and things like that. And she did this report around TikTok users and how they view misinformation online. And I thought it was interesting in that I think it challenges some of the assumptions. We're talking about misinformation, and there's a belief that it is, like, all-powerful, right? I mean, if you ask individuals, individuals tend to say, well, no, you know, I'm not impacted by that, I'm smarter than that, but all those other idiots out there, like, misinformation makes them all crazy. And so this starts to look at TikTok users, which, often, I think a lot of people think are even more gullible than the average person, and that might also be, like, old-person bias against young people, assuming that young people are easily misled. And I actually found there were a bunch of really interesting things, and the Washington Post coverage, I think, does a really nice job sort of summarizing the findings. And it sort of suggests that kids on TikTok have a certain level of media literacy, which actually seems pretty good. The first thing was that they don't always believe everything that they see. They know that there is garbage on the internet, they know that there is misinformation, and then they try to calibrate and figure out what is true and what is not true. And one way that they do that, and this might horrify people, but after thinking about it, I actually think this is interesting, is that they stay on TikTok as part of their fact-checking, and that they start to read the comments on things. And there is the sort of joke that you should never read the comments on anything online, which is sometimes true, or at least if you're going to read the comments, you have to be prepared for a lot of nonsense. But the interesting thing was that what this study found was that the people on TikTok, you know, they'll see a video, and then if they're curious about it, if they're like, is this really true? They will read the comments and begin to piece through what people are saying about it. And they see that people might challenge it or provide alternative viewpoints. And it begins to give them a sense of how to zero in on what is true and what is not. And then from there, they might move to other sources. They'll go to Google and begin searching on things. And so this is actually, I think, a pretty good media literacy technique: to see something, not be sure whether it's accurate or not, begin to see what other people are saying about this specific content, how people are responding to it, and then beyond that, go to additional sources outside of that realm to sort of see, like, what else is there that supports this or not. And so it struck me as, like, maybe there's at least some group of generally younger people that we are teaching to have media literacy, to be skeptical of content that they come across, and then to figure out how do you actually zone in on things.
And there's, like, the joke around, well, do your own research is often a term for, like, getting sucked into some sort of even worse nonsense pool of information and only believing absolute nonsense. But this seems like kids actually taking pretty thoughtful approaches to, you know, approaching things with skepticism and trying to zone in on what is true and what is not. What's your take on this report?

Riana Pfefferkorn:

Yeah, it's encouraging, and it sounds like, it's not clear, the exact, uh, this is only, it's a relatively small sample, right? It's like 30 people, so it wasn't necessarily representative, and it includes people in their 20s, it's not all, you know, minors, for example. But it's certainly the diametric opposite of just banning everybody from seeing any of this stuff until they're 16. And it goes to the point that we were talking about earlier of, you know, learning how to interact with and deal with what you encounter online, and, yeah, just recognizing that what you see online isn't necessarily true. And having strategies for how to go and figure out what you think about it is really at odds with the, well, people are just sitting there slack-jawed, scrolling for three hours and absorbing all the misinformation without any kind of skepticism. It seems to indicate that people, whether they learned this in school or developed this themselves, have developed some amount of media literacy strategies. And it's interesting even whether, like, is this confined to TikTok? Do people have different approaches when they're on different platforms? You know, do the kind of Community Notes approaches on X have anything to teach here, or are they completely different populations with different strategies? But it's good to see that people are doing exactly the kind of user-empowerment, you-have-the-ability-to-critically-evaluate-what-you-encounter-online sorts of techniques. And that, in fact, maybe means you don't just need some sort of paternalistic stepping in by legislators to decide what you are allowed to do or see online or not, especially for an app whose very existence is under threat in the United States, for reasons separate, nominally separate, from the issues of misinformation online. And then also, you know, it takes us back to the top story that we were talking about, with Australia just trying to say, we're just going to fine you for having misinformation. It's like, well, but what are the harms that you're trying to impose the fine for? Because if people have the ability to do something to mitigate those potential harms that they might otherwise experience if they just swallowed this hook, line, and sinker, then you're trying to impose liability for material that is not necessarily inherently having the expected outcome that you are assuming it will.

Mike Masnick:

Yeah, I thought it was a really interesting study. I mean, obviously, more research needs to be done on it. But, I tend to believe that society begins to figure these things out. And to me, this was sort of a positive signal that at least some folks are figuring this stuff out.

Riana Pfefferkorn:

Yeah. I think also, like, some of the comments that have come up, especially from legislators, in some of the stories we've been talking about, is, this is a new generation, we don't have any instincts for how to deal with this yet. And there's a certain amount that I worry about, trying to figure out what the best policy should be for something that you didn't have to deal with yourself. Like, at this point, everybody of all ages who's been on the internet should be aware that there's misinformation and stuff that's not accurate on the internet, and have some way of thinking through how to deal with that. But it doesn't mean that you should discredit emerging practices that are coming from people who are, I hate this phrase, digital natives, or who are just more used to emerging forms of technologies and mass communication than the ones that you grew up with, where the strategies for figuring out best practices for whether something is true or not in the TV or radio era are just not going to be the same necessarily as what's needed for TikTok. And so I think it's a two-way street to listen to emerging practices rather than just say, we know what's best, and what's best is let's just ban this app entirely.

Mike Masnick:

Yeah. All right. So then I think our last story of the day is going to be this interesting announcement from the Mental Health Coalition of a program called Thrive, which involves Meta, Snap, and TikTok, again talking about ways to share information around suicide-related content and self-harm content. It is similar to other programs that a lot of the social media companies have had around sharing signal hash information around particular types of content. Obviously with CSAM content, there are things like PhotoDNA; there's terrorist content with GIFCT and others. This is an attempt to do something similar around self-harm content. And I thought this was interesting on a couple of different levels. In terms of one, giving another example of where we constantly hear that the companies do nothing and don't care about this stuff, which I know and you know is not true, the companies usually have pretty big efforts around this stuff, perhaps not X, and perhaps not Telegram, but other platforms do take this stuff seriously, and this is another example of where they are. But also a few concerns about how this program will actually work. There's been a bunch of research on self-harm content and suicide-related content, and it is a mixed bag again in terms of what the impact is in terms of exposure or not exposure, blocking exposure, how much of this is sort of demand-driven versus supply-side in terms of influencing behavior, and what happens. You know, there are people who believe that if people are searching for this content, that is a point where there could be an intervention that might be helpful, and there are approaches that use that. Whereas this, where it's just sort of, we're going to designate a bunch of content that we determine is self-harm related content, and then create a hash, pass it across all of these different platforms, and hope that they all block it, raises at least some concerns about whether or not that will be effective, who's actually making sure that that content is what it says, and what qualifies and what doesn't. But I think it's an interesting approach. I think it's one worth watching, but also I just have some concerns about how it plays out. You know, I know you've done a lot of work with NCMEC and sort of what's happening with the CSAM content there. What's your take on this when it comes to self-harm content?
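[For listeners curious about the mechanics: below is a minimal sketch of how hash-based signal sharing of this kind generally works, using a plain SHA-256 exact-match digest in Python for simplicity. Real systems like PhotoDNA or the GIFCT database rely on perceptual hashes that also catch near-duplicates, and the names and structure here are purely illustrative, not any platform's or Thrive's actual API.]

import hashlib

def content_hash(data: bytes) -> str:
    # Hash the raw bytes of a piece of content; only this digest is ever shared.
    return hashlib.sha256(data).hexdigest()

class SharedSignalList:
    # A shared list of hashes that participating platforms consult.
    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def add(self, data: bytes) -> str:
        # One platform flags content; the hash, not the content itself, goes on the list.
        digest = content_hash(data)
        self._hashes.add(digest)
        return digest

    def matches(self, data: bytes) -> bool:
        # Another platform checks a new upload against the shared list.
        return content_hash(data) in self._hashes

# Usage: one platform contributes a hash, another checks uploads against it.
shared = SharedSignalList()
shared.add(b"example flagged video bytes")
print(shared.matches(b"example flagged video bytes"))    # True
print(shared.matches(b"slightly altered video bytes"))   # False: exact hashing misses near-duplicates

[The last line illustrates the design trade-off discussed above: whoever decides what goes on the shared list effectively decides what gets blocked everywhere, and a simple exact hash is easy to evade, which is why real deployments use fuzzier perceptual matching and why auditing the list matters.]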

Riana Pfefferkorn:

I mean, I share the concerns that you have. Like, in my mind, I have this spectrum of the categories of content for which some sort of signal-sharing program has been put into place, voluntarily by companies or otherwise, of, like, should this be taken down, does everybody agree it should be taken down. And it goes from, you have CSAM, and then we've had copyright, with a bunch of systems built for detecting copyright. You've got terrorist content with GIFCT, and now we've got suicide and self-harm content. And there's always, like, you know, we've been talking about trade-offs, where the potential over-removal harms become more potent the further we stray away from there being no upside. Like, if we're talking about something like CSAM, well, who cares what the context is? Whereas one of the concerns with GIFCT, and with the GIFCT set being basically a black box, is, well, does that include counter-propaganda programming, like counter-terrorist types of programming, de-radicalization efforts? And here, too, like, honestly, I don't know a lot about suicide and self-harm work, I'll admit that, but I can easily imagine how, you know, trying to help, maybe people who used to have this as a problem themselves trying to talk to people who are still in the throes of an eating disorder, for example, or a cutting disorder, maybe it's helpful to be able to use as counterprogramming. I'm just speculating there, but you get these questions about, is anybody else going to be able to have access to see what all of these platforms are agreeing voluntarily to remove, and audit that, and find contexts where there is actually a downside because there are over-removals in contexts where context matters? And you have that even for copyright, obviously, like for fair use, right? And so the more you move on the spectrum away from it's all downside and no upside, there's no legitimate context to it, to something that is potentially, definitely capable of harming kids, or adults for that matter, but where there may be upside uses, I share the concerns that you've got, Mike, for sure.

Mike Masnick:

Yeah. I think it's interesting. I think it'll be worth following. I think it's good that the companies are taking this stuff seriously. I worry that it just becomes sort of like a universal block list on this one. We don't really know the impact of it. It might turn out, I hope that they're planning to study this, and maybe it turns out that having a sort of universal block list on this type of content actually has beneficial, you know, results. But I'm really concerned. I mean, I had looked at a lot of the studies on eating disorder content, which is a slightly different area, obviously, but where a lot of the research found that just trying to take down that kind of content actually led to worse results rather than better results. And so I fear that there might be something similar here. But, you know, it's something that we'll have to pay attention to and watch. But I think that is it for this week. Riana, thank you so much for stepping into Ben's shoes and joining me on the podcast and having this discussion where we have solved all international speech regulation issues, right? I think

Riana Pfefferkorn:

We're done now. We've put ourselves out of a job. Yes.

Mike Masnick:

There we go. There we go. Well, thank you so much. And, uh, thanks to everyone for listening as well.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.