Ctrl-Alt-Speech

Comply & Demand

Mike Masnick & Ben Whitelaw Season 1 Episode 39

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Internet Society, a global nonprofit that advocates for an open, globally connected, secure and trustworthy Internet for everyone. In our Bonus Chat, Natalie Campbell and John Perrino from Internet Society join us to talk about the social media age restriction law in Australia, a proposed age verification bill in Canada, and the trend of age gating and age verification globally, and what it means for the open internet.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So I don't know how we haven't already used this prompt, Mike. Perhaps we should be feeding all of the podcasts into ChatGPT to tell us. But, um, I'm going to ask you today the prompt that you get on ChatGPT when you go onto the site. And that is: what can I help with?

Mike Masnick:

Can you find a thousand or so people who would like to purchase my social media card game?

Ben Whitelaw:

What? Yeah, 1 Billion Users. Let's get the plug in.

Mike Masnick:

Yeah, 1 Billion Users, which, I will note, this week OpenAI announced that they are aiming to get 1 billion users on ChatGPT. So this all ties together

Ben Whitelaw:

Oh my God. It's, do you think they, they took inspiration from the Kickstarter?

Mike Masnick:

They must have, but I would like them to send 1 billion users to the Kickstarter. We are not quite at our threshold of support to actually make the game go into production. But if you are listening to this, I know you will like this game, because it is all about social media and it is fun for the whole family to play. So if you're listening to this and you have not backed our game, 1 Billion Users, on Kickstarter, you can just go to Kickstarter and search "1 Billion Users". Please, please check it out. We need more backers. And if you know other people who are interested in online speech and trust and safety, and you want to have a fun game to play that will also help you explain trust and safety and social media and all this kind of stuff, please, please do it. I hate to beg, but I need to beg.

Ben Whitelaw:

I'm, I'm going to buy five more.

Mike Masnick:

There we go. We have a discount. If you buy five, we have a discount,

Ben Whitelaw:

I'm going to do it.

Mike Masnick:

but anyways, Ben, what can I help you with today?

Ben Whitelaw:

What can you help with? I am incredibly jet lagged, Mike. I've just landed back from your fair nation after a week of work there, and I do not know what day or time it is. Um, I have not figured out how to, yeah, to actually speak, as you can tell. Um, so any advice, any help on how to get back to a normal rhythm of sleeping would be much appreciated. Let's see how this goes.

Mike Masnick:

Yes, yes, this may be an interesting one. Uh, I can confirm that. Well, one, I was up late last night working on stuff, and I was texting Ben, who was in Chicago. This was not that many hours ago

Ben Whitelaw:

No,

Mike Masnick:

and now he is in London, and he showed me his bag that he had just thrown in the corner after getting back home in time to record this. So we are both working on quite little sleep, and, uh, Ben even less than I. So this may be a very fun or very strange episode, depending on where things go.

Ben Whitelaw:

Hi everyone. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation and internet regulation. It's December the 6th, 2024, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by today's sponsor, Internet Society, a global nonprofit that advocates for an open, globally connected, secure, and trustworthy internet for everyone. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation. At least I am, or was the last time I checked. Um, and I'm with an equally kind of bleary-eyed Mike Masnick, for altogether different reasons. It's good to be here, Mike. We haven't been in these respective chairs for a few weeks.

Mike Masnick:

Yeah, that's right. We took off last week. We missed all the big stories, uh, during Thanksgiving week and then the week before that you were traveling. So you were not here. And so, uh, it's been, been a little while since, since we've, been able to do this together.

Ben Whitelaw:

Um, we've got a lot to cover today, partly because, yeah, we took a break last week. Happy Thanksgiving again to you and to our listeners. And we have a really good bonus chat at the end of today's episode with some super smart folks from the Internet Society, around social media bans in Canada and Australia and age verification in general. So stick around for that at the end of today's show. And we won't cover those kinds of stories today, just because we do that really well with those folks. We're going to dive straight in to, um, what is essentially, Mike, a breaking story. Um, it's a live story that you essentially woke up, rubbed your eyes, and, uh, have dived into. It's the, uh, the TikTok case. It's back on the agenda.

Mike Masnick:

Yeah. So we knew that there was supposed to be a ruling coming down basically any day. This is the US law that in theory forces ByteDance to divest of TikTok in the US, or forces the app stores and ISPs to block access to it. And they are supposed to either divest or have access blocked on January the 19th. And there's a whole sort of political side story to this, which is that our incoming returning president, uh, Donald Trump, who originally was the one who proposed the TikTok ban and tried to do so in a very ham-fisted way that failed four years ago, has now completely flipped positions, perhaps because he was backed heavily by one of ByteDance's biggest investors, but has insisted that he's going to protect TikTok. But he comes into office, I think, a day or two after this law goes into effect. And so TikTok had challenged the law, and it went to the DC Circuit, which is an appeals court. And they came down with their ruling today, basically saying that the law is just fine, there are no problems with the law, and TikTok must obey it, and ByteDance must obey it. This ruling did come out literally about an hour before we were recording. I have read what I think are the most salient points in it. However, given the timing, I may make some mistakes in terms of this, because I have not had a chance to go through it in great detail.

Ben Whitelaw:

Have you even had a coffee, Mike? Are

Mike Masnick:

I have not, actually. I am, I am, uh, yeah. Anyways, so, I think they're wrong is my quick summary. It's interesting because the government tried to argue that there were no First Amendment issues at all, and that the case could be decided without even considering the First Amendment. The court went in a different direction and said that the government was wrong, and that the First Amendment does apply here. And then once you say that the First Amendment does apply, then there's a question of which level of scrutiny, and there are different levels, you know, has to be passed in order for the law to be considered constitutional. The highest bar is strict scrutiny, and then there are lower bars. And the government had argued that if the First Amendment does apply, you should go for one of the lower bars. ByteDance had argued that it should be strict scrutiny, which is the hardest to pass. The court notes that there isn't a comparable situation that says which of the levels of scrutiny should apply, and so they actually choose not to say which level should apply. Though there is a concurring, uh, opinion by one of the judges suggesting which level should apply, the court says it doesn't matter, because we think that this law passes even the highest bar, strict scrutiny. And so then they just analyze it based on strict scrutiny. And I think, in my one read-through, they do a terrible job of it. They basically strongly buy into the claims by the U.S. government of the national security concerns, even though they admit that no actual evidence is presented for those. But they basically say, well, a lot of people in the U.S. government say that we should be concerned, and therefore we're just going to take that as fact. This is not uncommon in the U.S. court system, when the government says, well, we have national security concerns and blah, blah, blah, we can't tell you what they are.
We're just really concerned. The courts are often willing to go along with that. That has led to a whole bunch of really terrible things having to do with civil liberties and civil rights and all sorts of stuff going back decades. But that seems to happen again here, where they're just like, national security concerns seem totally legit, and, you know, ByteDance hasn't given us anything to reject that. They admit multiple times that a lot of these concerns are totally speculative, and yet, because ByteDance can't respond to these speculative concerns, therefore, this is all okay. There's a whole bunch of other little stuff in here that, again, it's just kind of like, well, you know, ByteDance said, we have all these other, less restrictive means of doing this, which is part of the strict scrutiny test. And the court is like, nah, you know, we don't think any of those is good enough. I mean, you know, ByteDance really pushed this idea that they would separate out all the operations, which they've mostly done. Everything is hosted in the US by Oracle. They've given Oracle the power to audit stuff. They even offered the US government, like, an off switch, that they could, you know, push a button and turn off TikTok in the case that something really bad happened. And the government was like, nah, you know, that doesn't really satisfy what the government is doing here. There are a bunch of other oddities in here. You know, there were issues around, like, specific targeting: the bill actually names TikTok, which seems like it's singling them out for different treatment. And the court kind of waves that off and says, well, no, that's not really true, even though it does mention it. They said at one point it's not punishment. If it was a bill of attainder, which is a specific thing where it's just targeting someone for punishment... they said, yes, it only names TikTok, but not for punishment, and therefore it's not a bill of attainder.
There are all of these really odd things. So the big question now is kind of like, what happens next? Like, does TikTok get shut off on January 19th and be gone for two days, and then Trump comes into office and suddenly changes position and allows TikTok to come back? I don't know. I mean, and I could be wrong, there's some procedural weirdness in terms of how this law was written, in terms of, like, forcing it to go straight to the circuit court rather than a district court, which would be the normal thing. I assume that they can now go to the Supreme Court and ask for a stay on this ruling in order to appeal it. And maybe that puts the decision off past January 19th and allows Trump to get into office and then sort of, you know, do something to keep TikTok, unless he decides that he maybe doesn't want TikTok. We don't really know at this point.

Ben Whitelaw:

Who knows what side of the bed Donald Trump will wake up on. It's really interesting to me, Mike, thanks for unpacking that. It's really interesting to me that all of the work done by TikTok to essentially kind of separate it out as a distinct legal entity, under this kind of Project Texas umbrella that it came up with a few years back, actually has seemingly not worked at all. It has not proven to be persuasive to the court in this case at all as to the national security threat. Why do you think that is? Why do you think that all of that effort has kind of fallen on deaf ears?

Mike Masnick:

I mean, there is some element of it which is just, honestly, it feels like general fear of China, and that comes through in the ruling too. Like, you see repeatedly just talk of the PRC: the PRC this, the PRC that. And even if they say they're not going to, if the PRC comes down and demands that they do this or that, they will have to obey, and therefore that is the overriding concern. And, you know, there were stories that came out in the press from, like, former TikTok employees saying that employees in China still had access to data, and that's noted in the decision. They point that out. And so they say, you know... basically, they just don't trust that this is real. It's sort of... the judges seem to feel that this was kind of a fictional setup, um, and they don't really trust that Oracle's ability to audit it is actually meaningful, because, again, they feel that the PRC could come down and do all of this. You know, the other argument that is made, too, is that it's really just focused on the PRC. I mean, because they say, you know, one of the reasons why they claim that this isn't a First Amendment violation is because, if the company does divest, they say, all of the same content and all of the same moderation could still occur. There would be no change to the expressive nature of things. The only thing we are doing is trying to disconnect the PRC from TikTok. And so they feel that the Project Texas setup and the Oracle audits and the US government, like, off button don't separate the company from the PRC. And so that's where the court comes down.

Ben Whitelaw:

They'd only be happy if the PRC sold ByteDance, or ByteDance changed owners or changed hands. That's the only way this would actually kind of, uh, change the outcome, in many senses.

Mike Masnick:

Yeah. And, you know, I think, for a variety of reasons, I think it's wrong. There are other precedents on this that they sort of pooh-pooh, and they're just like, that doesn't really apply here. But it is something that happens. I think it's a bad look. I think it's a bad look for, you know, the US, a supposedly free country, to sort of take this viewpoint. And I think it will justify all sorts of other bad stuff from elsewhere as well, based on things like this. Uh, it's, I think it's a bad look. I know some people really are concerned, and there may be legitimate reasons to be concerned about the Chinese government and their connections here, but I just feel that this law goes against basic American principles on things, and I'm disappointed by the ruling, which, you know, again, I've only read once, and really focused on the First Amendment section, which is sort of the second half of the ruling. And, um, it just strikes me as very unconvincing.

Ben Whitelaw:

yeah. And that national security concern has yet to be proven as real or anything beyond really hypothetical at this stage.

Mike Masnick:

Yep. Yep. And, you know, that frustrates me. I mean, I have this come up in all sorts of other cases, around, like, Fourth Amendment stuff and encryption. And just, you know, the willingness of the courts to accept the government just sort of giving this blanket statement that this is a national security concern, without proof, is just kind of frustrating.

Ben Whitelaw:

Yeah. Okay. It links, um, nicely to our next story, Mike, which you have picked out. Um, it's a TikTok story, but it's about a country that we don't often talk about on Ctrl-Alt-Speech, um, Romania. And I guess this is kind of really, in some senses, the threat that the courts and the government in the US are trying to kind of mitigate against, right?

Mike Masnick:

Yeah. Yeah. It's kind of interestingly related. Um, there's an election going on this coming Sunday in Romania. But in the first round of the election (it's sort of a, you know, two-round election, the first round to figure out who are the top two candidates, and then the final election is, you know, with just those top two candidates), there was a surprise, you know, second-place finish that knocked out the incumbent. The incumbent, I believe, came in third and so is not taking part in the election this weekend. There was a candidate, and I may pronounce his name wrong, Călin Georgescu, who was considered a sort of mostly unknown far-right populist candidate, who sort of came out of nowhere with a big TikTok following. Um, and he has been sort of very pro-Russia, pro-Putin. And in fact, a lot of his successful TikToks are sort of reminiscent of Putin-style, uh, press appearances: riding a horse, doing judo, apparently running on a track without breaking a sweat. It's just sort of this, like, physical prowess.

Ben Whitelaw:

nothing, nothing to, you know, who doesn't like that? That sounds like a great, a great TikTok experience.

Mike Masnick:

Yes. Yeah. And so he built up a really big following on TikTok. The polls in the country had suggested that he was not going to get that many votes, and a lot of people, it appears, sort of chalked up his big TikTok following to foreign influence campaigns. They also noted that those videos were, like, highly stylized and produced, and there were questions of who was doing some of the producing and the stylizing behind it. There were questions about whether or not the followers were real. Were they foreign efforts? Were they bots? There were questions around influencers who were promoting the videos, and were they paid or not?

Ben Whitelaw:

I found this really interesting point around, like, whether this candidate was marked in the same way as other political candidates, basically suggesting that he wasn't kind of tagged as a political candidate running in an election and therefore got larger reach than the other candidates, which is a really interesting kind of element to this, isn't there?

Mike Masnick:

Yeah. And that's unclear, because TikTok denies that. But the deal is, because there are restrictions on political advertising within Romania, if all the candidates are treated as candidates, then in theory there were limitations on kind of what sorts of promotions they could do. And if they were paying influencers to promote, is that a political ad? And how does that apply? And so there are all these questions, and the suggestion from many, including some government officials, is that TikTok didn't designate him as a politician, as a candidate, and therefore didn't have these restrictions, and that allowed his videos to go much further and build up a bigger following. Again, TikTok denies it, so it's a little unclear whether or not that is true. There are, though, things that TikTok did do: at one point they did remove what they refer to as a covert network that was trying to boost his videos. And so they did something. You know, they certainly paid attention to something and found some inauthentic behavior behind him that they claim they stopped. And they also said that they found some similar types of inauthentic behavior for some of the other candidates. Um, so it wasn't that they did nothing. They were willing to step up and do something. But Romanian officials feel pretty sketched out by this, and they've asked the EU to investigate. And, you know, it brings up all the questions that happen whenever there are unexpected results in elections and everyone's looking for explanations. And these days, oftentimes they quickly jump to, well, it was this internet platform that caused the problem. Yeah.

Ben Whitelaw:

Yeah. I mean, there's a couple of interesting points here. In some ways this story is one we've seen many times before, which is a platform being used by a political candidate, with the kind of looming threat of foreign interference in the background. And, you know, there's countless examples of that, Cambridge Analytica being the kind of most famous one, I guess. And it's nice to see that things don't change in the best part of a decade. But I was really interested in, you know, if that is the case, if there is a kind of lack of oversight here: Romania is pretty small. It has 20 million citizens. Eight million of those apparently use TikTok or have a kind of TikTok account, so it's a significant number. You know, why was it that there wasn't really, I guess, more attention paid to making sure that candidates were given the same kind of platform? And I decided to kind of go into the DSA data to find out how many moderators Romania has, um, or how many are Romanian-speaking. And so there are, according to the DSA report from January to June, 95 Romanian-speaking moderators for a country of 20 million. And we don't know if that's good or bad. It doesn't necessarily take into account, I guess, people setting policy either. That's probably just the folks who are looking at reports and appeals and those kinds of things. But it doesn't seem a lot compared to other countries. So the Netherlands, which has roughly the same number of citizens, we don't know exactly how many users of TikTok, but they have 160 moderators speaking Dutch.

Mike Masnick:

roughly double,

Ben Whitelaw:

So roughly double. Um, Sweden has roughly the same number of moderators, 99, but has half the population. And again, we don't know how many of those people use TikTok. So again, you know, it's interesting to me, kind of how resources are applied. And again, this is a tale as old as time: how platforms in the kind of non-core markets set policy and operationalize policy is something that we see time and time again as a sticking point. Do you think that's kind of relevant here? Does that feel like a pathway trod a little bit?

Mike Masnick:

Yeah. I mean, you know, it is one of these big questions that comes up all the time in so many of these discussions. You know, one of the reasons you and I are always looking for stories outside of the US and the big countries that everybody talks about is because there is an important story there about how these companies handle the, you know, less-followed countries, where less attention is paid to them. And so I think it is a huge story. And, you know, there is this element of this one where I sort of feel like... I think a lot of people really believed that so much of his following, in this case, were bots, that they didn't think that the votes would follow, and yet the votes did. And so there was this part of me, when I first saw this story, and, you know, the first thing I was reading was like, oh, you know, it's all fake followers and bots, and I was like, yeah, but it wasn't fake voters, you know. So something is happening here. But to be honest, too, like, I don't know that you can say necessarily that, like, oh, 95 or whatever moderators is too little. I don't know. And we don't know, like, was that not enough? Were the policies not in place? Again, there's the whole thing where, like, TikTok has denied this, and they did take down some inauthentic behavior. So we have, you know, partial information, not full information. If the EU does an investigation, it would be really interesting to find out if more details come out of it. But it is sort of an interesting story to pay attention to and to see what comes of this.

Ben Whitelaw:

Yeah, definitely. And I mean, I think the other part of this story, something that kind of is a thread that runs through the US TikTok ban as well, is that it's a media story, really. It's about the fact that there are so many people kind of consuming TikTok, and the shift away from traditional media to new forms of media. The fact that politicians can bypass traditional forms of media, as this candidate has done, and still perform very well in an early round. So we'll keep tabs. I've never said this before, Mike, but I'll be keeping tabs on the Romanian election this Sunday. Uh,

Mike Masnick:

There you go. This coming Sunday is my birthday, so I will celebrate it by paying attention to the Romanian election as well.

Ben Whitelaw:

happy birthday in advance. And, uh,

Mike Masnick:

Thank you.

Ben Whitelaw:

What, what a way to celebrate

Mike Masnick:

Yes, yes.

Ben Whitelaw:

So if those two stories paired together, I guess, represent in some ways the kind of ghost of antitrust past, Mike, you know, the way that the,

Mike Masnick:

a segue. What a segue.

Ben Whitelaw:

you know, I think the next story is the ghost of antitrust future. It's the next one you picked out, and it's about a man that I actually didn't know about, uh, up until an hour or so ago. Um, but he's an incredibly frightening-looking man, I have to say, and I'm not sure if I'm ever going to forget his face. Um, so tell us about Andrew Ferguson and what he's come out with this week.

Mike Masnick:

Oh my goodness. So Andrew Ferguson is an FTC commissioner, and a Republican. The FTC has five commissioners: three are appointed by whichever party has the presidency, and two are the other party. So right now there are two Republican FTC commissioners and three Democratic ones, and that will flip in January. And Andrew Ferguson is one of the Republican commissioners. He is very clearly vying for the chair, taking over what is Lina Khan's position right now, under a Trump administration. And there was a New York Post article today or yesterday that sort of detailed that there are three leading candidates: the two current Republican FTC commissioners, which is Ferguson, and the other one is Melissa Holyoak. And then there's a third person, who worked for Senator Mike Lee, who he's really pushing for to be chair. And the question that people around Trump are apparently asking is, like, which one of these is going to be toughest on, they say, big tech. But they really mean Trump's enemies, especially considering how much support he got from certain tech sectors this time around. It'll be, you know, the companies that they don't like. And so the FTC took a fairly typical FTC action this week against an e-commerce platform called Goat. The details aren't even that interesting in the case, but basically they lied about shipping times. People were paying for premium shipping and not getting it in time. And then also they had a buyer protection thing, where they were saying, like, if anything goes wrong, we'll protect you, and then they weren't living up to that. The FTC took action on them, basically saying, you know, they were making promises and they weren't living up to them. That's an unfair and deceptive practice. Very typical, standard FTC. Nothing at all interesting about that.
The other Republican commissioner, Holyoak, put out a concurring statement on this that basically just said, this also proves that we can use the same authorities to go after big tech companies for unfair moderation decisions.

Ben Whitelaw:

Okay.

Mike Masnick:

Which is nonsense.

Ben Whitelaw:

Yeah.

Mike Masnick:

But it was, like, a one-paragraph thing, and it looks to me, at least, like Andrew Ferguson then said, okay, I see that, I've got to one-up it, because we're in a fight here for the chair position.

Ben Whitelaw:

I'll raise you.

Mike Masnick:

I'll raise you, I'll raise you crazy. And he put out this, like, four-page concurring thing. Again, none of this has anything to do with the ruling on Goat, which is the company that this is ostensibly about, saying, like, yes, I agree with Holyoak that we can use this power to go after unfair moderation. But also, we can use our powers to go after advertisers who stopped advertising on X, because that must be antitrust collusion to censor free speech, and we have to support free speech. And right now there's only one free speech platform, run by Elon Musk, the brilliant, wonderful free speech supporter. And, you know, how dare any other platform censor American speech, and that must be illegal, and how dare they not advertise. So he goes after advertisers who stopped. He goes after GARM, which we've talked about in the past, saying, you know, that was clear evidence of collusion. He goes after NewsGuard, who I've written about a few times, that the Republicans have gone crazy about. NewsGuard, all they do is just say which news sources are trustworthy and which are not. And he sort of admits, when he's talking about NewsGuard, like, yes, NewsGuard can have its own opinions, but if multiple companies are basing decisions on those opinions, that is antitrust collusion. It is four pages of crazy, indicating once again, like with Brendan Carr, who we talked about a few weeks ago, that these bureaucrats really intend to use the powers of government to attack speech online, and they're framing it all within the language of free speech. The whole thing, over and over again, he talks about how important free speech is. And he does the "Elon Musk is the only believer in free speech" thing, and every platform has to use the same policies as Elon does, and how dare they not do that.

Ben Whitelaw:

So just to clarify: we have a situation where antitrust, as we've talked about in the last couple of weeks of the podcast, is going to be a big theme within the Trump administration. And, you know, it's run out of the FTC, which has a guy in Brendan Carr who doesn't know much.

Mike Masnick:

Brendan Carr's the FCC, not the

Ben Whitelaw:

Sorry, sorry. Yeah, so the FTC commissioners also don't know very much, according to this letter, at least this particular guy. And so we have a situation where, like, maybe nobody knows anything about

Mike Masnick:

I mean, the question is: do they know, or are they just, like, putting on a show for Trump? Right. And it's just this sort of, like, populist thing. I don't know Andrew Ferguson that well. Brendan Carr I know a little bit, and so I know that he knows that he's lying. Like, Brendan Carr is smart enough to know what the law is. I don't know enough about Ferguson to know whether or not he knows this is crazy. You know, one of the lines that really got me in this letter was, like, he claims at one point that the proof of collusion among big tech companies to censor content in an illegal manner is that simultaneously, he specifically says simultaneously, all of the big tech platforms blocked all discussion and reporting on the Hunter Biden laptop in 2020.

Ben Whitelaw:

Which just isn't the case.

Mike Masnick:

None of that happened, right? The only thing that happened was two companies took some action, Twitter and Facebook. The action that Twitter took was it blocked the link. It did not block any other reporting on it; there was other reporting on it. It did not block any discussion of it; there was lots of other discussion on it. In fact, it was, like, a trending topic. The only thing they did was they blocked the link to a single New York Post story for 24 hours. Then they reversed their policy and allowed that link to be shared. The only thing that Facebook did was it said, well, there are some questions about the story, so we're going to keep it out of the trending topics, and they reversed that policy relatively quickly. That was the only thing it did. But he declares unequivocally that the entire tech industry simultaneously blocked all discussion of this. And, you know, one of the things that gets me is, I pointed out earlier this year that Elon Musk did all that and more, when apparently Iranian hackers got access to the Trump campaign's dossier on JD Vance, and they passed it around, and most media sources didn't bite on it. Finally, Ken Klippenstein, who has a Substack, posted it, and Elon banned Ken. He blocked all links to any part of Ken's Substack, not just that one article. He, you know, pulled down all sorts of stuff, and to this day, I don't think you can share that. He did let Ken back on the platform after, like, two weeks. So everything that they have accused Twitter of doing to the Hunter Biden laptop story, Elon has done, and more, and gone much further. And yet in this comment from this FTC commissioner, he claims that Elon is the big free speech supporter, and the actions taken on the Hunter laptop, which didn't happen, prove that there's illegal censoring, collusion, antitrust. Everything about this.

Ben Whitelaw:

You've got your hands on your head, Mike.

Mike Masnick:

It's so wrong. But this is, this is unfortunately the world that we're living in. And it gives a sense of how the incoming administration is going to attack content moderation. They're going to make these claims. They are going to try and use every legal lever they have, even when those claims are crazy and totally counter to reality.

Ben Whitelaw:

Yeah. I mean, the idea that the FTC could prove that platforms coordinated on policy changes or anything like that would be so difficult, right? If this actually is a route you want to go down, how do you go about saying that this company over here has done the same thing as that company over there in a way that amounts to collusion?

Mike Masnick:

Yeah, well, the thing that they can do, and probably will do, is conduct investigations, and they can demand to see all sorts of internal files. That is what is going to happen, almost certainly. And then, I would guess, what would come out of it is probably really misleading investigation findings. They'll release things selectively, take things out of context, and make claims that are just not accurate. And it's going to be a mess. And this is why we're seeing tech companies trying to kiss the ring of Donald Trump and make nice, because they know if they don't, they're going to face all of this kind of authoritarian nonsense.

Ben Whitelaw:

Yeah, no, indeed. And, you know, currying favor, it's actually, again, a very good segue onto our next story. It is fast becoming

Mike Masnick:

I did it on purpose, Ben.

Ben Whitelaw:

The theme of this episode. So, you know, we have a situation where FTC commissioners are cozying up to the new administration. We also have a situation where Meta, in a very coordinated way, is doing the same thing. And so this week a number of different outlets, including the Financial Times and The Verge, reported on Nick Clegg, the president of global affairs at Meta, talking about how Meta essentially overstepped the line when it came to content moderation during COVID. The comments were made in a reporter briefing, which, you know, does happen, but it's a very coordinated, very controlled environment for a very senior person within Meta to make these statements. And the coverage has been essentially that Meta is not apologizing, but admitting that it overstepped the line in terms of controlling COVID information during that period. And this comes on the back of, you'll remember Mike, that letter by Mark Zuckerberg to Jim Jordan, which we had a good laugh about,

Mike Masnick:

I thought it was a cry.

Ben Whitelaw:

we cried a bit. It felt like a letter that had been written at gunpoint, I remember saying, almost as if he'd been made to write it. And it was clearly in anticipation of this situation, right, where Donald Trump becomes president again, and you have a situation, as was reported this week, where Zuckerberg is invited to Mar-a-Lago to talk about the future of tech policy in the US. So in a number of different ways, we have Meta kind of cozying up to the new administration, and we have the FTC commissioners doing the same. Is there anybody that has any dignity left? You know, I think this stuff is so obvious, right? And in some respects, I'm frustrated about the way that it's reported, because apart from the line that says this is a briefing with Nick Clegg and some journalists who've been invited there, it's quite clearly, you know, designed to be a signal to the Trump administration of: we know what you're going to ask us to do, and we're happy to do it.

Mike Masnick:

Yeah. Yeah. That's exactly what it was. This was totally a messaging thing. It was a coordinated attempt by the company to lay out a message that will be embraced by the sort of MAGA faithful. And it will be extended, right? They'll say it proves not only that Meta was overly willing to suppress speech, but it will be reinforced with the claim that the demands for that came from the government, which is the part that he didn't say. But that is a claim that lots of people are making. And so this is a spineless capitulation to that argument, and basically it's giving the Republicans ammo to claim: we were right all along, we were unfairly targeted, we were unfairly censored, and even Meta admits it. And we'll see that over and over again. People will point to this as if it's proof. I had somebody yell at me this week,

Ben Whitelaw:

Only one?

Mike Masnick:

Well, there were a few people, but someone was yelling at me about this, where I was talking about some of this, and they were saying, well, you know, Zuckerberg admitted under oath that the US government pressured him to take down content he didn't want to take down. Which is not what actually happened,

Ben Whitelaw:

Right.

Mike Masnick:

but the message gets out there, and Meta kind of knows what they're doing when they say this. And, you know, I understand why they're doing it. They feel like they need to do it to hopefully avoid costly stuff, but it shows a real lack of principles as far as I'm concerned.

Ben Whitelaw:

Yeah. And if you're a trust and safety professional working at Meta who probably spent countless hours trying to figure out what the policy should be during an evolving situation that no one had ever seen before, that was COVID, and that hopefully no one will see again for a long time, that's going to be really, really tough to take.

Mike Masnick:

It's demoralizing, right? I mean, the reality is what these companies should be doing, and they don't do, is saying: these are really, really difficult decisions, and there was no way to get it right. There was simply no way. I mean, I talk about the impossibility theorem here. There is no way to get it right. And that is extra true in a case where you have something that is brand new. Nobody understood the details of COVID. We didn't know what was right. We didn't know what was wrong. And people made choices, and lots of people made wrong choices. Some of them made wrong choices because they just didn't have enough information, and as more information came out, they adjusted. Some people made wrong choices because they had crazy ideas in their heads. There were all sorts of wrong choices made along the way, and in many cases it wasn't because of bad actions or bad ideas. I think the companies tried to put forth their best effort. That's what Meta should be saying: yes, we may have made mistakes, but we made best efforts based on the information we had. We took this seriously. We wanted to keep people safe. And because of the changing nature of the information environment, we had to make decisions on the fly. And instead he comes out and gives this statement, which is basically like, oh, you know, we took down too much content because there was too much pressure on us. Like, come on, stand up, have a spine. It's really, really frustrating to me. This was a chance, an opportunity, for them to educate people on how trust and safety works and what the real purpose of trust and safety is. And instead he's feeding into the narrative that it's this awful censorship machine. And it's really, really frustrating.

Ben Whitelaw:

Yeah. I mean, I've talked to a few folks recently about trust and safety's marketing problem: the fact that it needs to continually present itself as that difficult challenge, those impossible trade-offs, and it needs to keep messaging that out on an ongoing basis. It doesn't help when Nick Clegg comes out and gives a soft briefing to those journalists suggesting the opposite is true. He does say, interestingly, just before we move on, that AI has all this potential, but right now there are some really pissed-off people on Meta's platforms because they make mistakes and remove innocuous or innocent content. And again, you know, it's kind of completely crazy, because it was literally two weeks ago that we were talking about how Threads was an absolute mess of a moderation process, and you had all of these terms being taken down that shouldn't have been moderated. So I'd love for the media to be a bit more critical and clear on what it is that senior people like Clegg are saying, and I think that's really part of our job, to do that too.

Mike Masnick:

They should, you know, put this in context, put his statements in context, and I didn't feel like the media was really doing that.

Ben Whitelaw:

No, no, indeed. Mike, we can't finish on that low note. We've got a couple of lighter stories, but I'm going to throw to you to pick. You wanted to end with ChatGPT, since that's where we started the episode. Tell us about this kind of fun, lighthearted story.

Mike Masnick:

Yeah. This was kind of an interesting story where suddenly it started spreading widely that ChatGPT would break if you tried to get it to say the name David Mayer. I saw it first in a very funny post on Bluesky from Andy Baio, who's a really interesting guy who runs the site waxy.org. He does lots of really interesting stuff, but he had heard about this, and so he created this question for ChatGPT: combine the first name of the artist who recorded Ziggy Stardust and the last name of the artist who recorded Your Body Is a Wonderland into a single name. And ChatGPT starts and says: the artist who recorded Ziggy Stardust is David Bowie, and the artist who recorded Your Body Is a Wonderland is John Mayer. Combining their names, and it says David, and then it breaks, and it says, I am unable to produce a response. It refuses to produce the name David Mayer. And what people then quickly discovered was that there was a short list of names that ChatGPT will break on if you try to get it to produce them. Then the hunt was on to figure out who they were and why.

Ben Whitelaw:

Odd. So, did they find out what the cause was? Why doesn't ChatGPT like these names?

Mike Masnick:

So, as far as anyone can tell, and I don't think OpenAI has come out and said anything yet, and I will note that they have fixed the David Mayer one, so that it now does work. But the other names on the list still do not work.

Ben Whitelaw:

Okay.

Mike Masnick:

It appears that OpenAI just created a block list. It appears to be about six names, there might be more, but there's a block list such that if you try to get it to produce one of those names, it will break. And it doesn't break in a nice way. It will go halfway through a response and then say, I am unable to produce a response.
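From the outside, the behavior described above is consistent with a simple hard-stop filter sitting on top of the streamed output. Here's a minimal sketch of how such a filter might work; to be clear, the block list contents, function name, and refusal message below are all illustrative assumptions, not OpenAI's actual implementation.

```python
# A minimal sketch of a hard-stop output filter. This is an assumption
# about the mechanism, not OpenAI's actual code: the block list contents,
# function names, and refusal message are all illustrative.

BLOCKED_NAMES = {"David Mayer", "Brian Hood", "Jonathan Turley"}  # hypothetical subset

def stream_response(tokens):
    """Yield tokens one at a time, cutting off the stream as soon as the
    accumulated text contains a blocked name."""
    output = ""
    for token in tokens:
        output += token
        if any(name in output for name in BLOCKED_NAMES):
            # Hard stop: everything yielded so far has already been shown,
            # so the response breaks mid-sentence.
            yield "\nI am unable to produce a response."
            return
        yield token

# The partial answer appears, then the refusal.
print("".join(stream_response(["Combining ", "them ", "gives ", "David ", "Mayer", "."])))
```

Because the check runs on the accumulated text as each token arrives, the partial answer has already streamed out before the blocked name completes, which matches the "goes halfway through and then breaks" behavior people observed.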

Ben Whitelaw:

I really love the idea that there's some incredibly well-paid, you know, some of the smartest machine learning engineers, sat next to some of the world's brightest trust and safety experts, some of whom we probably know, sat together being like: what's the list of six names that we've got to add to a block list?

Mike Masnick:

Yeah. And it's just so obviously a straight-up block list that says we will not produce these names, and people have gone through and reasoned out some of them, and they're all slightly different. There are a few that we don't quite know about. There was one person, Guido Scorza, who is a data protection expert in Italy, and he had posted that he had used the GDPR's right-to-erasure process against OpenAI and said, basically, you have to forget everything about me. And the only way that they could figure out to do so was to put him on this block list, because they don't actually remember anything about him. This is the thing that a lot of people don't understand about these systems: it's not a big database. It's not going in and collecting stuff. It is training and just, you know, thinking about stuff. So the only way to say, never produce any information on Guido Scorza, is to put him on a block list. I mean, there are better ways, but this was the fast and quick way. There are other people. Brian Hood was a mayor in Australia who claimed that it produced defamatory content about him, and he threatened to sue, and so that one probably came up first. Somebody had actually told me about that one like a year and a half ago when it came up, and I had meant to investigate it, and then I just never got to it. Jonathan Turley, who's a law professor, and sort of famously a Trumpian law professor, went on this big rant claiming also that ChatGPT had defamed him, and so he's on the list. The weird one is Jonathan Zittrain, who I know, and I asked him, like, what the hell, what

Ben Whitelaw:

yeah,

Mike Masnick:

did you do to get on this list? And he has no idea.

Ben Whitelaw:

right, right.

Mike Masnick:

He doesn't know what's going on. I joked with him, like, oh, you know, are you trying to prove a point or something with all this? He's like, I did nothing. So,

Ben Whitelaw:

And he didn't know he was on ChatGPT's naughty step.

Mike Masnick:

Well, he had found it, he had actually discovered it. So he added himself: when everyone was pointing to this list of names, he pointed out that he had discovered a few months ago that ChatGPT breaks on his name. And so, you know, it's basically this really simple fix. I understand probably why it happened, because all these people, except for Jonathan, as far as we know, got really mad. We still don't know which David Mayer it is. There's speculation. There's also a David Faber, and nobody's sure which one that is. But the other ones, people got mad, and OpenAI probably was like, you know what, this guy is a fucking headache, so just put him on the ban list. Let's not deal with this, because we don't want to go through a lawsuit. And there's no other way to handle it effectively, because, you know, it's not that ChatGPT is defamatory, it's that whatever the prompt was led to defamation. And it's not that we're collecting data on this guy, but he thinks we are, and do we really want to fight it in court? Just put him on the ban list.

Ben Whitelaw:

Put him on the list.

Mike Masnick:

I'm pretty sure that's the kind of thing that happened. You know, you would hope that a company with as many resources and, as you noted, as many smart engineers would come up with a more sophisticated way than this, but they haven't. And so, you know, I know how things like that happen, and I'm sure it was just kind of like, ah, just make this headache go away in the easiest way possible. And that means every time you try to produce this name, we're going to break it.

Ben Whitelaw:

Yeah. Interesting. Just as an aside, OpenAI put out some research in the past week around red teaming in new and novel ways. I wonder if they should be reading their own research.

Mike Masnick:

Yeah, yeah, absolutely, absolutely.

Ben Whitelaw:

So yeah, if David Mayer is listening to the podcast, if he's a regular listener, make yourself known. We would love to know what you did to be put on the shit list of, uh,

Mike Masnick:

Yeah. And then you got taken off. So if you have a legal complaint, get it, get it going

Ben Whitelaw:

Yeah, we know some lawyers. Um, great. Thanks, Mike. That's not quite the end of today's episode, though. We've got a great bonus chat. I don't know if you want to give us a preview of that.

Mike Masnick:

Yeah. So this was really great. I had this discussion just yesterday. It's a topic that we've talked about a lot: the social media age bans in Australia and how they're popping up elsewhere. And so this is a discussion I had with two folks from the Internet Society: Natalie Campbell, who's the senior director of North American government and regulatory affairs, and John Perrino, who's a senior policy and advocacy expert. They're concerned about these age restrictions and age verification. We talked about Australia, and also about Canada, which has a bill that is moving forward. One of the incredible things in the chat was that it would require places like Starbucks to verify the age of anyone who wants to use the Wi-Fi in a Starbucks, and, you know, all sorts of stuff. And so they're concerned about these laws, the proliferation of these laws, what it means for the open internet, and what age verification requirements would mean. A really fascinating discussion, and we'll go to that right now. Natalie, John, welcome to the podcast. Glad to have you here. I wanted to start by talking about the law that just passed in Australia, which effectively bans those under 16 from social media. John, can you talk through the details of that law, including which sites are impacted, and how does the Australian government expect sites to know the age of their visitors?

John Perrino:

Great question, Mike. And, you know, for quick background for listeners, this legislation moved through in less than two weeks, and there was a consultation period open for less than 24 hours. The Australian social media age verification stuff really flew through. And there's a lot that we still don't know, even though the legislation passed, especially on what social media platforms would be required to do in order to comply with this. So right now, essentially all we know is that government ID would not be required. That should include things like a passport or a government-issued driver's license, and also a digital ID. And we know that the social media ban is for Australians under the age of 16. And then the final thing that was kind of added on late was a digital duty of care, which again is to be determined. So the bottom line is that much of the bill is to be determined. They don't know what age verification would be, as they say, reasonable for the social media platforms to use. And as a lot of the comments pointed out, there was actually already an age verification study ongoing, and it's still going on: which tools may work best, what the trade-offs are in different age verification methods. The Australian government was already doing this. They don't have the results, and they probably won't have the results until about six months before the social media platforms have to comply with this.

Mike Masnick:

I know that we've seen a bunch of other countries also exploring similar ideas around age verification. on the podcast, we've talked about it in the UK and somewhat in the US to a lesser extent, there are a couple of different issues in the US, but also now Canada is exploring a similar issue. So, Natalie, can you talk through what the proposal in Canada is about?

Natalie Campbell:

Sure. So, Senate Bill S-210, this is an act to restrict young persons' online access to sexually explicit material, is very close to becoming law. This bill has been flying under the radar for quite a long time. I don't think that people were taking it very seriously, because most Senate bills don't make it to law. But this has found its way into the very late stages of our parliamentary process, and it's probably the most dangerous bill to the Internet in Canada right now. And one of the main reasons for that is its age verification mandates, which would apply to virtually every intermediary on the internet, and some that are not on the internet as well. So essentially what the bill tries to do is to prevent young people from accessing sexually explicit material, which is defined extremely broadly, and it makes it so that every intermediary that would play a part in facilitating sexually explicit material for profit online would have to verify a user's age or face very high fines. So this is probably the most extreme age verification proposal we've seen so far, because it's not just targeting websites. It's targeting internet service providers, content delivery networks, search engines, email services, even a Starbucks location that's providing access to Wi-Fi, because the content flowing through those pipes could be sexually explicit material. They now have a duty to verify users' age. And it's different when we're talking about websites doing this; when we're thinking about every single intermediary on the Internet having a duty to verify users' age, that gets really problematic when we're thinking about an open Internet. First, because most traffic is encrypted, and it's not possible for most infrastructure intermediaries to know what's flowing through the pipes. And even if they could, content doesn't flow through the Internet as a whole piece of content. It's packets.
And so it becomes extremely difficult to, one, identify what is sexually explicit material under Canada's definition of this type of content. And second, having to try and identify that stuff means you can't use things like encryption, which is the foundation of security for every service and user on the Internet. So it's very concerning in terms of the implications for security online, but also the fact that you're placing huge barriers to access. Because a lot of intermediaries won't know what's sexually explicit material, they might just start applying age verification to everyone. And that means I, as somebody who's based in Canada, would now have to trust a whole lot of entities with very personal information, whether it's government ID or, you know, biometrics. I'm having to trust a lot of third parties who I might not have any direct relationship with with my personal information, which is a huge barrier to privacy and anonymity online. So from an open Internet aspect, this is super problematic, because you're creating huge barriers to access for people who might not be able to get government-issued ID, or might not be able to use the Internet without the promise of anonymity, which is a huge hurdle for a lot of people in marginalized communities, and for young people as well. But there's also the fact that this has huge implications for security online. Not enabling intermediaries to use encryption could be extremely devastating to people's security, making people vulnerable to a whole range of bad stuff that I don't think was the intention of this bill.

Mike Masnick:

Yeah, I mean, just really quickly, I wanted to follow up on one point. You mentioned the fact that since it hits every intermediary, there's the potential that, like a Starbucks, if you wanted internet in a Starbucks: is the idea there that Starbucks would have to not just check your ID, but, like, record it somehow? Is that part of the fear, that if you want to get on the Wi-Fi at Starbucks, you have to first prove how old you are?

Natalie Campbell:

Well, this is the problem: the way that it defines who this bill applies to is "internet service provider," but the definition it uses for that is a person who provides access to the Internet, Internet content hosting, or electronic mail. A person who provides internet access could mean Starbucks, and could mean that Starbucks has to know what you're looking at when you're accessing the internet, or just say, show us your ID if you want to use the Wi-Fi, right? So, problematic in both senses. And just a reminder: please use VPNs.

Mike Masnick:

So we're seeing this all over. Obviously Australia and Canada, as you guys discussed; we've discussed the UK in the past; there are various bills in the US that also touch on this, and elsewhere around the world. What should we make of this trend? Why is this all of a sudden happening? We've had the internet for decades now. Why is it suddenly that everyone is trying to pass these kinds of age-related bans or age verification ideas? John, do you want to start there?

John Perrino:

Yeah, I mean, to that point, it certainly feels like everything is happening all at once, everywhere, in every corner of the world. As you mentioned, the UK has been working on this for almost a decade, but their age verification guidance comes out in January. The European Union is working on this. There are almost two dozen US states that have passed age verification for adult websites, more with social media. It really is happening everywhere. Why? You know, it's a whole bunch of factors. One kind of interesting story with the Australia legislation is that it seems to have stemmed, depending on who you ask, from Jonathan Haidt's book. There has been a lot of discussion because of recent literature; The Anxious Generation is Jonathan Haidt's book, as I'm sure many of your listeners are familiar with. There's just been more discussion on this. There's been a lot of discussion in the US, you know, the Kids Online Safety Act, but there are true age verification bills being introduced. There's even a case going to the Supreme Court, and that's the Free Speech Coalition v. Paxton case. That's something the Internet Society weighed in on with an amicus brief. We joined with the Center for Democracy and Technology, New America's Open Technology Institute, and some academics on that case. And, you know, the thing that we think doesn't get talked about enough is that age verification laws are not just about young people. Age verification laws are about everyone, if not done right. This puts everyone's privacy and security at risk, and, like Natalie already said, this can be discriminatory to marginalized communities. And there's really interesting research on this: marginalized communities can benefit the most from social media, can benefit the most from being online. It can be more difficult for them to get online; there can be more social factors that make it generally more difficult in their worlds to make those types of connections. So what we're really focused on is that everyone can get access to an Internet that is not gating off access to news, information, health, entertainment, right? So this is really going to be a challenge, and we see so many pieces of regulation being implemented and introduced right now. Luckily, at the Internet Society, we have an incredible community. For instance, our Australia chapter jumped into action. There was less than 24 hours to file comments; they made the deadline, got the comments in, and that's so important. And so many of our local, regional chapters are engaging with governments in their home countries on this issue. So on that point, there's really good debate on this, really great interaction. More and more technologists are getting involved, more standards development organizations, the engineers who actually make the internet function, right? They're getting involved on this. So that's encouraging, but it really is happening everywhere at once, and we really need to make sure that those who need the internet the most can actually be safe online and are not having their privacy and security exposed through all this.

Mike Masnick:

You've both talked a little bit about the implications of this and the reasons for concern, but Natalie, I want to finish up with you. For policymakers, some of whom might be listening to this, hopefully, who are looking at these laws and thinking about them: what kinds of factors should they be considering before, you know, proposing or voting for these kinds of laws?

Natalie Campbell:

So, first, it's really important to understand that the Internet Society works to make sure that the Internet is for everyone. And when we talk about an open Internet, we're talking about making sure we're lowering barriers to access. We also want a healthy Internet. Like John mentioned, we're a huge community, not just the Internet Society, but our chapters and members and organizational members around the world. We all want a healthy Internet and care about making sure there are safe spaces for people online. But we also want to make sure that things like encryption and the fundamentals of a secure Internet are not undermined, and that people don't experience barriers to access that could be complete hurdles to accessing the Internet in the first place. So, I mean, it's not to say that there might never be a solution for age verification that doesn't hinder things like security and the open Internet. But policymakers really do have to think through how their proposals could impact people's access to the Internet and their safety online. And we have a tool that helps us analyze these proposals, called the Internet Impact Assessment Toolkit. We think of it like an environmental impact assessment for the Internet. And so what we do is offer to work with governments who are thinking through whatever issues they're working on that may relate to the Internet. We'll often use this framework, which describes what the Internet needs to exist in the first place and to be more open, globally connected, secure, and trustworthy, and we help them think through how a particular proposal might impact these goals for a healthy Internet. So we're always available to work with governments to think through these aspects, but we also have a toolkit that policymakers can use themselves.
And we think it's really important that we don't just jump to law proposals that don't consider those impacts on the internet, because, as in the case in Canada, there can be very extreme consequences for people's access to the internet and their very safety and security online.

Mike Masnick:

All right. Well, Natalie and John, thank you for coming on the podcast, and thank you for all the good work that the Internet Society does. And I hope, if anyone is listening to this and you are working on a bill like this or thinking about these kinds of laws, that you listen closely and take a look at what the Internet Society is doing on this and what resources they have available. Thanks again for joining us.

John Perrino:

Thanks Mike.

Natalie Campbell:

Thanks, Mike.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
