Ctrl-Alt-Speech

The Haidt of Hypocrisy

Mike Masnick & Ben Whitelaw Season 1 Episode 70

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Mike, since we've last spoken, I've been dabbling in some new apps, some new social media apps that are apparently about bringing back the nostalgia of the 2000s internet.

Mike Masnick:

Oh, okay.

Ben Whitelaw:

You must remember it well. It was a heady...

Mike Masnick:

I, I do remember it well. Though it makes me feel old that we're having nostalgia for that time period.

Ben Whitelaw:

Yeah, the pixels were larger, the harms were mostly spam. It was, you know, a nice time, a calm time. There are these new apps that are kind of bringing back that feel, that vibe, and I've been trialing one called Perfectly Imperfect, which looks a bit like MySpace and has a kind of magazine-type vibe. The prompt on the Perfectly Imperfect app is one I'd like you to answer to start today's podcast, and it is: rec? ask? event?

Mike Masnick:

Oh,

Ben Whitelaw:

So the idea being you can, you know, suggest an event to go to 'cause you're cool and hip, or you can provide an ask to the achingly cool group of users on Perfectly Imperfect, or you can ask for a recommendation.

Mike Masnick:

Oh, oh, how,

Ben Whitelaw:

will you choose?

Mike Masnick:

It's very broad. I will go with an event, uh, that I think people listening to this podcast will be interested in, especially if you're in the Bay Area, which is that later this month, Stanford is having their annual trust and safety research conference. And it's always a really good time. It's different than TrustCon, it's a very different kind of event. There's some overlap, but it's just always such a really nice event, really great group of people. I always have such great conversations there. And I'm not gonna reveal exactly what it is, but we will be revealing something during that event that I think some people might find... uh.

Ben Whitelaw:

A Techdirt special.

Mike Masnick:

Uh, yeah, you know, a Copia special. Something, something good.

Ben Whitelaw:

Ah, very nice. Very nice. I think that would go down very well on Perfectly Imperfect. There's an air of mystery,

Mike Masnick:

yeah. A little, a little,

Ben Whitelaw:

little tease.

Mike Masnick:

A little tease. There we go. So, uh, what about you? Do you have a rec, ask, or event for us?

Ben Whitelaw:

Um, my recommendation is to not get three hours of sleep a night, because you tend to not be able to read as much about the week's news and content moderation as you'd often like. So I'm, I'm coming into this high on adrenaline, low on insight, but let's, let's see how it...

Mike Masnick:

Yeah, but you're supposed to use those, those times that you're awake in the middle of the night, maybe dealing with little people in your house to, to then read as

Ben Whitelaw:

Uh, you are right. You're right. I'm doing it all wrong. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's September the fourth, 2025, and this week we're talking about Nigel Farage visiting the Capitol, AI companies collaborating over safety, and whether the evidence against smartphones stacks up. I'm Ben Whitelaw, the founder and editor of Everything in Moderation, and I'm with Mike Masnick, who is both achingly cool and somebody who goes to research conferences.

Mike Masnick:

Yeah. Like going to academic research conferences is cool and hip. Yeah. Okay.

Ben Whitelaw:

It's a, it's a Venn diagram with a very small overlap, I'd say. But I actually haven't been to the Stanford Trust and Safety Research Conference; I'm a bit jealous that you're going again. Uh, is there anything you're particularly looking out for, or are you going just to kind of get a general sense of what researchers are talking about?

Mike Masnick:

Yeah, I mean, it's more the latter. There's always a lot of really interesting stuff, because, you know, it is very focused on the research side of it, but it includes research from not just academics, but also the companies and other folks as well. And so you really begin to get a sense of some of the more creative things that are happening, like creative interventions, and how well those work. There's always a lot of really interesting people there. It's smaller than TrustCon, so it's a little bit more intimate, and you end up having, you know, a lot of really good conversations. And so, yeah, I'm really looking forward to it. It's always a really good event. TrustCon is at the beginning of the summer, and this is at the end of the summer, so, you know, they're sort of the beginning and end of summer in trust and safety. And I think, I'm saying this now without confirming it, but I think Dave Willner is giving the keynote this year, and, you know, he's always fantastic. So it should be a really good event overall.

Ben Whitelaw:

Dave was one of the very early trust and safety folks at Facebook, and is doing some pretty cool things with AI content moderation, so that will be fun. Academia and research, actually, is a very useful kind of preempting of today's Ctrl-Alt-Speech. We're gonna talk a lot about scientific evidence and research, and grounding trust and safety in, uh, in policy and research. And I think that's a kind of nice segue onto some of our first stories. Before we do, though, I just wanna talk about the famous, anonymous reviewer from our last podcast. If you were tuning in to that, listeners, you might have heard me get a little bit jealous, perhaps. I think that's fair to say. A little...

Mike Masnick:

I think, I think that's right.

Ben Whitelaw:

a little bit, uh, upset about the fact that one of our reviews from one of our listeners,

Mike Masnick:

A very nice review. Let me

Ben Whitelaw:

really nice

Mike Masnick:

one of the nicest reviews that the podcast has received,

Ben Whitelaw:

Yeah. Really nice. Did not mention me at all.

Mike Masnick:

right? Yes.

Ben Whitelaw:

Um, and, and I didn't expect to hear anything. We, we

Mike Masnick:

Well, well, that's because you, you thought I wrote it. So let, let's be clear here: you baselessly accused me of, of sock puppeting that review, because it was very nice to me.

Ben Whitelaw:

I'm still not a hundred percent sure that that's not what's happening now. But somebody, uh, emailed in claiming to be Omi Kilo, which was the username from that review. And I was gonna kind of read out what he said. I'm not gonna explain any more about who he is and what he's doing, but he works in trust and safety for a major platform, and he says this. He said: "Sorry I did not give credit to Ben in my review. I do appreciate him. I think I might have written the review when he was off, but I do appreciate his insight too." It's, it's... I appreciate Omi Kilo writing in, and I've had a bit of back and forth on email. The way that that email is written doesn't convince me that somebody wasn't writing it with a gun to their head. You know: "I do appreciate him."

Mike Masnick:

Yeah. Yeah.

Ben Whitelaw:

I might go back through the podcast episodes, Mike, and just see if you might have said something similar in a, in a similar kind of construction.

Mike Masnick:

Ben, I appreciate you.

Ben Whitelaw:

I do appreciate you.

Mike Masnick:

I do, I do so much. Appreciate you.

Ben Whitelaw:

Um, so that clears things up: it wasn't you. I'm, I'm, you know, I'm happy to say that. I'm 95% sure that that is the case. Um, and I'm very grateful to Omi Kilo for writing in with his review. If you like the podcast, if you listen to it regularly, please do leave us a review wherever you get your podcasts. You can leave us a rating on Spotify; you can write some words on Apple or the other platforms where you might listen. It very much helps us be discovered on the platforms, and we are seeing increasing numbers of listeners find us. Our numbers are going up steadily, and that's all because of the ratings and reviews that you're giving us. So massive, massive help for that. Also, great to have some emails in and some contact from potential sponsors. We've been talking about making the podcast sustainable, trying to make it a kind of self-sustaining thing, and we've had some really great companies get in touch about becoming sponsors over the next couple of quarters. So thanks to those folks for getting in touch. Right, so let's get onto this week's stories, Mike. We're gonna talk about, as I said, the evidence, the academic evidence, around some of these big topics that we talk about on Ctrl-Alt-Speech every week. And in doing so, we're gonna talk about some of our favorite pantomime villains on Ctrl-Alt-Speech, people who our listeners have heard us talk about, and who they've heard you give your views on, let's say. And, uh, I'm very much looking forward to hearing how you feel about this first one. Um, the big story this week is Jim Jordan's much-touted House Judiciary Committee hearing. Now, as I mentioned at the top of the episode, I haven't had a lot of sleep, haven't done as much reading as I would've liked, and I purposely steered clear of this kind of circus that was taking shape in your fair, your fair country.

Mike Masnick:

with your countrymen.

Ben Whitelaw:

My countryman, that's fair. Um, I would love to hear your take on it, and then there was a kind of interesting letter that was written, which we wanted to dive into as well.

Mike Masnick:

Yeah. And, and the letter is more interesting than the hearing. The hearing was, was garbage. There was no reason to watch it. It was a complete circus. It was Jim Jordan doing what Jim Jordan does, which is sort of putting on a theatrical show that has very little basis in reality. And you could see that even from the fact that, you know... so the framing of the hearing was about European censorship of tech companies, whatever. It doesn't matter; it was all sort of garbage and made up. And as is the standard practice with US congressional hearings, the majority gets to choose basically three witnesses, and the minority party gets to choose one. So the deck is already stacked. Oddly, the witness list was very, very last minute. You know, last week they only showed Nigel Farage and Thierry Breton as invited. He did not come, uh, which...

Ben Whitelaw:

Thierry didn't show.

Mike Masnick:

He did not show. He said that, on short notice, he was unavailable. Um, and then they filled it in with a couple other people, and they didn't even announce the one witness from the Democrats until the morning of. I mean, the night before, I was looking, and it was David Kaye, who's a wonderful, very thoughtful professor and expert on free speech, and who had been the UN's Special Rapporteur on freedom of expression for many years. Just a super thoughtful and knowledgeable guy. And of course he was mostly ignored. As this goes, at one point someone asked him how many classes he's teaching this semester, which is like, why, how is that relevant to this discussion? Uh, you know, and so most of the panel was garbage. Nigel Farage put on a show and then left early. The Democrats made fun of him for leaving early and for refusing to say if he was gonna go have lunch with the president. And so they were mocking him, and then, after he left for an emergency meeting that he couldn't stick around at the hearing for, it was later reported that he went to go do a TV news hit on GB News. Uh, which, you can give me your impression of GB News, but I get the sense that it's sort of a Fox News stand-in kind of thing.

Ben Whitelaw:

Yeah, that's a fair response. I couldn't give you my take without swearing.

Mike Masnick:

Right. Well, you know, hey, we can swear here. Uh, but you know, it was all sort of for show and nonsense. And as certainly some of the Democrats on the committee made clear, it's pretty crazy to be attacking liberal democratic countries in Europe as being these strict censors at a time when authoritarians around the world, including here in the US, are abusing every lever of power they can to silence dissent and stifle free speech. And so that was made clear by the Democrats, and of course Jim Jordan would just ignore all of that and focus on the supposed evils of the DSA. It was very clear at times that, I mean, you know, the panel was sort of supposed to be about the DSA, but then Nigel Farage was there, who does not live in the EU anymore, partly because of Nigel Farage: you had this whole Brexit thing. You might remember it, Ben. Uh, you know, but...

Ben Whitelaw:

Every day, every day.

Mike Masnick:

So, so the DSA does not apply in the UK. Now you have the Online Safety Act, and I've said plenty about my problems with the Online Safety Act. But then Farage was complaining about people who were arrested, not under the Online Safety Act, which is not a law that I think even leads to people being arrested, and certainly not under the DSA, but under, like, you know, existing laws that have been around for a while. And I have reasonable complaints about some of those laws, and I've mocked your fair country about some of those laws in the past. But it had nothing to do with, like, internet laws. And so the whole thing was just kind of this nonsense show-trial sort of thing that Jim Jordan is sort of famous for, and it's not worth paying attention to if you didn't pay attention to it. But out of that, there was a really interesting thing which we did wanna talk a little bit about, which was a letter that was put together by a bunch of academics. And there are actually two letters: one that was sent to the committee, and Chairman Jordan in particular, and another that was then sent to the EU, to the European Commission, effectively informing them of that first letter. And it basically lays out, and it is a bit academic and it is a bit policy wonky, but it basically lays out how the entire thrust of the argument that Jim Jordan was making, that the DSA is designed for censoring conservative viewpoints, is utter nonsense. And now, I have been clear over the years that I have called out the problems with the DSA, and I've certainly called out Thierry Breton and the way that he tried to use the DSA for what clearly seemed to be censorship purposes. And I think this letter actually does a really, really nice job, not that Jim Jordan will pay attention to it, but it does a really nice job of threading the needle between saying, here's what the DSA actually says, here's what the DSA actually allows, and here's what the DSA does not allow,

Ben Whitelaw:

Mm-hmm.

Mike Masnick:

while simultaneously calling out Thierry Breton in particular for going above and beyond, not in a good way, uh, for going beyond what the DSA actually allows and implying it allowed for censorship when it does not. And so part of the letter is effectively saying it would be nice if the DSA were a little more explicit about what it does not allow: that it does not allow for censorship, and that it is content neutral, or content agnostic, in how the enforcement works, and that a lot of it is just about risk mitigation. And I'm a little bit torn on this personally, because I agree that that is the intent of the DSA. I agree that, you know, the process to put together the DSA was way more thoughtful than anything that the US does in terms of regulation, certainly internet regulation; it was years of back and forth and years of thoughtful approaches. But the point that I had made from early on, and I still stick by this, was that parts of the DSA were written in a way that was so vague and so unclear and so open to interpretation that it was only a matter of time until someone tried to use it for censorship purposes, which is what we saw with Thierry Breton. And I think the letter does a nice job of basically saying, you know, Breton clearly went too far and then suffered political consequences from that. Right? You know, he lost his job very soon after he started to do that, and people were pretty clear about going after him. And so the letter argues that the political process worked and limited his ability to abuse the law. I still am not a huge fan of the DSA, but I think that this letter is a very good one, and thoughtful in threading that needle in a very careful, nuanced way, which of course will go completely over Jim Jordan's head if he ever even reads it. But I, but I thought it was interesting, and it's signed by a ton of academics, a ton of actual experts in the field, which, again, is another reason why Jim Jordan will ignore it.

Ben Whitelaw:

Yeah, I mean, for context, for people who weren't following the Thierry Breton story that we talked about, the whole kind of narrative that has been building over the last couple of years: this is Commissioner Breton, who used to be kind of in charge of the DSA within the EU, and who in 2023 said that social media platforms should remove hateful content, when the DSA is not about removing hateful content. And then there were other issues, where he kind of went up against Elon Musk and wrote kind of odd letters, which we flagged at the time, about the importance of Twitter and its actions around the European elections at the time, kind of threatening Twitter in a way that felt odd, and which kind of led to Elon Musk, uh, reacting in kind, let's say. There was a bit of a kind of mano a mano,

Mike Masnick:

Reacting in an Elon Musk way.

Ben Whitelaw:

Yeah, yeah, exactly. Um, I miss that guy. Um, both of them, actually. Um, and so yeah, it's fascinating that this group of academics, who, as you say, are super reputable in the field, not only are based in Europe, but also include academics in the US. Which is, you know, fascinating: it's US academics kind of calling out, I guess, where the DSA has merit and what it's for, in a very nuanced way, in a way that not a lot of Americans, I would say, beyond yourself and some others, would actually do. And so the fact that they've got together and organized in this way is really interesting. And, you know, it's a really helpful, I guess, addition to the ongoing transatlantic beef that our countries have, and that the EU has with the US as well, which I think we're gonna see play out over the next kind of six months, as some of those trade implications that we've talked about start to come to the surface a bit more.

Mike Masnick:

Yeah, I, I think there, there will be a lot more, and it, it's, you know, this is a, a thoughtful, nuanced approach, from experts in a world that doesn't believe in thoughtful nuance or experts.

Ben Whitelaw:

Yeah, exactly. And that is a very helpful segue onto our next story. We talked about Jim Jordan; our next story is about our other persona non grata. Er, yeah, I haven't read The Anxious Generation, I'll be honest. I've read a lot of commentary about it, but I know you have, and I know you're not the biggest fan of Jonathan Haidt. And so...

Mike Masnick:

I, I have no problem with the man. I have a problem with what he says.

Ben Whitelaw:

Yeah, play the ball, not the man, as they say. Um, so his March 2024 book, The Anxious Generation, has sold millions of copies, as many people will know; many of our listeners will have read it. And his argument, that the kind of mental health and attentional decline of children can be put down to social media and phones, has been taken up significantly by parents, who kind of have this, I guess, feeling and see this day to day, and also by teachers, who are kind of on the front lines in terms of this stuff in schools. And so it's very, very interesting and helpful that this week we had a pretty definitive piece by the Times Educational Supplement, the UK journal and magazine for teachers, looking exactly at this. And so I wanted to kind of share a bit about what this piece said and what it kind of purports to do. The kind of TL;DR on it is that, basically, lots of professors and academics do not agree with Jonathan Haidt and what he comes out with in his book, and are very critical of his claims. And that's something you've talked about before, Mike, so I'm very interested in your take. What they say, basically, these academics from Oxford, Cambridge, the University of Bath, Washington, Brown, like some of the best professors in their fields, all of them quoted in this piece, is that the data that Haidt uses in his book is noisy, that there are lots of other contributing factors, which he doesn't reference, to what he says the data is doing, that he overclaims on that data, which obviously is never good, and that there are some children who are more affected than others by things like social media and phones. Which, again, is something you've talked about: the situational factors in kids' lives and how they exist can lead to changes in mental health, and lead to the kind of suicides or deaths that we've talked about on this podcast in quite tragic terms. So the piece is a long one. It's a fascinating kind of pulling together of lots of different threads about whether Jonathan Haidt's claims should be believed. And if we're to believe what these academics say, Mike, we should take them with a big pinch of salt. You must be pretty happy with that.

Mike Masnick:

I mean, yeah. Granted, right, there's some confirmation bias, right? And so, I had written, right as the book came out, I was asked to write a review of it for the Daily Beast last year, and I covered a lot of this same ground, in terms of, like, the book is just shoddy. It's not a strong book. And I quoted some of the same people when I wrote my review of it. And what struck me about this, so, first of all, for people who don't know, TES is well respected. It goes back over a century, and it's sort of the primary educational journal in the UK. It's a, it's a big deal. And the quotes in here are, I mean, astounding in how directly people call him out. There's one that says, "When I read the book, I found it really hard to believe it was written by a fellow academic." These are not people pulling punches. Another one said, "The work of Jonathan Haidt doesn't come up unless it's the butt of a joke." I mean, like, people are not...

Ben Whitelaw:

They're not pussyfooting around, are they?

Mike Masnick:

Yeah. And throughout this article, it just, you know, rips to shreds all the different claims that he makes, and it goes through why they're just not supported by the evidence. And the academics here are very clear that there are things that are worth studying and there are things that are worth understanding, but he's way overclaiming the impact, and it's just not seen in the data, which is, you know, a point that I and many others have raised as well. And I think it's just a really, really damning article, in a really respected source, from a whole long list of respected academics. And so it'll be interesting. I mean, he more or less dismisses it; any sort of academic criticism, he completely brushes off and ignores. And he can do that because his bank account, I'm sure, is huge, as this book has sold a ridiculous number of copies and is talked about. And, you know, I found it interesting that basically the same day that this article came out, Politico had an article, which I find an interesting contrast to it, titled "There's Only One True Bipartisan Issue Left." And it argues that the one true bipartisan issue is that Jonathan Haidt is right, that, you know, smartphones and social media are terrible for kids. And so, where the academics here are saying that this is complete garbage and is not supported by the evidence, the political class, and I would say the media class as well, have completely embraced it. And there's an element here that I think is just pure confirmation bias. And Haidt, even in the TES article, more or less says, well, you know, everyone knows this is true, like, in their heart. And it's like, you need evidence to support these things. Just the fact that people believe it doesn't actually make it true, you know?

Ben Whitelaw:

Yeah.

Mike Masnick:

And so I think we're in this world right now, which is a little scary, especially, you know, for people who deal in disinformation and conspiracy theories: how do you deal with that, and how do you respond to misinformation? This is one of the most successful misinformation campaigns around.

Ben Whitelaw:

Yeah. Yeah, I, I've got some thoughts on that. I mean, so I'm watching The Jury, which is a program where two juries watch a reenactment of a trial at the same time, without knowing each other exists. Right? It's a kind of interesting social experiment. Um, and I haven't got to the end yet, but when I was reading this TES piece, I was thinking about how, in Jonathan Haidt's world, you would just let the 12 jurors decide the outcome of the case without seeing any evidence. That's kind of the analogy, right? It's like putting a jury in a room and letting them kind of use their own personal experience to decide whether somebody is convicted and jailed, whether the evidence says so or not. It's a bit like that, right? And Haidt's book, in many ways, I was thinking, is a bit like the content that he so worries about children consuming on platforms. It's almost like the equivalent; it's the book equivalent of viral content, right? It's easily digestible, it's based on kind of loose truths, it kind of preys on people's insecurities, which, you know, some of the most viral content that reaches millions of people does, and it pitches groups against each other. You know, parents, people who know and have suffered at the hands of social media, versus those who maybe have benefited. And so he's almost doing exactly what the platforms have done, in many ways, the thing that he's so concerned about and trying to stop children from suffering from. He's doing the adult book equivalent of that, which I think is kind of interesting. And yeah, his narrative is winning out. As you say, the narrative of Jonathan Haidt is more powerful, and some of the academics talk about this in the piece, more powerful than the more reflective, more hard-to-discern, subtle evidence that is there for people to read.

Mike Masnick:

Yeah. And you know, it reminds me of something, and I wrote about this in my sort of wrap-up piece about TrustCon. I had multiple conversations at TrustCon that really stuck with me, from people in trust and safety who were talking about how much damage Haidt's book has done to trust and safety. And it hadn't even occurred to me before, I think. I obviously had problems with the book; I thought it was misleading; I thought that people buying into it was leading to really bad policy and very problematic outcomes. But it hadn't occurred to me how directly it was negatively impacting the work of trust and safety professionals, in that it was effectively saying that they don't exist, right? It's just saying that, like, the internet itself, social media, is just inherently bad for kids,

Ben Whitelaw:

Hmm.

Mike Masnick:

and all the work that trust and safety professionals put in to actually creating thoughtful, compelling interventions that are helpful, that do protect kids and do help kids be safe, feel safe, all of these things are completely dismissed. And it's this idea that anything that the kids are using online must be bad. Um, and you know, when you have all of the political class and the media class buying into that, it wipes away the field of trust and safety and acts as if it doesn't exist, or as if it can't do anything, when there are so many people working on child safety initiatives, spending so much time, having real impact, who are completely ignored by the book, and in fact dismissed by the book.

Ben Whitelaw:

Yeah.

Mike Masnick:

and so the number of people who mentioned that it was doing real harm to the entire field, it, it really stuck with me.

Ben Whitelaw:

And, and so it kind of came up in conversations, and was felt to have, like, impacted the discussion in companies. That's fascinating.

Mike Masnick:

Yeah. Yeah. I mean, the result of the book is you have, I mean, multiple politicians... I think Barack Obama put it at the top of his must-reads list,

Ben Whitelaw:

Yeah.

Mike Masnick:

and Bill Gates talked about it. You have all these people talking about it as if it's this accurate, factual book. And it's so terribly written, but that's a side note; it's just not well written. But it leads to this sort of dismissal of the idea that there is a role for people within the companies to do stuff. It assumes that the only way to deal with this is to have policymakers do these outright bans, which, again, a lot of the evidence suggests will actually make things worse rather than better.

Ben Whitelaw:

Well, that's the thing. Not only has it affected trust and safety professionals in the way that you're talking about there, and in the way that you heard at TrustCon, but I was thinking about the timing of the book and his recommendations, and the extent to which that might have impacted the regulatory conversation now. Right? So, March 2024, the book comes out, sets out this problem that many people have heard of, and then it suggests kind of three ideas for solving it. Right? As far as I understand it, his ideas were an increase in the age at which kids...

Mike Masnick:

To 16.

Ben Whitelaw:

Yeah, are kind of adults online, to 16. Age verification is posited as one of the answers, which, as we've talked about, has large issues; we're seeing that play out in lots of ways. And then the phone-free schools idea, which we've again discussed, and which has very significant holes, and, you know, which some schools are already doing, so why there needs to be a blanket approach, I don't know. Those are his three ideas. And, you know, we've seen an increase in people talking about all of those things in that time, I would say. And I wonder if we'll look back and say, you know, to what extent was The Anxious Generation responsible for some of these kind of wrong turns in the road when it came to regulating the internet?

Mike Masnick:

Yeah. Yeah, I, I think it's a huge deal. I mean, here in California, the state senator, or representative, I forget which, who passed the law banning phones in schools here in California came to the hearing carrying his book. Like, it's clearly having an impact. And the thing is, and I wrote about this in my review of the book, the first half of the book is sort of trying to lay out the problem, and it presents a whole bunch of evidence. And, as I point out in my review, it's very cherry-picked, it ignores counter-evidence, and I think it really misrepresents the evidence, which is what the academics in this TES piece are saying as well. The thing that really got me, though, is the second half of the book. There's a weird interlude between the first half and the second half where it just talks about, like, spirituality and how kids need to be more spiritual, which is just, I don't know, based on something. It's really unclear. But then the latter part of the book is all the policy recommendations, and he presents no evidence in that section of the book to support those. These are all just sort of, well, you know, I think so. And that was the part that got me: for everyone talking about how the book is so full of evidence, the policy recommendations are not based on evidence at all. I mean, the whole book feels rushed, but that section feels especially rushed. And it's just kind of like he waves things off. Like, he mentions the concerns about age verification, but waves them off in a sort of, oh, you know, these nerds can nerd harder and figure out a way; there must be a way to do age verification that is privacy-protective. And he doesn't actually grapple with the actual challenges of that.

Ben Whitelaw:

Yeah, and you know what as well? Nigel Farage said exactly the same thing in an interview recently about the Online Safety Act. He said, you know, there has to be a tech answer. You know, age verification has its issues because of the VPN problem, but there has to be a tech answer. None of these guys, these men, seem to be able to kind of take the evidence for what it is, and they continually strive for some sort of magic potion that is gonna fix these various harms. If they listened to Ctrl-Alt-Speech, they would know that that is not the case.

Mike Masnick:

And I will just throw out there, too, really quickly: I wrote about this a few weeks ago, but since we're promoting thoughtful, reasonable academics here today, uh, Steve Bellovin, I think is how you pronounce his last name, who's a computer security expert, just one of the most respected guys in the field, released a paper a few weeks ago which didn't get much attention. I did write about it, but I didn't see it written about anywhere else. It's about the idea of privacy-protecting age verification, and it basically explains how you would do it, and then why it wouldn't actually work. And so it was sort of this: yes, it is possible to create something that you could call privacy-protecting, but the practical reality is that you can't do it in a way that actually works. It was a very thoughtful paper, and again, an important contribution, 'cause you have people out there saying, oh yes, of course there's a way to do privacy-protective age verification, and yet this security professional sort of walked through the different possibilities, and it's like: this will not work, this will not protect privacy, for these exact reasons. And we're, we're seeing this again, where the experts are being thoughtful and nuanced and presenting expertise, and the world wants easy, simple answers that are probably wrong.

Ben Whitelaw:

Yeah, and we'll include that paper, that research, in our show notes, as ever, with all of the links from today's podcast. And shout out to all of the outlets that we're talking about today; it's their coverage and their reporting that means we can do Ctrl-Alt-Speech in this way. So, yeah, not a lot of reason to be chipper or, you know, happy from those two stories, Mike, but there's a bit of lightness, I think, in our next three stories, which are all about AI.

Mike Masnick:

I don't know if there... okay. I'm, I'm wondering how you get lightness outta this next one, but, okay.

Ben Whitelaw:

Yeah, maybe not lightness. I mean, these stories are all about the kind of way that AI is seeping into very human moments that we all experience, like doctor's appointments, and like death and grief, and all of these things. And I think, yes, they're complex, but as we'll find out, there are some upsides. It's just about how we go about them. So, do you wanna talk first about the Rest of World piece that you read?

Mike Masnick:

Yeah, this is really fascinating. It's a long read, it's not a quick one, in Rest of World, called "My mom and Dr. DeepSeek." And it's a very personal story about the author's mother, who lives in rural China and has significant health problems, and has had health problems for a while. And to go to a doctor is a very long and involved process; she has to travel for like two hours each way. And she has just started using DeepSeek, which is the popular Chinese AI tool that made a big stir this year because it was developed much more inexpensively than the big foundation models that we usually talk about, and yet had quality that was on par with them. And so the article goes deep into talking about how the author's mother was using DeepSeek for medical advice, and sometimes getting bad advice, sometimes getting good advice, but always getting some advice. And what the mother kept pointing out was that it wasn't condescending to her. Whereas, like, it's not even just the travel to the doctor that was the problem. It was that she would travel to the doctor, and the doctor would talk to her for just a couple minutes, and not in a human way, just sort of looking down at you: do this, do that, do that. Not hearing you, not having a better discussion with you. Whereas DeepSeek was always there, always available.

Ben Whitelaw:

Hmm.

Mike Masnick:

And so I, I came outta the article really conflicted, because there's this element of, like, yeah, going to the doctor is sometimes a pain. And I've certainly experienced where doctors feel very dismissive, and they're in there for a very short period of time; they never have time to really talk, or for raising concerns with them. Like, I had a, you know, minor medical issue last year that I ended up having to go see a bunch of doctors for, over and over again. I would come in with a whole bunch of questions, and I would never even get a chance to ask them, because the doctors had, like, a script that they would run through, and there was no moment in which you were allowed to ask questions or have a discussion. Like, I remember one where the doctor literally walked me to the front desk at the end and handed me off to the front desk to set up the next appointment, without ever giving me a chance to say anything. And I was like, I feel like I'm, you know, a cog in the machine. And so I could totally see where it's like, oh, I can actually talk to something and get thoughtful answers, even if they're wrong. And so there's a part of everyone who reads this that is probably horrified, because it's like, you don't take the medical advice, you know, you can't trust it. But you have to recognize the human aspect of it: a human wants to feel heard, and the medical system, the way we have it now, isn't always working that way. Now, I had told you right before we started as well that I do have a counter-example, which was last year, when I went for my physical, my annual physical. The doctor came in and asked at the beginning, are you okay if I record this using an AI tool? And at the end of the physical, we'll go through the notes together and make sure that it accurately captured everything. And she said to me, if we do it this way, I can spend more time talking to you, rather than typing in the notes. And that is exactly what happened. We did the physical, and then we reviewed the notes, and it was good, it was accurate, and it captured everything, I think, in more detail than I would normally get if she was typing the notes. And we actually had time to talk, and I actually felt very comfortable with it. You know, and the physical is usually a little longer than other kinds of doctor's appointments, so maybe that was different too. But I think in that case, the technology was assistive, not replacing. But I understand why people are seeing the technology as something that just makes them feel more comfortable.

Ben Whitelaw:

Yeah, my wife is a doctor, and, you know, she has lots of doctor friends, and the thing that they all say, particularly those who are GPs, the general practitioners, the kind of first line of defense here in the UK, most of them within the NHS, is that people just wanna talk. They're not necessarily looking for medical advice per se; they're looking to be heard. And there are, you know, specific people that go to doctor's appointments, and they're often people who are less heard in society. So the doctor's role, according to, again, this kind of self-selecting sample of friends, is that doctor's appointments are about being heard, in the way that you're kind of describing. And here in the UK you get 15 minutes for an appointment, so, you know, you've...

Mike Masnick:

That's longer than the US by the way.

Ben Whitelaw:

Right, is it? Okay. Um, I mean, either way, you've gotta speak pretty fast. Um, and the validation, I think, is what this piece brings out for me: the validation that people want, and the fact that they want to know that, yes, this is an issue and they're not going mad, is often the first step. And that's interesting, because we hear a lot of commentary and analysis about how these foundational models are too subservient and too kind of sycophantic. You know, they say, "Well done, Ben, what a great suggestion that is; here's a few tweaks to it." I think your average person, in the healthcare context, doesn't mind that at all, would be my bet, you know. Certainly in this case, for DeepSeek and this woman in China, there were no qualms at all with the model saying, yeah, that's a problem; I can understand that you're in pain or that you're suffering. That's often the first thing. We forget that, um, that that's maybe where these AI models are better than humans. My only other thought on that, Mike, was, like, so much of the AI model safety question has been discussed in relation to children. You know, the suicides we touched on, where children are left unsupervised and they are given advice that is unsuitable for them. This is a case where an adult has voluntarily used the model, and continued to use it despite being told by her daughter that it's not safe and that she shouldn't follow that advice. What do we do in this situation? You know, it's very hard to regulate an adult using a piece of technology in this way. The Jonathan Haidts of this world are gonna find that pretty hard to regulate, or to advise platforms to amend their policies to cater for, because she's, you know, in her sixties or seventies. So it's a really fascinating piece, on a lot of dimensions, I'd say.

Mike Masnick:

yeah. I think it's, it's worth thinking about. I don't think there's an easy answer to this. I mean, like, once again, you know, sort of complexity and nuance

Ben Whitelaw:

I think we need a "there is no easy answer" bell, or, or something else...

Mike Masnick:

the, the bell is already taken.

Ben Whitelaw:

Yeah, I know. I'm, maybe there's a, I'll have a think.

Mike Masnick:

An air horn, you know, something, something. Yeah.

Ben Whitelaw:

To complement the bell, to make, you know, the podcast a bit more, um, interesting aurally. Um, but, but yeah, you're totally right. It's that all over again. Um, let's go onto the next AI story then, which, again, maybe it's hard to claim is a kind of nice, uh, story. Uh, but I'm gonna try and convince you that...

Mike Masnick:

Maybe.

Ben Whitelaw:

Yeah, it is. Um, so, as many of our listeners will know, there's a growing trend of AI bots that are being trained on the voices and the text outputs of dead people, people who have died, and they're called dead bots. I dunno if that's the best or nicest term for them, but that's kind of the way that we've come to describe them. Some people are calling it the digital afterlife industry, which seems a bit more poetic, a little bit more fitting, perhaps.

Mike Masnick:

Yeah. But it just feels euphemistic in so many ways, right?

Ben Whitelaw:

Either way, whatever we're calling it, this NPR piece that we read this week predicts that there is gonna be a rise in this industry, that it's going to increase exponentially, and that it's gonna become commercialized very, very quickly. A report that's linked in the piece says that this is an industry that's gonna be worth $80 billion within the next decade. So, a massive industry. And the bots that we're seeing being created at the moment are a mix: kind of personal ones, you know, from people who are missing loved ones, but also bots that are being created to advocate for particular causes, which I had no idea about. You know, so examples are given of victims of road accidents who are being brought back to give presentations and talks about things that they previously wrote or said, and of people who were killed and are therefore being used, kind of wheeled out, to talk about gun laws and the unsuitability of those. And so it's a really interesting, again quite complex, article about where dead bots can be used to memorialize somebody, to have them bring back memories of somebody who's lost, and to kind of keep their memory and their ideas and their voice and their personhood alive, but also how they could perhaps be used to advocate for causes and concerns in ways that maybe are not what we'd all want them to be used for, or, you know, in ways that are perhaps not even explicitly something that those people advocated for. Right? So, when privacy laws are, I would say, so basic, and we have so much discussion about them, it seems mad that we have this emerging industry that kind of uses the text and images and outputs of dead people, you know?

Mike Masnick:

Yeah. It, it feels very exploitative, right? I mean, you can see how it would be abused, and the article itself talks about the possibility of using it in advertising. Can you imagine, like, your dead grandmother coming to you to advertise something?

Ben Whitelaw:

What do you think somebody will use our voices to advertise, Mike, when we are gone?

Mike Masnick:

I mean, there, there is, right? Like, I did go through this thought exercise of, like, well, what happens, you know, when I die? Do I care?

Ben Whitelaw:

Yeah.

Mike Masnick:

And I'm, I'm not sure if I do that much, you know. But I can understand why people would. But I'm not sure. You know, once I'm gone, I'm gone. If you wanna make up a fake Mike Masnick bot, like, go, go for it. Uh, you know, uh...

Ben Whitelaw:

tires,

Mike Masnick:

yeah. Because you know my enthusiasm for tires.

Ben Whitelaw:

really good car tires. That's, that's, I would buy car tires off you,

Mike Masnick:

It's funny, I'm literally trying to buy new tires for my car right

Ben Whitelaw:

Yeah.

Mike Masnick:

as we speak. And for some reason I couldn't get the website to work. So yeah, I have thoughts on tires all of a sudden, but,

Ben Whitelaw:

I didn't even know that.

Mike Masnick:

Now whoever wants to, wants to, uh, make a bot has me saying that. So, so you can, you can do that. Um, yeah, it's sort of strange. I think this is just one of those things that society is gonna have to adapt to. You know, there are science fiction books about the idea of uploading yourself to the cloud, and that was more the idea of, like, turning yourself into an immortal being, you know, with your brain in the cloud. But this is kind of, sort of like that, because there are questions of, if you upload your brain, is that really you? Like, Cory Doctorow had a book about this very topic many years ago, not many years ago, I don't know how long ago, five, ten years ago, that really raised some of these issues as well, around what it means to upload your brain to the cloud. That's if you're choosing to do it. But what if, in these cases, what if, you know, your family is doing it? I think it raises a whole bunch of really interesting questions. I don't think there are easy answers. And in the US, at least, the article talks about, you know, what we usually refer to as publicity rights laws, which were initially intended for things like: can you make a famous person advertise, or appear to endorse, a product? And there are questions about when they're alive, and then when they're dead; some of those laws don't apply after someone has died. And there was a whole legal dispute about, like, Marilyn Monroe, and whether you could put a fake Marilyn Monroe into a commercial. And so I don't think we as a society have figured this out yet, and I think it's going to be more of a challenge. And as with anything, there are complexities and nuances, and, you know, everyone who thinks there's an easy answer to this discovers that their easy answer probably has a whole bunch of trade-offs that they haven't considered.

Ben Whitelaw:

Yeah, I, I can just imagine Charlie Brooker sitting back and, like, laughing at this. Do you know, you know, Charlie Brooker, the guy who wrote Black...

Mike Masnick:

Yeah. I mean, this is like classic Black Mirror fodder, so,

Ben Whitelaw:

Yeah, there's a whole episode in which, like, a woman's boyfriend dies in a car accident and she brings him back as a kind of AI-generated synthetic person. I think Charlie Brooker has got a lot to... you know, a lot of the responsibility for this stuff is his, I think. I wonder if we can get him on the podcast one day. That'd be...

Mike Masnick:

Yeah, that would be great. That.

Ben Whitelaw:

Yeah. Um, but yeah, so I think I tried, I think I convinced you there that there was an element of lightness and maybe, uh, you know, niceness to that use of AI. Um, the final story we touched on is about the companies who are gonna have to think about all of these use cases, and how they're responding to it. Right?

Mike Masnick:

Yeah.

Ben Whitelaw:

again, I think a positive development in the industry.

Mike Masnick:

Yeah, I think this one is really fascinating. It's about both OpenAI and Anthropic, which are sort of two of the leading foundation model AI companies. Um, and this is based on an article in TechCrunch. They apparently agreed to allow each other to safety test each other's models,

Ben Whitelaw:

Hmm.

Mike Masnick:

for a period of time, where they opened it up beyond the public API. They gave each other access to a version of the models without all of the safety guardrails.

Ben Whitelaw:

Right.

Mike Masnick:

There were some, but they left it open, you know; they gave each other a sort of open version that they would use internally for safety testing, and allowed each other to test. And then each of them released a report about what they found about the opposing models. And this was really just a fascinating concept, beyond what they found. Like, they have different reports: this model did this, this model did that. But just the fact that these two companies, which are competitors (and Anthropic was founded by ex-OpenAI people who felt that OpenAI was going in the wrong direction), got together and agreed to open up each other's models to be safety tested by the other is something I don't remember ever hearing of in, like, a trust and safety context. I don't think we've had, you know, Facebook letting Twitter people safety test their systems. And, I mean, it's a different context with AI than social media, but it just struck me as a really, really interesting experiment. I'm not sure... some of the quotes read to me, at least, and maybe I'm reading between the lines here, with a little bit of snarkiness towards each other. It feels like there was a grudging agreement to do this, and neither side is entirely happy with it, but they did it, and they released the research on it. And I just thought, we're entering a really interesting world in which that kind of thing is possible, and in which that is an idea that people are considering: this sort of allowing competitors to safety test your systems.

Ben Whitelaw:

Yeah, I mean, playing devil's advocate here, Mike: is this something that they might have done just to appease regulators? So they can say, hey, maybe you don't have ways of checking our models, or auditing our models, in ways that you agree on yet, but hey, we actually did this. We let Anthropic do it, or we let OpenAI do it. So we are trying.

Mike Masnick:

Yeah, I don't, I don't know. I didn't get that sense. It is possible that there was a regulatory component to it, but I think if it was really designed to sort of get regulators off their back, I would picture them doing it in a slightly different way, one more designed to appease regulators. I didn't get that sense from this. It may be true, but my guess is that this was just an actual intellectual exercise, and that both companies and their security teams were sort of excited about the possibility of testing each other's models.

Ben Whitelaw:

Yeah, that is interesting, if that is the case. And, you know, like you say, there is so much heat on those companies right now, there is so much emphasis on how they approach safety, that to have them look under each other's hoods is, I think, a positive thing. We can definitely say that. So, hey, we got there eventually today. We've ended on a bit of a bright note, and we need to take that into next week, Mike.

Mike Masnick:

All right.

Ben Whitelaw:

We'll try our best to do so.

Mike Masnick:

try to find brighter, happier stories.

Ben Whitelaw:

Yeah, we are, we are somewhat beholden to the news agenda, as ever, on Ctrl-Alt-Speech. Um, that brings us to the end of today's episode, Mike. We have covered a whole raft of stories, including our good friends Jonathan and Jim. Um, we've tried to lift up good academia and the importance of an evidence base, and, uh, we'll continue to do that on Ctrl-Alt-Speech as ever. Rate us and review us, listeners, wherever you get your podcast, and, uh, we'll hopefully see you next week for more of the same.
