Ctrl-Alt-Speech

Smells Like Teen Safety

Mike Masnick & Ben Whitelaw Season 1 Episode 29

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So like many couples nowadays, Mike, I actually met my wife on Tinder, true story. And I won't go into the whole backstory, but I will tell you that back then, the app didn't allow you to pick custom prompts. These are questions you could put on your profile to make you sound smarter and allow you to be creative. I would have been on the app much less if they'd been there in my time. But I was looking through them today and I thought they'd be a great way to open the podcast. Okay. So

Mike Masnick:

this has me nervous.

Ben Whitelaw:

the one I liked most was this one. The prompt goes: "A shower thought I had recently was…"

Mike Masnick:

Okay. Uh,

Ben Whitelaw:

So tell me, tell me your thoughts, if you can, if they're broadcastable.

Mike Masnick:

Yes. So something I have been thinking about is: when is it appropriate to have (this is going in an okay direction, don't worry) a conversation with an AI chatbot, and when is it maybe problematic?

Ben Whitelaw:

Ah, interesting. Okay.

Mike Masnick:

All right. And so what kind of shower thoughts have you been having?

Ben Whitelaw:

Well, this is quite revealing, but my shower thought was: who will Mike Masnick direct his ire and frustration to when Thierry Breton is no longer in post? And it turns out we're going to find that out.

Mike Masnick:

Yes, yes, yes, we will.

Ben Whitelaw:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm back in the chair, back side by side, talking about our intimate thoughts with Mike Masnick, founder of Techdirt. Mike, how are you? How's your week been?

Mike Masnick:

I have been doing quite well. It's been another busy week. There are lots of things going on in the world, but overall pretty good. And it was fun to hear about your history with Tinder. I got married in the pre-Tinder era and never had to go through the dating apps, though lots of people seem to really like them, and obviously you met your wife there. That is wonderful.

Ben Whitelaw:

Yeah. Fairly successful. I think there's a decent hit rate.

Mike Masnick:

Yeah, no, I know lots of people who've found their significant others that way, and I think it's great.

Ben Whitelaw:

Yeah. And they do a fairly decent job, as a bunch of platforms, of moderating content, which is something we'll

Mike Masnick:

Yes.

Ben Whitelaw:

talk about at various points over the coming episodes. We've had a couple of really nice reviews since we were last on the podcast, mainly, I think, because people were fed up with our incessant pleading. I don't know how you feel about it, but there's been a nice few rolling in over the last few weeks.

Mike Masnick:

Yeah. There have been some really, really nice reviews. And so again, we'll recommend: if you like the podcast (I guess even if you don't like the podcast) you can write a review. We certainly appreciate the very nice ones. So we had a couple of very nice ones. Someone called it first-rate, and talked about how some other podcasts were maybe a little too musky, but ours…

Ben Whitelaw:

Musky as in smelly, or…?

Mike Masnick:

I think they meant a little too focused on one Elon person. We do mention him occasionally, but as this review notes, ours covers more platforms, more countries, and more thorny problems that need resolution, which is definitely part of our goal. We had another really nice one that was pretty long, which is great. I'm not going to read the whole thing, but they talk about how things look and seem easy from the outside, and how it's very easy to assume there are easy answers to all these things, but that we do a really good job of explaining: as you listen to the podcast, you realize how difficult these decisions are and how often there simply isn't a good answer to a given content moderation question. So I really appreciated that one.

Ben Whitelaw:

Yeah, talking about the trust and safety professionals rather than us as podcast hosts. An important…

Mike Masnick:

Yeah, yes. Uh…

Ben Whitelaw:

…distinction.

Mike Masnick:

Yes, yes, absolutely. And then we had one which is also very nice, though I will note: it says that they enjoy listening to it and that the discussions are great and all this stuff, but it does say it isn't necessarily the most entertaining podcast, at least in the traditional sense. I'm not quite sure what the traditional sense of an entertaining podcast is, but we have not gotten up to that bar, Ben.

Ben Whitelaw:

No. Okay. We need to make it more entertaining, don't we? I mean, we're talking about showers and dating apps. That's surely got to be a start.

Mike Masnick:

We need, like, sound effects, air horns… or do we need a wacky sidekick? Maybe, maybe that's it.

Ben Whitelaw:

I thought that was me. Anyway, I think we'll have to ponder on that. Maybe some sound effects.

Mike Masnick:

There we go.

Ben Whitelaw:

Maybe some visuals as well. Maybe we need to get on camera and show our faces; maybe that's it.

Mike Masnick:

Oh, geez. Maybe. You know the term "a face made for radio", right? So…

Ben Whitelaw:

yeah, we might have to wear masks, both of us.

Mike Masnick:

We'll see. We'll see.

Ben Whitelaw:

But that's really helpful. And yeah, thanks to the listeners for taking the time to rate and review us on the platforms where you listen. It's really, really helpful. Last week's podcast was great; Mike did a fantastic job with Riana Pfefferkorn from Stanford. We've got another whole tranche of stories this week that we're going to run through in as detailed but as swift a way as possible. And we're going to start, Mike, with both some good news and some bad news, as you've no doubt been following. The good news is that Thierry Breton is no longer in post at the European Commission. The bad news is that you won't be able to rant about him any longer. It's bittersweet news in many respects.

Mike Masnick:

Yeah, well, it's a little bit of a surprise, I think, because there's a new slate of commissioners who were going to be presented this past week. Each country gets to designate someone for the Commission, and that slate eventually has to get approved by the EU Parliament. Macron had put forth Breton again, and so he had been listed. And then the story that came out was that Ursula von der Leyen had sent a message to Macron basically saying that if you don't pull Breton's designation and replace him with someone else, he's going to get a really weak portfolio. There's always the issue of who gets which portfolio, and the bigger countries, the Frances and the Germanys, want the high-profile ones. Basically she was saying he was going to be doing something with, like, fisheries or who knows what. And so Breton obviously found out about that, and he sent a somewhat petulant letter, which is fitting with his character, basically saying: I understand that you asked Macron to designate somebody else, for reasons outside of the official ones (suggesting some sort of personal issues), and therefore I am resigning immediately.

Ben Whitelaw:

Pretty simmering letter, wasn't it?

Mike Masnick:

Yeah, yeah. And so there are lots of questions as to what was actually going on. It might be as simple as Breton being sort of brash and making sure that his name is out there. As I've joked about, every time he posts on social media, it comes with a high-quality photo of himself. He just seems like someone who thinks highly of himself and wants pictures of himself everywhere; unlike us, with our faces made for radio, he wants to be everywhere. But back in March he had really criticized her, and in fact suggested that she should not get this next term, so I think there was some animosity that probably lived on from that. I think it could also have something to do with the letter we had spoken about, the one he sent to Elon Musk before Musk did the Spaces with Donald Trump, kind of warning him that he might face some punishment. There were reports that the rest of the EU Commission was totally unaware of that. It took them by surprise, and they were upset; they thought it was a problem and that he was undermining the seriousness of the DSA. I wouldn't be surprised if that played into the request as well: a recognition that he was a bit of a loose cannon and was doing things that could undermine the larger project of the EU and their attempt to regulate social media.

Ben Whitelaw:

It had become very personality-driven, hadn't it? You know, Musk versus Breton. We talked about them being kind of too macho, almost like gorilla-type figures beating their chests against one another. That's really where a lot of the media coverage had focused in some of those discussions. And I'm going to read a quote from Politico, from that story just after the Donald Trump Space, again, because it's fantastic. Politico had this anonymous quote saying that he'd become "an attention-seeking politician in search of his next big job".

Mike Masnick:

Yeah.

Ben Whitelaw:

Once somebody says that about you…

Mike Masnick:

heh heh

Ben Whitelaw:

and it gets read in Brussels, in a publication like Politico, you know your race is run. It was surprising, and yet once it had sat with me for a day or two, I was like: that makes a hundred percent sense.

Mike Masnick:

Yeah. So, I mean, there is a question of what that next big job is. Now he's got to find one. So there may be more opportunities for me to yell at him, but it won't be as part of the EU Commission.

Ben Whitelaw:

No, no. And I don't think he'll be welcomed within the X, formerly Twitter, leadership team.

Mike Masnick:

Can you imagine if he's X's Nick Clegg?

Ben Whitelaw:

Yeah. Yeah. A nice little global affairs job coming up; a nice move to San Francisco, maybe. Musk obviously responded in the way that only Musk knows how, which was to tweet "Bon voyage"

Mike Masnick:

Yeah.

Ben Whitelaw:

in reply to the letter from Breton. And they had a nice bit of back and forth there. So it's still going on; maybe they will still provide fodder for the podcast even though he's no longer in post. But my question, Mike, is: who is going to replace Breton in your line of fire, if I can call it that? What do we know about the person who's going to replace Breton, and what brief they've got? They seem like a person with a completely different makeup and experience, in a way.

Mike Masnick:

Yeah. So the proposed slate (and again, the EU Parliament has to approve this, but I haven't seen any indication that they wouldn't) is that Henna Virkkunen has been proposed to handle this brief. She is a Finnish lawmaker who has been in the EU Parliament for a while, and apparently had something to do with writing the DSA. The letter that was sent to her, sort of listing out the things that would be part of her portfolio, is pretty broad, and there are a lot of different tech elements to it. It's pretty open-ended, talking about things like AI, the next wave of frontier technologies, cloud development, the Chips Act, secure, fast and reliable connectivity, cybersecurity, all of these different things. And then, at the very end of the discussion, it talks about the DSA and the DMA, and even something about how "you will help combat unethical techniques online, such as dark patterns, marketing by social media influencers and addictive design of digital products". So things like that are within her portfolio, and it'll be interesting to see how she handles it. It does feel like a larger part of her mission statement is really going to be about competition. There have been all of these discussions recently about how the EU has really fallen behind in competitiveness, especially within the tech industry, and these discussions have been going on for a long time: there aren't that many very large technology companies emerging from the EU, and the EU economy has suffered, especially within the last decade or so, at least compared to the US. So there are definite concerns there. Some people will immediately point to some of the regulatory positions the EU has taken as having something to do with it; there may be a bunch of other issues related to it as well, and it's not really clear. So what will be interesting to see is how much her focus is actually on competitiveness versus more narrowly focused on things like the DSA and the DMA. I do think that if the overarching theme of her portfolio is competition, hopefully that will lead to a broader, more comprehensive view of how these things play together. One of my big, big complaints, everywhere internet regulations come up, is how they're viewed as silos. You have competition here, you have privacy there, you have disinformation over there. And yet all these things impact one another.

Ben Whitelaw:

Mm hmm.

Mike Masnick:

And so it helps if you have somebody who is taking a comprehensive look at this stuff. One of the classic examples is the GDPR, which is a data protection regulation that touches on privacy issues, and there were a bunch of studies suggesting that it actually diminished competition, especially in the advertising space within the EU, because smaller companies were sort of driven out of business by the GDPR. That's a very big simplification, and I'm not going to get into more of the details, but taking a step back and asking what is actually going to drive competition, while also working on these other policy levers at the same time, could be a good thing. I think that when politicians get too narrowly focused on a particular thing, as if it's a silo, without recognizing how it impacts these other things, that's where you run into problems. So, look, whether or not she ends up in my line of fire of angry speech, we'll see. But I'm willing to give her a shot. Let's see what she comes up with and what her focus is. Maybe it is more thoughtful, careful policy, and I would be happy to see that.

Ben Whitelaw:

Yeah. I mean, her title of executive vice president for tech sovereignty, security and democracy does feel like a large umbrella, and the list of to-dos, as you say, is very long. It's worth noting as well that it's gone down very well in Finland that she's got this post, because Finnish MEPs don't typically get a job as big as this, so that's a big, interesting change. I'm not that familiar with tech policy in Finland; we'll have to gen up on that over the coming weeks and months. But maybe there are some clues in there as to how she will approach the role and what she'll bring to it. Politico also reports, crucially (and this is the most important thing we'll share with the listeners today), that she's a horse fan and an Ironman competitor. So if you want to get a sense of what she's like outside of work, that's what she does for fun.

Mike Masnick:

Apparently she uses Instagram more than X, whereas Breton was a big X user. She uses Instagram, but she posts training stuff; her Ironman training is part of how she uses it. So this is somebody who has endurance, is what I'll say. If you're doing an Ironman, that is impressive.

Ben Whitelaw:

Yeah, no, indeed. So she seems to have all the characteristics of somebody able to go head to head with Elon Musk and others of his ilk. So a big change at the head of the EU Commission, and we're going to see how that pans out. Let's move on to our next story, which concerns one of the platforms that Virkkunen will be thinking about, and which already has an EU investigation pending against it: Instagram, and a bunch of new features it launched this week for what it calls teen accounts. Now, when the New York Times claims that a change like this is, quote, "the most far-reaching set of measures undertaken by an app to address teenagers' use of social media", you kind of sit up and take notice. That's how it's being billed. And to give you a sense of what it involves (many people will have seen this announced this week): essentially, accounts for users under 18 are going to be made private by default, and existing accounts held by under-18 users are going to flip to being private too. Interestingly, in the US, the UK, Canada and Australia, that's going to happen almost immediately, in the next few weeks, and then in the EU over the rest of the year; we can talk about why those countries have been picked, Mike, in a second. The bigger shift as well is that if you want to make your account public as one of those under-18 users, you can do that yourself if you're 16 or 17, but if you're under 16, you're going to need to get parental permission. So that's a big change in the parental involvement in the safety features that a child uses on their Instagram account. And there are a couple of other things rolled up in that as well. As a parent, you'll be able to see the people who are messaging your child: you won't be able to see the content of those messages, but you can see who is in their DMs. You'll be able to change sensitive content settings and restrict those according to what you feel is an appropriate level. And Instagram are also going to limit notifications for under-18s between the hours of 10pm and 7am; this is a kind of ploy to try and get kids to go to sleep earlier, to have more sleep. All of those changes are starting on Instagram, but they're going to, in theory, roll out across other Meta platforms next year as well. So a big, big shift in how Instagram is thinking about teen safety. What's your initial reaction to this, Mike? Why now? What do you feel about this as a change?
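[Editor's note: as an illustration of the age-tiered defaults Ben describes, here is a minimal Python sketch. It is hypothetical: the field names and structure are invented for clarity and are not Instagram's actual implementation.]

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class TeenAccountDefaults:
        private: bool                            # does the account start private?
        can_self_serve_public: bool              # can the teen go public without a parent?
        sensitive_content_limited: bool
        quiet_hours: Optional[Tuple[str, str]]   # notifications muted between these times

    def defaults_for_age(age: int) -> TeenAccountDefaults:
        if age >= 18:
            # Adults: none of the teen restrictions apply by default.
            return TeenAccountDefaults(False, True, False, None)
        # Every under-18 account starts private, with sensitive content limited
        # and notifications muted from 10pm to 7am.
        return TeenAccountDefaults(
            private=True,
            # 16- and 17-year-olds can make themselves public;
            # under-16s need parental permission to do so.
            can_self_serve_public=(age >= 16),
            sensitive_content_limited=True,
            quiet_hours=("22:00", "07:00"),
        )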

Mike Masnick:

Yeah, I mean, I thought it was interesting on a variety of levels. It is obviously a big change, and it is a lot of the stuff that people have been pushing for. There is one element of it (and we can discuss how this came about) which is that in the US, at least, while there have been attempts to pass laws to require certain things similar to this, or within the same arena, those laws don't really exist, or to the extent ones have been passed that would require similar things, they've generally been found to be unconstitutional. So there is not a legal reason for them to do it in the US; elsewhere is a little bit different. But they still did it. So there is a part of me that sees this as a response to the claims that, oh, Meta and other big tech companies don't care and will not do anything until there is legislation in place. They did this anyway. There is the reality that we've seen, which is that companies do care about this stuff; companies don't like to have headlines about how children feel bad on the platform or about bad things happening to users of their platform. There is a reason a trust and safety industry exists. Now, there are a lot of competing factors in that, and the trust and safety teams tragically do not run these companies; often there are other interests and other stakeholders involved that will sometimes overrule what trust and safety thinks is important. The question then is: how much of this is driven by Meta wanting to actually make Instagram safer and being sick of these kinds of headlines, and how much of it is driven by the ongoing pressure, all the time, from these different regulatory attempts? That's something of an unknown. I think back to a discussion I got into maybe 20 years ago, around the net neutrality fights, when there were proposed laws for net neutrality in Congress. I was worried about the laws, but I am broadly supportive of the idea of net neutrality. And there was an argument first put forth by Ed Felten, who's a professor at Princeton and a really, really thoughtful guy on tech policy stuff (I think he worked at the FTC for a little while, and he's been a really thoughtful commenter on all sorts of tech policy things). I'm pretty sure he made this argument first, 20 years ago, and I just sort of picked up on it: the idea that the best net neutrality policy was not an actual law, because the actual law, and whatever was specific in that law, would have problems and unintended consequences and all of that. But the concept of net neutrality was really important. So the best policy was continually threatening a law, but never actually getting there. I think he described it as the sword of Damocles hanging above your head: if you don't do anything, we're going to regulate, and we'll probably regulate badly, so do something to fix it. And there's a sense that maybe this is kind of what's happening here. In the US context, certainly, there's KOSA, the Kids Online Safety Act, which moved a little bit forward this week, but there are also indications that it's not actually going to get passed. It passed out of committee, the House Energy and Commerce Committee, though a different version than the one that passed out of the Senate, and there are still complaints and concerns about the bill, and it feels like it's not going to have the momentum to actually pass this year. But here's an example where even the fact that it's being debated, and various states are debating things, has Instagram coming forth and saying: look, we're doing a whole bunch of stuff. Whether or not all these things are good, and whether or not there are some problematic aspects (they're expanding their age verification tools, among the other things they're doing to make all this work), parental controls are sometimes really good and occasionally problematic. If you have a kid who's maybe LGBTQ and their parents are not accepting of that, there are concerns about whether parental controls lead to a situation that could actually be dangerous for those kids in certain situations. There are all of these kinds of questions. But I think Meta taking a step forward and just building out these tools, offering these tools, setting things up where the defaults are a little bit different, seems like a good thing. It seems like they're making steps in the right direction, and I think it's good if it becomes the standard that more and more companies just think about doing this without having to wait for the legal requirements to do it.

Ben Whitelaw:

Yeah.

Mike Masnick:

And I think we'd be in a better world if more companies were seeing that. Other companies have done some of these features before; I know Discord, for example, has a bunch of features that are fairly similar to some of the things here. It's a good thing if more companies are doing this kind of thing, but without the legal requirement. Once you have the liability aspect locking you into certain things, which may not be the best or the most appropriate, you lose something; allowing the companies themselves to experiment, try different things, and see what actually works, I think, would be a good thing.

Ben Whitelaw:

Yeah, I think you're right. I think the sword of Damocles analogy is a really good one. And if you think about it as well: obviously the KOSA debate and discussion is informing this change, but also what we've seen in Europe. If you look at the timings of the changes Meta have made with regards to teen safety over the past couple of years, they line up pretty neatly with DSA milestones. The big change they made just six months ago was introducing another default where teens could only be messaged or added to a chat if they were following an account or connected to them; again, a pretty big change to the way that messaging worked. But why did they do it at that point? Probably in part because of regulation, as you say, and probably in part due to the investigation that had been brought against Meta under the DSA, of which child safety is a part. And then the other point, which you kind of make, is that yes, there are headlines. We've seen, and talked about, a hell of a lot of headlines where Instagram is very much involved in some of those really tough sextortion stories and cases. And it's not that long ago that Mark Zuckerberg had to get up and turn around and apologize to parents for the company's part in some of those tragic cases. It was only the start of the year; it feels much longer ago. So we're seeing a push and pull, I think, in many respects, from media coverage of some of the harms and also the sword of Damocles effect of regulation, and it's culminating in these quite significant changes, as you said. Is there any more to it than that? How do you see it going from here? Do we think there are going to be more defaults? I mean, this is a big suite of changes. Where's next?

Mike Masnick:

I mean, I imagine that they're going to keep testing and trying different stuff. And actually, that is one of the reasons why I think it's generally better if they're free to do that without regulatory specifics, like "you have to change this, this, and this", because it's not entirely clear which of these changes will have the most impact, or even whether the impact will be good. As I mentioned, there could be some negative impacts to this, and learning from that and being able to adjust, experiment and try different things is actually really important. So we'll see. I do think it's absolutely true that the DSA sort of hangs over all of this, and some of the other laws, the Online Safety Act and whatnot, hang over this too. Obviously these companies recognize that to be global providers in this space, that is the reality, and they're going to do this on a global level; they don't want to be in the business of having totally different apps in every location. I think the bigger companies have all come to the conclusion that this is the new world we live in, and therefore they have to make their systems work within those constraints. So this is sort of where it is. It would be nice if Meta is able to do some research and share it publicly, or to enable academic researchers to look at the effectiveness of these changes. At times, Meta has been open to academic research, and at times they're not, and I don't know if they're going to be willing to share the data on how these changes are impacting things. Part of the problem is that some of the research they've done has been taken out of context. This is a point that I've complained about in the past: you had stories where internal research at Meta showed some number of teens feeling bad about themselves in certain circumstances, and that was interpreted, and has been trumpeted over and over again in headlines, as "Meta knew that Instagram was bad for kids", which is not what the study showed. It showed that in certain circumstances, for some kids, there were problems, and Meta looked at that as a problem that needed to be dealt with; this is one potential result of that as well. But because of the way that research was presented in the press, the common theme is that these companies do nothing. At the KOSA hearing this week in the House, constantly, over and over again, you heard these elected officials saying: these big tech companies, they do nothing, they don't care, they will never make changes. And we're seeing that they actually are making changes, and they're willing to do stuff, but that narrative is still going to be there.

Ben Whitelaw:

Yeah. I mean, we've talked a bit about research around social media usage and harm for children, and there was that big University of Oxford study of studies not long ago, which you've talked about a bit. Might Instagram have made this change based upon that research? Because that research seems to suggest the point you just made, which is that only in some cases do children and teenagers suffer harms as a result of the design features in the app itself. Is this cracking a walnut with a hammer, to play devil's advocate? To an extent I can see the benefits of it, but have they gone a little bit too far, maybe?

Mike Masnick:

That I have no idea about. Honestly, I couldn't tell you, and it's possible that they have; we'll see. I mean, the simple fact is that reality wins out in all of these things, right? Go back to the example of COPPA, the Children's Online Privacy Protection Act in the US, and the fact that all these limits were placed on sites targeted at people under 13. What that eventually led to was kids under 13 still signing up for these things, but parents teaching them to lie. That was the thing we learned: the researcher danah boyd, over a decade ago, did all this amazing work on the end result of COPPA and how problematic it actually was, because there are people who want to use certain apps in certain ways, and they figure out a way to do it, no matter what restrictions are put on them. So we'll see how people actually use these things. At a glance, most of these restrictions and defaults sound reasonable, but I don't know, and I've not done the research. It would be interesting to talk to some of the researchers and hear how they think about it. One thing I do think is interesting: one of the arguments I've made is that the important thing is not to just ban these kinds of apps outright, but to teach people how to use them correctly, how to use them appropriately. That's sometimes referred to as media literacy, but I think it's bigger than media literacy; it's internet literacy, understanding how to use these things appropriately. And I think some of these defaults (starting out as private, limiting communications, those kinds of things) are the kinds of training wheels that are probably useful for that, saying: enter this space more slowly, and don't be totally public from the very beginning. That sounds like it's probably useful, but I don't have the data; I haven't seen it. I know there are tons of researchers out there who are experts on this, and I hope they get a chance to study this and give us some real information.

Ben Whitelaw:

Yeah, no, it's definitely true that this change places the emphasis much more on parents; obviously on the kids themselves, but also on parents. You have the responsibility to adjust the settings of your child's account in a way that was opt-in before but is now the default, and that's a big shift. It's something that Adam Mosseri, the head of Instagram, shared in what I would say is quite a pointed quote to the New York Times, where he said: "We decided to focus on what parents think because they know better what's appropriate for their children than any tech company, any private company, any senator, I'm not finished, or policymaker or staffer or regulator." I think that's a really interesting quote to share in an interview with the New York Times. Basically, to your point, it says: look, regulators, we're doing this, we're working on safety. It may not be to the degree that you're wanting, but we're doing it. Almost like a kind of "back off, you're not needed". So it will be interesting to see how this fares, whether there's research about it, whether we're going to get more data about the implications. Because it feels right, but we never necessarily know where these things land.

Mike Masnick:

Yeah, it'll be interesting. And again, some of these things, I don't know. Like stopping notifications from 10pm to 7am: I totally understand, and there's been some research on that point; it's probably good. But it's a little weird that they decided it's 10pm to 7am. Maybe that's something the parents should be able to set, in terms of what the time limits are, and maybe it can be adjustable in certain cases. But yeah, it's an interesting move. It's going to be worth following, and worth seeing whether other apps start doing similar stuff. And I know that supporters of KOSA came out and said this is too little, too late, and we still need KOSA, and stuff like that. But we'll see. I think it's a really interesting move, and I don't think it's a cynical one. I think it's something that Meta is actually taking seriously here.

Ben Whitelaw:

Yeah. Okay. Thanks, Mike, for sharing your thoughts there. We'll move on now to an AI chatbot story, which really does go to show the sublime and the ridiculous of AI use cases at the moment. Whether or not we're still in the hype phase of AI is hard to say, but you've pulled out a really interesting study, Mike, on a positive potential application of AI for misinformation.

Mike Masnick:

Yeah, this is really fascinating. It's a study that was just published in Science, where they were using generative AI chatbots to see if they could counter conspiracy theories and convince people that conspiracy theories are not true. There were a few thousand people in the study, and basically they found people who were willing to say that they believed in certain conspiracy theories, and then those people had a relatively short conversation with a chatbot that was trained specifically to respond to them clearly and with factual information, to give references, and to explain why the various conspiracy theories were wrong, as an ongoing conversation. And it led to an actual decrease in people believing conspiracy theories, to a significant degree. And that effect appeared to last for quite some time; it wasn't that they just got through it and went back to believing in crazy conspiracy theories.

Ben Whitelaw:

It's a lot, right? It's like a 20 percent decrease in belief in the conspiracy theory, lasting two months, and the researchers were saying that's versus a one to six percent decrease for other kinds of interventions. So it very much outstrips other, similar methods of decreasing belief in those things.

Mike Masnick:

Yeah. And it's really interesting, because this has been a big question for the past decade or more: people go down rabbit holes and get pulled into various conspiracy theories, and then start believing all sorts of nonsense, and a big question has always been, how do you get people out of that? We haven't had really good answers. Now, this is obviously not a perfect thing, and there are all sorts of questions about it: how accurate is it, is the methodology really that sound? But from what's said here, it's really, really interesting how powerful this was. And I was thinking about it a little, because normally, the way people talk about getting people out of conspiracy theories, it's a lot of work, and it's really uncomfortable work. You can present facts to people, but it tends to get contentious very quickly in a human-to-human situation, and that doesn't help. The one thing that is kind of amazing about these generative AI things is that they are infinitely patient. This is in a slightly different context, and I don't think I've talked about it on the podcast, but I have elsewhere: for the last year or so I've been using AI as a tool in writing Techdirt, but not in any of the actual writing. As a writer, it's terrible, and I would never trust it to write anything. But after I finish writing an article, I have a few scripts set up that I run against the article to discuss it. I have a scorecard, and I say: score this article. It gives me back different scores and says, this could be stronger, this could be weaker. I ask it: what are the weakest points? What are the things that people are going to complain about the most? I have a whole list of questions that I ask, and I have a conversation back and forth with the AI. And consistently it gets me thinking more deeply, and it makes my writing stronger, because I'm like, yeah, I didn't really address that point. It will often call out the points where I went a little too quick on the details.
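[Editor's note: a minimal sketch of the kind of "scorecard" script Mike describes. Mike doesn't say which model or tool he actually uses; the OpenAI client and the prompt wording here are illustrative assumptions.]

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical scorecard questions, invented for illustration.
    QUESTIONS = [
        "Score this article from 1-10 on clarity, argument strength and evidence.",
        "What are the weakest points in the argument?",
        "What will readers complain about the most?",
    ]

    def review_draft(draft: str) -> list[str]:
        """Run a finished draft past the model, one scorecard question at a time."""
        answers = []
        for question in QUESTIONS:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system",
                     "content": "You are a blunt editor. Critique the draft; never rewrite it."},
                    {"role": "user", "content": f"{question}\n\n---\n{draft}"},
                ],
            )
            answers.append(response.choices[0].message.content)
        return answers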

Ben Whitelaw:

Yeah, in a way that a human colleague or editor would probably not, right? Or that you'd get annoyed by.

Mike Masnick:

Exactly. Right. And they would feel bad. I have human editors who review all my stuff as well, and they might feel like, ah, I'm not going to push Mike on this point, whatever, it's fine. But the AI has no qualms with that. And then on the reverse side: I recently did an interview with Nathan Baschez, who runs the AI platform that I'm using, and he pointed out something really interesting that I hadn't thought of before, which is that there's something about the AI giving you recommendations that makes you feel comfortable ignoring them. When the AI gives you a bad suggestion, you're like, eh, whatever. Whereas if a human editor gives me a suggestion and I don't agree with it, now I gotta, ah, you know, I gotta…

Ben Whitelaw:

There's a kind of social contract there, right? I have to pay attention to them, otherwise they'll get sad, or I'll have an issue on my hands.

Mike Masnick:

Exactly. Exactly. And all of that is taken out of the equation when it's an AI. And so this is really interesting to me, because I don't think anyone's really thought about the fact that the AI is not human, even as it feels human-like. We're not going to fight with it like we would with a human, and we won't have the social concerns we'd have with a human: we can ignore certain stuff easily, but we can also take it more seriously, and we can have a longer conversation with it without it causing problems. I'm wondering if that's what's playing into this, and that's just absolutely fascinating to me.

Ben Whitelaw:

Right. And there's a name for it, right? This idea of having to contend with somebody's incessant, ongoing misinformation around a topic is apparently called the Gish gallop.

Mike Masnick:

yes, yeah, yeah,

Ben Whitelaw:

I had no idea.

Mike Masnick:

I mean, it's a commonly discussed thing in debate circles, this idea of a Gish gallop: if you just spew so much bullshit and so much nonsense, it becomes impossible to refute it all. Trump is somewhat famous for this, it's his strategy in debates, but lots of others have used it as well. You just spew constant nonsense so that it is impossible for anyone to refute every single bit of it, because you start to refute one little bit and then you're off down this whole trail, just arguing about that. It allows the person engaged in the Gish gallop to frame the debate; you stray so far from reality just because you're nitpicking everything, and then they get to nitpick back, and it just becomes this mess. Whereas the AI is actually so much better poised to deal with that than a human. There are human techniques; as I mentioned to you before we started recording, Mehdi Hasan, who's a well-known newscaster who now has his own media site, wrote a book last year called, I think, Win Every Argument, or something like that, which I read. I got a copy out of the library, and my son saw it and was like, oh, I want that book. And I was…

Ben Whitelaw:

like, you're not meant to see that.

Mike Masnick:

I don't know if you should get that book. But he has a whole chapter on dealing with the Gish gallop, and there are human strategies that have been around for how you deal with that. I think they're different from the ones an AI can use, though. And so that creates this really interesting scenario where maybe this is something uniquely compelling that an AI chatbot can do that humans couldn't do.

Ben Whitelaw:

Yeah. I mean, the other thing I loved about it was that the users spoke to the AI chatbot in their own language, right? They were writing free text about the conspiracy theory they had in mind, and they were asked to provide evidence for it. That gets around this idea that humans have to frame an idea in a certain way. If you were to come to me and say, "Ben, I don't think 9/11 was real", I'd be like, what do you mean? I'd ask you to frame it in a way that I understood. The AI didn't need any of that. It didn't need the information clarified in a certain way in order to refute it. And I love the fact that the person holding the conspiracy theory wasn't made to act in a certain way before the pushback happened. It's like: okay, you tell me, in whatever way you like, why you think this is true, and I'll come back with some evidence. And that, obviously, over a period of time, did have a significant effect.
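[Editor's note: a rough sketch of the elicitation-then-rebuttal loop described above, in which the participant states the belief in their own words and supplies their own evidence before the model responds. The prompt wording, model choice and round count are assumptions for illustration, not the study's exact protocol.]

    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "The user will describe a conspiracy theory they believe, in their own "
        "words, along with their evidence for it. Respond to their specific "
        "claims with accurate, sourced counter-evidence. Be patient and "
        "respectful; never mock."
    )

    def debunk_session(claim: str, evidence: str, rounds: int = 3) -> None:
        """Hold a short back-and-forth addressing the user's stated belief."""
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"My belief: {claim}\nMy evidence: {evidence}"},
        ]
        for _ in range(rounds):
            response = client.chat.completions.create(model="gpt-4o", messages=messages)
            reply = response.choices[0].message.content
            print(reply)
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": input("Your response: ")})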

Mike Masnick:

Yeah, and I thought it was interesting in the context of a different story this week, which was widely mocked and attacked, but when put next to this story suddenly becomes a little more interesting: there was an app released this week called SocialAI. Everything about it sounds ridiculous and stupid, and lots of people were making fun of it. It is an app that looks like Twitter or any other microblogging service, except that if you join, there are no humans who will follow you. It is only AI. What you post to this app, unlike on Twitter or any other social media network, is never seen by any other human. What you do get is a bunch of responses from different AI bots, and you can choose personalities for the kinds of followers that you want.

Ben Whitelaw:

Oh my God. I'm looking at the App Store preview that says "be the main character"…

Mike Masnick:

Yes.

Ben Whitelaw:

"Infinite AI followers, private only to you. Reflect, post and feel heard." Oh my God. What is this?

Mike Masnick:

Yes, it is a dystopian nightmare. It is straight out of Black Mirror, or whatever sci-fi nonsense you want it to be from. Everything about it seems stupid. I saw people totally reasonably mocking it, pointing out how stupid it is, posting really stupid stuff and having the AIs respond encouragingly to absolutely ridiculous posts. And I agree with all that. It is silly. It is dystopian. It is ridiculous. But take a step to the side and then look at this other story, about AI convincing people away from conspiracy theories, and you're like: there's something interesting about the idea of interacting with AIs when it's done right and done in a useful way. This app is not it.

Ben Whitelaw:

Well, not at scale, right? It's the idea that for some people (probably not you, probably not I, probably not the many people who were mocking the SocialAI launch) this is going to serve a purpose, much like the debunk bot we've just talked about will serve a purpose. If you believe in conspiracy theories, that will serve a purpose for you; if you want to get something off your chest in a way that you can't do on a public forum or with your mates on WhatsApp or whatever, SocialAI is for you. So it's the idea that maybe AI, for these kinds of use cases, is not for everyone, but for specific sets of people to do a very specific job.

Mike Masnick:

Yeah. Or even imagine, right? You could imagine a situation where, before you posted something to social media, you got a bunch of AI responses first, to give you a sense of how people were going to respond. I don't know if that makes sense, but I could see people wanting that.

Ben Whitelaw:

Yeah. Based upon previous similar posts, based upon the kind of data set that's available, these are the kinds of responses you're going to get, and you can amend your post accordingly. I can see that happening. I particularly liked the data point from the research which said that for conspiracy theories about which there was less contextual information, the bot did a still good, but less good, job of pulling people out of the conspiracy theory. The example given, I think by the researchers, was right after the Donald Trump assassination attempt, where he was on stage and had his ear pierced by a bullet. They ran a similar experiment and found that because there were fewer news articles, less information on the web about it, the bot didn't do as well. For me as a journalist, that's also a really positive sign of the importance of news media, but also of the way that AI can work in tandem with quality information produced by users or by experts. That was a really nice sign, I thought.

Mike Masnick:

Yeah. And it seems like a totally understandable result. In the heat of things, when details are not known and there's a lot going on that's totally unknown, the idea that the bot is not going to be as good at responding makes sense. But you could see how, in conjunction with thoughtful, detailed reporting, it could get better and better over a week or a month. You could see that being really useful.

Ben Whitelaw:

And just before we move on to the next story, probably our last: how do you see this being integrated into platforms, into other spaces? Surely this is something that, to have a really significant effect, needs to be taken on by a platform and used at scale, right?

Mike Masnick:

Yeah. I mean, I think there are all sorts of interesting possibilities here. This is one of the questions that comes up in the trust and safety field all the time: is the best way to deal with certain information to take it down, or is it to create some sort of intervention? I've looked at things like a service called Koko, which does interventions for teens who are thinking of self-harm or something similar. If they indicate something around self-harm on certain platforms that Koko works with, it might try to get them into an intervention, which is often talking to other kids. It's actually a really interesting platform that focuses on that, though I will note Koko did experiment with ChatGPT and got trashed for trying to do that. But done well and done carefully, you could see examples where, for all sorts of information, you just use the AI chatbot as an intervention: it looks like you are posting about this kind of thing, would you like to have a conversation here? It could be conspiracy theories, it could be disinformation, it could be eating disorders, it could be self-harm; lots of categories of information that everyone is concerned about, where the immediate reaction from a lot of people right now is, well, just ban it, the apps have to take it down. This suggests maybe there is another way to deal with that which might be a lot more effective: rather than just having all this information disappear, you can guide people towards better, more useful information in a way that actually helps.
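[Editor's note: an illustrative sketch of the "intervene rather than take down" routing Mike describes. The classifier, category names and thresholds are hypothetical, not any platform's actual policy.]

    # Categories where a conversation might be offered instead of removal.
    INTERVENTION_CATEGORIES = {"conspiracy", "disinformation",
                               "eating_disorder", "self_harm"}

    def route_post(post: str, classify) -> str:
        """classify(post) -> (category, confidence); both are assumed here."""
        category, confidence = classify(post)
        if category == "illegal" and confidence > 0.9:
            return "remove"  # some content still has to come down
        if category in INTERVENTION_CATEGORIES and confidence > 0.7:
            # "It looks like you're posting about X -- would you like to
            # have a conversation here?"
            return "offer_chatbot_intervention"
        return "keep"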

Ben Whitelaw:

Yeah. I'm excited by this. It might be because I want to believe that it's true.

Mike Masnick:

Yeah, there absolutely may be confirmation bias on our part: this seems really cool, it seems really exciting, and hopefully it's true. But yeah, we'll find out.

Ben Whitelaw:

Yeah. Okay, cool. And then finally, Mike, we're going to whiz through this last story. I think it's worth noting because it's been in the news for a bunch of weeks and we haven't quite touched on it, but you wanted to note the fact that Meta has finally, in the US, after it was banned in other countries and regions, taken umbrage at RT and Russian state media for foreign interference.

Mike Masnick:

Both Meta and YouTube, I think, terminated RT accounts and other Russian state media. RT and some of these other outlets have been around for a while, but it's all been widely recognized for a while as pretty propaganda-ish. And RT was kind of banned in the EU entirely after the invasion of Ukraine, right? I don't remember the exact details there.

Ben Whitelaw:

In 2022, both the EU and the UK blocked it, in various forms. I'd actually kind of forgotten all about it, because I haven't seen it come across any of my platforms. But it's now global, right?

Mike Masnick:

Well, it's not entirely global, because RT in theory can still exist within the US. It's not illegal, and banning it would violate the First Amendment. (Though the whole TikTok ban fight, which is a story we're not covering this week even though there was a hearing on it, might get at that.) But I think that under the First Amendment, the US could not ban RT from publishing content in the US; that would be a real First Amendment issue. But again, as we've discussed many times, platforms have their own First Amendment rights to determine who is on their platform, and Meta and YouTube are totally free to determine that they don't want state propaganda on theirs. In the past, they've done things like label it as state-controlled media or whatever. The big thing here was that the Department of Justice came down with these charges against RT folks, indictments with some pretty sordid details about RT, through a shell organization or a series of shell organizations, paying influencers to ridiculous levels: $100,000 a video for folks like Tim Pool and Dave Rubin. All sorts of stories are coming out of that, and this idea of how state influence and influencers interact, all of these things, I think, is interesting. But it took the DOJ coming out and saying this directly, and showing so much evidence of RT being engaged in really pretty bad practices, to allow Meta and YouTube to both say: that's it, we're banning your accounts on our platforms.

Ben Whitelaw:

Yeah. It was almost impossible to detect this as a policy violation before the DOJ came out, right? You can't tell the intent of a YouTuber unless you've seen the money coming into their bank account. That's the problem.

Mike Masnick:

Well, that is a different, really big issue. And to the extent that these bans are just on the official accounts, in theory RT could still be funding influencers, and they could still be posting, and Meta and YouTube would have no visibility into that.

Ben Whitelaw:

Yeah. And I think it's important to point out that RT have not approached Ctrl-Alt-Speech with any kind of money, let alone $100,000 per episode. I just want to put that on the record. That's important to say.

Mike Masnick:

Yes, yes. We have not been bought off by Russian state media.

Ben Whitelaw:

Yet.

Mike Masnick:

You know, if somebody else wants to give us a hundred thousand dollars per episode, you know where to reach us.

Ben Whitelaw:

Yeah. I think that's an important story to note, and like you say, there's a whole bunch of other stories and other information coming out about this. It's going to be an ongoing story into the US election, so expect more on it. Mike, that concludes us neatly today; we've gone around the houses. It's been great to be back on the pod with you. Thanks very much for your time. And we're going to wrap up there, listeners. If you've enjoyed today, you know where we are: please do rate and review us. And if you want to send us your Tinder stories as well, or your shower thoughts, find us on…

Mike Masnick:

Or if you think we've gotten more entertaining this week, please let us know.

Ben Whitelaw:

Or that, or that, yeah. Get in touch with us at podcast@ctrlaltspeech.com. Thanks for listening. We'll see you all next week.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
