Ctrl-Alt-Speech

The Difficulty of Being a Teen Online

April 19, 2024 Mike Masnick & Ben Whitelaw Season 1 Episode 6
Ben Whitelaw:

So Mike, borrowing from Mastodon's call to action to post, what's on your mind this week?

Mike Masnick:

So Ben, I have a question for you, which is, do you know who Wally Pipp is?

Ben Whitelaw:

Wally Pipp? Uh, I do not. It sounds like a kind of American folk hero. Is this a history story?

Mike Masnick:

It is a history story, and I promise you this is relevant. Wally Pipp was a baseball player for the New York Yankees in the 1920s. And in 1925, he had a headache and said he couldn't play that day. So the manager of the New York Yankees substituted in a very young player they had instead, named Lou Gehrig. I don't know if you've heard of Lou Gehrig since

Ben Whitelaw:

No, still, still blank.

Mike Masnick:

Okay. He was one of the best baseball players ever. And that day, when he came in for Wally Pipp because Wally Pipp had a headache, began a streak in which he played 2,130 consecutive games over the course of many years. He basically never left the field. So Wally Pipp's headache destroyed his career. And I am wondering if me being out last week, and having Alice come in and do such a great job, is going to turn me into the Wally Pipp of this podcast, and we're going to have to have Alice come back and become the Lou Gehrig, the greatest podcaster of all time.

Ben Whitelaw:

I think I... I appreciate

Mike Masnick:

That is what is on my mind. What's on yours?

Ben Whitelaw:

Yeah. Well, apart from, you know, your fragile state of mind about whether you're in this or not, which you are, Mike, I'm wondering how much tiramisu one man can eat in three days. I am in Perugia, Italy, and I have consumed so much tiramisu. I'm a kind of walking cream guy right now. It's for a conference that I'm at, the International Journalism Festival. There's a whole bunch of people here. But boy, do I need to hit the gym after this.

Mike Masnick:

Well, I'm glad you are enjoying Italy in the way that you're supposed to enjoy Italy.

Ben Whitelaw:

Yeah. Thank you. That helps. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm the editor and founder of Everything in Moderation, and my cholesterol is a lot higher than it was a week ago. But boy, Mike, am I glad to see you again. Welcome back. How are you?

Mike Masnick:

Yeah, yeah. I'm happy to be back. As I noted up top, I think Alice did a wonderful job. I listened to the episode last week while stuck on the tarmac on a plane trying to fly home from my adventures in eclipse chasing.

Ben Whitelaw:

Yeah. How was it? Did you see what you wanted to see?

Mike Masnick:

We absolutely saw what we wanted to see. I had a very wonderful trip. We did have to make a sort of game-time change of plans to try and avoid the clouds. We had staked out a spot, and it turned out to be super cloudy where we were, but we got up early and drove about 80 miles in a mad dash in the morning to a spot that turned out to be, as my very smart wife figured out, less cloudy. And so we were able to witness the full glory of a total eclipse, and it was absolutely amazing. This is the second one I've seen. If anyone listening to this has not seen an eclipse, I recommend very highly figuring out how to get yourself to a total eclipse, because it is pretty amazing.

Ben Whitelaw:

That is awesome. I'm so glad you got to see it. It sounds like that kind of game-time decision was not something that Wally Pipp would have made. You know, there's a story there, isn't there? There's a lesson to be learned.

Mike Masnick:

Yes, yes.

Ben Whitelaw:

So, Mike, you're back in the chair now, and we talked a few weeks ago about maybe sharing some reviews from our trusted listeners about how the podcast is going so far. You've been scouring through all of what we've been seeing on the app stores. Are they bad? Are they good? I haven't had a chance to check. Lay it on the line for me.

Mike Masnick:

We've gotten some very, very nice reviews. So thank you to everyone who has rated and reviewed us. As everybody says, it is really important; it does influence the algorithm, which we are all, you know, trying to please. And so if you are looking around and listening and appreciate what we do here, it really does help us if you rate it, and even better if you write a review. So we wanted to go through a few of these reviews, because there were some very nice ones. We had one that came in pretty soon after we launched, only two episodes in. This was from Greg W followed by a random hash; I don't know what that is,

Ben Whitelaw:

I like Greg's style already.

Mike Masnick:

Yes, only two episodes in. "Only two episodes in, but I'm hooked. Ben and Mike provide authoritative opinions on the free speech issues of the week." And it says they were a fan of Techdirt. Thank you; we always like fans of Techdirt. You should also become a fan of Everything in Moderation as well. And it says: if you care about speech issues, in the US and around the world, particularly at the intersection of technology, then this is for you. So that was a very nice one.

Ben Whitelaw:

Very good. Thanks, Greg.

Mike Masnick:

And I wanted to balance that out with a negative review.

Ben Whitelaw:

What?

Mike Masnick:

Well, if we're going to be totally upfront and fair here... This one, however, came in before we even launched; it was the day we announced the podcast existing. And I thought this was interesting in the sense of a trust and safety, content moderation question: how do you deal with bad-faith reviews and content? This is clearly bad faith, and it's from someone who, well, I know who it is. It's a Techdirt troll who spends hours a day trolling Techdirt, but who wrote a review claiming that this podcast was going to be awful because I apparently love government censorship.

Ben Whitelaw:

Yeah. Okay. That's a slight mischaracterization, I'd say. But,

Mike Masnick:

Yeah, it's a big mischaracterization, and I have discussed that with this troll in the past. But I actually just think it's kind of funny that, you know, content moderation issues come up all over the place. And it's not like we're going to do anything about it; I don't even know if we could do anything. It's fine if people post a negative review. They can post it. If they hate me, they can post it. Hopefully, if you're listening to this, you do like us. So I'll just mention one more good review and then we'll get on with the show, which is a very nice review from TSF in Balboa. So, trust and safety, maybe. It says: "This podcast is essential listening for anyone who wants to know more about content moderation and speech online, and also is sick and tired of uninformed takes." So we are the informed takes here.

Ben Whitelaw:

Wow. That's a new one for me. I've also had some really nice comments from people at the conference, Mike, who've given me verbal reviews, which is really nice to hear. So I do encourage people to share more of their thoughts and views on whatever platforms and app stores they listen to the podcast on. Thanks to the people taking the time to share their thoughts. So I think that's enough of the back-slapping and, you know, giving ourselves credit. We need to deliver, Mike. We need to give the people a reason to rate five stars. And we've got a good selection of stories as ever: a couple of big stories that we're going to go deeper on, and then we've managed to condense the wide range of stories we've both read this week into a couple of shorter ones that we'll whiz through as ever. We're going to start, I think, with a story we both read. It's a very interesting one; both of our main stories today are about teenagers' experience of online life. And you've picked out one from The Markup. Talk us through it.

Mike Masnick:

Yeah, this is really interesting. And I will note, congratulations to The Markup; they just got acquired this week, after the story came out, by CalMatters, and it should be interesting to see what those two organizations do together. But The Markup, which does some really fantastic data-driven investigative reporting, decided to look in very great detail at how different schools are making use of internet filters, particularly in response to the Children's Internet Protection Act, CIPA, which goes back to the year 2000 and effectively has some requirements for schools and libraries to filter their internet connections. There was a big lawsuit about it, where the ACLU tried to get it thrown out and the courts gave a kind of weird ruling on it. But the interesting thing is what has happened in practice and how these filters are working. And the answer is they're not working particularly well. In the schools where The Markup was able to get records, or where they were able to talk to students and teachers, it appears that the filters, as you might expect, are prone to overblocking, and in particular to blocking sites that might be really useful for teenagers, things like the Trevor Project or Planned Parenthood. These are sites that students very much in need might be trying to research, or reach out to in the case of the Trevor Project, and it turns out that a lot of schools block them. It also found that students doing research on particular topics found that content that was useful for school projects, and in some cases necessary for them, was being blocked. There are even stories of kids having to look things up on their phones and then type them in, because they weren't able to actually access them on

Ben Whitelaw:

Right, right, right.

Mike Masnick:

the school computers. And it's a really, really detailed, very thorough analysis, as The Markup is known to do. I think it's especially important, not just because that is the reality today, but because we're seeing all of these laws, around the globe, in the US in particular within different states, and then at the federal level, around online safety and online harms and child protection. And that's how this law came about, how CIPA came about in the first place. It was very much the same thing: oh no, the kids online these days may access bad content, and therefore schools and libraries need to filter out that potentially bad content.
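(A quick illustration of the overblocking mechanism being described here. This is a minimal, purely hypothetical sketch, not The Markup's methodology or any real vendor's product; the blocklist and URLs are invented for the example. It shows how naive keyword-based URL filtering catches support resources along with the content it is aimed at.)

```python
# Hypothetical keyword blocklist, the kind a naive school filter might use.
BLOCKED_KEYWORDS = ["sex", "drugs", "suicide", "lgbt"]

def is_blocked(url: str) -> bool:
    """Block a URL if any blocked keyword appears anywhere in it."""
    lowered = url.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

# Invented URLs: one intended target, three support resources.
for url in [
    "https://example-adult-site.test/sex-videos",
    "https://www.plannedparenthood.org/learn/teens/sex",
    "https://www.thetrevorproject.org/resources/category/lgbtq",
    "https://988lifeline.org/topics/suicide-prevention",
]:
    print(f"{'BLOCKED' if is_blocked(url) else 'allowed':<8} {url}")
```

(Real commercial filters use category databases rather than raw substring matches, but the failure mode, sweeping in health and crisis resources, is the one the article documents.)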

Ben Whitelaw:

Yeah,

Mike Masnick:

And I think we're learning a lesson here, which is that this can go wrong. It can go very badly, in that in the effort to protect children from potentially harmful information, you often end up blocking very, very useful and very helpful information. And it's often in the case of controversial subjects. I've done some research and published some things in the past on, for example, the difficulty of blocking eating disorder information, where it is actually much trickier than people think. The people writing these laws are concerned about eating disorders, which is a very real and very serious problem that impacts many teenagers, and certainly many teenage girls. But it turns out that blocking that information is much more difficult than people think, often because people use code words. Even if you try to block that information, kids are going to figure out other ways of talking about it, or, as some studies have shown, they'll go into deeper and darker parts of the web to find that information and have those discussions. Whereas when it was on more mainstream platforms, you had Instagram and even TikTok trying to present useful resources, or other people would show up and try to guide people towards recovery resources, or say, oh, I experienced this, and this is how I was able to recover. But when it goes to darker places, that sort of information and those sorts of resources aren't there.

Ben Whitelaw:

Yeah,

Mike Masnick:

I see this article as a kind of warning sign for everyone pushing this idea that we need to force the companies to block any kind of harmful information. This is an example of how that plays out, where it may actually cause a lot more harm.

Ben Whitelaw:

Yeah, I mean, it is a really interesting story, Mike. I had a couple of thoughts on this, one of which is that this feels like something that has been happening for a long time but has been hidden from view. Right? Like, I remember in middle school, in the early two thousands, logging on and having this same issue, where a school IT system has implemented a myriad of tools, with a series of filters probably involved, that prevent people from going to certain domains. Most of those, I have to say, were like miniclip.com and other time-wasting game sites that me and my friends used to go on. So, you know, that did affect my school scores, unfortunately. But the fact is that this has happened for a long time, and obviously at that point as well there would have been people trying to get onto resources which were helpful to them. So to what extent is this a new thing, and to what extent is it something that The Markup are bringing to light through the reporting that they've done?

Mike Masnick:

Yeah, I don't think it's necessarily a new thing. I mean, I think you're right: this has always been an issue. This was the issue that was raised when these laws were passed; as I said, CIPA dates back to 2000. And there have been other concerns, and other laws that were tossed out on this very basis, that filtering software was not very effective. So I don't think it's new, but it's also one of these things that just hasn't been checked in on in a while. And there is also this sort of assumption that I think has made the rounds, especially among the political class, that somewhere in the last 20 years the filters that maybe two decades ago didn't work very well suddenly got better. And I'm sure they have gotten better, but the point is they're still really problematic; they're still going to end up overblocking things and not doing a very good job. And that should be a real concern, because one of the arguments, and this is a US-specific argument regarding some of the legal standards here, is that mandatory filters were rejected by courts in other contexts because of their lack of quality, and that has led some people to argue that that particular legal ruling could be revisited because now the filters work great. And I think what The Markup is showing is that that's not true. And there is a separate question, which is that some of the people implementing these things might not think these are mistakes. Maybe they want to block things like the Trevor Project and Planned Parenthood because they don't want kids getting any information on abortion, or having resources for LGBTQ students. And that's a bigger concern, but that's not a technological issue; that is a societal one.

Ben Whitelaw:

Yeah. I mean, we should also recognize that if schools exercise no care in how people use the internet within the confines of school, there are risks that kids might stumble across content that they shouldn't. And, you know, kids are inquisitive, and they will always try to find things that they shouldn't and share things that they shouldn't. So it's important that children are protected while they work at school in the education system. But yeah, I was reading the piece and kind of envisaging these heads of IT at these schools who are trying to figure out which system is good enough to keep kids from harm but also allow them to access these resources, and then figuring out how to configure it to do that. And I just thought, God, that's probably going to be a nightmare job for somebody who's, I don't know, not at the forefront of technology, and whose up-to-date knowledge of systems like this is probably not good enough. But yeah, it's really interesting.

Mike Masnick:

Yeah. And even if someone is up to date on this stuff, the natural incentive is to overblock, right? Because the concern is that you're going to get called out for allowing kids to access dangerous or harmful content in some form or another. So the natural thing is to just overblock. Now, there is some discussion in the article about how it is possible to set up a system where, if you come across a block, you can request to have it unblocked. But in a lot of cases students are unaware of that, and the administrators of the tools don't spend much time thinking about it. Whereas a good system might be one that very clearly says, hey, give me a reason why you want to see this content, and it'll be reviewed in a timely manner. But many of these systems don't really do that.
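(A sketch of the unblock-request flow Mike is describing: hypothetical, not any real product's design. The point is how little machinery a "give me a reason and it'll be reviewed in a timely manner" system needs: a form on the block page, a queue, and a decision.)

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UnblockRequest:
    """One student's appeal against a blocked URL."""
    student_id: str
    url: str
    reason: str
    submitted_at: datetime = field(default_factory=datetime.now)
    status: str = "pending"  # pending -> approved or denied

class ReviewQueue:
    """Holds appeals for an administrator to review in a timely manner."""
    def __init__(self) -> None:
        self.requests: list[UnblockRequest] = []

    def submit(self, student_id: str, url: str, reason: str) -> UnblockRequest:
        request = UnblockRequest(student_id, url, reason)
        self.requests.append(request)
        return request

    def pending(self) -> list[UnblockRequest]:
        return [r for r in self.requests if r.status == "pending"]

queue = ReviewQueue()
appeal = queue.submit(
    "student-42",
    "https://www.thetrevorproject.org",
    "Researching crisis-support organizations for a health class project",
)
appeal.status = "approved"  # an administrator reviews and unblocks
```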

Ben Whitelaw:

No. Okay. I mean, there's a lot to unpack there, and we'll share the article as ever in the show notes. I want to move us on, Mike. A teenager obviously faces risks at school, and we talked a bit about that with the Markup article, but they also face risks when they're out of school, when they're at home and using the internet more freely. And the piece I was reading this week, which again we both read, was a Bloomberg piece about how scammers are targeting teenage boys on social media. It basically completely changed my view of what sextortion looks like, and of the growing trend of teenagers, particularly boys, being scammed into sharing intimate images of themselves, which they then have to pay money to keep from being shared. And this is a really, really sad story of a young guy called Jordan DeMay, who was a real super-achiever at school, a football and basketball star, 17 years old, who suddenly gets a message on Instagram, I think it is, from the profile of a kind of cute girl, and they start talking. She spins him a lie that she's due to move into the area soon, and she asks him for a picture, an intimate picture with his face, which he shares, and suddenly he's asked for $300. Obviously he's super shocked, and Bloomberg actually published the messages that went back and forth with the scammer, in which the scammer demands to be sent thousands more dollars and threatens to send the pictures to his parents and to tell his girlfriend. And I was reading this story and thinking, God, this is awful. But, you know, this is part of a broader trend that I was really unaware of, in which more and more teenagers are being sextorted online. The Bloomberg piece has some stats from 2022 and 2023 which basically show this exponential graph: the numbers of reports of people being sextorted go up 10, 12, 14x over the course of about 18 months. And NCMEC, the National Center for Missing and Exploited Children, who get a lot of these reports, are saying that this is a huge, huge issue. The article quotes the FBI as well, who are saying that this is a really serious problem. What did you think about this piece when you read it, Mike? Did your heart sink the way mine did?

Mike Masnick:

Yeah, it's a horrifying piece. It is well worth reading. It is well reported, and obviously a deeply horrifying and emotional piece. The stories are just incredible, and your heart obviously goes out to the families who have dealt with this.

Ben Whitelaw:

Hmm.

Mike Masnick:

And in terms of the numbers of these cases, I've heard from various trust and safety folks that this is a massive, massive problem these days, and it really did come out of nowhere. And it is interesting, and the article notes this, that when most people think about kids and naked pictures or intimate pictures, the traditional way of thinking about it is, as you said, older men trying to groom young girls. In some ways this flips the script: it involves scammers who target teenage boys, setting up fake profiles and flirting until they can convince the boy to send a picture. The article also goes deep into how there are training materials out there, making the rounds, teaching scammers how to do this. And the one that struck me as incredible, and it plays out in the story of Jordan DeMay, was that basically the second they send an intimate picture, you change the script and immediately jump into blackmail. The playbook detail that they shared was, basically: the split second that you get a picture, you just turn around and say, I am going to destroy your life. Your life is ruined. Everything is awful.

Ben Whitelaw:

Yeah.

Mike Masnick:

And that is exactly what happened. In fact, the article notes that the texts that were sent to Jordan came straight from one of the playbooks making the rounds. It wasn't even that the scammers involved were making this up as they went along; they were just cutting and pasting from a resource that told them how to do this and how to scam people. So it's really eye-opening. It's a really, really interesting story, and it makes the case for how horrifying and how serious this is. And at the same time, in the case of Jordan, who otherwise seemed like a very happy, popular, successful high school student, he was prom king or whatever the equivalent is, it shows how just a moment of weakness like this eventually led to him taking his own life, which is obviously horrifying.

Ben Whitelaw:

Yeah. It's a really awful case. And, you know, Jordan took his own life, and that is awful, but it doesn't need to result in suicide for this to be a terrible, terrible thing, right? This is a potentially life-changing issue, not just for teenagers but for anybody who has this done to them. And it was shocking to me, again, this idea of scams as a service. You know, the new SaaS product is that you can buy a guide to doing this and quite quickly rack up thousands of pounds or dollars, and the consequences are massive. It's also a really timely piece, I think, in many respects, because as you noted, Mike, Meta have also just launched some new features and products to help protect young people from sextortion. Right?

Mike Masnick:

Yeah, and the timing was actually a little bit interesting, and possibly curious: Meta released some new tools a few days before this article dropped, and I imagine they were aware that it was coming. I'll get to the tools in a second, but I did want to talk a little more about the article first. I did have a bit of a problem with it in one sense. I do think it is really worth reading, because of the detail and the story that it tells. But I found it a little odd that the article, in talking about all this, tells two different stories of teenage boys who both end up taking their own lives, which again is absolutely horrifying, both of whom had easy access to guns, and the story sort of brushes over that. It doesn't dwell on the fact that they had access to, I think in both cases, their parents' guns, and used them to take their own lives. Instead it sort of suggests that the real problem here was Meta and Section 230. And I think that is just fundamentally wrong. If you didn't have Section 230, I don't see how these stories turn out any differently. In particular, I don't see what it is that they were expecting Meta to be able to do to jump into this conversation. Do they want them to be surveilling everyone's conversations? Because that is also horrifying in its own way. But at the same time, as we noted, Meta just released a bunch of new tools to try to prevent sextortion-type scams: trying to make it harder for people who don't really know each other to communicate, especially with teenagers; warning people if it senses that you're sending an intimate image; in some cases blurring out images that are likely to have nudity in them; a bunch of things to recognize that this is an issue and to try to prevent it. And one of the arguments I'll make is that things like Section 230 are part of what allows Meta to experiment in this way, to see that this is a real and serious problem and to work on tools to address it. They can do that in part, not entirely, but in part, because Section 230 allows them to take these actions without fear of liability if those tools make a mistake or something like that. Without Section 230, it actually becomes harder to fight these things. And that's why I felt that part of the article was really misleading, in that it really does try to blame Meta. And I understand that the families, who are totally understandably devastated and upset and are trying to find someone to blame, have pointed a lot of their anger towards Meta,

Ben Whitelaw:

yeah,

Mike Masnick:

but I don't see how that necessarily helps.
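(To make one of the tools described above concrete: a rough sketch of the blur-before-display idea, assuming a nudity classifier exists. This is a generic illustration, not Meta's actual implementation; the classifier here is a stub, and the threshold is invented.)

```python
from PIL import Image, ImageFilter

NUDITY_THRESHOLD = 0.8  # invented confidence cutoff

def nudity_score(image: Image.Image) -> float:
    # Stub: a real system would run an ML classifier here.
    return 0.93  # pretend the model is confident this image contains nudity

def prepare_for_display(image: Image.Image) -> tuple[Image.Image, bool]:
    """Blur likely-nude images so the recipient has to opt in to view them."""
    if nudity_score(image) >= NUDITY_THRESHOLD:
        return image.filter(ImageFilter.GaussianBlur(radius=40)), True
    return image, False

incoming = Image.new("RGB", (640, 480))  # placeholder image for the sketch
displayed, was_blurred = prepare_for_display(incoming)
print("blurred before display" if was_blurred else "shown as-is")
```

(A check like this can run on the user's device rather than on a server, which matters given the surveillance worry Mike raises above.)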

Ben Whitelaw:

No. I mean, we know, and this is the same in other countries as well where regulation exists, and it's somewhat controversial, that people will try to turn these real-life events and these heartbreaking stories into reasons to increase regulation or to deregulate. There's always somebody who's going to claim that it's as easy as just changing what exists or adding a few more lines to the existing regulation. It's a lot more complex than that, as we know. And there's actually a really good piece from New America this week, which I included in this week's Everything in Moderation, that says we need to shift the focus away from changing Section 230 if we're going to be able to combat online harms in a meaningful way. So again, we'll...

Mike Masnick:

And the other thing that I think is worth noting, and was really interesting and honestly kind of surprising to me: in the case of Jordan DeMay, the scammers were in Nigeria. And as the article notes, the FBI was actually able to track down who they were, have them arrested, have them extradited to Michigan, and have them plead guilty to crimes. They are facing a minimum of 15 years in jail, and possibly up to 30, for the scam they ran. Which gets to the larger point, which is that this is really a law enforcement issue. This is crime. These are crimes that are happening, and in our society the way we fight crime is through law enforcement. The argument that Meta should have magically been able to prevent this is kind of a request for having large technology companies act as law enforcement, and that is horrifying in a whole other sense. We don't want them to be law enforcement. We want people who are designated as law enforcement to act in that role. And here law enforcement appeared to be successful. Obviously the harm was done, the crime was done, and it was horrific, but law enforcement was able to track them down and bring those individuals to justice. That is a big deal, because oftentimes we hear about scams on the internet that are run from overseas and there's no recourse, whereas here there actually was recourse, and they were able to do something about it.

Ben Whitelaw:

Yeah, I mean, the one thing to note, Mike, on that point of law enforcement is that the scale of this stuff is massive, you know? There's a paragraph in the piece that says that Snap did a survey of 6,000 young users in six countries and found that half of them had been targeted in online sextortion. So it's impossible for the judicial system, for law enforcement, to actually do that level of work across the board. Apart from the tools that Meta have released, what kinds of ways do you think this particular harm can be mitigated for this particular group of people?

Mike Masnick:

I mean, I think the biggest thing has to be education, right? You have to have kids well aware that this is a real threat, and able to recognize when it might be happening to them, because that is the only way you're going to prevent it. Kids are going to make mistakes. Kids always make mistakes; that is natural and to be expected. But the more you can educate them on what to be aware of, so they recognize when they may be going into these situations and are cautious about it, the more they have that in their heads, and that really is going to be a huge part of it. The second part, which again gets to kids' education, is having kids understand that there are people they can talk to if they get into this kind of situation. In the cases where kids end up taking their own lives, as tragic as it is, it is often because they feel they have no other way out and no one to talk to. They're afraid of admitting that they made a mistake, and they did make a mistake. So teach kids that mistakes are going to happen. You are going to make mistakes; that is a normal, healthy part of being a teenager and learning as you grow up. Adults make mistakes too. Everybody makes mistakes. But recognizing that it is not the end of everything, hoping that they have someone somewhere they can turn to, someone they can talk to and admit that they made a mistake, that they're in a horrifying situation, and that they need help: teaching kids that there is an alternative path, I think, is hugely, hugely important. And again, it comes down to education.

Ben Whitelaw:

Yeah. I would also note, you know, Instagram has a suite of parental supervision tools, and I would personally like to see those tools continue to be developed and continue to be promoted as possible ways of helping kids as they transition from being under the age where they should use these platforms to being of the age where they can, with parents supporting that process as well. Because, you know, the reason these parental controls and systems exist is that they do provide a means of overseeing behavior in that period. And Snap have done a lot of really good work on this; Instagram were pretty late onto it, and in 2022 released a similar suite of tools. And I wonder to what extent Jordan, and other cases like him, could have been helped by some better supervision, to kind of say: okay, you're getting a series of messages; in this case, do you think they're real? This seems too good to be true. Let's talk about it. Which I think is to your point.

Mike Masnick:

Yeah, I mean, I think those tools are good and useful. I do think there are always going to be ways to get around them, and there are always concerns about how much parents should be surveilling their kids, where the privacy elements sit, and how you balance those things. A lot of those are individual choices, so having more tools that allow people to decide, I think, is important. But even in the case of Jordan, as it's outlined in this story, it's unlikely that parents would have been alerted within the timeframe in which all of this went down. So, you know, it's a combination of all these things.

Ben Whitelaw:

Yeah. Okay. So let's leave that there. Let's park those two stories and move on now to our quick wins, the other stories we've been interested in this week. You have one from the ABC in Australia about the recent Bondi Junction stabbings, Mike. Talk us through that.

Mike Masnick:

Yeah, this was a really, really fascinating article. Basically, it gave you a tick-tock, not as in the app, but as in a direct timeline, of how disinformation spread. There was this stabbing attack in Bondi Junction, and there were reports online blaming a few different people, and one of them caught on and went viral: the wrong person was identified, and it spread very, very widely. And this article, which again we'll have in the show notes, is absolutely fascinating in that they actually tracked, from the time of the stabbing, who said what and how the false information got picked up and spread. And you see this pattern, which has been detailed in other areas in the past. In particular, it reminded me of the book from a few years ago by Yochai Benkler and a few others called Network Propaganda. That book looked at how disinformation spread in the 2016 US election, where stuff might pop up online but didn't really go viral until Fox News picked it up and made the story big. You see that same pattern here: you have this online troll, effectively, who keeps trying to push this story that it's someone else, making up who was involved. The troll actually pushed a few different stories of who it might be and eventually picked on this person, Ben Cohen, because somebody else brought up his name. It was some random account that had very few followers, and then another small account with very few followers started pushing it, and then this larger troll account picked up on it. And where it finally spread really widely was after Seven News, which is a massive news organization in Australia, picked up the story and ran with it, despite the fact that it was just false. Then it spread everywhere, and the guy in question, Benjamin Cohen, basically had his life turned into a living hell for a few days as everyone reported that he was the stabber. It was a huge mess. But the article is really minute by minute, this timeline starting from when the stabbings happened on Saturday afternoon to false information spreading widely through the evening and overnight. It's absolutely worth reading. If you're interested in disinformation and how it spreads, it is one of the clearest, most detailed explanations I've seen.

Ben Whitelaw:

Yeah. And it refers a lot to the work of Marc Owen Jones, who does some really, really good work; he's a disinformation researcher, right? And he talks through how this stuff spreads. I thought it was really interesting because when we talk about dis- and misinformation, we hear a lot about Russian trolls and the ability of states outside a country to influence the conversation in big news events like this. There was a Russian troll involved here, but he was actually a real Australian guy. This blew my mind. He was hiding inside the Russian consulate in Sydney, tweeting this stuff, with his face online. His name is Simeon Boykov, and he goes by the nickname Aussie Cossack. So this is like a new kind of Russian troll, one that is looking you directly in the face and, as you say, spewing out conspiracy theories left, right, and center. He actually calls himself an independent journalist, which is horrifying if you try to think about how you would regulate for somebody like Simeon, how you would ensure that his content is not classed in the same way as that of reputable journalistic institutions. I don't even want to get into that, but yeah, that was a wild part of the story for me.

Mike Masnick:

Yeah. And, you know, we do see that in other contexts too. Again, the bad information did come from these small accounts that had very few followers originally, and there is some implication that they might be the traditional Russian trolls. But we've seen this with disinformation in other contexts as well. Certainly in the US there were stories of how some of this information would start from small, probably Russian-controlled troll accounts with little following, but it would get picked up by American trolls, grifters, "independent journalists", which I would put in quotation marks, who take that information because they think that finding information from some random small account on Twitter, or X, I guess, is journalistic effort, that they're discovering something new. So then they turn it into something, and it's sort of a way of effectively laundering foreign influence peddling through a gullible, willing local. That seems to happen pretty frequently, and it appears to be the case here too.

Ben Whitelaw:

Yeah, no, agreed, it was an interesting story. Thanks for bringing that to us. We are running a little bit behind time, in the sense that we're not going at our usual relatively fast pace, but we don't have a bonus chat today, Mike, so we have a few more minutes to whiz through these other stories. My best-of-the-rest story was a piece in Time, which looks at the development of a tool that you'll know, and many of our listeners will know, called Perspective API. It's built by Jigsaw, a unit within Google that tries to create tools and technology to help foster healthier online debate. Perspective API is a piece of technology used by lots of other companies, around a thousand, the piece notes, including The New York Times, as a way of helping raise the level of debate in a comment section, among other things. The new development, some work they've been doing on it, basically adds new classifiers to the API, which means you can better discern attributes within the text itself. So Jigsaw have looked for indicators of higher-quality comments and online posts, and they're basically able to detect things like nuance, reasoning, evidence, and where people have used personal stories: essentially, things that make online contributions more valuable. And the theory here is that by having those attributes available within the API, people who use the technology can start to rank posts in different ways. So rather than ranking on crude metrics like engagement, or moderating based on certain words or tone, you can start to elevate posts that do a better job of adhering to community standards and contributing to discussion in positive ways. As somebody who worked in the comment section of a newspaper for a long time and had a team of moderators, this is a really nice story in many respects. And I think we can talk about whether it's as good as it sounds, because I know you have some views. But the idea that somebody can leave an interesting personal story underneath an article or in response to a post, and that you can respond to that differently than to other posts, is really something. That took a long time when me and my team were sifting through contributions, and this makes it a lot easier.
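(For the technically curious, here is roughly what using these classifiers to rank comments could look like. The endpoint and request shape are Perspective API's standard comments:analyze call; the experimental attribute names below are assumptions based on Jigsaw's announced quality classifiers and may differ from the current documentation, so check the docs before relying on them.)

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

# Assumed experimental attribute names for the new quality classifiers;
# verify the exact identifiers against Perspective's documentation.
ATTRIBUTES = ["REASONING_EXPERIMENTAL", "PERSONAL_STORY_EXPERIMENTAL",
              "NUANCE_EXPERIMENTAL"]

def quality_score(comment_text: str) -> float:
    """Average the attribute probabilities into one crude ranking signal."""
    body = {
        "comment": {"text": comment_text},
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }
    scores = requests.post(URL, json=body).json()["attributeScores"]
    values = [scores[a]["summaryScore"]["value"] for a in ATTRIBUTES]
    return sum(values) / len(values)

comments = [
    "This policy failed in my district; here's what we saw over two years...",
    "lol terrible take",
]
# Rank by estimated contribution quality instead of raw engagement.
for text in sorted(comments, key=quality_score, reverse=True):
    print(text)
```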

Mike Masnick:

Oh, it's great. I mean, I'm really happy to see this. I think it's a useful contribution that begins to change some of the debate on these things, and the fact that they're making it openly available for others to build on, as an API, is wonderful. I'm excited to see how it's used and implemented. The approach is interesting because most of the focus of these kinds of tools, including the earlier tools from Jigsaw, was really on finding the bad content: what are the flags of problematic or dangerous content, so you can flag it or limit its spread? Whereas now they're talking about finding the good content and raising it up, and I think that is a really valuable contribution. I do have some concerns about the quality, about how you handle mistakes, and about whether people start to figure out how to game this kind of system. Can you fake sincerity? Can you fake a personal story? Yeah, absolutely, and AI makes that easier as well, so it becomes a sort of AI-versus-AI battle to some extent. But I do think it is a really cool tool, and I think we're going to start to see more of this. I'm aware of at least one other set of folks working on a similar kind of tool, and I expect we're going to see a lot more of these things in the relatively near future. It'll be great to see them in action and how well they really work. If we can come up with better, more sophisticated, more nuanced tools to judge the quality of content, both good and bad, there's some real value there. And, to be honest, I'm working on a story about how I've been playing around with an AI tool to edit my own writing, including giving me feedback on questions like: have you made the strongest argument possible? That's one of the things I have the AI look at in my articles, and it's kind of incredible the level of nuance with which it's able to look at written content and make a judgment call. It's not always right, it's not always good, but it is interesting how good the tools are getting.

Ben Whitelaw:

Yeah, no, it's a nice development, and hopefully we'll see some case studies from the people that use Perspective API on how they're doing this. Reddit is one of them, the article notes, so let's see how that pans out. Okay, let's choose one more, Mike. We were going to do two more, but I think let's do one more, and let's talk about the Mexican election and the efforts the election agency in the country is making to combat mis- and disinformation, because this was a story that came across your inbox.

Mike Masnick:

Yeah, I thought this was really interesting. Basically, Mexico has an election coming up, and they are using a tool for fact-checking. It's actually the Mexican election agency, and they've partnered with this organization, Meedan, who I assume a bunch of our listeners will be aware of. They're basically trying to deal with the misinformation threat by having Meedan and various news and media organizations effectively do fact-checking to try to dispel misinformation.

Ben Whitelaw:

You're not a big fan of this, right? This is dangerous...

Mike Masnick:

Yeah, I'm sort of mixed on it, right? There is this element of: okay, here is an attempt to do something about misinformation, which is a real issue. But I worry when it's the government doing it. I think there are ways governments can be helpful in these situations, but as soon as you have the government saying, hey, we've hired this company to determine what is and what is not misinformation, you run risks. Determining what is and is not misinformation involves a lot of subjectivity, and there are questions like: if the misinformation is favorable to the party that hired the fact-checkers, how is that dealt with? So there's an inherent conflict of interest built into it, and I worry a little bit about how that plays out, and whether the incentive structure works well, on purpose or not, when it is the government itself doing it. That said, there are situations where you do want the government responding to misinformation. If they come across something that is false, they should put out corresponding information that says this is wrong. If somebody says the polling places are in the wrong spot, somebody should respond to that. So there is a role for government in responding to false information. It's just: how is this tool that is designed to fact-check actually implemented and used?

Ben Whitelaw:

Ah, two thoughts here, I think. This has worked pretty well in Brazil, where Meedan partnered with the federal election body for the 2022 election and used this Check software, in partnership with some media and fact-checking organizations, to essentially canvass what citizens wanted to know about candidates and parties, and to try to preempt that. They got 347,000 contributions from users, so there's a demand there. And actually, incidentally, I'm on a panel tomorrow at this conference with one of the fact-checking organizations that did this with Meedan, so I will ask how it works. My guess is that there are lines in place for the media organizations, for Meedan, and for the government's involvement; no credible news organization would go into a partnership like this giving up its responsibility to adhere to strict processes on this kind of stuff. But I will check and feed back. And I think this is governments basically stepping forward and saying, we know this is going to happen, so why don't we coordinate some of it? And I'm in favor of that, as long as it can be done in the right way.

Mike Masnick:

Yeah. And honestly, from the details that I've seen, it is set up in a very smart and thoughtful way to deal with that. I just want to be conscious of the potential for problematic incentives. I'm not saying this is a problematic one, and as you noted, it's been very successful in the Brazilian context. I think there is a way to do it right, and it looks like this is being set up in a smart and thoughtful way, where it's not the government putting its fingers on the scale. I just always want to be careful when the government gets involved in anything related to fact-checking.

Ben Whitelaw:

Yeah. No, I think that's fair, and listeners will be the first to hear about that on this podcast, no doubt. We will keep our ear to the ground. So that's an interesting story, and a good point to round up today's podcast, Mike. Anything else you wanted to note this week? Any other reviews you want to bring us? Any other one-stars you want to flag to me?

Mike Masnick:

No, we just have that one one-star, and we have a bunch of other very nice reviews. So if you would like to add another one... I'll pull up one more real quick here. We got one last week that said: always interesting topics that are very relevant. And so, you know, we are out here trying as much as possible, and I think we will continue to do that. As you noted, Ben, we don't have a bonus chat this week, but we do have more planned for the future, so stick around for that. And we're working on getting some more guests as well. So we have lots of good stuff coming up.

Ben Whitelaw:

Thanks to my mom for sending in that review last week.

Mike Masnick:

Ha ha.

Ben Whitelaw:

She really does sort me out. Mike, thank you very much for this week. Great to have you back. I hope you'll be in the chair next week; please don't have a crisis of confidence. And thank you to all the listeners for joining us this week. I'm off to eat more tiramisu.

Mike Masnick:

Excellent. Thanks, Ben.

Ben Whitelaw:

Take care. Bye.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.