Ctrl-Alt-Speech

Do Not Leave a Fake Review for this Podcast

August 16, 2024 · Mike Masnick & Ben Whitelaw · Season 1, Episode 24

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

- Thierry Breton's open letter to Elon Musk ahead of his X interview with Donald Trump, and the backlash from within the EU Commission
- London mayor Sadiq Khan's criticism of the UK's Online Safety Act
- A Washington Post piece testing how accurate deepfake detection tools really are
- The Harris campaign's Google search ads that rewrite news outlets' headlines
- Front Porch Forum, the Vermont community network that moderates every post before it goes live
- The FTC's unanimous vote to ban fake online reviews
- Del Harvey's newly published TrustCon presentation on trust and safety as healthcare intervention

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

I've always been a bit of a lurker on Quora, Mike. I've never contributed my expertise, you won't be surprised to know, but maybe the reason I've never got to posting is because the prompt isn't really speaking to me. So I'm going to test it out on you. Okay. The Quora prompt is: what do you want to ask or share? So ask or share away.

Mike Masnick:

I want to ask if we are contractually required to talk about Elon Musk every week.

Ben Whitelaw:

I don't know how much he's putting into your bank account, or the terms of that, but it feels like we're going to have to do it again, I'm afraid.

Mike Masnick:

Unfortunately. Do you have anything you would like to ask or share?

Ben Whitelaw:

Well, I'd like to share that Everything in Moderation, the newsletter that I've been running on the side for a while now, has just had its sixth birthday. So I'm not in the Mike Masnick, Techdirt, you know, 20-plus years territory of running something, but hey, it's something.

Mike Masnick:

Now that we're in August, and August is the anniversary, this would now be, I think, and I'm going to try not to do the math wrong on the fly, 27 years of doing Techdirt, as of perhaps this week or next week. It's around now.

Ben Whitelaw:

We're content moderation, online speech side project twins.

Mike Masnick:

There we go.

Ben Whitelaw:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you, as ever, with financial support from the Future of Online Trust & Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of the six-year-old Everything in Moderation newsletter, and I'm back with the much longer-in-the-tooth Techdirt founder. I can't stop explaining how old Techdirt is. I've just got to stop.

Mike Masnick:

How old I am, you mean. Just before we started this podcast, we were actually talking about birthdays and age, and how it's a state of mind. And I am still trying to feel young, but you are consistently pushing me to recognize my age, my old manhood.

Ben Whitelaw:

It's only because I'm impressed, believe me. It's no other reason. Hey Mike, we've got a sparkling new sponsorship page on ctrlaltspeech.com, which I think is worth celebrating. It's not quite birthday-ready, but we've got a brand new page where you can go and listen to examples of sponsored chats that we've done with really interesting folks within the trust and safety industry, having the kind of topical and timely conversations that we've been able to work with them on. I think it's a great addition. It's got some of our best hits so far, but we're hoping to have many more in the future, aren't we?

Mike Masnick:

Yeah, absolutely. And for folks who are listening, you know, we try to have sponsorships work in a way that is actually interesting to our audience. This is really important to us. This is a very focused podcast for people who are interested in online speech and trust and safety, and we want to make sure that any sponsorship we do is interesting to those people. We are not selling mattresses. We are not doing whatever the new hot podcast advertising topic of the week is. We are looking for sponsors who actually have something interesting to say. You know, we did the recent sponsorship with Discord, which had people working on their open source trust and safety tools. We want these conversations to be interesting. We want people to listen to them. And we need more sponsors. So if you are listening to this and you happen to work for a company that is doing something interesting, maybe you've released a report, maybe you have some news or a new product that you think is interesting to the world of trust and safety and people who are interested in online speech, we're looking for you.

Ben Whitelaw:

Not in a creepy way.

Mike Masnick:

Not in a creepy way. But please check out the website. If you just click on the "sponsor the podcast" link, we have a new page with some more information, and we will be happy to talk to you. We'd be happy to have you sponsor the podcast and get you on for an interview, and it doesn't have to be you from your company directly, though it can be. And if you listen to some of the example sponsorships that we've done, hopefully you get a sense of how we like these things to work. So we'd love to hear from you.

Ben Whitelaw:

Yeah, ctrlaltspeech.com. We'll add the link in the show notes. Mike, let's get going with today's stories. We've got a whole smorgasbord of different takes to bring to our listeners. We're going to start off with the aforementioned M word. You know, we said in last week's podcast we wouldn't mention him. That was literally never going to happen, and it's taken less than seven days. I can't believe it. Musk has gone from pissing off UK politicians to going back to his old job of pissing off politicians in the EU.

Mike Masnick:

Yeah. So last week we talked about what was happening in the UK, and how violence had broken out, and how people were concerned about all of the different issues around what was happening there. This week, Elon Musk interviewed Donald Trump on Spaces, which is the live audio component of X, which has now famously hosted multiple failed, technical-difficulty-plagued interviews with Republican politicians. And leading up to that, there was a lot of hype around this interview between Elon and Donald Trump. A few hours before it happened, Thierry Breton, who is another person we probably talk about way too much, but who is the EU commissioner in charge of the internal market, which covers the whole digital portfolio, the DSA-related stuff, sent this open letter, which he posted to X. I don't know if it was posted anywhere else, and there is this question of, if you're really trying to limit X's power, why are you posting it directly to X instead of elsewhere as well? But he posted this open letter, which was weird in a few ways, I would say. It talked about the interview and effectively said: you have requirements under the DSA, and I am just reminding you of them, to not allow harmful content to spread, and then a bunch of other stuff that actually is in the DSA. The harmful content part is not really, technically, in the DSA, and goes beyond what most people believe the DSA says. And in fact, there's been a lot of discussion, because in the run-up to the DSA there were concerns, some of which I raised myself and others raised, about how this law could be used to target lawful content and push for the removal of lawful content. Lots of people put in a lot of effort to try and, they told us, limit that possibility, to write the DSA in a way that it was not meant for that, it was not meant for censorship. And yet here you have Breton going out and saying, you have to limit harmful content. And in this context it came off particularly poorly, because this is one of the candidates for president in the US giving an interview, and you have a European official come in and basically say, you have to be ready to not allow some of what he says to be spread online, because I believe it is harmful. It comes off badly. It certainly came off badly to lots of people in the US. It came off badly to Elon, who posted a meme from the movie Tropic Thunder, which was something to the effect of "take a step back and go fuck yourself in the face." Very diplomatic. But there is this element of, you know, it looks bad. The Trump campaign came out and said, hey, we don't want Europeans interfering in American elections, and it does kind of look like that. I mean, you can argue that Trump is a unique individual in all sorts of ways and creates unique challenges in all sorts of ways, but still, it felt really inappropriate. And the really interesting thing to me was that a lot of Europeans came out and said the same thing. There were, I will note, some knee-jerk reactions from Europeans who insisted that this was all above board and all good. I got yelled at for complaining about this letter, because I apparently do not understand that the EU does not have a First Amendment, which is the common complaint whenever I point out problems with speech in the EU.

But the interesting thing to me was that various officials in the EU, and the EU Commission itself, though none of them put their names on it, came out and said, yeah, this was bad, we shouldn't have done this. Breton did this on his own; he did not run it by anyone. The FT had an article quoting people saying, we had no idea that this was actually going to happen. Politico in the EU had a quote that was pretty stunning to me. Again anonymous, but it was an EU official saying: "The EU is not in the business of electoral interference. DSA implementation is too important to be misused by an attention-seeking politician in search of his next big job."

Ben Whitelaw:

I bet whoever wrote that story thought that was a really good get. I think it was Mark Scott, who's very good at that.

Mike Masnick:

Yeah. It is.

Ben Whitelaw:

it's a great quote.

Mike Masnick:

Yeah, it's a fantastic quote, and it didn't get nearly as much attention as I thought it would, because that is pretty, you know, that's pretty direct.

Ben Whitelaw:

Yeah. So what's behind it? I think that's the next question, right? If the Commission doesn't know this is going to happen, and it's a little bit outside of what the DSA's scope is, what do we think Breton's doing here, other than maybe placing himself in prime position for his next job, which is what some people have claimed?

Mike Masnick:

Yeah, you know, since he's been in this job, he has not shied away from, one, constantly making the story about himself, which has always struck me as more of an American-style political move. I've joked repeatedly that it is very difficult to find a social media post that Breton has ever posted that does not have a photo of himself with it. Whatever he's talking about happening in the EU, it always has a picture of himself with it. It's odd. I don't get it, but some people like to do that; he's a selfie kind of guy. But the bigger thing is that he really has, from the beginning, seemed totally unconcerned about those larger concerns about how the DSA might be used to impact speech.

Ben Whitelaw:

Hmm.

Mike Masnick:

There are lots of other European officials who I think have been very thoughtful and very careful, and have talked about how we have to be careful with the DSA to make sure that it is not a tool for suppressing speech, that it is really about process, and best practices, and those kinds of things. And then you have Breton sort of doing the Leeroy Jenkins meme: whatever your plans are, the best laid plans, just rush in, I'm doing it the way I want to do it. And he's acting as though he is the official determinant of what is good and bad content online. Part of that is making the story about himself. Part of it is that I'm sure he legitimately believes that Trump has a very high likelihood of saying something that could inspire mass activity, potentially violence. There is a history there, obviously, and he's legitimately concerned about it, and so this is what he thinks he's supposed to do. I think that's wrong, and I think it's counterproductive in all sorts of ways. And as the anonymous official quoted in the Politico story noted, it really takes away from, I think it actually harms, the EU's position and its sort of moral leadership on regulating internet companies, when that tool is being used in this manner by the person in charge of enforcing it.

Ben Whitelaw:

Yeah, it seems like a kind of odd move from him, particularly as no one else seemed to see it coming. It tells me about the importance, and the difficulty, of the DSA's implementation. I wondered to what extent Breton is using his letters to Musk, and the kind of theater that comes with posting them on X slash Twitter, as a kind of interim implementation, a means of basically softening him up for the day when X slash Twitter will have to really abide by these rules formally. The investigation is still pending, as we know, and there's this longstanding beef between the Commission and Twitter. I wonder if this is him trying to do his own kind of soft regulation via his social media account, which obviously is not a good thing, but tactically and politically, maybe that's part of it.

Mike Masnick:

Yeah. But I mean, I think there was a related story this week, one that you had called out, which is that your mayor, Sadiq...

Ben Whitelaw:

London's mayor, not just my own personal mayor.

Mike Masnick:

Yeah, but you live in London, so it's your mayor. I can say that. He called out the Online Safety Act as not being able to handle what is going on. And we've talked about the situation in the UK and the violence and everything that is happening there. But there is this element of: the UK passed the Online Safety Act, Europe has the DSA, Australia has its own, everybody's passing these internet laws, and we're sort of figuring out the implementation. And where I and other civil society folks have been concerned is how, even when officials say they're writing these things carefully, in ways designed not to be used for censorship, as soon as there's bad content or bad things happening, it becomes a really easy tool for politicians to do the "we have to do something, we have to stop this" move. And it's understandable. I'm not criticizing it as a reaction, because it's instinctual: bad things are happening and you want to stop them. And as a politician, the first thing you do is reach for the tools that you have. The reality is often a lot more complex, but it worries me that we're already seeing, both in the UK and in Europe, this idea that we need to crack down on speech, which is effectively the underlying theme in both of those cases.

Ben Whitelaw:

Yeah.

Mike Masnick:

And that is exactly what I've been worried about, what lots of people have been worried about, because that's kind of how this always happens. As soon as you start to regulate speech in some form or another, it becomes a really tempting opportunity, because bad things are happening.

Ben Whitelaw:

Yeah.

Mike Masnick:

But, you know, they're bad things in very complex ways. And the sort of simplistic tool of "you have to prevent this from happening" tends not to work very well, and tends to cause other problems that people are not considering.

Ben Whitelaw:

Yeah. And it's what makes speech regulation so interesting, right? In other forms of regulation, there aren't these moments, like the UK riots, or things like Capitol Hill, where all of a sudden online speech gets put in the headlights so directly. You have politicians weighing in, whether or not they know much about the trade-offs they're talking about, and they're potentially undoing years and years of work that parliamentarians and also regulators behind the scenes have done to create this very steady, methodical way of putting together the act. So yeah, it's what makes this fascinating: the political nature of speech generally, and how things like the UK riots, and the EU elections, and even just Donald Trump joining Spaces for a chat with a CEO, can all of a sudden create this interest around it.

Mike Masnick:

Yeah. You know, one of the things that I've been sort of talking about, and might write about at some point, is this theme I keep thinking about: politics ruins policy. It keeps coming up. Thoughtful policy is great, and we have politicians who create policy, and sometimes they try to create thoughtful policy. But the politics so frequently gets in the way, whether it's pandering to a base, or making a deal with other parties in order to get something through; the politics of the situation often messes up good policy.

Ben Whitelaw:

And I think, yeah, that will be an interesting piece to read, because every week we're touching on that same story to an extent. We should move on; we're cracking through the stories this week, and we ran a little long in the previous episode. So the story I want to bring to you, Mike, is an interesting Washington Post piece about the accuracy of deepfake detection tools. This is a really interesting story in that it really breaks down how AI detection tools work, and it taught me something I didn't know before about the different models that go into determining whether a video of Kamala Harris is real or not. It's one of those interesting scrolly stories that visually breaks down different aspects of a topic, and it goes through six different images that many of our listeners will know, including the Pope in his puffer jacket, and Taylor Swift with the "Trump won, Democrats cheated" flag. And it basically highlights that, of the eight image detection tools these photos were put through, none could reliably decide whether they were fake or not. There was some consensus on some of the photos, but no photo got an eight out of eight. Only one tool managed to identify the Taylor Swift picture as fake, which is fascinating. And I don't know if you remember the Olympics recently, there was that amazing photo of an Olympic surfer standing in the air; there were two that thought that was fake. So very quickly, this piece shows how difficult it is to really unpick whether a picture is fake or not. And it goes through exactly how the tools work by demonstrating it: looking at the face of the person in the picture or the video and whether it has been changed, these models will examine the lip movements and see whether they've been manipulated in some way, and whether the person is potentially not saying what they appear to be saying. It looks at the audio in various ways, including whether there are strange pauses. It even looks at the pixels, and whether there are any strange compositions of pixels. It just does a really nice job, I think, of demonstrating how these tools are meant to work, whilst also pointing out that actually they're not very accurate. And this is something we've known for a while; the story doesn't add anything in that regard. The accuracy is somewhere between 25 percent and 80 percent for these tools, the story notes. So there is this massive caveat when using these tools that I think is really important for readers and the public to know about: yes, you can try to detect whether a photo is fake or not, but you're not likely to get a very solid answer. So I really liked this piece for how it breaks down something that many of us are going through, trying to work out whether a picture or a video is real or not, and for the treatment of the story. More media outlets should be doing stories like this. I think it's a nice way of bringing to life some of the issues we talk about on the podcast.
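
To make that concrete, here's a minimal sketch, in Python, of what happens when you run one image through a panel of detectors and try to combine their verdicts. Everything in it is hypothetical, the detector names and scores included; it simply illustrates how eight tools can fail to produce a solid answer for a single image.

```python
# A minimal, illustrative sketch: combining "probability fake" scores
# from several hypothetical detectors into one rough verdict.
from statistics import mean

def aggregate_verdicts(scores: dict[str, float], threshold: float = 0.5) -> str:
    """Combine per-detector 'probability fake' scores into a rough verdict."""
    fake_votes = sum(1 for s in scores.values() if s >= threshold)
    real_votes = len(scores) - fake_votes
    if fake_votes == len(scores):
        return f"all {len(scores)} detectors say fake (mean {mean(scores.values()):.2f})"
    if real_votes == len(scores):
        return f"all {len(scores)} detectors say real (mean {mean(scores.values()):.2f})"
    return f"split decision: {fake_votes} fake vs {real_votes} real, no solid answer"

# Hypothetical scores for a single image from eight detectors:
scores = {
    "detector_a": 0.91, "detector_b": 0.35, "detector_c": 0.62, "detector_d": 0.48,
    "detector_e": 0.77, "detector_f": 0.12, "detector_g": 0.55, "detector_h": 0.41,
}
print(aggregate_verdicts(scores))  # -> split decision: 4 fake vs 4 real, no solid answer
```

With a near-even split like that, the honest output is "no solid answer," which is roughly the situation the Post found across its six test images.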

Mike Masnick:

I think it's really well done. It's really interesting. It does a great job of demonstrating directly the nature of both false positives and false negatives within these detection systems, and it's very educational in terms of, as you said, going through the details of what the detectors are doing. The thing that struck me as kind of interesting about it is that I'm always a little concerned about how quick people are to assume that these kinds of tools work much better than they do. So I think it's useful to highlight that they don't work all that well, though I don't think that's going to change anything; people will still take them as truth. And we've had this issue for a long time. Even going back before the current generative AI boom, there were people who had these various tools that tried to determine who was a bot on Twitter. I remember those, and they were awful. I mean, they were absolutely terrible, and would judge all sorts of very real people as if they were bots. And it just became this whole thing, you know.

Ben Whitelaw:

Did you ever get classified as a bot?

Mike Masnick:

I did. On one of them, I was called a bot. But you know, all of these things are so subjective. And so my fear is that when you have a tool like this, it implies, or people just mentally take it, as if it's definitive in one

Ben Whitelaw:

Yeah.

Mike Masnick:

form or another. Especially if some of them score stuff, right? They give a percentage, and human brains are just not really good at recognizing what percentages mean. We have this sort of natural inclination: if it says 55 percent likely to be a bot, you're like, yeah, it's a bot,

Ben Whitelaw:

yeah,

Mike Masnick:

like our brains round up.

Ben Whitelaw:

yeah. Right.

Mike Masnick:

And so I'm a little concerned that people are going to take these things too seriously. We've already seen, for example, some regulatory proposals here in California that require companies to release detection systems that will detect their AI, which is impossible; even the companies themselves that are making the AI tools can't reliably create a detector. OpenAI released their own detector at one point, and I think pulled it for a while, and I'm not sure if it's back. But even when they released their own detector, they were very clear that it made a lot of mistakes. It said that their classifier correctly identified 26 percent of AI-written texts. So the true positives were 26 percent, which is not great.
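
As a rough illustration of why a number like that 26 percent is worse than it sounds, here's a small back-of-the-envelope sketch applying Bayes' rule. The 26 percent true positive rate comes from the episode; the false positive rate and the share of AI-written text below are assumptions invented for the example.

```python
# Back-of-the-envelope: what fraction of flagged texts are actually AI?
# tpr is from the episode; fpr and prevalence are assumed values.

def precision(tpr: float, fpr: float, prevalence: float) -> float:
    """P(actually AI | flagged as AI), via Bayes' rule."""
    flagged_ai = tpr * prevalence            # AI texts correctly flagged
    flagged_human = fpr * (1 - prevalence)   # human texts wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# Suppose 10% of the texts you check are AI-written, and the detector
# wrongly flags 9% of human-written text:
p = precision(tpr=0.26, fpr=0.09, prevalence=0.10)
print(f"{p:.0%} of flagged texts would actually be AI")  # -> 24%
```

Under those assumptions, roughly three-quarters of the texts the detector flags would actually be human-written, which is exactly the kind of nuance a single headline score hides.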

Ben Whitelaw:

Like any kind of large language model, it's only as good as the data that you feed it, right? So my sense from this story is that because the Pope in the puffer jacket, which is one of the photos, has been around for a while and has been used as almost the archetypal example of deepfakes, that's why so many of the tools got it right. Whereas with the photos that have had less exposure, the tools tended to be wrong, and I think that's something that comes through in the story. If the data isn't there, and some of these are relatively small companies producing these detection tools, without the scale and the ability to create large data sets to feed into the models, then naturally they're not going to be that accurate. And should there be, basically, a kind of accuracy number for these models that we, I don't know, flag when users are using these tools? Or, again, maybe you're right that you could do that and expect people to still just use them anyway. But I wonder how much it's compounding an already bad situation.

Mike Masnick:

It is tricky. I mean, the other element in all of this is that the underlying issue behind so much of it is whether or not these deepfakes matter, right? There's obviously a lot of concern and discussion about deepfakes and what impact they will have, especially within the election context. And as we've discussed before on the podcast, a lot of these are not having an impact. People recognize them; almost every deepfake is being called out as a deepfake. There aren't too many stories of deepfakes having an impact. And in fact, again, as I think we've discussed in the past, the one thing that seems to be happening much more is people claiming real photos are fake. We just had an example of that this week, with Donald Trump claiming that a crowd at a Kamala Harris rally was faked when it wasn't. And so there was all sorts of back and forth on that, where it's sort of enabling people to claim true things are false, rather than tricking people into believing false things are true. And so there is this question of: obviously, having detector tools, sure, that's great. But how much is that really needed, when it seems that the true detectors are human beings looking at this stuff and saying, no, I was there, that really did happen, or no, I wasn't there, that didn't happen, rather than a technology solution? So there's a little bit of technology solutionism jumping in where it might not be necessary.

Ben Whitelaw:

Fear not, because in the UK at least, a story came out this week that the new Labour government is planning to introduce lessons in which kids will be taught how to spot misinformation and extremist content, across all kinds of subjects, so in computer science and in English, and they're going to have different lessons introduced to do exactly that, I think. And this is something that you actually called for in last week's Ctrl-Alt-Speech. You wanted media literacy to be higher up on the agenda, and it sounds like the UK government has been listening and has done exactly as you asked, in the space of a few days.

Mike Masnick:

All because of us, absolutely. Yeah. No, I mean, some people will make fun of the idea. Media literacy is often the answer that comes up, but it's like the one thing that has been shown to be effective. Of course, that also means it is now becoming a target on the political side of things: when we've seen some of these media literacy programs pushed, we're seeing the people who want to get away with pushing disinformation claim that the media literacy campaigns are actually propaganda campaigns. But yes, I think it's a good thing. Teaching people how to determine true things from false things, propaganda from reality, all of this stuff, I think, is really important. And it has to be the people who figure it out; we can't rely on anything but human brains to actually understand this stuff.

Ben Whitelaw:

Yeah, for sure. And on that topic of telling true from fake, that leads us neatly onto our next story. Google results are an area where most folk would probably say they trust the answers and the information that are there. But you found an interesting story that says that maybe that shouldn't always be the case.

Mike Masnick:

Yeah, so this was an Axios story that I thought was really interesting, also in sort of raising some of the different issues that come up in trust and safety and content moderation questions, which is that the Kamala Harris campaign has been buying Google ads, the search ads. So when you do a Google search, the ads show up, and they're marked in a way that says that they're ads,

Ben Whitelaw:

Mm

Mike Masnick:

And what you normally think of when you see ads is that the ad links to a campaign page, or a product page, or a service page; it goes to that page. What the Harris campaign has been doing is buying ads that link to news stories. And then, because you get to write the text of the ad, they're effectively changing the headlines of those stories to be more favorable to them. So they're not the actual headlines of the articles; they're framing the articles in a way that is positive to the Harris-Walz campaign. And that's interesting. It's this sort of interesting affordance that Google Ads gives you: you can buy an ad that links to something that is not your own, and you get to put in the text of the ad. And so some people are complaining that this is misleading, because if they're linking to a Guardian article and they're putting a headline on it that is very favorable to the Harris campaign, it implies sort of an endorsement from the Guardian, in that kind of framing.

Ben Whitelaw:

So the article itself, would that have a different headline to the one that users see on Google?

Mike Masnick:

So basically, you would see the ad. They had examples, like "VP Harris's economic vision: lower costs and higher wages," right? That linked to an AP article. The actual headline was not that; I don't know exactly what it was, but this is presenting the story in a favorable light, which is important. And one of the things that I certainly know from writing Techdirt is that how you frame a headline often has a major impact, a surprisingly large impact, on how people read the article itself. The influence of a headline can be quite stunning.

Ben Whitelaw:

Yeah, no, for sure. I mean, the Axios story points this out, but Facebook in 2017 stopped page owners being able to change the image and the headline text when posting to their followers. I was running a few different pages at the time, and it radically altered our ability to drive people to the story, because we then had to go back and change the article headline, which wasn't designed for social and wasn't as catchy, as eye-catching, or clickbaity, as you might say.

Mike Masnick:

I was about to accuse you of clickbait, so I'm glad you admit it.

Ben Whitelaw:

Yeah, that's one of the worst things I've done, I'll be honest. So it's funny that this still exists, you know, six or seven years after Facebook decided to make that change. It still exists in the form of advertiser tools, and the publishers don't know anything about it. The story went to each of the publishers that Harris's campaign featured; they had no clue that their articles were being used. They had no idea that the headlines were being changed.

Mike Masnick:

Yeah, and some of them are upset about it. But, you know, there is this element of: is it that bad? And Google's response to all of this is, this is within our policies; we allow them to do it. And there is this element of, if someone has written an article about your campaign, or a policy that you're pushing, and you want to highlight a key point from that article, is it that bad if that's the point that you turn into the headline? So I kind of see both sides of it. I'm not sure this is that bad, but I could see how this could be really abused. And it creates one of these really interesting challenges from a content moderation, trust and safety standpoint: should this be allowed? And if so, what are the limitations, what should be allowed? And how sensible are these different approaches, between Google saying this is totally fine and Meta saying, no, you can't do this at all?

Ben Whitelaw:

Yeah. You would have thought that some of the publishers would have been glad to have had traffic funneled to their sites.

Mike Masnick:

Yeah, right. I mean, it should be driving traffic. It's people taking out ads and sending traffic to your site, for free. That's free advertising. And so I sort of understand conceptually where the complaints from the news publishers are coming from, but I'm not sure it really makes sense.

Ben Whitelaw:

I mean, I was joking that if you're positioning yourself as an impartial, objective voice on US politics, it's kind of hard to make that claim whilst one of the presidential candidates also uses you for advertising.

Mike Masnick:

Yes, but I mean, presidential campaigns will do ads that show a news article all the time, right? Or quote a news article, and very selectively, right? You'll see a TV commercial that will take three words out of an article, saying "Harris is great," or whatever it might be. That goes back forever. And people sort of...

Ben Whitelaw:

This is the same.

Mike Masnick:

You know, I sort of feel that this is kind of the same. Each of these ads says who paid for it, though it is noted in the article that apparently, at one point, there was a glitch that may have hidden that disclosure. But most of these do seem to be marked as sponsored, and by whom. So again, if there's media literacy among the people reading these things, in theory they should recognize what's going on here.

Ben Whitelaw:

Yeah, fair. Going back to clickbaity headlines, I've got one for you, which is our next story. I feel like no one's not going to want to read this next story, which has the headline "The friendliest social network you've never heard of." I mean, that's like BuzzFeed back in the day, right? That's the curiosity gap at its finest.

Mike Masnick:

I was going to say, or Upworthy.

Ben Whitelaw:

Or Upworthy, yeah, even before that. So this is a Washington Post story, basically a profile of a forum called Front Porch Forum, which is a Vermont-based kind of social network-cum-news site-cum-community forum, and it's being profiled by the Washington Post basically because it ranked really well for making users feel civically engaged. New Public, who we've featured on the podcast a few times before, did some research, a questionnaire with users across a number of different sites, and Front Porch Forum ranked really highly for making people feel like they were connecting with people online in a civil way. So the piece runs through exactly what it is, and it's a really nice, quite quaint profile of a site that's clearly not designed to be a competitor to the other social platforms. It's very niche. It's the classic local stories: you've got people sharing lost cats, you've got people trying to sell couches. And the really interesting part, I guess, for us is how Front Porch Forum moderates. It moderates every single contribution that comes onto the site, and has a very, very strict moderation policy, which allows it, before it's...

Mike Masnick:

Yeah, that's the important part. It's not just that everything is reviewed, it's that everything is reviewed before it goes live.

Ben Whitelaw:

Yeah. So there's no expectation from the user side that a post will appear imminently; everything goes for review by its team of moderators. The founder and CEO, Michael Wood-Lewis, talks about how the site is designed to foster actual real-world conversations and interactions. It's not really about engagement in the sense that has always been the case for social platforms; it's about creating social capital. And, really interestingly, he talks about wanting to hold people's attention for just 10 minutes a day, and make that time really worthwhile, rather than have them engage for hours and hours at a time. So it's a really nice piece. I liked it as well because I spoke to the folks at Front Porch Forum a few years ago, and I was really impressed with the way they thought about some of these issues, the very conscious way the site is designed. It's very privacy-focused; even back then I wasn't allowed to have a test account to go into some of the community forums, because they're very, very precious about who gets to go in. You have to show that you're part of the towns or the areas that you're involved in, so it's very much linked to geography. And since then, I know they've been really thinking about how to update their policies as times change. So it's a really nice piece, basically, and I know we've talked a bit about some of these issues in the past, but it's good to look at this again.
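
For anyone curious what that workflow looks like structurally, here is a minimal sketch of a pre-moderation queue, where nothing becomes visible until a human approves it. The class and method names are invented for illustration; Front Porch Forum's actual system is not public.

```python
# Illustrative only: a queue where posts are held until a moderator acts.
from dataclasses import dataclass
from collections import deque

@dataclass
class Post:
    author: str
    body: str
    approved: bool = False

class PreModerationQueue:
    def __init__(self) -> None:
        self.pending: deque[Post] = deque()  # held posts, oldest first
        self.published: list[Post] = []      # posts visible to the community

    def submit(self, post: Post) -> None:
        """Users can submit at any time, but nothing goes live yet."""
        self.pending.append(post)

    def review_next(self, approve: bool) -> None:
        """A moderator reviews the oldest pending post."""
        post = self.pending.popleft()
        if approve:
            post.approved = True
            self.published.append(post)
        # In this sketch, rejected posts are simply dropped.

queue = PreModerationQueue()
queue.submit(Post("neighbor_1", "Lost cat near the green, please keep an eye out!"))
print(len(queue.published))   # 0: submitted but not yet visible
queue.review_next(approve=True)
print(len(queue.published))   # 1: live only after a moderator approves
```

The trade-off Mike raises next falls straight out of this structure: review latency and human cost scale with every single post, which is part of why the model tends to fit small, geographically bounded communities.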

Mike Masnick:

Yeah. And I sort of joked that the headline sounded like Upworthy, and that's in part because New Public is Eli Pariser, who was the founder of Upworthy.

Ben Whitelaw:

Of course. Yeah.

Mike Masnick:

This was a Washington Post piece, and it's really interesting. There were a couple of things that I thought were worth calling out. Obviously, the fact that everything is pre-vetted and moderated leads to some interesting issues; a large portion of Front Porch Forum's small staff are moderators who are reviewing everything. People know there's a slowness to it, which I think might take away some of the vitriol as well; things don't ratchet up in the same way. There are questions about how scalable this is. It is mostly dealing with smaller communities, towns with maybe a few thousand people at most. But it does strike me as sort of the anti-Nextdoor. I have Nextdoor, and very occasionally I will log in and just learn that, gosh, my neighbors who I thought I liked all seem really awful. Whereas Front Porch Forum really feels like a way to have much more civic engagement. There are questions, of course, when you're pre-moderating everything: they are direct gatekeepers at this point, and there are always questions with gatekeepers about whether they're gatekeeping certain viewpoints or certain perspectives. Obviously, in this case, it feels like they're very thoughtful about it, but you could see somebody trying to set up the same thing without that kind of thoughtfulness, and having issues. So there are interesting lessons there. I don't know how replicable they are on a broader scale, in terms of whether this is something that is useful for other sites. Maybe. It's worth learning about. It's worth knowing that this process exists: you can do this thing where you absolutely do moderate every post. It means a smaller, more focused kind of social network, but it's an interesting story.

Ben Whitelaw:

Yeah. I enjoyed rereading some of their story; there was a similar piece by The Verge in 2019 looking at them, and it's nice to see them still getting some press. Cool. So onto our next story now, Mike. This is an FTC ruling that you've spotted this week on your travels, which looks at online reviews, and maybe something to clamp down on in that practice.

Mike Masnick:

Yeah, online reviews have been an issue for a long time, in the question of whether they are real or not, and we know that a ton of them are not real. And here the FTC is putting in place, and it's an ongoing process for this to actually happen, but it was a unanimous vote by the FTC, 5-0, to move towards putting in place rules that say that it violates the law to do fake reviews. And there are a number of different elements to it: buying fake reviews, using AI for reviews. There are questions around buying negative reviews for your competitors, and whether there are insider reviews, so if you're just having yourself write reviews and pretending that you're a customer. All of these kinds of things they're calling out as unfair business practices, which, once the rule goes into effect, will allow the FTC to potentially take action. Some of these rules could be challenged in court. In the US, because of the Supreme Court's last term, which effectively changed the way independent agencies can make rules, there'll now be a question of whether or not the FTC can actually do this under its authority. And I think the Supreme Court might say that this rule is not allowed, which I think would be unfortunate. But online reviews are really influential. And I will take a quick break to say: we haven't gotten many new reviews lately on our podcast, and we are not asking for fake reviews, and we are not going to fake anything, and we do not pay for reviews. FTC, I want you to understand this. We don't do any of that, but we can request nicely: for anyone listening to this, if you would like to review the podcast, please go ahead.

Ben Whitelaw:

Yeah, I was going to say, we're going to have to stop posting fake reviews now that this has come into place. No, we don't do that. We don't do that.

Mike Masnick:

Yeah, we're going to get a subpoena now. Look at what you've done, Ben. But no, you know, I think it's good that they're trying to crack down. At the same time, to get back to one of our earlier stories about AI detection, it is not always that easy to determine which reviews are fake and which are not. There have been tools for a long time that have tried to call out and distinguish fake reviews, and I've seen these tools, and I don't think they're that good at detecting what is a fake review and what is not. And then there are some other edge cases. So there's an author, Rob Reid. He's an entrepreneur and an author, a really interesting guy; he's founded a few companies in Silicon Valley, and then he's written some really great novels. And at one point, he was, I guess, bored, I don't know exactly what the story was, but he was looking for a creative outlet, and he decided that Amazon reviews were going to be it. So he started finding random products that had no reviews and were not that important, and he created a persona, I'm forgetting the name of the persona, and wrote these long, winding reviews that maybe started about the product and then wandered off into very entertaining worlds. And then, about 10 years afterwards, in his novel After On, which is a very funny comic novel about Silicon Valley, he brought that character and the reviews into the book.

Ben Whitelaw:

Whoa.

Mike Masnick:

And so you can read this novel, and it has stories about the character who wrote the reviews, and some of the reviews themselves. And then, if you go on Amazon, assuming Amazon has not yet taken them down, you can still find some of those reviews. So I look at things like these FTC rules and I wonder: is Rob violating them? They're fake reviews, but they're not impacting whether or not people will purchase these products. They are sort of a performance art kind of piece that then shows up in his novel. So I always wonder about edge cases like that. Maybe it doesn't matter, okay, if Rob can't do his funny reviews. But there is this element of, you know, you have to be careful about what it is that you're allowing and not allowing.

Ben Whitelaw:

I think there were some really interesting stories that came out last year about people who made a living from writing these reviews, right? And they had a whole theory about how much they should write. Slightly shorter reviews tended to be seen as less fake, or got taken down less often, because the longer ones suggest that somebody has way too much time on their hands. So there are all these interesting theories that the reviewers have, which again plays into this idea of how you predict what a fake review is. This reminded me just how much of the internet is potentially fake reviews, and how much emphasis I put on reviews when I buy something. There was some UK government research, I think last year, that said that up to 15 percent of all reviews were fake, between 11 and 15 percent, which is vast. So yeah, it feels right that the FTC is paying attention to this, even if we might think it's not as important, or as severe an internet harm, as others.

Mike Masnick:

I mean, I think it can be, it certainly can be a big thing, because people do put weight in reviews. In the same way as the AI detector stuff, you can't not be influenced by the number of stars that a product has. You look at it and you see different comparable ones, and you're not going to go with the one that has three stars if there's one that has five stars.

Ben Whitelaw:

Yeah. And I didn't realize this, but you need five five-star reviews to make up for a one-star review.

Mike Masnick:

Really?

Ben Whitelaw:

Yeah. So to get yourself back up into the higher echelons of the fours, you need to really over-index on fives, which I had no idea about, and which obviously demonstrates why fake five-star reviews are worth paying for.
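
For what it's worth, the arithmetic behind that claim roughly checks out. A tiny sketch, with made-up ratings:

```python
# One 1-star review drags an average down hard; it takes about five
# 5-star reviews to pull the mean back into "the fours".

def average_rating(stars: list[int]) -> float:
    return sum(stars) / len(stars)

print(average_rating([1]))                      # 1.0
print(round(average_rating([1, 5, 5]), 2))      # 3.67, still looks bad
print(round(average_rating([1] + [5] * 5), 2))  # 4.33, back into the fours
```

And to hold a 4.5 average against that same one-star review, you would need seven five-stars, which is exactly the economics that makes fake reviews worth buying.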

Mike Masnick:

Yeah. And so it's a challenge. The question is, how do you really determine it? And I think what this rule will really let the FTC do is go after the really egregious cases, and there have been some egregious cases where it's just flooding products with fake reviews, or flooding competitors with fake reviews. You know, I also wonder how this rule applies to situations where people use reviews as a sort of protest tool. If a company does something badly or acts badly, people will flood their reviews on Yelp or on Amazon with negative reviews, not because of the product, but because the company, or the president of the company, or whoever, did something bad. And that is a kind of expression. You can argue that maybe it's not the appropriate way to do it, but is that something that the FTC should be looking at? I don't think they would go after those kinds of things, but it does raise these questions of how those sorts of situations are going to be handled.

Ben Whitelaw:

Yeah, one to keep an eye on. So we'll wrap up today, Mike, with less of a story and more of a kind of public service announcement from me, of something I read this week that I wanted to bring to listeners. I never got to see Del Harvey's 2022 TrustCon presentation about trust and safety as a kind of healthcare intervention, and this week she has published it on her site; we'll include it in the show notes. Del Harvey, for those who don't know, was the head of trust and safety at Twitter for a long time. She left in 2021, and this was the presentation that she gave soon after she left, and has given a number of times since. You've actually seen her do this presentation, I think, right, Mike? You thought it was interesting. Basically, she looks at the similarities between trust and safety and healthcare intervention levels, and maps different ways that platforms and intermediaries can have an impact on internet health, much like the healthcare system does for people in real life. I won't go into it in detail here, but I found it really interesting. My partner is a doctor, and we talk a lot about how infectious diseases and harm on the internet are quite similar. I hadn't seen this before, and I really liked what it was trying to suggest.

Mike Masnick:

Yeah. It's a really interesting framing, even if just to think about whether or not you agree with it. It's worth thinking about this idea that, within the healthcare realm, there's been so much more emphasis lately on preventative healthcare. It's about stopping you from getting to the situation where you need direct interventions, whereas in the past, healthcare was viewed as: well, once something bad happens, then we intervene. And so thinking about trust and safety in that same way, how can we structure incentives and everything else so that we don't have to intervene at that later stage, when things have already gotten out of hand? There are some real benefits to thinking about trust and safety in that manner. So it's absolutely worth reading. I don't know what's really changed from when she was presenting it a few years ago, but it is really worth thinking about for people who are trying to think big-picture about trust and safety: how do we approach this, what does it mean, how can we build a safer internet in general? I think it is really, really good. There is this element, and I believe strongly in this, that incentives determine everything, and it's worth thinking about everything in terms of incentives. I'm working on a big, long paper that I've been working on for years, so I cannot promise when it will ever come out, about regulatory approaches, how we should think about regulating the internet. And it just keeps coming back to incentives rather than direct mandates and structure, because those keep leading to really bad outcomes, whereas if you can structure things to align incentives better, you often get better results. And I think this is a sort of similar way of thinking about it. It's really well laid out, very thoughtful, and worth thinking about for people who are in the trust and safety space. If you haven't seen it, if you haven't seen Del present it, it is absolutely worth checking out.

Ben Whitelaw:

Yeah, great. Now you've talked about that paper, Mike, we're going to expect it.

Mike Masnick:

I've been getting pressure from people I've talked to about it lately that I need to get it out there. And so it's one of the things that I'm really, really trying to carve out time for. But you know, you keep dragging me back here each week to do this podcast, and that takes away from my ability to get to it. But I will; it is one of the things that I'm really trying to complete this fall, if I can.

Ben Whitelaw:

Makes sense. We'll look forward to that. You're also not only going to be doing your share of the podcast next week, you're going to be doing a bit more than that, because I'm not going to be here next week.

Mike Masnick:

Yeah, we'll have a guest, and it should be excellent. I was going to say, I think we may have already found a story that we'll be doing, because as we were recording this podcast, Ben, breaking news: the Ninth Circuit Court of Appeals here in California has come out with a decision in the NetChoice vs. Bonta case, which is about the California Age-Appropriate Design Code. I have talked about it; it is, yes, somewhat controversial. The lower court had struck it down, and Rob Bonta, who's the attorney general here in California, had appealed. I am just skimming it, because we've been talking, so I have not been able to read it, but I think it will be interesting. It does look as though the Ninth Circuit has upheld, affirmed, the district court's injunction blocking the law from going into effect because of First Amendment concerns. And I will note that I had filed a declaration in this case explaining how this law would have a really negative impact on Techdirt, in that I thought, for Techdirt, there was basically no way to comply with this law. So I filed the declaration in the case pointing that out, and I'm sort of happy to see that the law will not be going into effect. But that's something to discuss, perhaps, next week.

Ben Whitelaw:

Indeed. Well, I will look forward to listening to you and our co-host for next week talk through that. That sounds like a really interesting discussion. And that wraps us up for this week. Thanks to everyone for listening. As Mike said, so subtly, if you enjoyed the podcast and want to review us and see us climb the algorithms on the platforms, which is my only aim in life, really, then please do.

Mike Masnick:

Authentic, authentic reviews, no fake reviews. We would not want to send the FTC after you.

Ben Whitelaw:

No, you're right. You're right. I'm too pretty for that.

Mike Masnick:

And again, a reminder: check out the new sponsorship page. If you are interested in sponsoring an episode, or multiple episodes, we have lots of information there for you to check out.

Ben Whitelaw:

Good to speak to you. I'll let you get back to your paper, and I'll bid farewell to our listeners. Thanks all for listening. Take care. See you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
