Ctrl-Alt-Speech

Money for Nothing and Clicks for a Fee

Mike Masnick & Ben Whitelaw Season 1 Episode 95



In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw

So Mike, this week we're gonna talk a bit about prediction markets. Do you dabble in prediction markets? What's your...

Mike Masnick

I do not. I stay away from that kind of stuff.

Ben Whitelaw

Yeah. Yeah. A fool's errand. But one of the prediction markets that many people would know, that's been in the news quite a lot recently, is Kalshi. And it's one of the big two. We're gonna talk about the other one later in the episode. And Kalshi's prompt for users, when you go onto the kind of quite chaotic site actually, is "trade on anything." So I want you to trade on something today. What, what would be the bet you'd make?

Mike Masnick

The bet that I would make, Ben, is that these prediction markets are going to face some pretty serious regulation in the near future. I don't think the unregulated world of prediction markets is very long for this particular world. What about you? What would you be betting on?

Ben Whitelaw

Well, you know, I'm not gonna give you very long odds on that. I'll be honest, Mike, that's not gonna win you very much money. Um, my prediction would be that this podcast has not appeared on Kalshi as a bet. But you know, at some point maybe it will, um...

Mike Masnick

but that's, so that's something we could influence, huh?

Ben Whitelaw

Yeah, finally we can make some money outta this podcast in ways that we never foresaw.

Mike Masnick

There's the business model.

Ben Whitelaw

It's very intoxicating, isn't it, as we'll find out. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's March the 19th, 2026, and this week we're talking about the gruesome reality of betting on war, the renaissance in AI spam, and the data labelers fighting back against the odds. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with a very clean-living, financially frugal founder and editor of Techdirt. Not that, not that I thought you were a big bettor,

Mike Masnick

I am just not. There's nothing about gambling or betting that excites me in any way. The very first time I ever went to Las Vegas was after I had graduated and got my first job out here in California, and I had to go to Comdex, which was the big conference at the time. It went outta business a few years after that. And I went, and I was with my relatively new boss. I'd been working there for a few months, and she was like, well, you know, we're in Vegas, let's go gambling. And I was like, oh man, there's nothing about gambling that actually excites me, I really don't feel like doing this. But, you know, it was my boss, so I went. And we went to the roulette table, and she was gonna do something, and I just did the simplest bet, I can't remember whether it was on red or black. Um, and I won on my first bet. And, you know, I think it pays double if I remember correctly, so I got my money back and a little bit more, and I was like, I'm good. That was it. And then I soon realized afterwards that I guess my boss didn't really like to gamble either, so we ended up taking the money that I had won, and there was a video arcade next to the casino floor, and we just played video games for the rest of the evening, and it was a lot more fun. I'm just not a betting guy.

Ben Whitelaw

No, no. I mean, that's probably how a lot of people feel about it, to be honest. I think a lot of folks go because they think that other people want to be doing it. I mean, that sounds a lot more glamorous than my experiences, to be honest. I've only ever gone to grotty casinos on the outskirts of UK towns, um, normally after a night out, to be honest, when I was much younger. And, uh, mainly went for the sandwiches. There's this thing in the UK where they give you sandwiches and they kind of don't really expect you to bet. So, you know, Polymarket and Kalshi don't give you sandwiches. So...

Mike Masnick

No.

Ben Whitelaw

I, I'm not

Mike Masnick

Maybe they should. Yeah.

Ben Whitelaw

Yeah. More, more snacks and I'd be there, potentially. But yeah, that's gonna be our story today. There's a kind of general theme, actually, about the financial incentives that create tensions in the way that online speech works, and I think that runs throughout this week's episode. And we've got some really interesting stories, a lot of platforms that we don't often talk about on the podcast, which is a nice change. Before we dig in, a reminder to listeners that you can play Ctrl-Alt-Speech bingo. Wherever you are in the world, wherever you listen to the podcast, go to ctrlaltspeech.com/bingo and listen out for potential cards. Tick them off as you go along. Mike's made a beautiful little bespoke bingo card for us, and we still haven't got any winners yet.

Mike Masnick

Yeah, you can click and retain the things that you want. Or if you wanna be old school and analog, you can print it out. But it works on the website, works on your phone. You can play along with the very fun Ctrl-Alt-Speech bingo. And I think I said, you know, if you get bingo, if I programmed it correctly, or rather if Claude programmed it correctly, uh, confetti rains down from the top of the screen. So, you know, there's an extra incentive to play it. It might not be a sandwich, but it's confetti. So.

Ben Whitelaw

Yeah, you get confetti and a free 10 pound bet at a bookmaker near you.

Mike Masnick

Oh.

Ben Whitelaw

That's a joke. Um, yeah. As ever, if you enjoyed the podcast, today's episode or any of our increasingly growing back catalog of episodes, do give us a rating and review wherever you get your podcasts. Spotify and Apple are the primary ones; they're the ones that help us get listened to the most, help us reach new audiences, and spread the good word about Ctrl-Alt-Speech. But yeah, any kind of feedback is great as well. Email us at podcast@ctrlaltspeech.com. We reply to all of them, bar the occasional abusive one, I will say. Um, we're not immune to, you know, having our egos dented occasionally. So Mike, let's get going then. We're gonna talk a bit about the ways platforms create perverse incentives where capitalism still exists, which we touched on at the end of last episode. This first story is all about that, really. There is a fascinating story that's kind of emerged this week, and you're gonna tell us a bit about it, and how people are really going a long way to ensure that they get paid what they think they're owed. Explain a bit more about what it is.

Mike Masnick

Yeah, so obviously, you know, with all the prediction markets, there's been a lot of talk about the potential problems of them. But most of the discussion has really been about the insider trading things. I mean, there have been multiple stories where it's like, every time the US government does something crazy, like invade a country, right before that happens there happens to be a huge amount of money suddenly placed on that same event happening, which certainly suggests that people high up in the US government are trying to cash in on the fact that they know we're about to suddenly invade a foreign country, and therefore placing bets. Which is scary and deeply problematic in its own right. But there was a really interesting story that made the rounds, went pretty viral, this week, from the Times of Israel and a war reporter there, Emanuel Fabian. There was a live blog about Iranian missiles that were targeting Israel, and one landed somewhere, didn't kill anyone, landed in a relatively empty area, and he just put out a short, I think it was like 150-word, live blog post, just mentioning that, like, you know, this missile struck this space, and there was nobody there, and nothing happened. And he started getting all of these messages. And it started out, like, the first few were kind of friendly and just said, basically, can you check again? I think you made a mistake, the missile didn't hit, it was a fragment. Basically saying, the Israeli missile defense system must have hit that missile and it was a missile fragment; the actual missile didn't hit Israeli land. And he checked, and he checked his sources, and he looked at the video, and he said, no, my initial report was right. And he, you know, knows people, and checked, and he was fine. And then the next day he gets an email that was a little bit more aggressive, from somebody else.
And it's like, somebody, and these are not people he knows, saying, I really think you should check. It's a little bit more forceful, and then it basically keeps escalating to the point that people are saying, you need to change that report and say that a missile did not hit. And he was getting really confused, until he realized that there was a bet on Polymarket, which is the other big prediction market, in which the bet was whether or not an Iranian missile strike would hit Israel. And his reporting was being used as proof that it did. And these people obviously had bet that it would not. And it got more and more aggressive, to the point that someone reached out to him on WhatsApp and was basically threatening him and told him where his family lived, and it was going on and on. And then somebody faked an email. He had replied to the first few people and was just like, you know, no, I checked and it's fine. Somebody faked one of those email replies and said, oh yes, I made a mistake, I'm going to update it now, and then started posting that all across X, and they're like, oh, he's gonna change it, he's gonna change it. And, you know, apparently there were millions of dollars bet on this one thing that people were going to lose. And somebody reached out, a colleague at another journalism outlet, and was like, hey, an acquaintance was asking if you could correct this thing. And he went back to that colleague and was like, can you find out if your acquaintance bets on Polymarket? And they checked, and it was like, yeah, he does. And so basically, at the end, he was getting death threats, and people were saying, if you don't change this... like, it's no big deal, you just have to change this one sentence, it's not gonna make a difference to you, but if you don't, we're gonna make your life hell, and we know how often you visit your parents.
Like, there was all of this really scary, really creepy stuff. And he went to the police with it all, and then he went public with it, and he said the second he went public with it, it all stopped and they all went away. And Polymarket claims that they banned those users, but I'm not entirely sure how they figured out who was doing what. And it is crazy, and you can see the incentive structure here. But the thing that really struck me about it was, people have talked about prediction markets for decades, right? People have always thought, oh, it's an interesting idea. If people are putting money on predictions, that gives incentives for the predictions to be better and for them to be more accurate. And so there was always this theory that prediction markets can be better than polls, certainly better than pundits. Pundits are generally useless, as we say as two pundits here. Um, but the idea was, okay, well, if people are putting money on it, the fact that there is insider trading is almost sort of built in as part of the idea. Like, yes, because insiders will play in these markets, it will give you more accurate information in the long run. But it also creates incentives, when there are big bets, for people to falsify the results. Right? And that leads to the realization that there's this hidden element in many of these prediction markets, which is that they are relying on journalists to confirm or not confirm the different elements of a bet. And so they're basically outsourcing the conclusion of the bet to vastly underpaid journalists, and not recognizing how much pressure is then going to be put on those journalists. And after this story went viral, the reporter did an interview with Charlie Warzel at The Atlantic, where Charlie asked him, which I thought was really interesting, did you ever consider changing the story?
And I thought his answer was very interesting, because he said, yeah, he actually gave it a thought for a second. Not because, like, this will make it all go away and people will stop threatening my family and everything, but more just, did I get it wrong? Did I make a mistake? Because sometimes journalists make mistakes; it happens. But, he's like, you know, he did the research. He watched the video. He spoke to military people. He knew how to report; this is, you know, what he does. But they were making him reconsider whether or not his own reporting was accurate, because they were so forceful and so insistent on it. And so it's like this crazy new world in which you have this economic incentive built into how the reporting works, and that is distorting reality rather than being a better predictor of reality.

Ben Whitelaw

Yeah, exactly. It's having a really perverse effect on our kind of information ecosystem, in ways that we don't even recognize yet. And that's really quite scary. I mean, the story talks about $14 million being wagered on literally a couple of sentences in a live blog, and that kind of blew my mind. You know, there's the idea that actually, if that hadn't been written, people would've been paid out on that bet. The fact that the users of Polymarket are citing sentences in articles and tracking live blogs and trying to figure out evidence that can lead them to be paid out, or not paid out, was shocking to me. As you can tell from my lack of betting experience at the top of the episode, this was like a real shock to me. And no mention of sandwiches whatsoever. What I wanted to ask is, what do you think Polymarket's responsibility is here? They said that they banned the people involved. I couldn't find any mention on the website of any community guidelines, of any trust and safety mentions. I couldn't find, on a brief look on LinkedIn, any people responsible for trust and safety. They have terms of service, they have privacy terms, but there's nothing there that suggests that there is any kind of formal, what we'd know to be, trust and safety operations or enforcement. What's your sense on whether these prediction markets should be doing trust and safety, or any kind of prohibiting of this behavior?

Mike Masnick

Yeah, I mean, I think to some extent they're going through the learning curve that, you know, social media companies and lots of other companies have gone through. I mean, the reality is that most prediction markets, the people who are creating prediction markets, how do I put this diplomatically? Um, they tend to come from sort of libertarian backgrounds, right? They're the same people who built new social media sites saying, this is going to be the free speech site, we're not gonna do any moderation at all because we believe in free speech. Those are the same people who say prediction markets, because you're putting money on things, get you better information. And they refuse to sort of recognize how that can be true in some circumstances, but it can also lead to incredible distortions, because once you put money into it, then people will start to try and figure out how to maximize their profits. And so that is a distortionary effect. And so I think what is starting to happen is that those sites are going to realize that either they're going to have to start to do some sort of safety kind of work, trust and safety work, and have better policies, or they're basically gonna face all sorts of problems. I mean, we should note that Kalshi earlier this week was also charged with running a criminal gambling operation by the state of Arizona. That's very early; we'll see if that actually goes anywhere. But, you know, there was also a similar issue a few weeks ago, when the US first started bombing Iran, where there was a bet, I think it was on Polymarket. I get the two of them mixed up, and I don't use either of them, but I think it was Polymarket had a bet about regime change in Iran, and there were a bunch of bets right before the US started bombing. So there was definitely this assumption that there was some insider trading there. And then when the Ayatollah was killed,
Polymarket said, we're not paying out, because we don't allow bets based on someone's death. They do have some rules. So now you're getting into the weeds of, does regime change only mean without death? And so, you know, these are the types of trust and safety decision-making things that everyone has to go through, where it's easy to put in place rules and say, these are the conditions on which you get banned, but then everybody's gonna fight about the specific context and whether or not it meets the criteria. And here we have this, except there's money behind it. So you have the same sort of thing: regime change, yes, but death, no. Well, what if regime change and death happen at the same time? You know, then what do you do? And that's where you need clear policies and better policies, and I don't think that these platforms thought that far ahead. And so they're sort of making it up as they go along. And so they say, oh no, you know, the bet on regime change doesn't count because someone died. And now I think they're getting sued for that, and so they'll have to fight that out in court. But, you know, sooner or later, like every social media platform realizes, you can't have the everything-goes, we're-the-free-speech-platform approach and have that work. I think these prediction markets, if they're going to survive at all, whatever legal onslaught they're facing, both civil and criminal, and regulatory pressures and everything along those lines, they're going to have to realize that they're going to have to put in place much more stringent rules, just to even have people trust them again. Right? Because if they can say, no, this doesn't count because we've decided that death doesn't count, and that wasn't made clear before, then you're gonna have a lot of people accusing them of changing the rules on the fly, and that destroys trust in the platform as well.

Ben Whitelaw

Yeah, I mean, ironically, this story came out just a few days after Polymarket announced a partnership with Palantir and an AI company called TWG ai, wait for it, it gets better, to create what they call "a new standard for sports market integrity controls."

Mike Masnick

There you go.

Ben Whitelaw

So, they originated kind of as a sports platform; that's where a lot of these prediction markets begin. But it has grown, as you say, and judging by that announcement and, you know, the tools that they're clearly creating, they have had issues in the past. Um, whatever, I'm not gonna go into the details 'cause it's kind of PR bullshit, but this tool that they've created with Palantir helps to detect coordinated activity, irregular market patterns, all the kind of stuff that a trust and safety team would do. I guess the worry is that they're clearly using AI to be able to do this at scale and in the kind of real time that they need to be able to push back, and that is a concern in its own right.

Mike Masnick

Yeah. I mean, you know, to me, I don't know that the coordinated behavior stuff is as big a problem as all the other stuff. I mean, in the sports betting world, there were all these stories last year about athletes getting angry messages. There were a bunch of stories of people demanding money over Venmo. They would find an athlete's Venmo and say, you didn't score this many points, and I bet on you to do that, and you owe me 50 bucks, or whatever. And it's all of these... you know, there's so much pressure that that is the bigger problem, and I haven't seen these companies take any approach to really dealing with those kinds of things. And at the same time, the other thing, you talk about partnerships: I mean, Palantir is one thing, but a lot of news organizations are partnering with these prediction markets,

Ben Whitelaw

Mm.

Mike Masnick

and to me that's really scary too, in light of, you know, what's happening here to a journalist. You can see a world in which some of these media companies get pressure to, like, don't report this thing, because, you know, we're gonna lose this bet.

Ben Whitelaw

Yep.

Mike Masnick

It strikes me as really, really problematic incentives. And, again, there's this weird realization, because I will admit, 20 years ago, 10 years ago, I was intrigued by prediction markets. There is a conceptually interesting idea behind them, where it's like, yeah, if there is money backing predictions, does that make them more accurate? And there's an argument that maybe it should. But now we see the reality, which is that when you put money in, the incentives change, and it's not just to be more accurate; sometimes it's to mess with the way that we determine what truth is, and that is really, really problematic. And so it's a really interesting space, and it's one I think we're gonna be hearing a lot about, especially as they become so widely recognized and used, especially by media companies.

Ben Whitelaw

Yeah, I think that's a really good point. The fact that this looks like a financial story, or a story about a specific platform; it isn't about that. It's about the ripple effects on the way that speech works, which I think is so key. And also, the kind of abuse and harassment and targeting of this reporter that you mentioned actually didn't happen on Polymarket. It happened on other platforms, right? So it starts to become the problem of regular social platforms, and starts to bleed into the world of trust and safety professionals, who are obviously starting to have to deal with this new kind of, or this new shape of, harm as well. So, really interesting story. I'm glad we're talking about it 'cause it feels very relevant to the work we do. A similar kind of story, in that it looks like one thing and, if you dig deeper, probably actually is another, is this next one about Deezer, the French music streaming site. Deezer announced some results this week, and on the surface of it, yeah, it looks like a story about how Deezer as a platform and a company is performing. It actually made a profit for the first time in over 20 years. But actually, for me, it's really a story about platform manipulation. And it's a platform that we don't really talk about on the podcast, so I thought it'd be worth talking about today. Essentially, what the CEO of Deezer talked about in announcing these financial results is that AI fraudsters are making a living on the platform, not only by playing music and getting royalties from playing music, but also by uploading tracks and then playing their own music and earning royalties on the back of it.
So what used to be a kind of one-sided bot-farming scam, where you could create some music, have a bot play it regularly, and then make money from it, is now a kind of double-sided scam, where you can also now create music that sounds very much like a human, and you can then make even more money from that. A much more scalable, much more financially lucrative scam. And the interesting thing here, for me, is that it's really difficult to know what is a human artist, or a legitimate artist, and what is an AI artist now. And that's making it very difficult for platforms like Deezer, but also Spotify, Bandcamp and some others, who have really struggled over the last year or so to make that distinction. Just a bit on the numbers of how many songs Deezer is finding are AI generated: it's only 3% of the total Deezer track database, so a relatively small number, but the thing that they're seeing is that it's scaling very fast. So 40% of every daily song upload is now AI generated, they believe, around 60,000 tracks every day. So you can see over time how that 3% might become a much larger number. And what's interesting is that of that 3%, 85% are fraudulent. So those plays and those songs are not legitimate at all, and those are the things that Deezer are having to invest in weeding out. As I say, Spotify have aggressively gone after AI-generated music; in September last year, they said that they were protecting against the worst parts of AI, investing heavily in screening out AI tracks. They got rid of 75 million tracks, would you believe, in 12 months last year. And Bandcamp also banned AI music earlier this year. So there's this real whack-a-mole when it comes to AI fraud, AI spam. I dunno what you think about the kind of differentiation there, Mike. It feels like we're becoming a bit more serious about what we used to call maybe AI slop 12 or 18 months ago, and it's now actually being crystallized into a type of fraud, something more serious than that.

Mike Masnick

Yeah, I mean, I think the fraud angle is the important one here. It's the fraudulent plays that are the problem, right? If someone were just uploading crappy AI-generated music that was no good and nobody listened to it, there's no problem, right? I mean, other than, like, some space on Deezer's servers, it's not that big of a problem. It's the fraudulent plays that you get paid for, that is the issue. And so there have always been attempts by people to sort of game these systems for payment, right? I mean, there were famous stories of people uploading white noise in particular, 'cause they knew people were always looking for white noise. Or, you know, my favorite was, when certain artists weren't on Spotify, people uploading songs with the same names as those famous songs, just to get the plays and try and get some of the revenue. And it's all just gaming for revenue. But the really challenging part is the fraudulent plays, and how do you determine what is fraudulent and what is not? Like, I think the reality is that with AI-generated music, lots of it is terrible, but some of it's actually not bad, right? And we're seeing stories now of actual people, who know something about how to make music, using AI to actually make decent music. And I'm sure before long there'll be stories of a number-one hit that was AI generated. I think that is a world that might exist, and I personally don't have a problem with that. I think the way that people make music changes over time, and there have been computer-aided efforts to make music for a very long time, and some very successful songs were made in ways that were not possible before the advent of computers. And so AI, to me, is just the next step of that. There's creativity involved.
And there was a story that I had written about, and have talked about, from a few years ago, where the musician Grimes, uh, Elon Musk's ex (not X as in the platform, but ex-partner), uh...

Ben Whitelaw

was the inspiration.

Mike Masnick

Yeah, that's right. Uh, I realize as I was saying that, that could be taken in all different ways. But, um, she had released an AI version of her voice, so that you could add her singing to any song that you wanted, and if you made money from it, she wanted 50% of the cut, which was an interesting experiment. And there are interesting ways that people are using AI to create music, and I don't have a problem with that. I think that there are some people who will say that they wanna be sort of purists, and they only want human-created music. And, you know, look, to me that's a taste decision. If that's what you want, great. But the real problem here is being able to set up these fraudulent listens and then profit from it. The whole point of these kinds of services is supposed to be a way for people to listen to music and for musicians to get paid at the same time. And when you're doing fraudulent streams, you're sort of wrecking that ecosystem. And to some extent, it's another version of the same story that we were talking about in the first story: once you put a way to get paid into this process, the incentives change. And once the incentives change, you need to have systems to be able to deal with that as a platform. Otherwise people are going to game it and cash out in fraudulent ways.

Ben Whitelaw

Yeah. When I was thinking about this story earlier, I think you're right. Thinking about these platforms, such as Deezer and Spotify and Bandcamp, and I guess streaming platforms generally, as reservoirs is, for me, a helpful kind of metaphor. You kind of described it as an ecosystem; I'm gonna take the metaphor a little bit further. Um, you know, because you have these different types of platforms. You have these platforms that kind of rely on trust and have a kind of catalog of information that users go to in order to get what they think to be true and trusted, right? And that's what happens when you go and click on any song on Spotify. You expect it to be who that artist is and the song that it's purporting to be, and the monetization of that, what we're getting in return, we pay for in subscriptions or, you know, via licensing in some way. And in those kinds of platforms, any kind of AI dilutes that value proposition. It dilutes that trade-off, right? You give me trusted information, I give you money in return. And it acts as a kind of pollutant, essentially, to the reservoir. Whereas with other platforms, if you think about the way that Facebook and Instagram and TikTok have kind of embraced AI content, it's completely different. And that's because I think they act like giant open seas, right? Um, it doesn't matter how much water there is, whether it's polluted or otherwise, they just want more water, 'cause the more water that they can pass through, the more advertising they can serve in between and the more people stay engaged. So it's this kind of distinction between reservoirs and seas that I was thinking through when I was reading this story. Not always, you know, that the social platforms are seas and that the kind of marketplaces are reservoirs; there is an overlap there.
But I think there's, for me, that kind of categorization of, of the two types of platforms and the types of ecosystems that they represent and also the way that they're approaching AI is kind of helpful. And I wonder, I wonder if it's helpful for the listeners as well. what do you think does that, does that pass master, or.

Mike Masnick

I'm not sure. I'm not sure I agree. I mean, I think there's an interesting point there, but I'm not sure that the metaphor works for me. I think there are a few different elements here to think about. So one is that, with the social platforms, the value is in the constant flow of new information. I know you're not just saying social versus marketplace, but on social platforms, you don't go back and look at old Facebook posts very often. I mean, yeah, you have the nostalgia ones, but you're not going there to see what your cousin wrote six years ago. So those platforms need the flow, right? They need much more flow of information.

Ben Whitelaw

like, like a, like a tide, like a new

Mike Masnick

See? Okay. Okay. Okay, fine, fine. All right, I'm not sure that works either, but okay, we'll live with your metaphor. But the music platforms, you know, people listen to old music all the time, right? You just need to keep them coming back to listen to the content they want to listen to, and that can be old content, not just a constant new flow of content. Obviously new music is important to it, but it's not as important, right? If you looked at the ratio of how many people are consuming posts on Facebook that came out within the last week, it's got to be a very, very high percentage. If you looked at how many people on Spotify are listening to music that was released in the last week, I would imagine it's a decent percentage, but a way lower percentage than on the social platforms. And I think there's a difference there. It's like the sort of stock and flow concept,

Ben Whitelaw

Mm-hmm.

Mike Masnick

right? Where the flow matters much more on a social platform, and that creates different incentives, ones that might be much more willing to openly embrace AI-generated content. And on the music platforms, again, I think if people were creating really good songs with AI that people actually liked, I'm not sure it would be that big of a problem. I'm not sure it's the AI that's the problem. It's the flood, and then the scamming that comes after it. Right?

Ben Whitelaw

Yeah, no, I take the point. Um, I suppose the approach to AI is much stricter, or the approach to gamification of metrics around content is much stricter. You know, these reservoir platforms are weeding stuff out fairly quickly, being very proactive, talking about it. The other, more open-sea social platforms take a more kind of risk-based approach. They're happy to let a lot of AI content

Mike Masnick

We, we should just very quickly clarify, 'cause you said open sea, and there is that old NFT marketplace called OpenSea. We're not talking about that, but I just

Ben Whitelaw

No, we're not talking about that. It's the thing: you can't say any word nowadays without it being some sort of social platform. This episode has taught us that, if nothing else. Um, but there's a more kind of risk-based approach, right? It's the worst AI-generated content, the CSAM, the really egregious forms of hate speech, that kind of stuff that platforms target. But most of it they're allowing because of the revenue model. I'm going to suggest it's because the more that's in people's feeds, the better it is for the platform. And that, I think, has shaped the embrace of AI content on the big platforms, and also the way that AI companies have tried to set up social feed platforms of their own, right? Sora is a platform that's solely AI-generated content. And with the announcement that they're going to be introducing adverts onto ChatGPT, you can see a situation where that works quite neatly. So, yeah, to be honest, I'm not unhappy with your reaction to my sea and reservoir metaphor. I thought you could have been more scathing, so I'll take that at this stage.

Mike Masnick

All right.

Ben Whitelaw

I'll go, I'll go away and work on the weak parts. Um, but yeah, I

Mike Masnick

We'll, we'll follow up on this.

Ben Whitelaw

Yeah, yeah, I'll, uh, do my homework. Um, but yeah, I think the two stories, as you say, speak to the financial incentivization of certain behaviors that affect speech in broader ways, and I think they're a nice pairing. So we've put them to bed; we've talked a bit about those. Let's go through the other stories that caught our eye this week, Mike. Let's start with a story from the BBC that speaks to this idea of a kind of spam renaissance: an increase in the proliferation of scams online, but with a bit of an AI twist.

Mike Masnick

Yeah, so this is in the BBC. It was an interesting article with a great title: "I Hacked ChatGPT and Google's AI. And It Only Took 20 Minutes." And, you know, how can you not read that article? It's by Thomas Germain, I think, is the author's name. And it's basically this realization among some people that, for decades, we had this concept of search engine optimization. You wanted to get ranked higher in a search engine, and that led to a whole world of people creating sort of spam farms. I mean, it's the same problem all over again, where people would create things to try and trick the Google search bot into putting their results at the top. If you searched for anything, you know, teddy bears, you wanted your teddy bear store to show up at the top, and you would play all these games to spam the search engines. And it took a while for the search engines to figure out how to deal with that and crack down on it. Now that has moved in a big way into AI optimization, with lots of people trying to figure out: if you ask ChatGPT a question that will take you to a website, well, it should take you to my website. And so there have been all these attempts to game it. This article is about someone specifically trying to game it in a kind of fun way. But the thing that struck me about it is that the gaming he did, the gaming that took him 20 minutes, was not that interesting. The trick was that he set up sites that basically made claims that were weird and unique, something that nobody else would talk about, and so he was able to do it. Specifically, he was talking about hotdog eating contests, where he put himself forward as the fastest hotdog-eating journalist. This is not a question that anyone else is going to be asking.
Uh, you know, and so in the same way that you could do that with search engine optimization, where if you found the unique niche that nobody else had written about and you wrote about it, then yeah, you'd go to the top of the search results, he did the same thing, but with things that are useless, right? So I understand that there's a concern about gaming AI systems, and I think it's something to take seriously. But I found this article, while entertaining and interesting, not to be a sign that this is going to be a huge problem where, if he can do this in 20 minutes, anyone can do this in 20 minutes. Because for topics that actually matter, there is a wealth of content, and you're not going to be able to magically rise to the top of ChatGPT's recommendations just by creating a single webpage. But I do think it is something we're going to have to be thinking about, especially when you have certain sites, certainly a lot of news sites, high-value content sites that are worried about AI, who are saying the AI systems can't scrape them anymore. So the New York Times and NBC and whoever else are saying, we don't let AI scrapers scrape our site anymore. My worry is that when they do that, when the high-quality sites take themselves out of the AI scraping, they leave that field open to the people who do try and scam the system. You want the higher-quality information to be in these systems. And you can have a whole other discussion on value creation, who pays whom for what, and all of those kinds of things. But just this idea of high-quality sites taking themselves out of it opens it up to the kinds of gaming that Thomas did in this article. And so I think there are a bunch of different interesting things at play, but we're definitely going to have to pay attention to AI spam, in terms of trying to trick the AI tools into recommending sites or making claims that aren't based in reality.

Ben Whitelaw

Yeah. And this is a kind of fun experiment, but we've seen foreign states and other adversaries do this.

Mike Masnick

Yes.

Ben Whitelaw

In much more serious ways, right? Poisoning LLMs, essentially, to spit out information that's not true. In the way that the hotdog example worked, it would've been nice if he'd, like you say, tried to target information that was much more well known or well searched for. That would've been a really compelling article. But anecdotally, you know, you see the links that ChatGPT and other LLMs throw up as citations for information, and they're very random websites. Again, we don't know how the system works under the hood, but all kinds of strange marketing websites come up. And I've got friends who have websites with very strong Google search rankings, you're probably the same, and they get emails all the time asking to put a blog post on the website, and they'll pay

Mike Masnick

I, I get, I get 20 of those a day.

Ben Whitelaw

Right,

Mike Masnick

I mean, I mean, Techdirt is ranked well enough that I get those constantly. My favorites

Ben Whitelaw

I was saying it because I just thought you probably would.

Mike Masnick

Yeah. My favorites are the ones that will point to an article from, like, 2004 and say, oh, you know, you linked to this article, it's no longer online, why don't you link to my site instead? Like, you wrote something about sports; link to my thing, because that article's no longer there, because it's from 2004. I get those all the time, and

Ben Whitelaw

You say, Venmo me a thousand bucks, then I'll do it.

Mike Masnick

No, more like 10 million. Uh, but it is crazy. But I do think this is something to watch, and we have talked about nation states attempting to do that. And nation states, in theory, may have more ability to poison AI results, because they can put up thousands or hundreds of thousands of sites and try to poison the content. But again, to some extent this will get back to the media literacy question, which is that people need to be at least somewhat skeptical of what information they're getting. And obviously, for lots of reasons, they should be skeptical of the information that LLMs are giving them, because sometimes it's just not true.

Ben Whitelaw

Yeah, we won't open the media literacy can of worms here, 'cause we'll be here all day, and we

Mike Masnick

We need, we need a media literacy horn.

Ben Whitelaw

Yeah. Yeah, we need to bring back the instruments. Good reminder. Um, we'll move on to another story now. I also don't want to reopen the whole TikTok saga, you know, not least because the consortium of investors are all mates of Trump and quite nefarious

Mike Masnick

they'll hunt you down.

Ben Whitelaw

Yeah, exactly. And not least because, you know, there are valid concerns, I think, about the way that the platform could be used to censor or stymie speech. It felt like that story had died down. But I've spotted, Mike, with my journalistically trained eagle eye, a line at the bottom of a Bloomberg article that has raised alarm bells along these lines. So, in a Bloomberg story about the US receiving $10 billion as part of the deal, at the very, very end, it says that the new CEO of the TikTok US company is going to be, drum roll, its head of trust and safety, a guy called Adam Presser. And I almost spat out my coffee when I was drinking it, because, you know, the US administration hasn't been shy about the reasons for taking over TikTok, for buying TikTok and assembling this consortium of people invested in free speech, let's say. But installing the man who controls the way the platform runs from a trust and safety perspective is pretty flagrant. Not, not to cast aspersions on Mr. Presser; I don't know what his background is. But it feels like they're not trying to hide that they're thinking a lot about content moderation in terms of the deal.

Mike Masnick

Yeah, I mean, it's interesting, right? I think, for people who are interested in trust and safety, to see someone who was head of trust and safety effectively becoming CEO of this platform is an interesting evolution. Now, his trust and safety responsibilities, it appeared, were only a relatively recent thing. I looked up his biography and background, and I guess he was chief of staff to the CEO of TikTok for a while, and then became sort of VP of operations, and then it was VP of operations and trust and safety, so it covered all that. This is not someone who grew up in the trust and safety world and took over. But it struck me as interesting, because we've heard all these stories about how the US is no longer giving visas to people who have trust and safety titles, and here they've organized this takeover of TikTok and then allowed them to put the guy who ran trust and safety in charge of it, which is sort of an interesting statement in its own right as well. But, you know, I don't know anything about him either. I do know some people who've worked in trust and safety at TikTok, and I found them to be fairly thoughtful about trust and safety stuff. I mean, I think TikTok has had its problems, certainly, but as a company they've actually done some interesting things on the trust and safety side. So there's this interesting element of what happens when you put a trust and safety exec in charge of an entire company, and, you know, maybe we're going to find out.

Ben Whitelaw

Yeah, to be continued. But yeah, an interesting little nugget in that Bloomberg story. People might think, wow, these guys have done it again. They've talked for 45 minutes about things that leave me with very little hope for the future of our information ecosystem and the internet at large. We've got a solution to that. An antidote is the next few stories. I think they're reasons to be hopeful, Mike,

Mike Masnick

Okay.

Ben Whitelaw

You can tell me otherwise. Your first story is, I think, a reason to be hopeful.

Mike Masnick

Is it

Ben Whitelaw

Well, I think so: data labelers kind of pushing back.

Mike Masnick

Yeah, I guess, I guess, I mean, I, I was thinking of the, the start of this. I was wondering like, are you telling me to talk about a different story than the one I was thinking about?'cause this one struck me as fairly dark as a starting point. So this is the 4 0 4 media, had this wonderful article called AI is African Intelligence, and it's talking about the data labelers, for all of these AI systems and how much of it is actually done by people in Africa. And this is, in some sense, it's. We're seeing a repeat of the trust and safety stories from so many years ago of the discovery that like, yeah, trust and safety tends to be people in far-flung places away from. The US and Western Europe, it's often, Asia and Africa doing, very hard jobs, being exposed constantly to terrible content and having to do the moderation. And now that appears to have switched over in large part to data labeling for ai. And they have these stories of people who had to spend like. 18 hours a day looking at and labeling every, frame in a porn video. That's the part that I'm like, that's really dark. What do you mean this is a light, light story,

Ben Whitelaw

Okay, maybe not that part of the story, but what happens afterwards is

Mike Masnick

Right. But what is interesting is that they have now started to organize and fight back, to push for their own rights and make clear that there are humans behind this. For all the talk of how AI companies and automation are taking over everything, and how you have these companies within 40 miles of where I'm sitting in San Francisco coming up with all these amazing things, the backbone of it is all of these people, often in places like Africa, and they're starting to organize and, you know, stand up for their rights. It starts with the trust and safety content moderators and the work that was done on that front, and now they're trying to expand that out to the people who are doing data labeling. It struck me as interesting that these are basically new companies doing all of this AI labeling, and they haven't gone through this thing where, hey, the press yells at you when you treat your workers really, really badly. And so they're starting to learn that. Most of this article talks about this company, Sama, I think is

Ben Whitelaw

Yeah.

Mike Masnick

the name of it. And we see in the article also that Mercy Mutemi, who we've talked about before, who helped fight for workers' rights on the trust and safety side, is now working to help the data labelers as well.

Ben Whitelaw

Yeah. And who has been on the podcast in your absence?

Mike Masnick

Yes. She was one of our guest hosts.

Ben Whitelaw

Indeed, indeed. Um, so yeah, I think this is an interesting development. It's fascinating that Sama has pivoted quite heavily from content moderation to AI and data labeling, in the wake of all of the press and regulatory pressure that you mentioned. And Kenya is such a fascinating country for this, as are large parts of, particularly, West Africa as well. So, yeah, I'm going to take some hope from that.

Mike Masnick

Yeah. I mean, I'm hopeful that something good will come out of it, but a large part of that story is pretty dark.

Ben Whitelaw

Yeah. Yeah. And it has been for a long time, I appreciate that. My own personal hopeful offering is a story that many people would've seen, or at least a video that they might have seen. This is a kind of crazy video by the Norwegian Consumer Council. Oh, it's fantastic. I mean, I wish we could play a clip; we should have organized that, 'cause it's so good. It's essentially of a Norwegian man jokingly talking about his job as an enshittificator, somebody who makes products and services worse in the real world. And the video, which I'm hoping you've seen, listeners,

Mike Masnick

We will put a link in the show notes. You will go watch it if you have not seen it.

Ben Whitelaw

Immediately pause and go and watch it. Um, he talks about discovering the internet and how the internet allows him to scale his enshittification work. And this is a guy who, it opens with him cutting holes in people's socks. He's sawing the bottoms off of chairs. He's leaving the fridge door open. He's doing all the kinds of things that make life terrible and annoying in loads of ways. And obviously enshittification is Cory Doctorow's term, which he coined to refer to the degradation of a service or product, normally after it's got a large user base, right? So they cease to care about what users think. But the advert is a wonderful encapsulation of a lot of what we talk about on the podcast, Mike, and it calls for a new kind of internet, a new kind of digital realm where things can be made differently. It's a kind of marketing ploy, but beneath it sits a lot of engagement with policymakers in Europe and in the US, calling for the stuff that we talk about all the time: greater interoperability of platforms, better data privacy, a reframing towards consumers and away from platforms. And so, yeah, we won't have a lot of time to get into the meat of it, but the video itself is worth mentioning.

Mike Masnick

It is absolutely worth mentioning. I love it. I love the campaign. I love the sort of philosophy behind it. The one thing that I will say is that the specifics matter. When you talk about wanting a new and better internet, yes, but how you actually get there is really important, and I felt that their campaign was a little bit weak on specifics. They have these sort of philosophical policy positions where the details really matter, because some of them sound good on their face but, in reality, end up giving Google and Facebook much more power, and things like that. So you always have to be concerned about the specifics. But I love the idea of building a campaign around, can we have a better internet? Because I do think that is really important, and that's obviously something I think a lot about.

Ben Whitelaw

Yeah, indeed. So yeah, hopefully we leave listeners with a feeling of being upbeat. I don't want to leave people feeling down and depressed after a

Mike Masnick

Uh, talking about enshittification is our upbeat ending, Ben? How, how low have we sunk?

Ben Whitelaw

You've gotta see, you know, the silver lining in these things. I think at least that's what I've

Mike Masnick

It's, it's only up from here. Can only get better.

Ben Whitelaw

Um, which is, you know, why you have to come back next week, listeners, and find out. That brings us to the end of today's podcast. We appreciate all the outlets that we covered today: the Guardian, Bloomberg, the FT, 404 Media. We'll include all of those links in the show notes. Thanks for listening. We'll see you soon. Bye.

Announcer

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L-A-L-T-speech.com.