
Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Moderating is Such Sweet Sorrow
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by Dave Willner, founder of Zentropi and a long-time trust & safety expert who worked in trust & safety roles at Facebook, Airbnb, and OpenAI. Together they discuss:
- Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well (Techdirt)
- UK makes new attempt to access Apple cloud data (Financial Times)
- Imgur pulls out of UK after data regulator warns of fines (TechCrunch)
- Leaked Meta guidelines show how it trains AI chatbots to respond to child sexual exploitation prompts (Business Insider)
- OpenAI's Sora joins Meta in pushing AI-generated videos. Some are worried about a flood of 'AI slop' (ABC News)
- Flights in Afghanistan grounded after internet shutdown (BBC)
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Mike Masnick:So Dave, last week Meta announced a new product called Vibes. And as far as I can tell, 'cause I don't fully understand this thing, it's some sort of AI-only, like, slop social network just for passing around AI-generated things.
Dave Willner:TikTok, but with no people.
Mike Masnick:Yeah. But using that as a prompt, and we looked, and it doesn't quite have a prompt of itself, I just wanted to ask you: how are your vibes?
Dave Willner:My, you know, my vibes are actually pretty good. It's October 1st, so it's fall, which means the trees are changing color and, uh, I like that. I like when it's kind of 60 degrees outside. It's just nice. The madness of everything is easier to bear, so my vibes are pretty good. How are your vibes?
Mike Masnick:Nice. That's good. My vibes are, I mean, I guess it is nice that it's fall, but you know, the world is kind of, kind of not great right now.
Dave Willner:Yeah, I suppose I gave you a tactical answer, presuming a not-great world: locally, you know, it's nice that it's cooler and the trees are orange.
Mike Masnick:All right, well, we'll go with that. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It is October the first, 2025, and this week we are talking about internet fragmentation around the globe, some rules around AI chatbots, and maybe a little something about an impossibility theorem that I came up with once before. I am Mike Masnick. I'm the editor of Techdirt. As you will have noticed if you're listening to this, you are not hearing the British accent of Ben. Ben is off this week. Both of us were off last week, so we are back, or I am back. Ben is still off. He will be back next week. For me, I was off last week because I was at the Stanford Trust and Safety Research Conference, which was excellent. And because we needed a guest host for this week, I decided to drag back with me the keynote speaker on day one of the Stanford Trust and Safety Research Conference, which was Dave Willner. Hi Dave.
Dave Willner:Hi Mike and hello folks.
Mike Masnick:Dave, for people who don't know, well, people should know you. First of all, I think most
Dave Willner:I think if you listen to this podcast, there's a higher than normal chance
Mike Masnick:But just in case people haven't, Dave has worked everywhere and is one of the originators, I will give you credit for, of sort of trust and safety. You were sort of the first content policy person at Facebook, is that
Dave Willner:Yeah. Yeah. I can do the, the
Mike Masnick:Okay.
Dave Willner:the TLDR is, it's all my fault. But,
Mike Masnick:Yes.
Dave Willner:the slightly longer version is that I was Facebook's 12th content moderator, and before that was resetting people's passwords by hand. And we didn't have any rules. We needed some rules, so I ended up writing a lot of those initial rules, which evolved into the first content policy standards and content policy team. That grew quite a lot over the years before I left. Then I worked at Airbnb for six years, building out their community policy team and also working on their quality and training programs. And then I was at OpenAI for the launch of DALL-E, ChatGPT, and GPT-4, and left, ended up at Stanford, and now I'm doing my own thing.
Mike Masnick:Yes. Your own very interesting thing, we should note, which has a trust and safety hook into it. And you gave the keynote. Your company, by the way, we should mention, is
Dave Willner:Well, we should, that's
Mike Masnick:Zentropi. And it is using AI in the trust and safety space.
Dave Willner:Yeah, we fine-tuned a small, very small, large language model, which is a funny thing to say, but you know, small is relative here. You could run it on your MacBook Air. It is a special-purpose model designed to do a very good job of following content policies and giving you back a faithful judgment as to whether or not an example you provide violates or does not violate, or rather, a better way to say it, meets or does not meet the criteria you have set out. It's just giving you a yes/no.
Mike Masnick:A very binary answer.
Dave Willner:Correct. Yeah. Uh, but it's really quick, so if you need more gradations, just run it more times. And then we've also built a system that uses that capability to help you more clearly explain the standards you're trying to write for it to follow, using guidance from the decisions you've made as applied to a bunch of examples you provide. So it can help you figure out what you meant to say based on what you did.
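For readers who want to see the shape of what Dave is describing, here is a minimal sketch of the pattern: a fast binary policy judgment, sampled repeatedly to get a gradation. The `classify` function is a hypothetical stand-in, not Zentropi's actual API.

```python
# Sketch of the "binary verdict, run it more times" idea. `classify` is a
# placeholder for a policy-following model call; it is not Zentropi's API.
import random

def classify(policy: str, example: str) -> bool:
    """Stand-in for a model that returns True if `example` meets the
    criteria in `policy`. Stubbed here with a weighted coin flip."""
    return random.random() < 0.8

def graded_judgment(policy: str, example: str, runs: int = 10) -> float:
    """Average several fast yes/no verdicts into a rough score,
    getting gradations without changing the underlying binary model."""
    return sum(classify(policy, example) for _ in range(runs)) / runs

score = graded_judgment("No depictions of graphic violence.", "example post text")
print(f"{score:.0%} of runs said this meets the criteria")
```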
Mike Masnick:Right. It can tell you your policy does not agree with your decision-making,
Dave Willner:Correct.
Mike Masnick:which is kind of neat actually.
Dave Willner:it's quite neat. Uh, it sounds a little technocratic, but it lets you get to a standard that works well much more quickly than has historically been the case. And then you can go run it a lot with the magic robot.
Mike Masnick:Yes. Wonderful. Magic robot. And so you gave this keynote speech, and I guess, based on the number of people who came up to me afterwards to ask me if I was okay with your keynote speech, we should address this.
Dave Willner:Yes, we should. Yeah. I did warn Mike. Um, I worry maybe my delivery was a little off. So, I took as a framing for that speech a piece that you wrote, I think back in 2019, called Masnick's Impossibility Theorem,
Mike Masnick:Right.
Dave Willner:the core contention of which is that content moderation at scale is impossible to do well. And you had sort of three supporting reasons for that, and I used those three, your sort of three whys of why this was impossible to do well, which basically boiled down to: one, people don't like experiencing moderation, 'cause they don't like being told that they shouldn't have posted what they posted, 'cause they think they should have posted what they posted, 'cause they posted it. So that makes folks mad, predictably. Two, we're kind of bad at figuring out what we're gonna take down, so we mess up all the time. And three, basically the scale of the exercise means that even if everybody somehow liked it and we were pretty good at doing it, we would still appear incompetent, because even if you make almost no mistakes, statistically you still make a lot of mistakes in reality if you do the thing millions or billions of times. And my framing of the speech was basically: I hate to say this, but Mike is right. That is true. We have been doing a bad job this whole time, but I think maybe the rise of generative AI and sort of large language and multimodal models changes some of the constraints on how we design our trust and safety systems, such that we might be able to get closer to the standard you set, which after all is doing it well, not doing it perfectly, which is definitely not gonna happen. But "well" is a social construct. Maybe we can get to "well," or at least a lot closer to "well," if you go through and reexamine in detail how exactly the tools we've had historically have constrained the systems we can build in ways that make them kind of shitty,
Mike Masnick:Right.
Dave Willner:for lack of a better way of saying it.
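To put rough numbers on that scale argument (these figures are invented, purely for illustration):

```python
# Back-of-the-envelope version of the scale point: a system that is right
# 99.9% of the time still produces millions of visible mistakes per day
# once it makes billions of decisions. Numbers below are made up.
decisions_per_day = 3_000_000_000  # hypothetical daily moderation decisions
accuracy = 0.999                   # hypothetical: wrong only 0.1% of the time

mistakes_per_day = decisions_per_day * (1 - accuracy)
print(f"{mistakes_per_day:,.0f} mistakes per day")  # -> 3,000,000
```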
Mike Masnick:And, I mean, yes, "well" is a sort of vague term. And some people have suggested that when I wrote the Masnick Impossibility Theorem piece, which has definitely been picked up and talked about a lot, that it was suggesting, like, oh, everybody should give up. And that was never the intention. I always think that companies should be striving to do it better and should be looking for ways to do it better. I just wanted to set a kind of marker of, what is it that we're targeting? Because so many of the discussions, especially in the media and among policymakers, were this impossible standard of, how did you get this thing wrong? And, you know, the answer is it could be any number of things, including, like, it wasn't wrong, or it was one mistake we made out of a billion, leave us alone. You know?
Dave Willner:Well, and you're explicit about that in the piece. It's very clear if you read it that it's a defense of the endeavor for non-experts who do not understand it, that we're, like, over here climbing Mount Everest, and so sometimes people are gonna get a little dizzy, right? Like, that this thing is fundamentally an insane thing to be trying to do is not the same thing as saying, well, you don't have to try to do it. There are lots of things we do that are sort of faintly crazy-seeming but also deeply necessary for society to function, and this is one of those things. But in that context, you need to sort of think very carefully about what you can expect. And I think you are right, and you really shaped my thinking with the piece, that our expectations around this in the technology sphere are downstream of the view that computers are essentially magic
Mike Masnick:Right,
Dave Willner:like Arthur C. Clarke was right? Like, functionally, the computers are all magic, and so everybody is irritated when the wizards don't do magic better. But it's not actually magic, it's basically just a bunch of math. And so it has limits and features and constraints, and your piece, I think, very skillfully illustrates the constraints of this system and how it is in fact not magical. It is a process, and like all processes is flawed and produces a lot of failure.
Mike Masnick:Right. And sort of the aim of your keynote was, as you put it, I believe, to make Masnick wrong again.
Dave Willner:Yes.
Mike Masnick:And so
Dave Willner:Which was a bad framing, 'cause you've never been wrong. Unfortunately.
Mike Masnick:It's fine. It's fine. It has that flow, which I get and I appreciate, and we'll ignore the "again" part, um,
Dave Willner:I mean, the golden ages are largely a
Mike Masnick:Yes, yes. Um, but yeah, I thought it was a really good keynote. You really sort of opened up this idea of, look, there are massive changes that are afoot and we can move the ball in a significant way towards building better trust and safety, content moderation type systems.
Dave Willner:at a minimum we should take a real hard crack at it, because there's a lot of reason to expect that is true, and I think there's increasing evidence that it's true in some of the domains, some of which tie into this project of mine. I mean, I'm working on a thing that is an attempt at that. And so are you, frankly, you know, Bluesky and the larger focus on decentralization, which I think AI makes more viable as a project. And that decentralization is potentially a way of addressing the sort of appearance of incompetence, by having more people be involved in the decision making and spreading out the mistake-making that we're gonna do
Mike Masnick:Yeah. Anyway, yes, it was a good keynote. I enjoyed it, even if I was the, the foil and, and people
Dave Willner:the frame. You were not the foil. You were the organizing intellectual
Mike Masnick:Okay. But so many people came up to me and were like, did you know he
Dave Willner:I, I did warn you, but I think I warned you like eight minutes before I started talking.
Mike Masnick:Yeah. Yeah. You had suggested a week before, like, hey, I might mention you. And then right before the keynote was about to go on, you're like, you know how I said I might mention you? It's actually a
Dave Willner:It got out
Mike Masnick:more than that.
Dave Willner:of hand. Yeah. When I said I might mention you, I was just starting to think about it and then I went away for six days and wrote a speech and came back and was like, oops.
Mike Masnick:Anyways. All right, let's get on to our news stories of the week. I think we wanted to start, we're gonna talk a little bit about the UK, and a couple different stories there, and sort of, I will get into what I think it means, but do you want to give the quick summary of what you found?
Dave Willner:Okay. So, one of these stories, which was reported I think this morning in the Financial Times, is that the UK is renewing its attempts to compel Apple to build a backdoor into the encryption for Advanced Data Protection. There had been an attempt at this, I think back in September that became
Mike Masnick:Of last
Dave Willner:Oh, right, of last year. Yes, that's right. It became a bit of a football, um, between the US and the UK, and the US responded with pressure and the British eventually backed off. And this reporting is that it's sort of bubbling back up again, although this time there's a couple of interesting differences. One, the request is allegedly scoped only to British users of Apple services instead of everybody in the world. And two, the article ends somewhat ominously, and it's actually worth reading verbatim: "Members of the US delegation raised the issue of the request to Apple around the time of Trump's visit, according to two people briefed on the matter. However, two senior British government officials said the US administration was no longer leaning on the UK government to rescind the order." And it just kind of leaves it there, which is interesting. And then the other story worth reacting to as a package was the announcement that Imgur has, seemingly already, pulled out of the UK entirely, in reaction to an investigation by the Information Commissioner's Office around, I believe, their handling of users' personal data. But sort of very suddenly it has withdrawn entirely, and as of now is unavailable in the UK completely.
Mike Masnick:Yeah. And both of these stories are interesting. They're different, but they're both sort of, you know, UK-based, sort of things that are happening in the UK. You know, we've talked on the podcast a bunch about things around the Online Safety Act, and it's notable that neither of these is an Online Safety Act thing. Right? This is not an Ofcom thing. These are just sort of other regulations that are happening in the UK that are sort of going after American tech companies in some form or another. The Apple story is also sort of hard to get full details on, because all of it is secret. It's like the same thing happened last year, and then all of the proceedings about it were secret, so it was basically what gets leaked to reporters. And both of them are just situations where the UK is trying to enforce its will, whether for good or for bad, you can argue either way, I guess, on American tech companies, and sort of how they're responding to it. And you know, there's a concern, and I've raised this in other contexts, but we're seeing it more and more, about how, because of these sort of local laws and local focus and local demands, what we think of as the internet and access to the internet is different depending on where you are. And the idea of a sort of global open internet is potentially going away, or may already be gone, depending on who you talk to. You know, certainly in some places it is. But seeing countries like the UK, which, you know, Ben is not here, so we can rag on the UK, I guess, without anyone pushing back. Uh,
Dave Willner:Yes.
Mike Masnick:that's
Dave Willner:Yeah, no, you're right. It is, um, it is interesting to see this increasing fragmentation. It's long been the case that there was fragmentation around the edges, but it feels like it's reaching further and further into the core of the parts of the internet that were previously more open.
Mike Masnick:Yeah. And it concerns me in a number of different ways, right? So, like, Apple has responded, you know, they responded to the original, again, secret order, which was apparently to build a backdoor into all the encryption, by just not offering the encryption at all. So basically making it clear to anyone who is storing their data: don't consider it safe. And their position is that they're continuing to do that. But it's basically, you know, depending on where you are.
Dave Willner:I think, I think it was actually that they stopped letting new users onboard. I don't know that they shut it down for existing users, which I think may be some of the hook for this current
Mike Masnick:Interesting. Okay. And so the more limited scope of this just being for UK users is interesting. The fact that the US is apparently okay with it is
Dave Willner:Well, I mean, maybe, right? Like, this is what's so ominous about that ending. What it said was, sort of, the US is no longer making a big deal out of it. But is that because they've changed their minds, or because they're not paying attention?
Mike Masnick:Because they did make a really big deal of it. And it was one of the very few things that I gave the current US administration kudos for when it came about: they did make a big deal of it, and they really sort of pushed the UK on it, and they took a victory lap on it, which, you know, was one of the few well-deserved victory laps.
Dave Willner:Yeah. You don't like to say you gotta hand it to 'em, but I think on this one, for whatever the motivations, you know, the outcome seems correct.
Mike Masnick:Yeah. And the whole setup, you know, I mean, this is true of some people in the US, and some government officials in the US, especially on the law enforcement side, have always had this sort of anti-encryption bent, this sort of feeling that, well, bad people use encryption, therefore we should get rid of encryption in some form or another. Which is a really problematic, very simplistic view of the world, and very dangerous. Like, if you actually want privacy and security, encryption is a key part of that. And people who are arguing for doing away with encryption, or putting in a backdoor, which, to be clear, is really getting rid of encryption, I think have this belief that there's some neutral thing, which is like, gee, only the good people get access to the backdoors. It's just, like, technically not how it works. Yeah.
Dave Willner:Yeah. My wife likes to joke that trust and safety, the T&S, stands for trade-offs and sadness.
Mike Masnick:Yeah.
Dave Willner:And the entire encryption debate is a great trade-offs-and-sadness issue, where you genuinely have difficult questions on both sides. And the choice is a very stark one, where the sort of natural urge to seek compromise isn't possible because of the nature of the problem.
Mike Masnick:Yeah. The other statement that I love on the sort of encryption debate is the one that the cryptographer Matt Blaze made a long time ago, which showed up in a John Oliver clip. And I once asked Matt about it, and he's like, I don't even remember when I said it or where I said it, but somehow it was on video somewhere and John Oliver's team found it. Which was basically, um, you know, when people ask for backdoors in encryption and sort of say, well, you know, you techie guys, you should be able to figure it out, he said it's kind of like asking, well, you've landed someone on the moon, obviously you can also land someone on the sun. It's like, one of those things is possible, one of those is not. And you sort of
Dave Willner:I mean, it depends on your success criteria.
Mike Masnick:Well, yes, sure, sure. Exactly, right? Yes. Like, in the same way that, yes, you can create a backdoor and it is safe, as long as you consider safe exposing all of our data, you know, and making it vulnerable to attack.
Dave Willner:Although actually in the sun context, I guess it's not even that it's hot, right? The pressure of the solar radiation probably means we couldn't even get there with chemical rockets. So
Mike Masnick:we're getting out of my, my knowledge area of
Dave Willner:Yeah, yeah. My guess is actually we couldn't even land a, a liquid human mist on the
Mike Masnick:Yeah.
Dave Willner:Um.
Mike Masnick:So anyways, yeah, it's a really tricky problem. But it worries me that we're seeing so much fragmentation among all of these things, right? And like, you know, there was a time when we had the sort of known areas of fragmentation, where, like, China had its Great Firewall, Iran sort of walled off the internet, Russia, you know, wanted to wall off its internet. You know, like, okay, there are these sort of authoritarian countries, they're gonna do what they're gonna do, and we can sort of look from the more open West or whatever, and sort of say, that's not a good thing. But now we're
Dave Willner:and that wall building was aligned with the wall building that those countries would do in sort of the rest of their law. And so it's sort of, maybe not desirable, but it's of a piece with the rest of the way that those countries are governed, in a way that feels sort of contained, and not necessarily bound up in the rest of the way the system works, 'cause they were already sort of isolated pieces of the system in so many other ways,
Mike Masnick:whereas the rest of the world was moving more and more towards globalization, and a part of that was certainly the internet making some aspects of the globalization more and more possible. And now we're seeing these cracks in the system, and I worry about what that means for a whole bunch of things, beyond just, like, are you able to encrypt your chats on your iPhone or whatever. I think it's a more serious issue when we're breaking the global open internet, and where you have a different experience, different apps, different services available to you based on where you are. That doesn't lead to good places.
Dave Willner:Yeah, I think it is. I think it's also likely to compound the sort of difficulty of doing any of this even possibly much less well, right? Because, and I know these stories are not directly about moderation, but if we can't get our current approaches to doing any of this to work at the scale they're currently working at, it seems to me that multiplying the difficulty by multiplying the number of rule sets that you're playing under gets really hard to do well really, really quickly. And it multiplies the sort of surfaces for collision and the possibilities of chaos and mistake very, very rapidly.
Mike Masnick:Yeah. Which I think is a big concern. All right, well, I wanna move on to our second story this week. This one's from Business Insider, and it actually sort of ties back to a story from earlier this summer, which Reuters I think broke, about Meta's chatbots and the guidelines that it had around more sexually explicit chats involving children. And it was pretty horrific, is the sort of quick summary on that. And because of that article, there were then multiple investigations launched by the US government. The FTC demanded more information from a bunch of different AI chatbot companies to sort of talk about what rules they had in place. The Senate demanded some information as well. And so Meta was apparently late in delivering information to the government that was requested of it, but has started doing so, and some of that has now leaked to Business Insider, including its specific guidelines on how it is now training AI chatbots to respond to, as they put it, child sexual exploitation prompts. And this has gotten some attention, and it has some details about what is acceptable, what is unacceptable, and how some of those rules have apparently changed from where they were when Reuters reported on it earlier this year. But it's still a little bit of a mess. Um, and I think,
Dave Willner:Yeah, the, the revised standards don't seem great either, to be honest.
Mike Masnick:Yeah. Well, yeah. But now, getting back to our theme here of impossibility: is it possible to have guidelines that are great?
Dave Willner:Well, I mean, it depends what you mean by great. So, um, I do think it is possible to write standards more and less well from a clarity point of view. And I was struck, in reading some of the quotes in Business Insider, that I was not totally sure what they meant to mean in some of the distinctions that they were drawing.
Mike Masnick:Which is interesting given your expertise.
Dave Willner:Yes, but I am also a very persnickety reader and writer as a result of my expertise. Both the story back in the summer and this one did bring something to mind for me. I like to try to explain how things go wrong, when looking at these kinds of blowups, using this idea called Hanlon's razor, which is basically the notion that you should never explain with malice what can be adequately explained by incompetence. And I sort of like to add onto that: you should never explain with incompetence what can be adequately explained by not having enough time to think the problem through, which is frequently what happens in the context of so much of trust and safety and content moderation generally. And looking at these stories, my immediate reaction was, oh, they're having a fan fiction problem. So if you've read Harry Potter, particularly the fourth and fifth books have a very significant amount of making out among people who are, like, 14 and 15 years old as a critical part of the plot. Now, it doesn't get any more explicit than that, but that is a core part of the story. And I suspected, looking at Meta's guidelines in the summer and the ones that they have now, that they were sort of writing under a bunch of constraints around, hey, we obviously don't want to make a CSAM machine, or we think we don't, but we also don't wanna prohibit the kinds of things that show up in young adult fiction intended for young adults, which society tolerates. And it went through a giant bureaucracy machine, and what came out the other side had some second-order implications that were not cool and not what anybody would've wanted, particularly given the general headwinds right now against overreach in the content moderation space. And I think you still see that even in these revised versions, where they're clearly trying to draw a distinction between, like, more explicit sexual activity and romantic-ish activity, and they've now banned actual role play activity between anybody and bots under a certain age. And it's not to give, like, an apology for how it ended up there. It's more like, how do you understand what this is? Is this mustache-twirling supervillainy, or is this something stupider and more subtle and difficult to solve? And it seems to me that they're having a problem kind of like the fan fiction problem in this context, meaning the problem that ended up with the creation of Archive of Our Own. This is like an old internet callback. But the fan fiction community writes a lot of horrifyingly explicit sexual content, frankly. Um, and there have been schisms within that community about those things. And that does sometimes involve teenage characters. And it seems to me that we as a society haven't figured out where we want the line to be between literature and chatbot pretend role play. I think we want it to be in a different place. It feels different. But if you're working from prior art under time constraints inside one of these companies, I can see how you end up writing something that kind of accommodates what, you know, people are out there doing on the internet with sort of fictional characters, essentially playing dress-up with their favorite characters in a literature form, and gets to some really weird places when you get interactive technology involved. And also, to your framing question of, are good standards even possible to write?
It gets to the heart of this sort of lack of social consensus around some questions. Because we do have issues where there's strong social consensus that we want those things moderated. Like, everybody hates spam, which is, you know, not a morally horrifying thing. It's just advertising in the wrong place, and people find it universally irritating. Pretty much everybody's on board with CSAM is bad. There are some boundary questions around what terrorism means, but broadly speaking, folks are like, yeah, ISIS was not cool, right? There are areas of strong consensus. Um, and then there are areas of absolutely no consensus. And this feels like an area of low consensus smashing into a technological development that itself doesn't have any consensus about how we want it moderated yet, under time pressure, under corporate constraints, leading to some not awesome choices. So this is my, like, non-mustache-villain account of, how the hell did that happen? Because you look at it from the outside and your reaction is like, they did what? Right? Um, it's almost like comic book evil, and comic book evil is rarely real in my experience, particularly when you're dealing with big bureaucratic
Mike Masnick:Right. And as someone who's been in some of these rooms having these discussions, it's not people deciding, like, oh, we're going to allow the worst possible things to be there. It's, we're trying to figure out a way to create a standard that works. And so your assumption here is that the people making these rules were kind of looking at the nature of fan fiction and sort of saying, how do we,
Dave Willner:That's my sense, because, particularly if you look at Meta's products in this area, a lot of them are more in the sort of Character.AI, AI-as-role-play vein, right? They're not marketing or building products for productivity. Ironically, you're not going to use Vibes to vibe-code your next app that you're launching. It's in the entertainment space. And if you're thinking about entertainment interactions with chatbots, particularly if they're adopting fictional personas, you're in that role play, creative fiction, creative remix space, right? That's the closest thing to prior art to think about. It's where my mind would've gone. Now, I hope I wouldn't have missed the implications of this, but I don't think that's actually a nuts thing to start to think through if you're looking for prior attempts at grappling with some of these concepts.
Mike Masnick:Yeah. And there is this sort of interesting element of, if this is okay in fan fiction, why is it not okay in a chatbot? And then you can pretty quickly figure out why, where it's just like, well, the interactive nature of it and
Dave Willner:Is different.
Mike Masnick:is extremely different, and that it draws you in, and that it sort of builds on what you are suggesting to it. It creates a very, very different kind of product.
Dave Willner:Yeah. I think it creates a very different kind of product. I also don't think there's strong social consensus that it necessarily is okay. And like, fan fiction communities exist, and it is legal, but Archive of Our Own exists, in my understanding, because of controversies around the degree of sexuality in stories. Uh, and so there's unsettled debate within that community that's been solved by forking. And because that's a relatively niche interest, I don't know, if you dragged the fan fiction debates about this stuff out into the public and put them in Reuters, that people would be psyched about the lines that have
Mike Masnick:Yes. That's, that's a very, very good point. Yeah. Yeah. And
Dave Willner:Right. And so it's a community that doesn't have full consensus within itself, in a somewhat more socially marginal position. So there's low broader social consensus around what the line even is here. And I think the line in published books is somewhat different than the furthest lines in the fan fiction community. And then on top of it, you are adding in this sort of chaos agent, where instead of a pre-scripted thing written by a human, you are interacting with this non-deterministic chat system, and who knows where that goes or how exactly people relate to it psychologically
Mike Masnick:Yeah. Do you think, if your contention is that the time pressure was an element in sort of drafting these rules, do you think that having the FTC and the US Senate involved will lead to better rules?
Dave Willner:So, well, and to be clear, I don't know that time pressure here was the constraint so much as it was being cross-pressured, right? That you're under multiple competing demands that are hard to reconcile. It's more your question of, are perfect rules possible? And I'm not sure, on some of these things, that perfect reconcilability is possible, right? It just may be difficult to square the circle in some cases. And sometimes you pick the square and you should have picked the circle, and you don't, under those circumstances, know when that is going on. Generally speaking, having the government yelling at you does not make people calmer or better at thinking
Mike Masnick:Yeah,
Dave Willner:in my experience.
Mike Masnick:I'm sort of trying to project out. And, you know, you can sort of see the likely end result of the government cracking down on this kind of stuff is obviously much more stringent rules and much greater limits on sort of how people can use chatbots in certain ways. And I'm kind of trying to project out into a future, I don't know, three years from now, five years from now, where the very same people in the government, God, I hope they're not still in government, but assuming some of the same people are still in the government, then freak out over something more culture-war-y, where someone will try and do something with a chatbot and get denied, and, you know.
Dave Willner:Oh, that's definitely gonna
Mike Masnick:We're gonna have somebody stand up and be like, why is OpenAI so woke and not allowing this? And the whole
Dave Willner:we've already,
Mike Masnick:will trace back to this conversation now.
Dave Willner:I think that is probably true. I mean, I think we've already had versions of that, right? Um, the Google, uh, racially diverse Nazis
Mike Masnick:Yes.
Dave Willner:blow-up from, like, last year was a good example of that. And that one, to me, very much did read as mediocre solutions put in place under intense time pressure to launch, leading to weird, stupid outcomes. Not as a sort of intentional stance that anybody wanted to be in, because it was obviously not sustainable. Being in the racially-diverse-Nazis business was not gonna hold up, and it's also not clear why it would even be desirable. But yeah, I do think we are inevitably going to face that, because, back to your core point, you're making a lot of decisions. You are drawing some lines that are very difficult to split apart. Right. And it's actually interesting, on the Meta thing, both in their prior guidance and in the current guidance, they talk a lot about Romeo and Juliet, which is a play about two teenagers who fall in love, get sexually involved, and then kill themselves,
Mike Masnick:yes.
Dave Willner:but it's by Shakespeare, so it's classy. So it's fine, 'cause it's old. Right. And we have complicated and inconsistent feelings about those things, particularly where they intersect with art.
Mike Masnick:Yeah.
Dave Willner:So much of that is, it's not just about the what. It's about the how of the content, and about the who, and about the social positioning of how the content is received, right? Don't take me as arguing that Vibes is Shakespeare, it is not. But also, Shakespeare wasn't Shakespeare when Shakespeare was written. Shakespeare was, like, for the rabble to go see and yell at the stage while it was being performed. And we don't really have good language or good intellectual tools for splitting apart those sort of matters of taste. And so trying to turn them into an operations manual, into what is essentially a technical manual for how to run a big factory to make decisions, is like one of the most insane parts of the insane project of content moderation. Like, that part is just super duper impossible. Actually, on this point: when we were trying to solve a version of this problem, a very simple version of this problem, at Facebook around images, we had said, okay, if it looks like naked people, it's naked people, take it down. But, like, Michelangelo's David looks like naked people. And so we needed to figure out how that was fine, 'cause that's in, like, everybody's photos of every trip that's ever happened
Mike Masnick:Right,
Dave Willner:And so taking that down is like both not useful and obviously absurd and not sustainable.
Mike Masnick:right.
Dave Willner:And the answer we came up with, I dunno if I've told you this story before, the answer we came up with was: rocks are not people, so they can't be naked. So the definition of art was, for a very long time, things that are made of, like, paint or sticks or metal or wood, whatever. If it's not made of people, but it looks like people, that's allowed to be naked, 'cause it's not naked, 'cause it's not people. And that was the distinction that we were able to teach people, imperfectly, but that worked. So the definition of art was just naked people made of wood.
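As an aside, the rule Dave recounts reduces to a surprisingly small decision procedure. A toy encoding, purely illustrative, since the real guidelines were written prose:

```python
# Toy version of the early Facebook rule: nudity is allowed only when the
# depiction is not "made of people" (paint, stone, wood...). Illustrative
# only; the actual guidelines were prose, not code.
ART_MATERIALS = {"paint", "sticks", "metal", "wood", "stone"}

def allowed(looks_like_naked_people: bool, made_of: str) -> bool:
    if not looks_like_naked_people:
        return True  # nothing to moderate
    # "Rocks are not people, so they can't be naked."
    return made_of in ART_MATERIALS

print(allowed(True, "stone"))       # Michelangelo's David -> True
print(allowed(True, "photograph"))  # art photography -> False, as Dave notes
```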
Mike Masnick:Yeah.
Dave Willner:Well, it meant, among other things, that there was no art photography.
Mike Masnick:Right,
Dave Willner:So no photography was ever art
Mike Masnick:right. Which seems bad.
Dave Willner:I, I mean, no, it seems like a trade
Mike Masnick:Yeah. Okay. Fair. Fair enough.
Dave Willner:Right? Because otherwise we're in the business of trying to teach 10,000 people to have the same views about what art means. Like, what? That's absurd. That's the limit where I'm like, this will never be possible to do well. That is conceptually impossible. That is not a thing that can be
Mike Masnick:Right, but then you end up with situations like Facebook famously taking down the Napalm Girl picture. Yeah.
Dave Willner:Yep. Because if you don't know that that's a Pulitzer Prize-winning photo, and you just describe what you're seeing, you definitely want Facebook to moderate that photo, unless it's that photo.
Mike Masnick:right,
Dave Willner:Right. But if you took a description of that photo and put it into an AI chatbot and were like, make this, we would want it to not do that.
Mike Masnick:Yeah. Which gives you a sense of the, the impossibility
Dave Willner:Yes. One might say, one might
Mike Masnick:one might
Dave Willner:yeah, no, I'm, I'm not working on world peace here. I'm just trying to make us better at, running the factory.
Mike Masnick:Yeah. All right. Well, that was a very interesting discussion, but let's move on to our slightly more rapid stories. Um, we have another story, which I think is a nice lead-in actually from that one, and in fact maybe just flows right into it, involving OpenAI's new Sora product. Did you wanna give the quick description on that?
Dave Willner:Yeah, absolutely. So OpenAI has released a version of Sora. So Sora is their video generation model that's been around in preview for a while, and they've now released a version that has social media features built into the app itself. So it's a move in a direction that feels a little different from where a lot of their software is, which, yes, you know, people use ChatGPT for entertainment, but a lot of it's sort of productivity-coded, even if that's not the only thing people are using it for. This feels like much more of a move in the Vibes direction, of sort of generating AI video essentially purely for entertainment purposes. And there are some interesting features built into it, where you can train the model to have a sense of what you look like, and then actually control whether other people can generate videos of you, or just you can generate videos of you. And they did some interesting stuff where the videos, if they're shared inside the app, are not watermarked, but if you export them, they then attempt to put a watermark on them so people know that they're synthetic. But if you're sort of inside the playground, it's contextually obvious that all of this is fake, because it's a service for making AI videos, right? So you're in Sora, you know that this isn't real. Which was very interesting, as a sort of new technology making new presentations possible kind of thing. So I was interested in your thoughts.
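A rough sketch of the export rule Dave describes, watermark only when a clip leaves the context that signals it's synthetic. This is an illustration of the design choice, not OpenAI's implementation:

```python
# Context-dependent watermarking: inside the app everything is known to be
# AI-generated, so clips ship as-is; on export, the file gets marked.
# Illustrative only; not OpenAI's actual code or watermark format.
from dataclasses import dataclass

@dataclass
class Clip:
    frames: bytes
    synthetic: bool = True  # everything made in the app is AI-generated

def share_in_app(clip: Clip) -> bytes:
    # In-app context already tells viewers this is synthetic.
    return clip.frames

def export(clip: Clip) -> bytes:
    # Outside the app that context is lost, so mark the file.
    return b"WATERMARK:" + clip.frames if clip.synthetic else clip.frames

clip = Clip(frames=b"...video data...")
print(share_in_app(clip) == clip.frames)  # True: untouched in-app
print(export(clip)[:10])                  # b'WATERMARK:'
```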
Mike Masnick:Yeah. I mean, it struck me as interesting, especially coming, like, the week right after Vibes launched, this sort of merging of the AI and social. Where, like, Vibes I don't understand at all, as this sort of social app that adds AI and is only for AI, but then for some reason Sora sort of makes more sense to me. And even though it's kind of a very similar product, where it's an AI tool for making AI things, because you're sort of sharing it within that environment, it makes more sense to me. And I don't quite fully understand why. But I did also think that, like, that,
Dave Willner:is it sort of a starting place thing, where, like, OpenAI makes AI,
Mike Masnick:Right. And so you
Dave Willner:and so it's full of AI
Mike Masnick:and, and so it's
Dave Willner:yeah, that tracks
Mike Masnick:with, like, added social. As opposed to social with added AI, which just feels wrong, AI with added social feels different, even if they're ending up in the same sort of place. I find that sort of fascinating, and I can't quite wrap my head around why. But I did think the structures, as you talked about, were interesting too, with, like, the watermarks. Obviously there are ways around this, like, anyone can figure out a way to video capture their screen and get a video out of this without the watermarks. But just the fact that, oh, if you're in the app, the expectation is you understand that this is all AI-generated video, not true things, and therefore you don't have to watermark it, but if you download it, you get a watermark, which will hopefully alert people outside of this app to the context of where it comes from. I thought that was sort of an interesting intervention, if not a perfect one. But the thing that I thought was even more interesting was, a lot of the news around it was that, you know, as you mentioned, it allows you to sort of create a model of yourself, and then you can open that up for other people to use. And Sam Altman did that. And so a lot of the original videos that were getting passed around, including by OpenAI employees, were fake videos of Sam Altman doing stuff. And the one that was most famous was this video created of him stealing GPUs from Target to power Sora. And that sort of went viral, where you have
Dave Willner:What, what could go
Mike Masnick:Yeah, right. So you have this immediate, what could go wrong thing. But, like, he released that, you know, the model of himself, for people to be able to make that. And it's sort of like this, I don't know, eating your own dog food kind of thing, or sort of beyond that, right? I mean, it's like, you know, allowing other people to force-feed you the dog food.
Dave Willner:We're making dog foie gras now.
Mike Masnick:Yeah, yeah,
Dave Willner:Yeah, no, it is a very interesting moment, where I similarly share your sort of lack of understanding of the interest of AI video in most contexts. Right? Like, I get TikTok, 'cause it's like other people sharing with you a window into their lives, and that's kind of inherently interesting, and it's just lazier and faster than reading their tweets. Like, okay, sure. Uh, and you know, it's better for singing, 'cause you can sing
Mike Masnick:The singing and
Dave Willner:you know, there's like, sure.
Mike Masnick:an appeal there. Yeah.
Dave Willner:But the AI-generated video stuff is interesting to me, because I would think the most interesting stuff is things you can't make. But it does feel like, and it's interesting, it sort of ties into some of what we were talking about earlier, it does feel like another version of this, like, playing with action figures
Mike Masnick:Yes.
Dave Willner:thing, right? Except that this time it's video action figures, but they're also real people who opted into being action figures. It's, like, very, I don't quite get it. Like, I get why you would wanna, I don't know, be like, here's a video of Bigfoot or something that you wouldn't otherwise have. But it's interesting how that sort of drive towards fictional representation as entertainment intersects with video and intersects with reality. Um, which is very, very
Mike Masnick:I did think, you know, so there is, as I understand it, and I haven't played around with this yet myself, so I don't know exactly the details, but the whole thing where you can make your, I don't even know what they call it, avatar or entity, available for others to create with, there is a safety feature built into that, which is that you can then see what other people do with yourself and can pull things down. So, in theory, Sam Altman could look at that video of him stealing GPUs from Target and kill it, within the
Dave Willner:feels very difficult to scale that, like it's a cool idea, but it feels like it works least well for Sam Altman.
Mike Masnick:Yes. I mean, but, with the recognition that he's probably going into this very willingly, I imagine that most sort of well-known people will not openly share their own models of themselves, or if they do, it'll be under very different conditions. I'm sort of thinking of, like, this is, you know, different, but there was the whole thing that Grimes, the, uh, ex of Elon Musk and singer, did, like, a couple years ago now, where she created an AI model just of her voice and let anyone use it in music, with a contractual condition, which is that if you make any money from this song, you have to cut 50% back to me. And so you can create a Grimes song, she will sing with you, you can put it on Spotify, and you can make money on it. And if it is successful, you have to give 50% back to her. And that's kind of interesting to me. Like, I find that sort of a fascinating approach to things, and you could see maybe people doing something similar with this sort of thing, where there are conditions on the usage. Like, you can use the model of me, but either I need to approve it, or there needs to be some sort of licensing or money that changes hands. And I could see that being interesting.
Dave Willner:I would assume that will end up happening, at least within sort of commercial domains. The interesting question is gonna be how much that becomes a normal thing to let just, like, anybody do, right? Like, I think you're gonna end up with these sort of AI clones of actors and things, particularly once you start to think about their estates,
Mike Masnick:Yes. Oh,
Dave Willner:Because like,
Mike Masnick:yeah. There's gonna be
Dave Willner:Morgan Freeman's gonna be able to sound amazing for the rest of time. And there's incentives for those kinds of things. And it is actually, this is really far afield, but it's interesting to think about what that does to culture,
Mike Masnick:Yeah.
Dave Willner:right? Because there's less turnover,
Mike Masnick:Yeah.
Dave Willner:right? Right now, all of the artistic talent has a finite expiration date. There is a body of work that is made and is ended,
Mike Masnick:But if we have the immortal Morgan Freeman, then, then,
Dave Willner:right? Like who, who will ever narrate documentaries ever again?
Mike Masnick:Right, right.
Dave Willner:It's just Morgan Freeman forever. And that's, that's probably not good. I don't know that I love that.
Mike Masnick:Yeah. No, that's fair. All right, well, let's move on to our final story. And this one, you know, the last few episodes of Ctrl-Alt-Speech, we talked about Nepal and sort of trying to take down certain social media networks because of protests, and how that was a failure, but a very interesting story. The story I saw this week takes place in Afghanistan, where they have just basically said, fuck it, we're turning off the internet.
Dave Willner:that is one way to do perfect content
Mike Masnick:Yes, there's no more content. There is no more internet, too fucking bad. Uh, and so the Taliban, who, you know, rule the country, basically more or less said, well, actually, this is kind of crazy, because what they did was someone went on X, which is a part of the internet, to talk about how they were turning off the internet, because the internet is evil.
Dave Willner:A notoriously non-evil part of the
Mike Masnick:Yeah, which is just kind of weird in all sorts of ways. And we're starting to see, there's a BBC article about this, where it turns out the internet is really, really ingrained in lots of things that people do around the globe, including in Afghanistan. So when they started to cut the fiber optic lines, bad stuff started happening: flights are not going in and out of Afghanistan anymore, people who were taking classes online are suddenly like, whoops, that's gone, and other kinds of services just sort of disappeared. And
Dave Willner:I was seeing, uh, banking, that, like, all the banks are down.
Mike Masnick:Yeah, because it turns out banks actually rely on the internet a fair bit. And there wasn't, like, a precipitating event, other than the Taliban basically just saying, we don't like this cultural thing, so we're
Dave Willner:Yeah. It's a particularly crazy moment. That last part in particular, I think, is different, right? We've seen, to your point earlier, we've seen countries wall
Mike Masnick:Right,
Dave Willner:their own sort of sections of the internet, which, you know, I am not a fan of, but which is a solution to the banks' need to work, right? Because it allows the sort of intranetting to happen. And we've seen sort of emergency full-shutdown kind of things, but they're largely in response to, like, something happened or is happening. Like, it is going down, and so somebody pulls the plug out, and you're like, okay, I don't like this as a strategy, but I get the, like, I'm gonna go unplug it. I got it. Right? I get how
Mike Masnick:it is almost always like people rising up against the leader leads to an
Dave Willner:And, like, at that point you're getting your head chopped off anyway, so you might as well throw some stuff at the wall and see what happens. But this is, it is kind of wild to see, to your point, not just them unplugging, but, like, going basically cold turkey. Right? Like, to just fully, without a lot of preparation seemingly, or real planning for alternative ways of providing services, some of which presumably they think are good and cool. It's sort of wild. I mean, I can't really think of a precedent for it that doesn't have a specific inciting event
Mike Masnick:like, literally, I mean, the BBC article says it is unclear what the reason for the shutdown is, and the only thing they seem to have is that post on X of someone saying, we have to cut the fiber optic cables because they're evil. And this also does follow on them removing all books written by women from the university teaching system, uh, which implies a sort of cultural, like, we don't like modern culture and we're going to block all of it. Which is probably not great for a variety of
Dave Willner:No, obviously, yeah. Obviously I was a little flip upfront about content moderation, but yeah, no, obviously the implications locally are gonna be somewhere between bad and horrifying. But it is interesting to see what it will lead to, right? Because you can end up with backlash
Mike Masnick:Yes.
Dave Willner:Or you end up with adaptation, right? Because, like, banks are gonna need to exist on some level, right? I don't think you end up in a world where there's, like, no ability to change money. And so there's this broader question of, okay, are they going into this without planning and then gonna figure it out? Or how intentional, frankly, were some of those things? Because some of those second-order consequences seemed to me to probably not have been desired.
Mike Masnick:yeah.
Dave Willner:Right?
Mike Masnick:you would think, yeah, I think this story is really early and I think is one that we'll sort of have to see where it goes, but
Dave Willner:I, I look forward to you and Ben,
Mike Masnick:yes.
Dave Willner:actually explaining what happened
Mike Masnick:Hashing this out on a,
Dave Willner:on a future, on a future episode.
Mike Masnick:But I think with that, we're gonna wrap this one up. Uh, Dave, this was a really fun conversation, and, uh, I'm glad I was able to drag you in from the event last week. So thanks for taking the time, and I hope everybody enjoyed listening as well. And we'll be back next week.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.