Ctrl-Alt-Speech

Minisode: The Robots Take Over Ctrl-Alt-Speech

Mike Masnick & NotebookLM Season 1 Episode 34

Ben and Mike are technically off this week, but we decided to run an experiment. After discussing Google's NotebookLM and its ability to create AI-generated podcasts about any content, Mike tested how it would handle one of the stories that he and Ben discussed last week: Daphne Keller's The Rise of the Compliant Speech Platform on Lawfare. Mike explains why we're running this and some of the work that went into it, shares his thoughts on the experiment, and then plays the AI-generated version.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick:

Hello and welcome to a special minisode of Ctrl-Alt-Speech. I know that Ben and I said last week that we would be off this week, and I know that the last time we said we were going to be off, in July, we also did a special minisode. That one was because of some breaking news from the Supreme Court. That is not the case this time. However, we thought that we would do something a little bit different since we are off this week. In last month's episode, "Is This the Real Life or Is This Just Fakery?", Ben and our wonderful guest host Cathryn Weems talked about Google's NotebookLM feature that allowed you to give it content and it would create a podcast for you. And Cathryn worried about how that might replace Ben and I with AI-generated voices. And since we're off this week, we thought that we would maybe play around with NotebookLM and see if it actually could replace us. And we figured we would bring you, our listeners, along for the ride. So after a little intro here, where I will explain what is going on, I will play the whole sample of what NotebookLM did for us. To make it somewhat comparable to Ben and I, I decided to use one of the stories that we discussed in last week's episode, and that is Daphne Keller's wonderful article on the rise of the compliant speech platform. Ben and I had a lot to talk about there, and so it was a really fun discussion. I think we went back and forth and went a little deep on it, so I thought it would be interesting to see what the AI could do. You can listen to our original discussion in last week's episode if you haven't heard it, or if you want to hear it again, and then you can listen to what NotebookLM came up with. So will we be replacing Ben and I with AI? The short answer is no, I don't think so. The longer answer is, well, we'll hear what you, our listeners, have to say. I will note that the AI is really impressive. It is a kind of party trick. It's actually pretty cool what it can do, and it certainly is much more advanced than lots of other things out there. The fact that it can create a realistic-sounding podcast is actually pretty neat, but to actually get it to the level where it would be good enough to ever replace Ben and I, I think it has a long way to go. I will note that in getting it to produce the output you will hear in a moment, I had to do a lot of shaping and customizing to get it to sound remotely plausible. This is actually a new feature that NotebookLM just added, about a week ago. Originally, you just had to give it the sources and it would create the podcast however it sounded. I had played around with it at that time and it got stuff wrong. It didn't focus on the right things. It was not great. Now they have added the ability to prompt it. You have a limited amount of space to prompt it, and I have played around with it in all sorts of ways to try to get something that would be better. And, well, I'll let you decide whether or not we succeeded. Among the things that I had to do in my prompting was to tell it to not use cliches, because in an earlier recording, we noticed that it had something like, I don't know, five or six cliches per minute. It just sort of completely fell back on cliches every chance it had. I told it explicitly: no cliches, avoid cliches. I probably should have said avoid cliches like the plague. But it didn't really work. You'll still hear some cliches, though I think it's a little bit less cliched than it would have been if I hadn't told it that.
I also had to tell it to make sure to name the piece that it was talking about, to mention that it was by Daphne and that it was published in Lawfare, because if I didn't tell it to do that, it sort of ignored those things. I had to suggest some of the key points that it needed to discuss, and even gave it a little direction in terms of which things to maybe highlight. And I also had to encourage it to not be superficial, but to actually try and have a deeper conversation about what this actually means and what the implications of Daphne's article were. I think it still struggled there, but it at least tried to do a little bit more. One other thing that I noticed in trying to set it up: we explored doing other articles beyond just Daphne's article, and one thing I noticed was that NotebookLM would not read certain articles. I am pretty sure this is because many of the major news sites now block scraping, or block AI bots from accessing them. I have some thoughts on this, which I won't go into here. I did find it a little bit strange that, even though this was not for scraping and training purposes, and I was directly pointing the system at the article to do something useful for me, it still refused. I did, after playing around with it a little bit, figure out some ways to sort of trick it into reading some of those articles, but I didn't want to do that here. Lawfare does not appear to block it, and so I was able to have Daphne's article be turned into a podcast this way. Also, I'll note that with all the time I spent trying to customize it and to tell it how to do things, it probably would have just been faster to get Ben and I onto the microphones and have us record our own discussion, as we already had in this case. So is this a time saver? Not necessarily, though obviously in some cases it could potentially be. And even then, with all that shaping and all of the effort I put into customizing it and making it sound semi-reasonable, it's still just kind of okay. I mean, it sounds cool. It sounds like two real people talking to each other and having a discussion about Daphne's article. But I don't think it's that insightful. I think some of this is just the nature of AI. You know, what AI is doing is taking all sorts of content and sort of trying to pull out what is normal and natural, kind of the median of what's expected. And so what you get is kind of a median result. It's very average, it's very general, and it's difficult to get real insights out of it, because that's not what it's trained to do. It's trained to do more of what you expect, rather than what will give you a sort of big aha moment. And I think that comes through in the discussion that you will hear in a moment. On top of that, I'm just not sure it's that interesting when it's done that way. I think it's interesting as an experiment, and for thinking about how this technology works and what can be done with it. But I would like to believe that Ben and I are a little bit more insightful when we talk about things like Daphne's piece. As I mentioned, the NotebookLM version still has cliches. It summarizes the points okay, but it doesn't really get into anything new; you don't get anything more out of it. Also, I noticed, and I didn't notice this in earlier tests, that the more I tried to customize the output, the more NotebookLM started to repeat itself a little bit.
The actual recording is about 15 minutes long, and you'll hear that it starts to repeat itself a couple of times. I'm wondering if that was because of the way I was trying to customize it and instruct it on how to work, but I don't know for sure. I was just a little surprised that it kind of started to repeat itself at points. But still, this is an experiment. It's kind of fun. Don't expect anything super insightful other than just learning what NotebookLM can do, again, with a lot of instruction and suggestions for how to properly do this. We will play it here right after I'm done speaking. It's definitely a neat kind of toy, and we'd certainly love to hear from all of our listeners about what they think of it, because maybe I'm just biased and worried about being put out of a job by a generative AI. But for your listening pleasure, here is Google's NotebookLM and its attempt to be Ben and I, though without the British accent. Ben really wanted it to have a British accent to match us, but as of right now, Google only offers the two voices, the same two voices, on every podcast it generates. I imagine that very soon they will introduce others, perhaps eventually including a British voice. But for now, we are stuck with two very American voices. You are about to hear them discussing Daphne Keller's excellent piece. Feel free to let us know what you think.

AI Voice 1:

Okay. So today we're diving deep into this whole world of online speech, like how it's being governed and all. Specifically, we're looking at this article from Lawfare. It's called The Rise of the Compliant Speech Platform by Daphne Keller.

AI Voice 2:

Right.

AI Voice 1:

Have you had a chance to read through it?

AI Voice 2:

Yeah, I have. What I find really interesting is this shift that Keller's talking about, you know, content moderation. Yeah. In the past, it used to be a lot more, I don't know, almost like an art form. Mm hmm. But now, it feels like it's turning into this compliance function. Right. Like it's becoming more like financial regulations at banks.

AI Voice 1:

Uh huh.

AI Voice 2:

Or safety inspections in factories, you know.

AI Voice 1:

It's a really interesting comparison, isn't it? Mm hmm. And it makes you think. Like Keller even tells that story about handling a request from a fraternity.

AI Voice 2:

Right.

AI Voice 1:

They were all worried about someone revealing their secret handshake online.

AI Voice 2:

It's funny.

AI Voice 1:

Yeah. And like, right next to that, she's dealing with a demand from a Turkish politician trying to suppress some bad news coverage about him.

AI Voice 2:

Totally different ends of the spectrum. Right.

AI Voice 1:

Totally. It's crazy how things have changed, right?

AI Voice 2:

It really highlights how, I don't know, kind of ad hoc things were back in the early days of like, you know, content moderation. Yeah, yeah. But then obviously, as the amount of content online just exploded, platforms like Facebook and YouTube, they had to change things, you know, they needed some kind of like standardized rules, you know, clear cut enough that like, you know, massive teams, even algorithms could apply them consistently.

AI Voice 1:

Makes sense.

AI Voice 2:

Right.

AI Voice 1:

Yeah. So this is where the whole idea of compliance really comes into play. Yeah, especially with legislation like the EU's Digital Services Act, the DSA. Right. That feels like a big turning point, wouldn't you say?

AI Voice 2:

Oh, absolutely. I think the DSA is like a major, major force behind this whole shift. Now, platforms, like, they have to keep such detailed records, like meticulously document every content moderation decision they make. Wow. And report all that to the EU regulators. Like literally every comment they remove, every video that gets taken down, they have to record it all.

AI Voice 1:

It's crazy.

AI Voice 2:

We're talking about like a database with over nine billion reports.

AI Voice 1:

Nine billion.

AI Voice 2:

Detailing exactly why specific pieces of content were removed.

AI Voice 1:

Wow. I can't even wrap my head around that.

AI Voice 2:

It's a lot.

AI Voice 1:

Nine billion. That's just, yeah. It feels like this whole push for compliance, I don't know, it's creating this massive audit culture. It almost feels Orwellian in a way, like everything you say online is being, you know, tracked and scrutinized. Is this a little unsettling to you?

AI Voice 2:

Yeah. Keller raises some interesting concerns about this, right? So for example, the DSA, it pushes platforms to mitigate what it calls like systemic risks. And these are kind of big broad values. We're talking like free speech, civic discourse, human dignity, protection of children.

AI Voice 1:

Those are pretty heavy concepts. I mean, how do you even like quantify something like human dignity? Right. Or even like, assess whether a platform is actually upholding that properly.

AI Voice 2:

I mean, it's tough, right?

AI Voice 1:

It feels like we're kind of trying to fit a square peg in a round hole here.

AI Voice 2:

Yeah.

AI Voice 1:

Are we at risk of oversimplifying some incredibly complex issues just to, you know, just to tick boxes on a compliance checklist?

AI Voice 2:

Yeah, and that's exactly one of her, I think, main concerns, right, that this drive towards standardization and compliance, that it might actually be fundamentally at odds with the nature of online speech. Right. I mean, because online speech, it's so dynamic, it's always changing. What's considered harmful in one context could be totally fine in another context. And those lines, they shift constantly.

AI Voice 1:

Yeah. And that's what worries me. You know, are we creating a system where platforms, in their eagerness to comply and like avoid any regulatory headaches, they're just going to err on the side of caution and suppress anything that falls into any kind of gray area.

AI Voice 2:

That risk of overreach is a big one. It's one of the, I think, key concerns that Keller highlights, right? Because the line between regulating systems and dictating what's acceptable speech, it can get really blurry. Right. And you can imagine, like, governments may be exploiting that. Yeah. Right? Yeah. Platforms are under pressure. Uh huh. They might end up censoring speech that's perfectly legitimate, but just because of these, you know, compliance frameworks.

AI Voice 1:

It's a slippery slope.

AI Voice 2:

It is. It is.

AI Voice 1:

You know, it's like that quote, uh, "those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety."

AI Voice 2:

It feels really relevant here, doesn't it?

AI Voice 1:

It does. Yeah. It does. Okay. So we've got this push for more control and more accountability online, right? Yeah. But at the same time, you know, we have this inherent tension with the nature of online speech itself.

AI Voice 2:

Yeah.

AI Voice 1:

And then there's the potential for this, you know, compliance focus to create this, you know, chilling effect where people are, you know, self-censoring out of fear of running afoul of some vague rule. Uh huh. What are some of the other, you know, downsides that Keller brings up?

AI Voice 2:

Well, one of the big ones is the impact on competition. You think about smaller platforms, right? Oh, yeah. Startups, maybe they've got really innovative ideas. Yeah. But then they've got to navigate this, like, really complex and costly world of DSA compliance.

AI Voice 1:

Oh, right.

AI Voice 2:

Like, can they even compete with these established giants who have the resources to deal with all that?

AI Voice 1:

That's a good question.

AI Voice 2:

It is. You know, it's a huge question.

AI Voice 1:

Yeah. You know, like DSA compliance with all of its audits and reporting requirements. Right. It can be insanely expensive. I bet. So you've got these smaller platforms and they might just not be able to keep up. Right. Which then could lead to like a less diverse, you know, online ecosystem.

AI Voice 2:

Right. So instead of this like vibrant marketplace of ideas, you know, we end up with just a handful of massive platforms that are calling all the shots. That's kind of bleak, isn't it?

AI Voice 1:

Yeah. It's not a great outlook.

AI Voice 2:

It feels like we're risking a lot of collateral damage here in the pursuit of this, you know, like, standardized and compliant online world. Yeah, and that brings us to another, you know, key point that Keller raises. Yeah. She calls it The Perils of Metrics.

AI Voice 1:

The Perils of Metrics. That sounds like a, like a title of a dystopian sci fi novel or something.

AI Voice 2:

Maybe it should be. But basically what she's talking about is, like, you know, this risk of focusing too much on things that are easily quantifiable. Right. Yeah. Metrics that look really good on paper but might not actually reflect the true nuances of, you know, content moderation. Right. So, for example, turnaround time for resolving content complaints. Sure.

AI Voice 1:

Yeah, you'd want platforms to deal with issues quickly.

AI Voice 2:

Exactly. But if moderators are being evaluated solely on speed, then what happens, right? They might be incentivized to just make these, like, hasty decisions, you know, potentially sacrificing accuracy and fairness in the process. Like, imagine if you have legitimate speech being taken down just because it was faster and easier, you know, just to hit that target number.

AI Voice 1:

It's like you're prioritizing efficiency over effectiveness, which could have really bad consequences for, you know, free expression.

AI Voice 2:

Absolutely. And it kind of leads to this broader point that Keller's making about the limitations of trying to standardize and control something as complex as online speech. She actually draws these really interesting parallels to a historical example.

AI Voice 1:

I'm always up for a good history lesson.

AI Voice 2:

All right. Well, she mentions Frederick Taylor's efficiency-focused factory model. So this was all about, you know, breaking down tasks into these simple, repeatable actions.

AI Voice 1:

Right, yeah, like assembly lines.

AI Voice 2:

Exactly. And it was revolutionary for manufacturing. But you have critics, like Karl Marx, who saw it as dehumanizing.

AI Voice 1:

Yeah. Treating people like cogs in a machine, it doesn't really encourage, you know, creativity or critical thinking.

AI Voice 2:

Exactly. And she also brings in Max Weber.

AI Voice 1:

Okay.

AI Voice 2:

Who was this sociologist who was really skeptical that complex human conflicts could ever be resolved by some kind of algorithmic, automated judge, you know, just spitting out standardized rulings. Right. It's like trying to reduce human interaction to a set of predefined rules and metrics.

AI Voice 1:

Yeah. And you're missing out on all the messiness and unpredictability of, you know, actual human communication.

AI Voice 2:

Exactly. And to kind of illustrate this point, she uses this really fascinating example from James C. Scott's work on scientific forestry.

AI Voice 1:

Scientific forestry. Okay. That doesn't exactly sound like the most thrilling topic.

AI Voice 2:

I know. I know. Yeah. But trust me on this one. So Scott, he studied how, like, 18th century German foresters tried to apply standardized scientific principles to managing forests. They were really focused on, like, maximizing timber yield.

AI Voice 1:

Right. Make as much money as possible.

AI Voice 2:

Exactly. So they planted these, like, neat rows of identical trees. And they got rid of anything that they deemed inefficient.

AI Voice 1:

It sounds very compliance driven, just like what we're seeing online now. So what happened? Did they create these, like, super efficient, super productive forests?

AI Voice 2:

It was a total disaster. Yeah, these meticulously planned forests, they became so vulnerable to disease and pests, even windstorms, you know, could just wipe them out. Because they didn't have the natural diversity and resilience of a real, you know, naturally occurring forest.

AI Voice 1:

That's a really powerful analogy. So are we risking that same kind of ecological collapse, but like, in the realm of online speech?

AI Voice 2:

It's a really interesting question.

AI Voice 1:

Yeah, are we creating these sterile, overly controlled platforms that are ultimately fragile and unsustainable?

AI Voice 2:

I think it's a question that, you know, we as users and participants in this digital world really need to grapple with.

AI Voice 1:

So where does all this leave us? Is this shift toward a compliance based model for online speech really the way to go? Right. Or are we sacrificing too much in the process?

AI Voice 2:

It's tough. There are no easy answers. Yeah. But Keller really urges us to be mindful of the potential downsides of this approach. She outlines three major pitfalls. The first is the risk of government overreach.

AI Voice 1:

Right. Yeah. It's not hard to imagine governments using these regulations to, you know, kind of pressure platforms into censoring content that they don't like.

AI Voice 2:

Exactly. So platforms are trying to avoid regulatory scrutiny. And they might err on the side of caution, you know, suppressing speech that's totally legitimate, but just because it falls into like a gray area,

AI Voice 1:

right? And it creates that chilling effect where people self censor, right? Because they're afraid of, you know, breaking some vague rule or algorithm.

AI Voice 2:

Absolutely. And then there's her second concern, which is the stifling of competition. Right. We talked about this, you know, how DSA compliance is so expensive, smaller platforms might not be able to keep up. Right. And that could lead to a less diverse online ecosystem.

AI Voice 1:

Yeah. Fewer choices for users, potentially less innovation. Exactly. So we end up with just a few dominant players kind of dictating the boundaries of acceptable speech for everyone.

AI Voice 2:

Right.

AI Voice 1:

That's a little disturbing.

AI Voice 2:

It is disturbing. And it's, you know, it's why this whole thing about the perils of metrics is so important because when we just focus on these numbers, right, like stuff that's easily quantifiable.

AI Voice 1:

Yeah.

AI Voice 2:

We kind of miss the nuances of online speech.

AI Voice 1:

Yeah, absolutely.

AI Voice 2:

And we might end up creating these like unintended consequences.

AI Voice 1:

It's like that saying: you get what you measure. Right. Right. So if platforms are getting rewarded for just, you know, taking down content quickly,

AI Voice 2:

right?

AI Voice 1:

Regardless of what it is or the context or anything. Yeah.

AI Voice 2:

Yeah.

AI Voice 1:

That's exactly what they're going to do. And I mean, are we essentially creating a system where the easiest solution is actually the worst one for free speech?

AI Voice 2:

It's a real concern. You know, people might be hesitant to share their thoughts, their ideas, out of fear of being flagged by some algorithm or by some moderator under pressure. Right. You know, just trying to hit their targets. And that could really stifle genuine discussion and debate.

AI Voice 1:

Absolutely. So it's like, we're seeing this push for these standardized systems, these metrics. But at the same time, we have this tension, you know, with the dynamic nature of speech itself. Like, what's harmful or acceptable can be so different.

AI Voice 2:

Right.

AI Voice 1:

Depending on the context, the culture. Yep. All sorts of factors. I mean, how do you even begin to reconcile those two things?

AI Voice 2:

Yeah. That's what makes this such a complex issue, right? It's not like regulating a factory. Right. Where you can kind of predict the outputs. Yeah. Human speech is messy. It is. It's nuanced. It's always evolving. So trying to, like, force it into some rigid compliance framework, I think it can have these, like, really serious unintended consequences.

AI Voice 1:

It's like that forestry example, right?

AI Voice 2:

Exactly.

AI Voice 1:

Trying to impose some artificial order on a system that's inherently complex.

AI Voice 2:

Right.

AI Voice 1:

Could actually make it more fragile.

AI Voice 2:

Totally, yeah.

AI Voice 1:

Less adaptable in the long run.

AI Voice 2:

Yeah. Yeah. Like we could end up creating this online environment that looks great on paper but doesn't actually work for real people. Exactly. You know, it's just not functional.

AI Voice 1:

So Keller, she doesn't really offer any easy answers here. Right. But she definitely urges us to, like, be mindful of the potential downsides of this whole shift towards compliance-driven content moderation. I mean, for you, what are some of the key takeaways you think our listeners should kind of keep in mind as they're, you know, navigating this whole crazy world?

AI Voice 2:

Well, I think first and foremost, we really need to be aware of that risk of government overreach. Yeah. Like, transparency and accountability are super important. Uh huh. But we need to make sure that these regulations don't become a way for governments to just censor stuff they don't like. Yeah. You know? Through the back door. Right. Right. So government should be very, very careful about dictating specific content moderation decisions.

AI Voice 1:

Absolutely. And I think we also need to consider that impact on competition and innovation.

AI Voice 2:

Right.

AI Voice 1:

Like these regulations, they shouldn't give an unfair advantage to the big guys. You know, the ones that can already afford to deal with all this. We need a healthy online ecosystem, which means you need a variety of platforms with different approaches to content moderation. We don't want just a few companies deciding what everyone can say online.

AI Voice 2:

Absolutely. And I think, finally, that whole thing about the perils of metrics, you know, we have to be really, really careful about that, because it's tempting to want to measure everything. But when we're talking about something as complex as online speech, if we rely too much on these simple metrics, we could end up with some really messed up results. We've got to remember that behind all those numbers, there are real people, real stories.

AI Voice 1:

It's a lot to think about as we try to figure out this whole digital world. I think it's more important than ever to be aware of, you know, the ones shaping the future of how we communicate online. And it's also a good time to ask ourselves, like, what kind of online world do we actually want?

AI Voice 2:

Yeah.

AI Voice 1:

You know, one where everything's about following the rules perfectly. Right. Or one where there's still room for open discussion, different perspectives, and maybe even a little bit of, I don't know, chaos.

AI Voice 2:

Yeah. That's the question, isn't it?

AI Voice 1:

Yeah. That's the question. Well, it's definitely something worth thinking about and it's something we'll probably be talking about for a long time to come. So thanks for joining us for this deep dive into Daphne Keller's article.

AI Voice 2:

Yeah, it was a good one.

AI Voice 1:

It was. It's really thought provoking. We hope you keep exploring these ideas and, you know, share your thoughts with us because ultimately the future of online speech, it kind of depends on all of us being informed and engaged.

AI Voice 2:

Absolutely.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
