Ctrl-Alt-Speech

This Episode is Broadly Safe to Listen To

Mike Masnick & Ben Whitelaw Season 1 Episode 88
Ben Whitelaw:

So Mike, I was reading this FT piece this week about a 23-year-old, or thereabouts, buying a dumb phone, and his reflections after 12 months of using it. Right.

Mike Masnick:

Okay.

Ben Whitelaw:

It made me reflect on my own phone usage, and I checked, never a good thing. Uh, I blame this, you know, upstart 23-year-old for making me do that. But I looked at the stats, and I'm gonna go back to one sec, the app that purports to kind of cut your screen time in half. I've used it in the past. It's pretty good. You can set time barriers for use of certain apps, so it's pretty good for, like, stopping you using Instagram, if that's your thing, or whatever it is. It asks you to add an intervention when you go onto the app, and I thought it'd be a great way to kind of start today's episode of Ctrl-Alt-Speech. What would you add an intervention for?

Mike Masnick:

Oh, now that is a good question. I don't think I'm quite ready to add an intervention yet for this, but in the last few weeks I've gotten really into Claude Code and have gone super deep in discovering the somewhat scary but impressive power of what Claude Code can do for you, whether it's making apps or just doing stuff for you. And yeah, I guess it sort of has a built-in intervention, in that it limits how much you can do in four hours, and I will sometimes kind of start to max that out while continually reloading the page that tells me how close I am to running out of, uh, tokens to do stuff. I could see me getting to the point where I need an intervention to stop me from using Claude Code.
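[Editor's note: the cap Mike describes is a rolling usage window. Purely as an illustration, and not Anthropic's actual implementation, here is a minimal Python sketch of that kind of limiter; the four-hour window matches what Mike says, but the token budget is a made-up number.]

```python
import time
from collections import deque

class RollingWindowBudget:
    """Refuse new work once a rolling window's token budget is spent."""

    def __init__(self, budget_tokens: int, window_seconds: float):
        self.budget = budget_tokens
        self.window = window_seconds
        self.events = deque()  # (timestamp, tokens) pairs still inside the window

    def _expire(self, now: float) -> None:
        # Drop spend that has aged out of the rolling window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def remaining(self) -> int:
        self._expire(time.monotonic())
        return self.budget - sum(tokens for _, tokens in self.events)

    def try_spend(self, tokens: int) -> bool:
        # Record the spend if it fits; otherwise report that the cap is hit.
        if self.remaining() < tokens:
            return False
        self.events.append((time.monotonic(), tokens))
        return True

# A four-hour window with a hypothetical 500k-token budget.
limiter = RollingWindowBudget(budget_tokens=500_000, window_seconds=4 * 3600)
if limiter.try_spend(12_000):
    print(f"ok, {limiter.remaining():,} tokens left in this window")
else:
    print("cap reached; wait for earlier usage to age out")
```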

Ben Whitelaw:

Yeah.

Mike Masnick:

What about you? What intervention are you adding?

Ben Whitelaw:

Well, I'm not there yet with using Claude Code, but I am frantically refreshing Bluesky for January transfers for my football team. And it's got obsessive. I'm like this every transfer window, I'm desperate to find out who they've signed, how it's gonna help us stay up. So it's too much. I want an intervention for that.

Mike Masnick:

I will say it's the same season in baseball as well, and I've been doing that too, which is a nice thing, that Bluesky is becoming very much a sports app. So I will, uh, mention that as well.

Ben Whitelaw:

Agreed. Agreed. There is hope for our sports teams yet, maybe. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's January 22nd, 2026, and this week we are talking about how to make a constitution in 20,000 words, YouTube's safety flex, and why, maybe just maybe, we might start feeling sorry for Pavel Durov. Wait and see. Uh, my name is Ben Whitelaw, and I'm the founder and editor of Everything in Moderation. I'm joined by Mike Masnick, founder of Techdirt, and we are not in Davos, Mike. We are not invited. We are not there.

Mike Masnick:

We are not the elite, Ben, we are the sub-elite. I actually, I had posted this to Bluesky earlier this week. You know how, like, everybody was sending out, like, year-end notifications, this is everything you did last year, blah, blah, blah. I got an email last week from, you know, some hotel property, and I don't even know which one it was. I must have stayed there on some trip. And it said, like, your stats for 2025, and it says you've been a member for all of zero years. Uh, and in that time you have reached non-elite status. And I was like.

Ben Whitelaw:

How damning.

Mike Masnick:

Yes, I am officially a non-elite according to some random hotel chain that I probably stayed at once.

Ben Whitelaw:

Yeah. How do you feel about that? Is that

Mike Masnick:

I feel good, you know. This is an anti-elite era, and so, you know, if I were elite, then maybe I would be at Davos. But as a, confirmed via this hotel chain, non-elite, I am not invited and not allowed.

Ben Whitelaw:

Yeah, I mean, I'm still waiting for my invite. Maybe it'll come next year, I think. One thing I would say, we're gonna talk a bit about a couple of things that have been discussed and announced at Davos in today's episode. But one thing I think we would be good for at Davos is drumming up sponsorship interest in Ctrl-Alt-Speech. I bet there's a load of people at Davos who would be very up for sponsoring this fair podcast that we painstakingly handcraft each week. What do you think about that? Do you think we should be there selling our wares?

Mike Masnick:

Well, yes, maybe, because, certainly, if we are not elite, Ben, I do feel like we have a lot of elite listeners,

Ben Whitelaw:

Yeah.

Mike Masnick:

of whom, I would imagine, plenty are at Davos, listening to this podcast regularly. Hopefully, as soon as it comes out, first thing, it should go right to the top of your player. Yes, you know, as we've discussed before, we survive on the sponsorship revenue that allows us to do this. We put a lot of time into everything that we are doing, and, uh, we're making another sort of call for sponsorships right now. So if you are listening to this and you happen to work at an organization of any kind that is interested in seeing Ctrl-Alt-Speech continue to be the wonderful thing that it is, we would love to have you on as a sponsor. And again, we feel that our sponsorship setup is one that is unique but also really beneficial for everybody, including the listeners. We really feel that our form of sponsorship can be a win-win-win, where the most common form is that we get to do some sort of interview with you or someone that you designate. And it can be very interesting. It has to be an on-topic interview. It is not designed to be super salesy, but rather to let you demonstrate something interesting and powerful and thought-provoking, in a way that is interesting to us and interesting to our audience. And I think that is unique and fun. Uh, but we always need more sponsors. I know we have some, we've done some really great ones already, and we have some more coming up. But, uh, we're putting out our plea for sponsorship help, and we may have a unique sponsorship opportunity for people who want it right now. This is a very short-term offer, which is, if you are a regular listener to Ctrl-Alt-Speech, you know that we discussed, in our first episode of the year, our 2026 bingo card. And one of the things that I did with Claude Code, as I revealed my new addiction,

Ben Whitelaw:

Other, other coding tools are available. Claude is not one of the sponsors of this podcast, although it probably should be.

Mike Masnick:

Yeah, yeah, we can work on that. But I have built a bingo app, a Ctrl-Alt-Speech bingo app, which we will be putting up on the webpage at some point soon. And we are open to having somebody sponsor that. And so it would be a thing that all the regular listeners of Ctrl-Alt-Speech may open up every week, or we have a version that you can print out, including sponsorship information.

Ben Whitelaw:

Mm.

Mike Masnick:

If you are a sponsor, you could get your name or some idea in front of all of the listeners of Ctrl-Alt-Speech basically every week that we release a podcast, as they play along with our bingo game.

Ben Whitelaw:

Yeah, you'd be so good on QVC, Mike. That is like the perfect, like, this is a limited-time-only offer for a bingo card.

Mike Masnick:

The phones are ringing now.

Ben Whitelaw:

Get it while it's hot, people, don't delay. This, this won't happen until January 2027.

Mike Masnick:

Exactly. There is a short-term offer. We will get this on the site relatively soon, and it would be nice if, when we officially announce that it is available on the site, we already have a sponsor in place. So time is limited. Call now.

Ben Whitelaw:

Yeah. 0800-BINGO-CARD. That's, that's not a, that's a fake number. Um, I think the point about having really great listeners is also true. You know, I spoke to a bunch of people this week: somebody from a regulator, one of our favorite regulators, a great, fantastic regulator, as Donald might say. I spoke to somebody at a platform, you know, senior within the trust and safety team. I spoke to a lawyer for a firm who works with a bunch of other platforms. We have a really great listenership, and our sponsors will be perfectly situated within our episodes to reach those people. And again, you know, it's what keeps the episodes coming out every week. It's what keeps us able to do the research and be able to record this each and every week. So yeah, if you're interested, get in touch: podcast@ctrlaltspeech.com. Get it while you can. Just one thing before we jump into one of the Davos kind of news items today, Mike. Just on that FT piece that I mentioned at the top of today's episode, we'll include it in today's show notes, but did you know that there's a dumb phone called the Sunbeam Juniper?

Mike Masnick:

I had not heard of it. I have seen a few, like, attempts at creating dumb phones or flip phones or whatever, but I had not heard of that particular one.

Ben Whitelaw:

So it's a make I've never heard of. It basically stood out for me because it sounds like a vape flavor.

Mike Masnick:

They know their audience. They're targeting the, you know, Gen Z folks and

Ben Whitelaw:

Yeah. And it's a really interesting piece. It speaks to a lot of the under-16 social media bans, the kind of anti-smartphone campaigns that we've been chatting about in recent weeks. Worth separately going to have a read of that if you're interested. But yeah, let's get started on today's news. Obviously the world's kind of great and good, and some kind of less favorable people, are in Switzerland this week as part of the World Economic Forum's Davos convening. And one of the people who has been there and been talking is Anthropic CEO Dario Amodei, who's announced updates to one of its foundational safety elements, which kind of underpins a lot of how Claude prevents users encountering harm. And Mike, you thought this was a really kind of pivotal moment in the trust and safety landscape, and you wanted to talk a bit about why.

Mike Masnick:

Yeah. So Anthropic is a really interesting company for a variety of reasons. I think a lot of people sort of consider it the little brother to OpenAI in many ways. It was sort of a bunch of people who had worked at OpenAI who left and went to create Anthropic. And the sense, and some of this is always marketing speak, was like, oh, they're gonna build safe AI in a better way. They've talked for years about the idea of constitutional AI and that they are training Claude with a constitution, and they had released something a few years ago that was, uh, a couple thousand words, basically sort of laying out their framework for thinking about things. But what they did this week was they released a new constitution that apparently they've been working on for some time. And it is a massive document. I believe it's like 23,000 words or something like that, I forget the exact count. I saw the announcement, and I saw somebody else refer to it as Claude releasing its soul to the public. Uh, they put it under a CC0 license, effectively putting it in the public domain, basically saying anyone can go use this without permission, or without even crediting them if you want. But they put a lot of work into the document, and I started reading it and realized I hadn't gotten very far after a fair bit of time getting through it, and looked at the scroll bar on the side and said, oh, wow, wait, I'm maybe 5% into this document, and realized how long it is. As they say, they created this for Claude itself. It is written for an AI and not a human. Now, I've seen different reactions to this, including some people saying, oh, they're acting as if AI or the LLM is human, which is something that Anthropic sometimes does. They sort of blur the line between what an LLM is and what a human is. But what struck me as really interesting about this is that it is one of the most sort of deeply involved documents exploring values around trust and safety and helpfulness. This is the thing that people in the trust and safety field have talked about for years, right? Like, how do you not just define values, but put them into practice? How do you determine what you are trying to do? How do you deal with the different trade-offs? How do you set all of these things up? Operationalizing that has always been one of the big challenges of trust and safety, and this is an attempt to do that in the AI era, and I think it's sort of a moment worth marking, thinking about the fact that they have written out this novella-sized document of how they think about their values and how they define those values. And even the values themselves struck me as really, really interesting and thought-provoking. I don't know if they're good, but it's thought-provoking, because they have sort of these four key values that they talk about, and even the way they define them is fascinating, 'cause they say broadly safe, broadly ethical, then compliant with Anthropic's guidelines, and then finally genuinely helpful. And the compliant with Anthropic's guidelines one, like, that feels like sort of a weird catchall kind of thing.

Ben Whitelaw:

Mm

Mike Masnick:

but the other three were like, the fact that you say broadly safe, right? The typical thing that you would see for most companies is just safe, right? You define it, and then you have this thing, and you sort of set up this idea like, we are the safe platform and safety is our main virtue, or whatever it is. And then you sort of try and operationalize that. And what everyone in the trust and safety field realizes at some point is, like, there are trade-offs. Safe for whom? Safe for what? Safe in what context? Safe over what time period? There are all of these different variables that make it very difficult to figure out, what do you mean by safe? You can say safe, but how do you operationalize that in a realistic way, dealing with all of the trade-offs? But by starting out by just saying broadly safe, you're sort of opening up this conversation. And then of course, within the document, you have however many thousands of words on what do you mean by broadly safe, and what do you mean in a way that a non-human LLM can understand what you mean and make decisions based on the different trade-offs? To me, this is fascinating.

Ben Whitelaw:

Yeah. And so these 20,000, 23,000 words essentially shape all of Claude's outputs, right? So if a user comes to Claude and asks it to generate CSAM or to generate terrorist material, it would essentially look that up against the constitution and kind of provide a response based on that. That's how it works?

Mike Masnick:

Yeah, it is part of it. You know, there are a number of different things, right? There's the system prompt, and there's all the training, and all these other things. But yes, this is a part of what defines how Claude works, and sort of what kinds of broadly safe, uh, features are baked in to the underlying system.
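[Editor's note: as Mike says, the constitution is one layer among several, and it is baked in through training rather than passed at runtime. The adjacent layer he mentions, the system prompt, is public API surface, so a rough way to picture the layering is a sketch like this using Anthropic's Python SDK. The model name is a placeholder, and the mini "constitution" below is obviously invented, not the real 23,000-word document.]

```python
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in the environment

# A stand-in for constitution-style guidance. The real document is trained into
# the model itself; a per-request system prompt only approximates that layering.
MINI_CONSTITUTION = (
    "Be broadly safe, broadly ethical, and genuinely helpful. "
    "When those values conflict, weigh the trade-offs and briefly explain your reasoning."
)

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute whatever model is current
    max_tokens=400,
    system=MINI_CONSTITUTION,  # the system-prompt layer, sitting alongside training
    messages=[{"role": "user", "content": "How should I weigh safety against helpfulness?"}],
)
print(response.content[0].text)
```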

Ben Whitelaw:

Interesting. I wanna talk a bit about how it came to be, 'cause there's probably an interesting story behind that. But it's interesting, I remember in 2023, when they first started talking about the constitution, that it was basically based on the United Nations Universal Declaration of Human Rights, with a few other kind of things sprinkled in. It had some, I think, some rules created by Google, some of its own research, which was obviously in its early stages. Then actually, I think, it had Apple's terms of service kind of looped in as well. But it wasn't a very big or kind of detailed document. How much of that human rights emphasis is still in there, from what you've read about it and the bit that you've read?

Mike Masnick:

I mean, I think that is definitely there. I think it is a part of the larger thinking. I mean, I think what has happened over the last few years are a few different things. One is Anthropic realizing the limitations of what they had written a few years ago and the different challenges and trade-offs inherent in all of these decisions. And so the rewrite of this document was to take the general concepts, the broad concepts that were included in the original document, and try to make them more concrete for an LLM to understand. Right? Because, and they say this over and over again, the document is human readable. It's absolutely human readable. You can read the whole thing and you could understand it. But you can see elements in there that, you know, are from people who have spent a lot of time working with LLMs and realizing where they go wrong. And often where they go wrong is where there's some element of humanness that they miss. Right? The extreme versions of it are, you know, it will take something way more literally than it needs to, right? And that can create problems and create difficulties. And there are ways to tune that, and you can set the different models to sort of have more creative responses or more strict responses. But this is an attempt to figure out, as these technologies are becoming more and more ingrained in our lives, is there really a way to define values for these non-human entities? With companies, you know, you see things like statements of values, and you have training, and you're training humans to do things, and you have correctives when things go wrong, and there's a whole sort of process involved. But LLM models are becoming really integrated in all sorts of ways. Some good, some bad. I'm not one of those people who thinks, like, everything needs to turn into AI, but I do think it is a really powerful tool. And as it is becoming more and more embedded in different ways, there is a bigger question of how do you embed the values? And I think, you know, there are lots of reasonable criticisms of AI, and there are some less reasonable criticisms of AI. But I think one of the legitimate, reasonable criticisms is, as certain companies are turning over more and more decision making to these different tools in some form or another, how are they able to actually build in the values? And Anthropic is in some ways a step ahead, in that they are really trying to think through how they're doing it. And their answer is these 23,000 words.

Ben Whitelaw:

Mm-hmm.

Mike Masnick:

To me, it's just fascinating to see that as this evolution of thinking about trust and safety in an AI era. Very different than what we've seen in the past, even if the end goal is somewhat similar. I mean, I don't think they would call this a trust and safety document, but to me this is very much a trust and safety document.

Ben Whitelaw:

Yeah, I mean, it's certainly an AI governance initiative, document, whatever you wanna call it. And increasingly the lines between trust and safety and kind of AI governance and ethics standards are blurring. I wanna kind of get into who has had a role in shaping this document. You know, listeners might be thinking, okay, it's all well and good that you have this document, that it exists, that it's a reference point within the model. But what do we know about the people who've created this document and who's shaped it to date, do you know?

Mike Masnick:

They do say, at the very top of the document, they mention the people involved. To be honest, I don't know very much about them. You know, I've met a few people at Anthropic, and they've always struck me as very thoughtful, sometimes overly earnest, about how they look at these things. But it does sound like it's a group of people within Anthropic who've been thinking about these things. Partly because of the way they've always framed the company, they've always had people whose job it is at Anthropic to sort of think through the different challenges and try to be forward looking in terms of where might things go wrong, and how might we prevent that. I mean, I had a conversation last year with an employee at Anthropic on the ethical issues of whether or not you should ever be able to shut down an AI, and effectively, like, is that the equivalent of killing a living being,

Ben Whitelaw:

Right.

Mike Masnick:

which was, you know, a little bit out there,

Ben Whitelaw:

Yeah.

Mike Masnick:

but someone's job at Anthropic is to sort of think through these kinds of ethical issues. They have a number of people who spend a lot of time trying to think through values and ethics related to the technology. And it's easy to sort of mock some of that, and plenty of people are, and it's easy to be cynical about that. But compared to the alternative, which is like, we don't care and we're just building stuff, and stop making us try and think through consequences, I find this really unique, and there's value in that, even if maybe there are parts of it that are strange or that I might not agree with. I'm impressed by the fact that they've shown up as having a corporate value that the technology should not be doing harm, and they're trying to figure out ways to build that in, in a way that actually works. That it's not just a poster on the wall that you're supposed to look at and make your decisions based on, but that you've actually built it into the systems and technologies that you're building. And, you know, we will see how it actually works out in practice. But to me, I think this is just a key, notable moment in how the concept of technology safety is evolving.

Ben Whitelaw:

Mm. Yeah. Really interesting. And, you know, it would be particularly interesting if, in future iterations of that, they allow external parties or external institutions, bodies, experts to input on this as well. Right? I'm sure there are elements of it that are informal right now, where the team responsible are sort of soliciting ideas for the kind of constitution. But it could be a situation where you kind of actively seek ideas for the constitution as a way of building a kind of more representative document, right, that underpins a model. And that could have both safety benefits, but also benefits in terms of attracting a user base that cares about these things.

Mike Masnick:

Yeah, you could even imagine a world where, like the US Constitution, in theory, you can add amendments to it that, you know, you need to get sort of democratic support for, right? So you could see a system like that happening. I also think the fact that they put it in the public domain is interesting, because in theory you could see other models or other systems try and adopt it or modify it to their own needs. And it would be interesting to see if other open source models or whatever try to take this and change it and adjust it, and see what kind of results we get from it. It'll be interesting. I mean, I think this is probably so specific to Claude that I'm not sure. But Claude has, honestly, you know, such a good reputation among the different LLMs that it wouldn't surprise me if some of the models, or even honestly some of the open source Chinese models that are coming on the market, might try and adopt this just to sort of improve the quality of their systems.

Ben Whitelaw:

Yeah. Yeah, it would be, uh, interesting to see who picks it up. Let's move on now to someone who is probably at Davos, based upon his previous attendance record, and the platform that he is chief of: Neal Mohan and YouTube. In previous years when he has been at Davos, he's been talking about YouTube's growth, and in some cases its safety credentials as well. But there's been a kind of flurry of really interesting updates that YouTube has been putting out in the last couple of weeks, and a large part of that has come from him. You know, he writes this annual letter, which acts as a kind of communication to creators on the platform, but also viewers, as well as kind of other stakeholders and the media. So you can kind of get into it and understand a bit about where its focus is. I'm really interested in YouTube, Mike, because it's often maligned for its trust and safety practices, as well as getting away with not a lot of coverage,

Mike Masnick:

I was gonna say that second part is the big one, right?

Ben Whitelaw:

Yeah.

Mike Masnick:

I mean, for all the complaints people have about every social media platform, YouTube wears a magic cloak. They seem to get a pass on so much stuff. It's kind of amazing.

Ben Whitelaw:

Exactly. So the kind of flurry of announcements and the activity is, I think, notable. Last week, it put out a blog post saying that it was refining its approach to parental controls, particularly around the way viewers were able to change how they looked at Shorts. Shorts is their kinda short form video format. It's very, very popular, a bit of a competitor to Instagram Reels. And it's kind of developed a set of principles where parents can now specify how much Shorts content their children can consume on the platform. Interestingly here, they said that one of the principles is that you, as a parent, can set the limit of Shorts content to zero. So, you know, I don't want my child to watch any Shorts content whatsoever; they can watch longer form YouTube content. And YouTube believe that this setting to zero is an industry first, that there is no other parental control that lets you set a specific kind of content to zero, which I found fascinating. I was pretty surprised by that. I felt like that should be baked into some of the settings. Anyway, it was a signal, I think, to people that they are making some effort to improve the way that children, particularly under-16s, consume content on the platform. Then there was Neal Mohan's letter, which came out a couple of days ago. This is basically a strategy for 2026 and beyond, and there's a big focus on creators and helping creators monetize their content on the platform. There's a big focus on AI, as you might expect. But sprinkled in, somewhat surprisingly, there are a couple of safety elements in there too. So one of them is that they want to improve parental controls, which is the blog post that was announced last week, and also that they want to remove what he calls harmful synthetic media that violates YouTube's guidelines. And he kind of details a little bit about how YouTube are using the spam and clickbait systems that they have used to reduce low quality content in the past to then also address AI content. So again, within this kind of big glitzy letter, in which he name-drops a bunch of creators and explains how YouTube is gonna be the TV of the future, and somewhat already is, there are, like, strong safety messages in there as well. Now, I'm gonna say, Mike, there are some sort of cynical reasons for this. You could argue that it's about, like, anticipating a regulatory response here and making YouTube seem like the good guy in case anything goes wrong in many of the jurisdictions where it's likely to face heat over the coming years. You could say it's about kind of aligning with the pressure to keep children safe and to avoid endangering children, in the way that so many people are caring about. I think it's about cash. I think this is.

Mike Masnick:

You cynic.

Ben Whitelaw:

I think this is about cash. I say this because Mohan kind of outlines the three ways that YouTube is moving: towards creators, towards a connected TV future, and towards an education play. Now, these are three areas that are famously incompatible with harmful content. I dunno about you, but, like, trying to get YouTube into education settings or into the living room of your family home without some sort of, like, strong safety guidelines is gonna be really, really tough. And so there is clearly a commercial impetus to do this, right? He's not saying explicitly, we're doing this because we want to grow, but you can see from the letter that these two things are connected. And then, if you factor in another kind of announcement this week about the BBC making bespoke content for the platform, I think this only backs up my point. You know, big media organizations, particularly public broadcasters like the BBC, are not gonna start creating content that they only put on YouTube, or that they first put on YouTube before putting it on their own platforms, unless they can be assured that it's not gonna appear next to some sort of, like, terrorist video or beheading, like there was in the mid-2010s. They're gonna need to have strong reassurances that, when they're directing people to watch content on YouTube, those people are not gonna be recommended content that they shouldn't be watching or that's gonna cause them harm. So I think all of these kind of signals together point to two things. I think YouTube is probably taking safety a bit more seriously than maybe we think, or maybe we give them credit for. Whether or not they're doing that right is another question. But also the kind of commercial impetus of trust and safety. You know, there are good business reasons to care about safety measures, and clearly Neal Mohan has realized that.

Mike Masnick:

Yeah, I mean, I've tried to make this argument for years, that having a trustworthy and safe platform is a good business decision, and anyone who thinks of it as a cost center is really missing the big picture, because you will drive people away. And I would say YouTube has actually recognized that for quite some time. And there have been different studies. You know, there are all of the sort of early reports, from when the recommendation algorithm on YouTube was first becoming very popular, about how it was driving people towards potentially extremist content or whatnot. And YouTube actually appeared to take that fairly seriously, and there have been multiple studies, I think going back about six or seven years, showing that these days it's almost impossible to get YouTube to sort of push you down a dangerous rabbit hole unless you start in a dangerous spot. And so a few of the studies basically said, yes, if you are driven to certain YouTube videos via outside sources, so some Substack, someone on X pointing you to things, it might start taking you to similar kinds of videos. But if you just go to YouTube straight and watch some videos, it tends to push you more towards the commercial, happy, good videos. And he mentions somewhere in this the thing that is, I think, totally true, or becoming true, or already became true, which is that YouTube is TV today.

Ben Whitelaw:

Mm.

Mike Masnick:

Unlike traditional TV, where there were giant gatekeepers involved, you had the television studios, which would pick the few shows that they would allow each year, and there would be millions of dollars of production that goes into it. The incredible thing about YouTube is that it effectively has a minor league and a professional, uh, space that play together. And if you're very good at your content, you can move up in the world. And it's a great thing for YouTube itself, because they don't have to go through the risk and capital expenditure of a TV studio to figure out what to invest in. You can just sort of see naturally who's building an audience. And the tools are getting cheaper and cheaper, in terms of good cameras, good microphones, good lighting. The ability to make professional level content has really democratized in all sorts of ways. And so I think he's realizing that, or realized that a while ago but is expressing it here, that if you want to have a really successful platform that continues to grow, you need really good professional content, and you need people to feel safe on that platform. And that's people of all ages. I mean, I think one of the things that comes through in all this is he talks a lot about the appropriate kinds of content for all different ages and all different kinds of people, that you can sort of find the niche that works for you and build from there. And then that includes this idea of, like, not necessarily removing, but limiting the appeal and the recommendations of content that takes you away from that. So they talk about AI slop, he literally calls it AI slop, and the clickbait and other things. And sort of, you know, there is a real focus on trying to build out the platform as a place that people feel comfortable going. You can question how well it actually works in practice, and of course, with however many billions of users they have, sometimes it fails and sometimes it doesn't quite work right. But I think it is, as you said, this clear recognition that there is a strong business case for having a trustworthy and safe platform.

Ben Whitelaw:

What do you think about this shift towards more professionalized creators? What impact does that have on trust and safety on the platform? Because in TV, you know, you have editorial practices, you have a fairly kinda homogenous set of ways of doing things, and you have a very centralized way of deciding what gets shown. You don't have that on YouTube. And so you have all of these creators you can kind of go and follow and subscribe to, as many creators as you want, and each of them will have very different approaches to not only content, but the way that they address comments and the way that they manage their own corner of YouTube. So a kind of part of the trust and safety ecosystem that we haven't really contended with yet is how the creators think about these questions.

Mike Masnick:

Maybe, but I'm not sure. I mean, you know, it's funny, he addresses in the letter that the days of considering YouTube to be a user generated content platform are over. You have these professional content creators who are part of the platform. I mean, it does matter to them, right? Content creators are always concerned about what kinds of content they're near, and what else is showing up, and what kinds of users, and always have questions about the comments and all of that kind of stuff. I think all of that applies, but it doesn't strike me as all that different than the trust and safety questions from before. You do have elements of the brand safety questions becoming a little bit bigger. I mean, brand safety is always a part of trust and safety, but it becomes a larger element, in terms of which parts of the trust and safety ecosystem you're looking at, making sure that someone who's built up a huge, valuable brand doesn't have that undermined by problematic content appearing next to them, or something along those lines. But it strikes me as part of the continuum.

Ben Whitelaw:

Yeah, I just wonder if there's the potential for a replica of the Meta system. There was a VIP content moderation system, right, that came out in some of Jeff Horwitz's reporting for the Wall Street Journal five or six years ago. And, you know, celebrities and sports people and global superstars, and probably some creators with large Facebook page followings, got special treatment, in some cases being allowed to kind of overturn content decisions and being able to kind of have access to policy

Mike Masnick:

this is the famous XCheck system. Yeah.

Ben Whitelaw:

XCheck, exactly. There is a risk of something similar happening again, if it doesn't already happen,

Mike Masnick:

I, I, maybe. I mean, I think the XCheck system was basically Facebook was overwhelmed, and this was a kludgy solution, like, we need to duct-tape something to deal with this. I think it even started because, like, the Barack Obama page got blocked for violating policy, but it wasn't really violating policy. It was basically like, there are some accounts where, like, a human should review it rather than automated systems automatically making the decision, and they just sort of duct-taped that on, to some extent. I actually think that the YouTube trust and safety setup may be a little bit more professional, a little bit more organized, that they have better systems in place. A lot of the bigger accounts, perhaps all the bigger accounts, I don't know what the thresholds are, often have YouTube representatives. So it's not just, like, the sort of typical trust and safety setup, where users are sort of blind to it, it happens behind the scenes, nobody knows who the trust and safety people are. With YouTube, any larger account has a representative that they can reach out to and talk to. So there's more of a relationship building there. And maybe that is something different that has come up from the more professionalized content: the ability to, like, actually work on partnerships and relationship building. And, you know, when you hit certain numbers, they send you, like, physical Lucite awards and everything. So, you know, it is different to some extent. I don't necessarily think we'd see, like, an XCheck thing, just in part because I feel like YouTube is maybe a little bit more competent.

Ben Whitelaw:

Famous last words.

Mike Masnick:

Yeah, I know, I know. That's gonna come back to haunt me, isn't it?

Ben Whitelaw:

Somebody clip it. Somebody clip it. Um, that's a really good segue into, I think, the first of a couple of quick stories that we'll talk through to round up today's episode, Mike: people with influence contacting people within platforms and trying to kind of move the needle on content that they don't like this week.

Mike Masnick:

Yeah, I mean, this is crazy. Rand Paul, you know, US Senator from Kentucky, has historically actually been pretty good on internet policy. And in particular, he's been one of the few remaining senators who has been a strong supporter of Section 230, and the value of Section 230, and putting liability in the right place. We found out this week that that was all a facade. He wrote this long whiny article in the New York Post, basically saying he's completely changed his mind on Section 230, and he was wrong, and he didn't realize the implications. And why? Because there was a video on YouTube claiming that he had taken money from Nicolás Maduro, the kidnapped, uh, former president of Venezuela, I guess. You know, and that is apparently false. I believe that it is false and defamatory. And Rand Paul reached out to Google and demanded that they take down this video, and they said, no, we're not the arbiters of truth. We're not gonna determine whether or not something is true or false. And he completely lost his shit, basically. Uh, and he did legally threaten the person who uploaded the video, and they took it down, which is exactly how the system is supposed to work. And I found it unique for a variety of reasons. One, this idea that, like, oh, you have all these principles, and then suddenly there's a video about you and you flip your principles: that means you didn't have principles in the first place. Also, Rand Paul had introduced a bill, that did not pass, a few years ago about jawboning, which would've made it completely illegal for any government official to speak to any platform about removing any content, which is exactly what he did. And there are all sorts of things. And I wrote an article about this, sort of picking apart all the different pieces of how incredibly unprincipled and hypocritical Rand Paul is in this. But I think it's actually a really bad sign for Section 230 as well, this idea that one of its remaining supporters in the Senate is like, oh no, somebody said something mean about me on the internet, I'm going to now shut down the open internet. It was a very frustrating thing. I would guess that it has to do with different people working in Rand Paul's office. A lot of these things are driven by staffers, not the senators themselves. And I do know that one of the strongest Section 230 supporters who worked in his office no longer works there, and suddenly this comes out of it. So that sucks. I'm disappointed in him. I mean, I've long been disappointed in Rand Paul for a variety of reasons, but here's one more reason.

Ben Whitelaw:

We had on our bingo card "politician demands magic automated system." We didn't have "politician demands video of them be taken down." Maybe we should add that too.

Mike Masnick:

That's, that's a possibility. Absolutely.

Ben Whitelaw:

If you have ideas for the bingo card, listeners, a reminder to get in touch with us: Bluesky, LinkedIn, or podcast@ctrlaltspeech.com. Interesting. So, a bit of hypocrisy there. I wanna shine a light on a story in the Moscow Times, Mike, which is not often a media source that we include in

Mike Masnick:

Yeah,

Ben Whitelaw:

Ctrl-Alt-Speech. But it's interesting to note that the Russian kind of regulator, Roskomnadzor, I think that's probably pronounced very badly. Um.

Mike Masnick:

I'd say it's way better than I would've done, so.

Ben Whitelaw:

Tell us if I'm wrong. Uh, there have been reports about the regulator threatening Telegram about some of its content moderation decisions, and potentially throttling the app. And users have reported the fact that they have struggled to get messages, and the kinda app has been very slow. I never thought I'd say this, but I feel sorry for Telegram, Mike. You know, in all honesty, this is the kind of story of, like, an app that has basically become too close to a state, and that's tried to kind of thread the line between being principled in its content moderation decisions, and, I wouldn't agree with those content moderation decisions, but it's long said that it would not have kind of established trust and safety processes in place. That was why its CEO Pavel Durov was apprehended in France, last year, in a long running saga that we talked about. But it has also tried to kind of tread that line in the sense of being used globally, but with a strong emphasis in Russia. So it's used by Russian state officials, it's used by kind of activists there, it's used by people who are, you know, sharing kind of Russian propaganda. It has lots of very large channels in there related to the Russia-Ukraine war, and,

Mike Masnick:

I will note really quickly, it's a little bit crazy, because Pavel Durov was the founder of VK, which was sort of like the Facebook clone in Russia, and he fled because the Russians were demanding things of him that he didn't wanna do. And so he left Russia and basically got rid of VK and handed it off to somebody else, and then started Telegram outside of Russia. So it always struck me as weird how much Russia has embraced Telegram since. He had been on the outs, it appeared, but maybe, who knows?

Ben Whitelaw:

Yeah. And it seems like they have kind of maintained that interest, I guess, with the view to controlling, in effect, Telegram. Right? You know, they have said that they haven't affected the app, they haven't kind of reduced speeds, but it seems like that is the case, according to these reports. And basically, you know, for all Telegram's leaning between trying to be principled in its content moderation decisions and establishing a global user base, it's ended up being a slow, almost unusable app, according to users in this report. So I feel a bit sorry, basically. You know, I never thought I'd say that. Um, and I don't condone some of the content on the app,

Mike Masnick:

I wonder if there's more stuff going on here. You know, I don't know if I trust the reporting on this, but it feels like something is happening, and Russia is now denying that they're trying to throttle it. I think this will be an interesting story to follow.

Ben Whitelaw:

Yeah. And let's round up then, Mike, with your last story, which is, I think, at the end of today's podcast, a bit of a joke. I, I thought this was an April Fools when I read it.

Mike Masnick:

Yeah, this announcement at Davos of a new microblogging platform called W, not X. It's W. Uh, we'll see. So it was announced, and it's a former eBay executive who was there for over a decade, worked on privacy related things. And I've heard from multiple people that this is, like, the most Davos-brained approach to social media. They announced it like, oh, we need a new platform. And it's the same thing that we've heard with, like, new Twitters. Everyone who comes up with this idea is just like, we're gonna do it better 'cause we have integrity, as if nobody thought of that before. There was a strangeness to the original announcement, in that they said a new platform, and I am famously in favor of protocols instead of platforms. So I was a little bit like, oh, come on, we don't need a new silo. It has since come out that they are most likely potentially using AT Protocol, and they have a staging site that is effectively a wrapped version of Bluesky. And so I'm all for experimentation, and if they wanna experiment on a protocol, and it doesn't have to be AT Protocol, it could be ActivityPub or whatever else, I'm happy to see that kind of experimentation. They also talked a lot about, like, every user's gonna be verified, every user's gonna have to upload an ID. And that kind of stuff, I think, is very problematic, and makes you a target and a risk. And I'm not sure that people are actually really going to embrace this. But, you know, it's an experiment. It strikes me as one that probably won't work, but I do actually appreciate the fact that they're trying to do it on an open social system.
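[Editor's note: for a concrete sense of what building on AT Protocol rather than a silo means, any client can read public network data over open XRPC endpoints with no API key. This is a minimal sketch against Bluesky's public AppView; the account queried is arbitrary, and nothing here is specific to W.]

```python
import requests

# Bluesky's public AppView exposes read-only XRPC endpoints; any AT Protocol
# client or wrapper app can resolve the same underlying network data.
APPVIEW = "https://public.api.bsky.app/xrpc"

resp = requests.get(
    f"{APPVIEW}/app.bsky.actor.getProfile",
    params={"actor": "bsky.app"},  # any handle or DID on the network
    timeout=10,
)
resp.raise_for_status()
profile = resp.json()

print(profile["handle"], "-", profile.get("displayName", ""))
print("followers:", profile.get("followersCount"))
```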

Ben Whitelaw:

Yeah. And explain to listeners why it's called W.

Mike Masnick:

I mean, they've come up with a few different reasons. One is to make fun of X, but then also the W splits into two Vs, which was, like, verification and values and

Ben Whitelaw:

Oh.

Mike Masnick:

like, it's, uh, I'm not even gonna, and there was also, what was the last one? I can't even remember what the third reason was. They had all these reasons. They also, like, launched it with, like, a Star Wars scroll and, like, 1990s rock music. It was very out of touch with the youth of today, I would say. So we'll see what happens with it. I think it'll be an interesting experiment. Apparently they're well funded, so we'll see what that means.

Ben Whitelaw:

Okay. Maybe well funded enough to sponsor an episode of Ctrl-Alt-

Mike Masnick:

There we go. We're always open to that.

Ben Whitelaw:

Um, however silly your idea. No, that's not true. Um, Mike, that rounds us up neatly for today's episode. Thanks very much for giving us your fun and unfiltered opinion on this week's news. Thanks, everyone, for listening. It's been great to have you. And thanks to all the outlets that we've featured this week. Go and subscribe, go and read them. We couldn't do this without them. And thanks very much for listening this week. We'll see you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L-alt-speech dot com.