Ctrl-Alt-Speech

A Tale of Two Internets

Mike Masnick & Ben Whitelaw Season 1 Episode 76

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you by our sponsor Clavata.ai, a first-of-its-kind, automated content safety platform that allows you to go from defining a policy to enforcement in minutes. In our Bonus Chat, we speak with founder Brett Levenson on how to make T&S more consistent and explainable and the benefits of treating policy as code.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

Mike, if I asked you to spill some tea,

Mike Masnick:

no.

Ben Whitelaw:

what would you do?

Mike Masnick:

Uh, considering I am not a, a Gen Z person, spill some tea is like, you know, talking about like gossip and rumors, right?

Ben Whitelaw:

Yeah. Okay. So yeah, you're, you've got the lingo enough to understand what you're being

Mike Masnick:

I have kids.

Ben Whitelaw:

Yeah. I, I would've thought it was a, a kind of incident with a beverage. And, uh, and so I'm glad that one half of Ctrl-Alt-Speech knows what they're talking about. But basically there's, there is a plethora of apps where you can, you know, spill your tea. And one of them, unsurprisingly, is called Spill,

Mike Masnick:

uh,

Ben Whitelaw:

right? Which helps you find the, the hottest spills and the most popular topics on the internet. It's great for kind of discovery. It's not got the best app reviews, I will say that. But nonetheless, we're using it today on Ctrl-Alt-Speech. Its user prompt is not "spill," which, I think, is a missed opportunity.

Mike Masnick:

Yeah. Really? Or "what's the tea?" or something. There's like a whole bunch of opportunities there.

Ben Whitelaw:

right,

Mike Masnick:

They should hire us for marketing.

Ben Whitelaw:

Free product feedback right here. Um, but instead it asks its users to comment and share what they are interested in. So I'm gonna give you the very generic question today: what are you interested in?

Mike Masnick:

Oh, interesting. Well, this week I was really interested in the sort of confirmation of the idea that trust and safety people are the ones who raise their hands, the ones who see a problem and rush towards it. And, and there have been various discussions on this, like, you know, startup companies that don't have trust and safety, something goes wrong, and someone who, it's not their job, but they say, ooh, this is a problem, I need to solve it, I need to step up. And we saw that in a partly horrifying, but also partly amazing, way last week at a Wikipedia conference where, unfortunately, a very troubled individual showed up with a gun, climbed on stage, and basically threatened to take his own life. Two people from the Wikipedia community stepped up and ran towards him, literally, and got him, you know, someone sort of grabbed and held him, and they got him to calm down and, you know, got the authorities involved. But showing that, you know, there, there are people out there who see a problem and rush towards it as opposed to run away from it. And I believe some of these people were involved in trust and safety with Wikipedia. And so it fits that stereotype of the trust and safety people are the ones who rush towards the problems and say, how can I help? And it's a crazy, crazy story, but I thought it was a really interesting demonstration of that concept of the trust and safety people are the ones who, who rush towards the problem to try and do something.

Ben Whitelaw:

Very much so. Internet heroes and real-life heroes at the same time.

Mike Masnick:

Yeah. So, Ben, what are you interested in?

Ben Whitelaw:

Well, I, I'm interested, Mike, in whether I can remember anything from my history GCSE class, which is gonna be very important for today, if we are going to get through today's episode. I already am worried about what my history teacher would think about my knowledge of the Victorian era, which I'm going to try and delve into my brain and, and regurgitate some of today. But that gives you a little hint at what we're gonna be talking about in today's episode. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's October the 23rd, and this week's episode is brought to you by Clavata, a first-of-its-kind automated content safety platform that allows you to go from defining a policy to enforcement in minutes.

Mike Masnick:

Yeah, I, I'd say stay tuned for the end of today's episode, because I had a fascinating conversation with Clavata's founder Brett Levenson on how to make trust and safety more consistent and explainable. He's really focused on this concept of policy as code, which is absolutely fascinating, and he and I had a really, really interesting discussion of what that means and how it works. So, definitely stay tuned for that.

Ben Whitelaw:

Perfect. That sounds great. That'll be at the end of today's episode, but before then, we're gonna be talking about a whole raft of stories that have come out this week, including a really interesting op-ed about Victorian morals and whether we're entering into a kind of Charles Dickens version of the internet; disinformation as a mindset, which I think some of my, uh, family members know a bit about; and, uh, the constant battle between humans and AI. So look forward to that. Mike, before we crack on with today's stories: in last week's episode, we asked our listeners to rate and review us, as we always do. We challenged listeners to leave a spooky review where they mentioned Halloween or some sort of Halloween reference, and 'tis the season after all. Um, we, we had a very good one, which I think you'll like. This is from a user in Germany, and I won't expose them, because you'll see why in a second. Um, they said that "this Halloween I'll be dressing as EU chat control, because there's nothing scarier."

Mike Masnick:

There we go.

Ben Whitelaw:

Which is a link to last week's, uh, episode, for those who haven't listened. And this user said, "so thanks Mike and Ben for the inspiration." So not only are we getting reviews about Halloween, we are the inspiration for some people's Halloween celebrations.

Mike Masnick:

I'm not sure how one would dress up as chat control, but I love it. It does seem very, very scary. I appreciate it.

Ben Whitelaw:

If this user wants to send us in a picture of themselves as chat control in a couple of weeks' time, when they're going to a friend's party, we would love that. We would absolutely love that. But yeah, we won't be telling listeners who they are, in case they're going to the same party as you.

Mike Masnick:

Consider it encrypted.

Ben Whitelaw:

Consider it encrypted. Indeed. Um, we also had another very nice review from somebody who's a regular listener. I also won't share this person's username or who they are, because they've dubbed themselves a non-techie. This review says, um, "very accessible for non-super techies. These guys are smart and fun to listen to, and they cover a very important topic that is accessible to a Gen Dial-phone like me. Keep podcasting and I'll keep listening." Is that a kind of a Gen Dial-up-phone? Are you familiar

Mike Masnick:

I, I've not heard that term before, but,

Ben Whitelaw:

Somebody who's of the generation where dial-up internet existed, presumably, when that was the common

Mike Masnick:

Well, I, I, well, I was wondering, is it that it was dial-up internet, or was it like from the generation where phones actually had a dial on it? Right. Which is what I had as a, as a kid, you know, like dialing a, a number, you actually had to turn the dial. And if you are of that generation, and this person may well be, there are some really fun YouTube videos, which I have watched in the past, of handing kids today an actual dial phone and asking them how to call a number, and they have no idea. And it is, it is both amusing and infuriating, because like some of us grew up with actually having to dial a phone number. Now, I'm older than you. So did you ever have to do that?

Ben Whitelaw:

We didn't have one of those. I'm a, I'm a kind of button

Mike Masnick:

push button?

Ben Whitelaw:

Yeah, I'm from that generation, so I, I would know how to use it now, but, um, back when I was younger, I would struggle, I have to say. So, you know, it's good to hear we have listeners of, of all kinds, of all shapes and sizes. And, uh, yeah, appreciate those listeners who left us a review in the last week. Please, listeners, if you're a regular listener of Ctrl-Alt-Speech, drop us a review. You don't have to do it on your keyboard. You can do it via a, a Gen Dial phone. Why not just, you

Mike Masnick:

I will note, really, really briefly, I got scolded by someone, you know who you are, for saying like we promised to, to read any reviews that mentioned Halloween. And I was told that we were opening ourselves up to trolling by some people. And, you know, as a trust and safety related podcast, we should have thought through the possible, what we were putting ourselves into in this user-generated-content world, of, of promising something that maybe we wouldn't be able to deliver on.

Ben Whitelaw:

It's a fair call out. It's a fair call out. You know, we are very judicious with, uh, with not only the reviews but people's information, and obviously if you say something really terrible about us, it'll never see the light of day. Because

Mike Masnick:

We, we have the power.

Ben Whitelaw:

That. Exactly. Exactly. So, good reason to review us wherever you get your podcasts; it does really help us get discovered, increases our listenership, and all those good things. Okay, so we have got lots to get through, including that brilliant bonus chat with Clavata's founder, Mike. So, so we're gonna jump in very quickly and talk a bit about, you know, we, we spoke last week about Sam Altman, right? We talked about AI and OpenAI's policies and the way that, you know, these large language models are kind of going out and crawling all of this information, creating these huge databases. And here this week we're seeing the kind of other end of that, and two platforms that I would probably call some of the, like, most creative parts of the internet, certainly some of the kind of parts that I frequent the most, are really struggling with AI bots and everything that comes with generative AI.

Mike Masnick:

Yeah, so the two that you're talking about, I think, are Wikipedia and Reddit. And so there were a couple different stories here. So we'll start with: there was an article in 404 Media, which is about how Wikipedia says that AI is causing a dangerous decline in human visitors. And this was based on a, a blog post from Wikipedia sort of talking about this, and then 404 Media spoke to the folks at, at Wikipedia and got a little bit more detail on it. Basically they're saying, you know, this isn't too surprising, and, and the Wikipedia people don't sound too surprised by it, that, based on the fact that AI tools are giving answers, sort of giving complete answers, it's often leading to fewer people actually going and visiting Wikipedia, even though much of the responses from AI systems were probably effectively sourced from Wikipedia. Wikipedia is a hugely important database that goes into the training of the different AI tools. And so they're seeing this decline. And the thing that struck me as interesting about it was that the folks at Wikipedia note that this is a concern and it's potentially a problem, but they're not freaking out about it. Right. I think, you know, whenever we've seen... there have been similar stories about other sites that feel like, oh, we're losing out on traffic from AI summaries or, or whatever, and usually they're really, really angry about it. And Wikipedia, not surprisingly, I would say, is sort of taking a more kind of academic observer approach to this and saying, you know, this is potentially concerning and we should think about it and we should have a discussion about it, but it's understandable why it's happening. And people want answers, and they're sort of using AI tools as answers, and they sort of suggest, like, maybe there's a better, you know, maybe we should be thinking about how do we encourage people to dig deeper, how do you check your sources, go for citations. And, you know, it's interesting to me because it reminds me a little bit of the way that people reacted to Wikipedia itself when it first came around. And you know, there was a good decade where Wikipedia was seen as like a threat to good information, and schools were saying never use Wikipedia, and students were told to stay away from it entirely. And now I think, as time has gone on and people have learned to use it properly, one of the things that people say is like, Wikipedia is often a really good starting place to do research because it has all these citations, and you shouldn't rely on it alone, but you should use it to explore the citations. And I think there's an argument that Wikipedia is making here which is sort of similar, which is like, AI is a good starting source, but you shouldn't rely on it as the final thing. And more and more AI tools, and most of the ones that people are using these days, have gotten much better about adding links. I know ChatGPT, which is the most popular one now, really does try to add links and sources. Google Gemini does that. Perplexity does that. They're all sort of trying to add sources that allow you to go and investigate. And maybe it's a cultural thing, maybe it's a, like, I'm-an-old-man kind of thing, but like, you know, to me, like, I am always the kind of person who tries to read as much as I can about something. I mean, it obviously depends on the subject, you know; if I'm just looking for, like, the time that the shop closes or whatever, I don't have to go investigate too deeply.
But like there are other things where it's like, I want more sources to sort of really triangulate how I think about things. And so Wikipedia for me has always been useful, but I've also found AI tools to be useful in opening up more sources and allowing me to look at it, which is why, as we were just talking about before we started recording, I tend to have like 500 tabs open at once. But part of it is because I do these kinds of searches and it leads me to more and more, and I'll just open up all the tabs and kind of start going through them. And so I think part of the argument from Wikipedia here is like, this AI thing is, is creating a challenge, but it's something that we should think through. Like, how do we encourage people to be better information consumers in this age of AI? And I think that's a valid point and a valid way of thinking about it. And so I appreciated sort of the way that they looked at it as well.

Ben Whitelaw:

Yeah. I mean, the other part of the kind of comments from Wikipedia that this 404 Media story talks about is how you sustain, financially, a kind of initiative like Wikipedia, right? And that's sustained in both the financial sense and also the kind of contributor sense. What AI agents or bots or scrapers don't do is, you know, update Wikipedia pages yet, or, or add the knowledge that they've gleaned from parts of the web as they've gone about providing answers to questions like yours and mine. You know, they don't go and re-update those pages, and they certainly don't dip into their imaginary wallets and, you know, donate to Wikipedia's annual campaign to keep its lights on. So I think what I got from this is there's this kind of asymmetric relationship between kind of AI companies and the bots and, you know, Wikipedia and other data sources that have been used to train LLMs. And yes, Wikipedia isn't sounding the alarm too much, but it does feel like a, a fundamental question, right? And we see this with other, other news organizations as well, who produce original content, whose content is scraped by AI, which is then producing AI overviews or responses. How do we sustain the kind of original producers of information is, is a question I keep coming back to.

Mike Masnick:

And I think, I think it's important, and I, again, I sort of appreciated the way that Wikipedia is sort of raising this question and saying, like, we should discuss this now, you know, before it's, it's a true emergency, where like the signs are concerning and the trends are concerning and let's have that conversation. Whether or not... the problem is, like, people tend not to take these things seriously until it's an emergency, and that may be true here, but I do appreciate kind of the way that they're approaching it. And I do think, you know, I mean, it's kind of funny, because as you were saying, and as was raised by Wikipedia, obviously they don't donate money to, to the campaign, but also they don't donate information and knowledge back to it. Both of those are not definite, right? Like, you could change that, but I could see a pretty quick, like, people freaking out if AI tools started editing Wikipedia. Can you imagine the freakout that would occur? Right. Like, and so even as you raise that, I'm just like, huh, that's interesting. But you know, they could, you know, like, what if they, they had, you know, a citation that debunks something that is in Wikipedia, or adds to something that is in Wikipedia? They could take on the role of a Wikipedia editor as long as they live up to the rules of Wikipedia in terms of, you know, having the citation, having it be neutral, all of these things. And this is still early, but like, there is talk about the sort of agentic AI systems where you could give them a credit card and allow them to act on things. So you could imagine a world in which an AI agent is donating to Wikipedia. But you know, I think that's probably leaping a little far ahead into the, the science fiction future. But it's kind of interesting. I mean, the other thing that I was wondering, you know, there is a little bit in here about the sort of cost of bots, and the bots scraping Wikipedia constantly, and how that is a real cost. But then there's the fact that there are fewer human people coming to, to Wikipedia. But you know, because Wikipedia doesn't rely on ads, in some ways, and I'm, this is not an endorsement, I should be really clear on this, like in some ways, right, fewer human people visiting means less traffic, which means less cost, a little bit, to Wikipedia.

Ben Whitelaw:

Yeah.

Mike Masnick:

You know, I'm, I'm just sort of trying to balance out all the different elements here. So it's not like, I mean, other sites will complain, like, oh, we lost all these views that then help pay for things, because the ads pay for things. But Wikipedia doesn't have that issue, exactly. So it's a little bit different in terms of how you think about it. But it is still, I think, the kind of thing that lots of different websites that are informational websites, that, you know, sort of grew up in the age of an open web, are starting to deal with, which is this idea of, like, well, the open web was all about humans, and now we have the machines as well. And how does that play into it? Which I think leads into the second part of this story that we wanted to talk about, which was the other area of the internet that is often talked about as sort of being creative and user-generated and community-focused, which is Reddit. And Reddit, you know, there's a lot of controversy around Reddit. Both, you know, there's plenty of good stuff and plenty of problematic stuff. But there was a new study that came out looking at how Reddit moderators, because again, like, there must be some people who don't know how Reddit works, right? You have like all these subreddits that are community-moderated, and it's volunteers, and they set their own rules, and they moderate the way they do. Some of those rules include no AI-generated content; some are much more open to AI; some are entirely about AI-generated content. But every, every moderator gets to set their own rules. Some folks at Cornell, my alma mater, always happy to see Cornell folks doing good stuff, did a study where they spoke to a bunch of Reddit moderators to try and figure out how they're dealing with the sort of influx of AI content. And it's apparently quite a challenge, not surprisingly, for some of them, because even if you have a rule that says no AI-generated content, as lots of forums do, subreddits do, how, how do you determine that? Right now, like, we have a problem where, like, some content is obviously AI-generated, but for lots of content, it's not clear at all if it's AI-generated. And there are AI-content checkers, and they tend to be terrible. They're not good. They make errors in both directions, thinking AI stuff is not, and thinking non-AI stuff is. And so if they can't figure it out, how are the moderators figuring it out? And it's a real struggle, and they don't know. And then you have, like, there's a gray area between, like, is the content AI-generated, or is it created by a human but then modified by AI? Where is the line? What do you count? What determines it? And so, you know, I don't think the study has any answers, but it does really clearly call out the challenge that subreddit moderators are now facing, and they're all sort of trying to puzzle their way through it.

Ben Whitelaw:

Yeah, I, I really like this piece of research, and, and the way that the concerns that the moderators shared with the researchers kind of boiled down into three areas, right? And I, I think they are my concerns about AI as a user as well, right? The moderators talked about content quality, the kind of social dynamics on the subreddit changing, and a kind of difficulty governing, I guess, because it's their platform, they're in charge. And I think they're really helpful buckets to kind of almost place our concerns about generative AI. You know, some of the headlines we see each week: work slop, this new term that's been defined, where a worker is using generative AI to produce a piece of work or to do a task, and is almost kind of creating more, more issues by using generative AI to do that. And it's been called, by I think some Stanford researchers, work slop. That fits into the content quality bucket, right? And so these buckets are kind of helpful for us to kind of think about the downsides and the trade-offs of using generative AI. I just love the fact that it's kind of originated from a group of moderators, 15 moderators, who look after some of the most engaged communities on the internet. As you say, like, I think moderators have so much to be learned from, and are often, as you said at the top of the episode, you know, the people who kinda stick their hands up first, who are the first people to see something happen. And I think that's what this kind of research represents, in a way. I had a question for you about Wikipedia, Mike. What would you do if you were Wikipedia? How would you... this is not a pitch to become their, you know, CEO or to,

Mike Masnick:

They do have an open CEO search right now.

Ben Whitelaw:

I, I noted this is an inadvertent job interview, but like, what would you do? You know? 'Cause Reddit made a deal with at least one big AI platform to sell its content. And you, you know, you could imagine, just like other news orgs have done, potentially Wikipedia doing something similar, if it didn't have this kind of nonprofit open web mission and ethos. Do you think it would ever change that? Where would you go? What direction would you take it?

Mike Masnick:

Yeah, I mean, I think it's a challenge. I think the concept of the open web is important. It's certainly important to the Wikipedia people. I think that they would not do it in that way. I do think there's an argument on the reverse side, which is that the AI companies that really benefit so much from Wikipedia should be stepping up and voluntarily donating significantly. Um, and just saying, like, the future of our business as AI providers relies on good, high-quality, factual content, such as the kind that is produced by Wikipedia, and therefore it's actually in our interest to donate a significant sum of money to Wikipedia to keep it going. For the sake of our own, you know, goodness, but not just like a license for us, but effectively just encouraging Wikipedia to be able to continue to do what it has done all along. And therefore it is, you know, for the good of the world, not for the good of us as a company. This is somewhat similar to an argument that I put out, uh, it's a while ago now, almost two years ago I think, where I said, for all these debates about copyright and scraping for journalists against AI companies, you know, all the journalism companies and all these lawsuits that are still going on, the New York Times against OpenAI and whatnot, I think copyright is the wrong tool for that, even though that is where all the lawsuits are. And my argument was: the AI companies need more and more good, high-quality content. Journalists produce good, high-quality content. I think the AI companies should be giving lots of money to journalism outfits, not for copyright, which to me is for paying for old content, but rather for the production of new content. So fund journalists, fund authors, fund content creators to create new content that goes out into the world that more people can enjoy, that is open, that is not just for the AI companies, not just for the, you know, to build into more slop, but give it to the creators to build more. And there are all different mechanisms for how that could work, but give it to them for the production of future content, not for paying for the access to old content, which is the copyright model. And I think there's an argument for that in Wikipedia as well.

Ben Whitelaw:

Yeah, I think it's interesting. I mean, other tech platforms in the past have done versions of this, right? So, you know, like Facebook used to pay news organizations to create video, because video was the thing that kept users on platforms. And my issue is, what happens when the well runs dry, or there's a change of strategy, and all of a sudden you have that kind of money ripped away? But maybe that's a question we, we answer down the line.

Mike Masnick:

It's a different kind of challenge too, and, and I would differentiate it slightly in that Facebook paid people to do video on Facebook, right? It was, it was come into our walled garden, come into our silo, and then, as happened, we changed the rules and realized that we've been measuring video views all wrong, and it turns out nobody's actually watching your videos, even though we told you plenty of people were, and then we pull the rug away. I think that's a problem, and that's why I'm sort of encouraging this concept, and it feels, you know, like too philanthropic to ever actually succeed. But this idea of, like, just give people, give creators money to create, and don't limit them and say it has to be on my platform, but just say, create stuff that is out and available in the world. And yes, it should be available for our AI models to train on, but also for others. It is part of contributing to the open internet that has enabled AI models to exist at all, right? They're all trained on what is available on the internet. Pay it back, pay it back to, to create more content for the open internet that is available for all, as opposed to locking it up. And so that's, you know, it's a little bit of a soapbox, and maybe it's foolhardy and, and nobody will ever do that, but I am going to keep beating that drum as long as I can.

Ben Whitelaw:

And, you know, I'm not sure if that'll get you the, the CEO role at Wikipedia, but it's a nice idea. And Sam Altman, if you're interested, you know, we've got a ready-made blueprint, ready to go.

Mike Masnick:

Yeah,

Ben Whitelaw:

uh,

Mike Masnick:

You could. And, and pay us to create more content.

Ben Whitelaw:

Yeah, more episodes of Ctrl-Alt-Speech. Um, cool. So that's a really interesting story. I feel like we could go deeper, but we wanna kind of keep going with today's stories. I'm going to talk a bit about a really interesting op-ed that was in The Conversation this week. The Conversation being, again, the openly published, free academic website, contributed to by lots of professors and researchers and academics. Really amazing

Mike Masnick:

Everything is under a Creative Commons license. I've republished Conversation pieces. It's, it's great.

Ben Whitelaw:

Yeah. Fantastic. And the piece that caught my eye was by an academic called Alex Beattie. And he asked a really fascinating question, which is the thing that's gonna test my sophomore or freshman history knowledge, as, for our US listeners: are we entering the internet's Victorian era? Is this the internet's Victorian era? And I love this, because the Victorian era was one of the pieces of history that I loved talking about most and learning about most, right? The industrialization of Britain and the world, the kind of really interesting technology shifts, the creation of the steam train, the light bulb, really interesting morality aspects of the Victorian era, where, you know, you're meant to be very kind of demure and, and pure, and there's this emphasis on sanctity and following authority. And anyway, Alex asked the question, are we entering this kind of era of the internet? And he uses three things to kind of ask that question. He, he looks at age verification, which we've talked about at length on the podcast. He looks at the kind of social media bans, and not only the one that we've seen go through parliament in Australia, but also the one that's making its way through legislation in Denmark, and also apparently there's one in New Zealand, where Alex is from. And also the, the smartphone-free childhood campaign, which is something we've touched on as well. Those three things make him wonder if we're seeing a kind of resurrection of these Victorian moral ideas, but for a, a kind of 21st-century internet age. And it's kind of fascinating to, you know, think about it in this way. But he points out that there's a, a link between, and I'd never thought about it like this, the idea of digital wellbeing and kind of an abstinence from technology, and virtue, right? So the idea that if you kind of reject technology, if you turn off notifications, if you get rid of apps on your phone or you stop using Instagram, if you set kind of, I dunno, particular restrictions for yourself, then that's a good thing. And I dunno about you, but I see this all the time. I do it myself. I've talked at length to my friends about how, you know, great I am, because I've set up an app that delays my itch to go and use certain apps. My wife, I'm gonna shame her here, um, because she never listens to the podcast, she has, she has talked at length about getting rid of Instagram and how much better that makes her feel, which may be true, but there is a, there's a kind of virtue-signaling element to it. And that idea of kind of anti-technology as virtue is something that is a really interesting one. And you'll be pleased to know, Mike, that the person who kind of, I guess, uh, characterizes this most in this op-ed is our good friend Jonathan Haidt, uh, who, who will always give you a chance to kind of rant about, um... So we will get into kind of why maybe Haidt's ideas, beyond The Anxious Generation book that we talked about, and, and maybe some of these kind of ideas about morality, perhaps have shaped the way we think about technology now. But I just wondered what you thought about the op-ed and, and what you made of it.

Mike Masnick:

Yeah, it's really interesting, and it's, it's sort of an interesting thought exercise, and I don't know if I fully agree, but I think it's, it's worth exploring. I do think that there are the sort of aspects of morality play and virtue that have gone into a lot of the discussion. And I mean, I have seen some of that, and it reminds me a little bit of the concept of the Baptists and the bootleggers. Do you know this, this concept? I, I don't know if I've brought it up before. It's pretty common in, in, in some, like, economic circles, which is that when regulations come around, you have sort of two groups that will team up and will coordinate in support of them. And the Baptists and the bootleggers concept is with the issue of alcohol prohibition, where you have two groups who find it, like, oh, prohibition of alcohol is good. One is the Baptists, who are doing it for moral and virtuous reasons, and the other is the bootleggers, who say, okay, if we outlaw alcohol, I'm going to get very, very wealthy. And you definitely have some elements of that within the sort of like anti-tech, tech-is-bad, apps-are-bad, kids-on-apps-are-bad. You have the people who are moralizing, who are making a very moral campaign about it, and then you have the groups who are like, oh, there's an economic advantage to me here. And this is something that just came out yesterday, which I sort of ranted about a little bit on Bluesky, that is sort of somewhat connected to this, is there's a Supreme Court case going on right now, I'm not gonna go into the details, Cox versus Sony, which is sort of technically a copyright case. But the amicus briefs, friend-of-the-court briefs, in favor of Sony came out yesterday, and there was one from NCOSE, which is the National Center on Sexual Exploitation.

Ben Whitelaw:

We mentioned them last week, right? Yeah,

Mike Masnick:

Did I mention them last week? I don't even remember.

Ben Whitelaw:

Yeah.

Mike Masnick:

And they're, yes, they're like this very moralizing group. They started... that's right, I did mention them. They, they started out as,

Ben Whitelaw:

See, I was

Mike Masnick:

Yeah, and I apparently forgot, but, but, um, so they came out with a brief that they said was funded in part by the Recording Industry Association of America, and that struck me as really, really surprising. Now, the RIAA has an economic stake in this case, but they're trying to get someone who will make the moral argument for them. And it's a very Baptist-and-bootlegger situation, where, when you start to think about these things, what is the real incentive behind it? And is it really virtue, or is there an economic incentive behind the virtue? And I, and I think it's sort of a useful framing for thinking about this argument as well, because I think there are some people who are jumping into these things with moral arguments, and some not so much. I think there are some people who are thinking through these things in terms of, like, well, this is new technology and we're learning how to use it, and some of learning how to use it is learning when not to use it and how to balance your life. And I think those are valuable things to think about, and sometimes that is presented under this sort of moral, virtuous guise, because it sort of is a way of telling a narrative that convinces people to do it, right? This is some of the practice of religion, and some of the concepts in religion are really that, right? They're sort of telling people how to live a healthy life, but put into moral and ethical terms, you know, as opposed to like, this is why it's good for you. And so that always kind of happens in some form. I do think that this is an interesting framing, in that we should think through, like, what happens if we're sort of overtaken by moral arguments as opposed to reasoned arguments, and where does that lead us, and what are the limitations of that? And I think this piece does it very well and sort of comes to the conclusion, like, we should be a little concerned about what we're doing if we're overly focusing on the moral arguments, rather than thinking through, like, how do we get kids to use these tools properly. I mean, the same thing that we've talked about, uh, millions of times, and I will get on a soapbox about if given the, the chance.

Ben Whitelaw:

Yeah, which is what this podcast is about.

Mike Masnick:

But I think, I think it's a really interesting framing and I really liked, I hadn't thought about it this way and I, I like, I like the argument.

Ben Whitelaw:

Yeah. And I think I, I kind of was thinking a bit about the role of Haidt again, and I don't want, I don't want this to become a

Mike Masnick:

I didn't even mention him. Okay.

Ben Whitelaw:

I know, I know, I know, he's got in my, he's got in my brain like a kind of worm. But, um, I went back to kind of like his, kind of, moral foundations theory that he proposed in The Righteous Mind, which is his kind of big book before The Anxious Generation. And it, it was very interesting to me to see that kind of three of the five moral foundations looked very similar to the kind of Victorian ideals that I was talking about, right? The idea of kind of sanctity versus degradation, authority versus subversion, loyalty versus betrayal. They're kind of, they are... you could claim that they're not rooted in that particular era per se, but that is an era where they were particularly strong. And it's interesting that it's not that long ago, and it's interesting that he's kind of landed there. And so it kind of helped me realize, okay, why are we seeing all these ideas become so popular now? You know, Haidt is one of those voices in the discussion, and a lot of what he's saying is rooted in this kind of Victorian

Mike Masnick:

and, even,

Ben Whitelaw:

I think is, is helpful for me at least.

Mike Masnick:

in The Anxious Generation, there's this weird chapter in the middle, which is basically like, we need to bring back spirituality, which really felt extremely outta place and reads very strange, but is a purely sort of moralistic argument in the middle of The Anxious Generation, and which I never quite understood why it's there. But it sort of, it made me very uncomfortable reading it, like, why are you making this suggestion? And what is the basis for it?

Ben Whitelaw:

And you know, who, as a parent or otherwise, doesn't want to feel like they're virtuous or that they're moral, right? So, so it is interesting. It is interesting just to kinda think about how maybe he's used some of these ideas and applied them to technology, smartphones, social media. We, we've talked about how the data's not very robust in, in The Anxious Generation, or at least some academics believe it's not very robust. So, a great addition to, I think, a, a longstanding topic we've discussed. We will, I think, turn to a few other stories now, Mike, that we will quickly run through, before passing over to your discussion with the Clavata founder. And so, take us through: what else were you reading this week?

Mike Masnick:

Yeah, there are a, a bunch of interesting stories, but one that, that really caught my eye is that Tinder has launched this facial verification tool. And this was really interesting, and obviously trust and safety at Match Group is run by former Ctrl-Alt-Speech guest host Yoel Roth, and he has been out in the media sort of talking about this, where they're having a system that will sort of prove that the person opening an account is a human being that sort of matches the pictures that they're putting online. It's done in a very thoughtful way, which is not surprising given that Yoel is there and Yoel is a very thoughtful trust and safety person. It's not age verification. It's not collecting all sorts of private, hidden data on you. It's just using a system that, basically, as I understand it, at least, I haven't tried it obviously, but like, makes you prove that you're a human being, where they can see a, a head and, you know, makes you move and do certain things. In other contexts, there have been like age verification providers who talk about that, like proof of liveness, which seems scary, and I'm always just more concerned in the age verification realm. But this seems fairly clever in minimizing the risk factors, the threats that I'm always concerned with with age verification, what kind of data you're storing. Part of that also is just the context of dating apps as opposed to social apps, where there's just people talking; with dating apps there is, in theory, an eventual actual human-to-human meetup, where the security concerns are much greater. And obviously there've been all sorts of stories, you know, sort of generally referred to as catfishing stories, of fake people on dating sites tricking other people into meeting them, where they can then be attacked, robbed, or worse. There's all sorts of things along those lines, and so being able to actually prove that someone is who they say they are, and is the human being that they say they are, actually does make sense. And there have been similar tools on various dating sites in the past. According to the articles, though, most of them have been voluntary, and the big change here is that they're going mandatory. I just thought it was a really interesting story of, like, within a specific context, a certain type of app that is embracing a new technology to deal with a specific problem that contextually is an issue, but doing it in a way that is thoughtful, and probably the downside risks to it are not as great as some other proposals that are out there. So I thought it was a really interesting and thoughtful approach to this from Tinder that is unique, and is a sign of why it's nice to have trust and safety that can experiment and find the right solutions to various problems within their context, as opposed to sort of like the top-down, one-size-fits-all, every-site-must-do-X, which is what, you know, a lot of the regulations are right now. Here's a situation where Tinder and Match Group have said, like, we have certain problems, we have people trying to fake accounts or whatever; here is a clever way that hopefully solves that, or at least tackles some of that problem, in a way that is not particularly harmful to our users and probably is really helpful.

Ben Whitelaw:

Yeah, it's always... you never want to believe, without kind of critical thinking, the press release of a, of a platform when they, when they release a, a safety, uh, a safety feature like this. But the numbers in this one from Tinder are very chunky, right? So it's saying a 60% decrease in exposure to bad actors, based upon a, a kind of weighted sample that they did, and a 40% decrease in bad actor reports. You know, these are big numbers, and that's why Yoel says, like, this is maybe the most impactful trust and safety feature I've seen in my 15-year career. Like, that's pretty notable. So, this idea of having verified badges is also something we're seeing a lot more of, in various guises. LinkedIn does something similar; you have to upload an ID in that case. But again, we're seeing much more of the trust part of the trust and safety dynamic being leveraged by platforms now, which I think is, is fascinating. Great. I will quickly round up with a very interesting piece of academic research, Mike, that we spotted in our reading this week. This is a, a great study about how people who are most liable to believe misinformation are most commonly not receptive to the influence of authority, and I'll explain a bit more about what that is. Basically, a bunch of kind of social psychologists surveyed people during the pandemic to ask them why they believed or didn't believe COVID misinformation, you know, some of the kind of famous claims that we saw in the news all the time, 5G networks causing the virus and that kind of stuff. And of all the questions that were posed to these participants, Mike, the thing that they found to be the kind of driving factor of why they believed or didn't believe something was whether they saw it as a kind of win to, you know, hold their view, if they saw it as a strength to counter what is a received wisdom or a, a widely held view, and very often in that case that was the kind of government or authority view. And it really shows the problem and the challenge of mis- and disinformation, because it means that essentially, before you can provide any reason or argument, the person who already has a view about how they see themselves in the world, as, as a kind of contrarian perhaps, they're not, they're not gonna be convinced otherwise, right? It is gonna be very, very hard to come up with a perfect argument to convince that person, because they see themselves in a particular light, in a particular way. And it just goes to show you, I think, why the issue of misinformation, disinformation is such a kind of difficult challenge to crack, and why so many platforms, academics, and the media have really struggled to get their arms around it. This feels like a, a reason why.

Mike Masnick:

Yeah, I think that, you know, I, I've always said that a lot of the disinformation problem is really a confirmation bias problem. People are sort of predisposed to certain ideas and they're looking for confirmation of it, and I think this is some element of it. But I think what this adds, and what makes it really interesting, is how much of it is this sort of belief in themselves as contrarians, which allows them to discount any source that has evidence against them. And I, I'm sort of wondering to some extent where this came from, because I think, you know, we've taught people for a while, which is good, which is like, you know, you should be skeptical of information that is provided to you. That has led some people into this, you know, like, self-identity as a contrarian. And I don't think, like, being skeptical or being contrarian is necessarily bad, but like, the issue is when that is your entire identity, which means you will reject anything that disagrees with you,

Ben Whitelaw:

Yeah.

Mike Masnick:

which is not... that's not being skeptical. It's not really being skeptical, but you've, you've created this identity for yourself. It's like, well, I'm a skeptic, which means I can just reject anything I don't like.

Ben Whitelaw:

Yeah.

Mike Masnick:

I think we, we should maybe recalibrate how we think about talking about skepticism and doing your own research, right? Like, you know, you have all these conspiracy theorists who always talk about doing your own research, which is good in theory, right? The idea of doing your own research and understanding these things, and, you know, as I said earlier on the podcast, looking at, you know, exploring different sources and all that kinda stuff. But that's not what is really happening. What they're really doing is looking for confirmation bias that makes them feel better. Like, oh yeah, all you sheep are believing this, but I'm the contrarian, and therefore, you know, like, I know better. And that's kind of what this study is getting at, this sort of weird self-identity of, like, I don't believe those guys, I will find my own sources that, you know, promote how I am the skeptic and the contrarian.

Ben Whitelaw:

Yeah. You know what I thought about this research, Mike? I think it's the perfect Thanksgiving dinner table fodder. It's, it's the perfect research to get out when your, like, strange Auntie Anne says, you know, I'm skeptical about this thing that is, uh, verifiably true. And then you can say, Auntie Anne, look, you know, I'm never gonna convince you, you are a self, you know, self-diagnosed contrarian, this research proves that, enjoy your turkey. Um, yeah. Um, and on that note, other holidays are available, by the way; I'm just very conscious that Thanksgiving is, is coming down the track. Um, on that note, let's go now into our bonus chat. Mike, give our listeners the pitch as to why they should stick around and listen to this great conversation.

Mike Masnick:

This was a great conversation with Brett Levenson, who's always really interesting. I always enjoy talking to him. Very, very thoughtful and creative mind, in thinking about trust and safety and challenges, and thinking about how do we make trust and safety more consistent and explainable, the things that everyone always talks about in the trust and safety space. And he has this idea of policy as code, and sort of what does that mean, and how do you think about what is essentially subjective decision making, and how do we make it a little bit more consistent and explainable? And so it was a really, really fun and fascinating discussion, and I think we'll just jump right to that right now. Welcome to Ctrl-Alt-Speech, Brett. Let's dive right in here. I know you've been thinking a lot about this concept of policy as code, and even partially how it might challenge my Masnick Impossibility Theorem. So can you possibly, so can you explain what, what do you mean by policy as code?

Brett Levenson:

Sure. Yeah, of course. So I guess I would start with: I've been an engineer my whole life. And there are certain things that I think are... like, people just think of code as, like, coders, and maybe like that's, you know, a thing that engineers deal with. But I think when we talk about code, there are some things about it that are true whether it's a very complex programming language or a very simple programming language. Like, I, I guess I would almost say there's a bar for the point at which I would consider something to be truly, like, code, basically. Dijkstra himself actually sort of had, had talked about this many years ago, and talked about the folly of natural language programming specifically, because natural language is inherently imprecise. And so to me, the first thing about code is that it is inherently precise, right? In every programming language that we've ever built, the idea is, like, you can write a line of code and the results are sort of deterministic. You know that if, if I do this, the outcome will be this. And so I think that's the first thing about code, right? It, it creates a much more deterministic, structured environment. This may not have always been true, certainly in the days of punch cards and things like that, but generally speaking, code is also testable. That's sort of like the second primitive, I guess I would say, around code: it's a testable object, again because, you know, like, if I put these inputs in and I run this code, I expect to get this outcome. And so for something in my mind to be code, it should be testable. You should be able to effectively predict, you know, we'll get into the sort of details of, of policy as code a little bit, but you should be able to predict, within some range of possible outcomes, what the outputs might look like. And then I think, to be useful code, I guess I would add this little detail: the code should be debuggable. In other words, if it's not working the way you intend for it to, it should be possible to sort of introspect into the code and go, okay, well, lemme see, what's the state at this point, and at this point, and at this point, and through that exploration understand, well, how might I change the code to get the results that I want? And so I think, like, you know, I won't go on forever, there's a whole bunch more things I, I could talk about. But I think, like, when I talk about policy as code, it's really something I came up with when I was working for Meta, um, at, you know, in Integrity. The basic idea that was in my head was like, God, everything we're being asked to do is so subjective. There's all this subjectivity. That's just, like, a starting place we have to deal with, whatever we're trying to moderate. Can we at least find a way to describe a policy that is more precise, that is more structured, that is testable, that is debuggable? So I think, like, those were the original ideas that I had behind policy as code, and, and ultimately I think they certainly contributed to a lot of what we've built at Clavata.

Mike Masnick:

And, and I think that's important, because I think one of the things that people struggle with in the trust and safety space is, you have these policies and people look at them and constantly... like, some people will disagree with the policies, but rarely is that the issue. The issue is almost always that they disagree with the implementation or the execution of the policies. And they say, well, this person violated your policy, based on their interpretation. And other people say, well, but no, it's not, like, we interpret things this way. Or people say, you know, this person didn't, you know, you punished this person. I actually just got this very long email from someone claiming, like, how dare such-and-such website have banned me, because I didn't violate their policy, I only did X, Y, and Z. And it's like, well, I could see how that might, but there's the interpretation side of it. And so the thing that I think you're getting at, which is interesting, is that because everyone, you know, people have struggled with this idea that policy has been subjective. And so is the goal here, thinking of policy as code, is like, can we get to a point where policy is less subjective and that there's some sort of more objective component to it, or is it something different?

Brett Levenson:

No, I think you're, you're hitting on exactly the system and the framework and ultimately the language that, that really we're trying, you know, at Clavata to create. Uh, I guess what I would say is, once you start thinking deeply about policy as code, you start to realize it's not enough to just go, like, oh, well, we'll make it code. Um, you actually, you actually kind of have to think about, like, what are the right constructs, what's the right set of logical operators, what's the right set of functions that you want to give people access to, so that they're able to remove some of the interpretation and subjectivity. For those who, uh, hopefully many people, are familiar with the idea of a decision tree: I think largely what, in my mind, policy as code should try to create, and try to enable users to create, is a well-defined decision tree. And what we actually see from our users is that very often that decision tree starts out kind of shallow. But because it's testable, because you can see how it's being interpreted over time, you start to realize, like, you know what? There's a little too much room for interpretation in this rule I wrote. The good news then is, okay, you know what, I can actually break this rule down further. The example I, I always give people came from something I saw during my career, which was... everybody loves a policy about pornography, right? And one of the exceptions, and anybody who's sort of worked in safety knows this is one of the things that makes it hard, is that there are exceptions to every imaginable rule you could come up with. One of the exceptions was, well, if it's art, it's not pornography. Right? Okay, what's art? Now we have to interpret and define art. Right? But you can interpret and define art, maybe not, and, and you know, I know you and I have talked about this before, like maybe not to a definition that will satisfy everyone. But actually, given how hard it is to do moderation effectively, I'm sort of a proponent of the idea that I'm willing to sacrifice a little bit of what I would call subjective accuracy, like how happy is everybody gonna be with the definition, for consistency in decision making and transparency in decision making, right? Like, can we make the rules precise enough, and if they're not yet precise enough, can we continue to iterate on them and make them more and more precise, and continue to sort of increase the depth of that tree, until hopefully there's little room left for misinterpretation of the rules? I think that's largely one of the goals of structuring policy as code: to give people a way to build that decision tree, easily extend it as, of course, the world throws new and unexpected examples at us, and, of course, quickly tie that iteration loop together, right? We write the policy, we test the policy, we see the results, and if they're not what we want, we go back and we write the policy again. And, and, and hopefully, if we've executed all of this correctly, that's minutes, not weeks or months, as is often the case.
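
To make that concrete, here is a minimal, hypothetical sketch of a policy written as a testable decision tree. This is not Clavata's actual language or API; the rule name, labels, and actions are invented purely to illustrate the precise, testable, debuggable properties Brett describes.

```python
# Hypothetical sketch only: the labels, actions, and rule below are invented,
# not Clavata's language or API. Each branch of the decision tree is explicit,
# deterministic, and testable.

from dataclasses import dataclass, field

@dataclass
class Item:
    text: str
    labels: set = field(default_factory=set)  # signals from classifiers or reviewers

def nudity_policy(item: Item) -> str:
    """Walk the decision tree and return an action: allow, remove, or escalate."""
    if "nudity" not in item.labels:
        return "allow"
    # Exception branch: "if it's art, it's not pornography" -- here 'art' is just
    # another label, which in practice would be defined by its own deeper sub-rule.
    if "artistic_context" in item.labels:
        return "allow"
    if "minor_present" in item.labels:
        return "escalate"  # the highest-risk branch always goes to a human
    return "remove"

# Because the policy is code, it is testable: fixed inputs give fixed outputs.
assert nudity_policy(Item("museum photo", {"nudity", "artistic_context"})) == "allow"
assert nudity_policy(Item("spam post", {"nudity"})) == "remove"
assert nudity_policy(Item("user report", {"nudity", "minor_present"})) == "escalate"
```

Because each branch is explicit, a surprising outcome can be traced to the exact rule that fired (the debuggable property), and a new exception becomes a new, testable branch rather than a reviewer's judgment call.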

Mike Masnick:

Right, right. There's a famous Radiolab episode, which is now very, very out of date, where they were exploring this, and they actually got to go inside the room, I don't know if you were there at Meta or Facebook at the time, where they were, like, determining some of these things. And the point they kept making is, like, okay, well, we set a rule, and then suddenly you have this situation which is like, where does it fit the rule? Okay, well, we write this sub-clause to the rule, and then you have this, another situation, and you're like, ooh, we have to write another. And they sort of talked about how the rules just expanded and expanded and expanded. And then, then you sort of have the related problem of, like, how do you have someone who understands this massive book of rules and can then do stuff with it consistently, and it becomes its own issue. The other issue that comes up, and I'm sort of curious, your take on this, and this is always my concern with a lot of these, is how context plays into this. Because there are often cases where there are moderation decisions that, if you strip it of all context, you can say, oh, this situation is just like that situation. But if you add back in the context, you quickly realize, like, oh wait, these are wholly different situations. And people who are claiming they're the same, it's often bad faith, where it's like, you're, you're stripping away the important context. So I'm curious, as you're thinking about policy as code, how do you build in the ability to handle context, and how does that work?

Brett Levenson:

We actually have a context operator. We realized very early on how important context actually is to the decision making. I guess a little history: when I first conceptualized this, LLMs were not everywhere yet. We knew about them from Facebook AI Research, and there was early research going on, but they certainly weren't publicly available. So the original conception of policy as code was, can we break what is effectively a complex policy that we're going to ask somebody to make a decision about down into little bits of information, what we ended up calling infos, basically, and then ask each of those questions to our human reviewers so that the decisions are hopefully less judgment-heavy, let's say. Right? But we knew even then that the context was going to be important. And so clearly there are two requirements here. One is that the context has to be available. The classic example is a social media post where there's an image and a piece of text, and individually they're fine, but together they're potentially really offensive or really problematic. Someone just cited a great example of this to me the other day, and, let's just say, I can't repeat it here, but, um,
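As a rough illustration of that decomposition, here is a hypothetical Python sketch of turning one judgment-heavy policy call into small, factual questions whose answers are combined mechanically. The question wording, the stubbed answers, and the combination logic are assumptions for illustration, not how Clavata or Meta actually implemented this.

```python
# Hypothetical sketch: decompose one subjective policy decision into small,
# factual questions ("bits of information") that reviewers or models answer,
# then combine those answers with explicit, auditable logic.

QUESTIONS = {
    "shows_weapon":   "Does the image show a weapon?",
    "threat_in_text": "Does the text contain a threat directed at a person?",
}


def collect_answers(content_id: str) -> dict[str, bool]:
    # In a real pipeline these answers would come from human reviewers or a
    # model; here they are stubbed so the sketch runs end to end.
    return {"shows_weapon": True, "threat_in_text": True}


def violates_violence_policy(content_id: str) -> bool:
    answers = collect_answers(content_id)
    # The policy decision is now a simple combination of factual answers
    # rather than one big judgment call.
    return answers["shows_weapon"] and answers["threat_in_text"]


print(violates_violence_policy("post_123"))  # True
```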

Mike Masnick:

Okay.

Brett Levenson:

but, um.

Mike Masnick:

This is a hazard of being in this space: you often come across stuff that you cannot repeat publicly.

Brett Levenson:

But nonetheless, one, you must have the context available. And I do think this is why we've strived to make sure we can do multimodal evaluation. I honestly think anybody who's serious about moderation, particularly today with the availability of the technology we have, should be looking to do multimodal evaluation and to make sure that context can be included. But then, as we've designed our language for policy as code, we have a context operator, so that we can essentially say, hey, these rules must be true, but they're only true if the context is X, Y, or Z. Basically, when the context is not X, Y, or Z, this is not true. Or, of course, vice versa: you could say if the context is not this, then the rules are true. But I think, to your point, there are plenty of examples of cases like this. The classic one that always comes to mind for me, and I used to hear about this from friends when I did work at Meta Integrity, was, well, I made a post and I was speaking out against racism, why did my post get taken down? And I was like, God, the models just aren't good enough to understand the context of what you were saying. They simply understood that you were referencing something that can create toxicity, basically.
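A minimal sketch of what a context gate like that could look like, assuming a hypothetical `ContextualRule` structure in Python. Clavata's real operator and syntax aren't described in detail here, so the names and the counter-speech example below are purely illustrative.

```python
# Hypothetical sketch of a "context operator": a rule that only fires when the
# surrounding context matches (or, flipped, when it does not).
from dataclasses import dataclass
from typing import Callable


@dataclass
class ContextualRule:
    name: str
    check: Callable[[dict], bool]       # evaluates the content itself
    context: Callable[[dict], bool]     # evaluates the surrounding context
    when_context: bool = True           # True: rule applies only IF context holds;
                                        # False: rule applies only if it does NOT

    def violates(self, content: dict, context: dict) -> bool:
        if self.context(context) != self.when_context:
            return False                # context gate not satisfied, rule is inert
        return self.check(content)


# A slur-reference rule that is suspended when the post is counter-speech,
# i.e. the poster is quoting the slur in order to condemn it.
slur_rule = ContextualRule(
    name="slur_reference",
    check=lambda c: c.get("mentions_slur", False),
    context=lambda ctx: ctx.get("intent") != "counter_speech",
)

print(slur_rule.violates({"mentions_slur": True}, {"intent": "counter_speech"}))  # False
print(slur_rule.violates({"mentions_slur": True}, {"intent": "harassment"}))      # True
```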

Mike Masnick:

right. Yeah.

Brett Levenson:

You know, I think this is why, to me, actually designing policy as code has been a really interesting exercise, because it bears similarities to other kinds of programming languages, sort of, but I've never seen a programming language that has a context operator. So we've definitely had to invent some paradigms that never existed before.

Mike Masnick:

Right. Yeah, no, it's really fascinating. Is there anything else, I mean, there's obviously a lot we could go into, but in the interest of time, is there anything else specifically that you think our listeners should be thinking about with regard to this concept of policy as code?

Brett Levenson:

I mean, there are really two other things I would call out about policy as code. One of the things I love about it, and that I wished I had when I was at Meta, is that it actually connects the subject matter experts, the people who are responsible for maintaining the policies, for updating the policies, for thinking deeply about what we want on our social platforms and, and this ties to the second thing, perhaps more importantly now, how we want our AIs behaving. What behaviors we want them to engage in, when we should be worried about them leaking information, when we should be worried about hallucinations and truth. One of the things I really like about it is that it gives those people pretty much direct control, direct impact, when it comes to the outcomes, which is something that at Meta was simply not possible. The policies got written, they kind of got thrown over the wall, and operations did their thing: the policy got translated into something we could train human reviewers on, the human reviewers got trained, then they labeled, and the labels came to me. So basically the labels were the representation of the policy, and I just had to hope that they matched up with what the policy author said, and often they didn't. But that's a story for a different time. So one of the reasons I would encourage people to look at this idea of policy as code, however it's implemented, is that I think it gives our subject matter experts, the people who are responsible for this, real agency in the process. Like, I make a change, that change impacts the platform, and if it's not having the impact I want, well, it's just another change away, basically.

And then two, what we are seeing, at least, is that it's been incredibly useful for latency-sensitive applications, which these days more and more means AI agents. There are so many interesting use cases I could talk about, but we work with customer service agents and retail sales agents and internal AIs that have access to databases. In these use cases, if you're going to have any kind of decent user experience, the moderation kind of has to be invisible to some degree. And for it to be invisible, and I guess this is what I'll say, one of the cool things about policy as code that we have discovered is that it allows us to get pretty tricky about how we actually go about doing the moderation. No longer is this just one big decision we have to render; it's lots of little decisions we're trying to fit together into a puzzle. And, this may get a little too techy, but that means we can parallelize the work. It means that if we discover very early in the process that there is a violation, maybe we just don't finish the rest of the process, because we already know the answer. And so, thinking ahead to the AI-driven world that I think is coming our way, personally I think policy as code is going to be a really powerful tool in the toolkit, to make sure that all the AIs that are coming our way stay within the bounds of behavior that we want them to, you know, maybe adhere to brand and tone style guides, and certainly don't do worse things than that, which obviously are possible.

So, yeah, that's the other thing I would add there.
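As a rough illustration of that parallelize-and-exit-early idea, here is a hypothetical Python sketch that runs several small checks concurrently and stops as soon as any one of them reports a violation. The check functions, their timings, and the sample content are invented stand-ins, not Clavata's actual pipeline or API.

```python
# Hypothetical sketch: evaluate many small policy checks concurrently and
# stop early once any violation is found, for latency-sensitive callers
# such as AI agents.
import asyncio


async def check_profanity(text: str) -> bool:
    await asyncio.sleep(0.01)            # stand-in for a fast rule/model call
    return "badword" in text


async def check_pii_leak(text: str) -> bool:
    await asyncio.sleep(0.05)
    return "ssn:" in text


async def check_brand_tone(text: str) -> bool:
    await asyncio.sleep(0.2)             # stand-in for a slower model call
    return text.isupper()                # all-caps shouting violates the style guide


async def violates_policy(text: str) -> bool:
    tasks = [
        asyncio.ensure_future(check_profanity(text)),
        asyncio.ensure_future(check_pii_leak(text)),
        asyncio.ensure_future(check_brand_tone(text)),
    ]
    # Consume results as they finish; the first violation lets us cancel the rest.
    for finished in asyncio.as_completed(tasks):
        if await finished:
            for t in tasks:
                t.cancel()
            return True
    return False


print(asyncio.run(violates_policy("our ssn: 123-45-6789 is private")))  # True
```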

Mike Masnick:

That's great. Well, once again, Brett Levenson from Clavata, thanks so much for having this discussion. It's really interesting, there's lots to think about, and I'm sure we'll continue to discuss this idea of policy as code as the space continues to develop. It's a really interesting concept, so thanks for coming on board.

Brett Levenson:

Thanks so much, Mike. I really appreciate the time.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.