Ctrl-Alt-Speech

The TAKE IT DOWN Takedown

Mike Masnick & Ben Whitelaw Season 1 Episode 50

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Resolver, the leading provider of risk intelligence and advisory services. In our Bonus Chat, Karley Chadwick, Head of Platform Trust & Safety Delivery at Resolver, talks about emerging safety trends in gaming and augmented reality and reflects on her experience as a threat analyst.

If you’re in London on Thursday 27th March, join Ben, Mark Scott (Digital Politics) and Georgia Iacovou (Horrific/Terrific) for an evening of tech policy, discussion and drinks. Register your interest.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Mike, we've got an episode today about open protocols. It's a theme that runs through today's Ctrl-Alt-Speech. So I thought I'd start today with a user prompt from Surf, which is an app from the creators of Flipboard that uses the AT Protocol. We'll talk a bit more about it in the intro, but the way that Surf gets you to engage on its app is to ask you to search the social web. So I want you to verbally search the social web.

Mike Masnick:

Verbally search the social web, huh? Okay. Well, I will say that this has been a good week for the social web, as I discovered at South by Southwest, with a bunch of conversations, including with the creator of Surf and also with the CEO of Bluesky. There's a lot going on in the social web, and we'll talk about some of it in a moment, but what about you? If you were to search the social web, what would you be finding?

Ben Whitelaw:

Well, I've made the mistake of going into one of my social apps and looking at what things I'd searched for recently. Always a dangerous thing. And I found Zero Day, which is a Netflix series about cyber hacking, which I dunno if you've seen. Very good. I was looking for some reactions to it because I really enjoyed it. It stars Robert De Niro and is quite prescient in terms of the US government situation. So I tried to see what reactions there were to it.

Mike Masnick:

I was gonna say, you're searching for zero day, and that's, you know, usually for hacks. So I was wondering if you were turning into a bit of a hacker, Ben.

Ben Whitelaw:

Yeah, no comment, no comment. Hello, and welcome to the 50th episode of Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's March the 13th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and from our sponsor today, Resolver, the leading provider of risk intelligence and advisory services. This week we're talking about a whole range of topics, including decentralized social media, user appeals, and a big new book that's dropped about Meta. My name is Ben Whitelaw and I'm the founder and editor of Everything in Moderation, and I'm joined by a recently travelled Mike Masnick, who is just back from South by Southwest.

Mike Masnick:

A not-yet-recovered Mike Masnick. South by Southwest is always absolutely exhausting. It is impossible to find time to sleep, as far as I can tell, in the few times I've been, and this was no exception.

Ben Whitelaw:

I've not had a chance to go, Mike. Tell us a bit about what it's like and what your experience was this time round.

Mike Masnick:

Uh, it's hard to describe. There are, you know, a ton of people, a ton of things going on all at once. The city gets completely overwhelmed. There will be very, very long lines for barbecue, but it is worth it because the barbecue is always excellent. There are always a lot of really smart people around, a lot of wonderful people to talk to. It's impossible to get anywhere in any reasonable amount of time. You'll miss half the things that you think you want to go see. But yeah, it's, you know, it's a time.

Ben Whitelaw:

Yeah, just like all good conferences: you don't see enough, and you eat too much.

Mike Masnick:

Yeah, yeah. No, it's pretty intense. But you know, the really good thing about South By is that there are always a ton of really good and really smart people. You can be walking down the street, as happened to me on Monday morning, I was walking down the street and not paying attention, and all of a sudden Corey Ero grabs me and it's just like, "Mike!" So you have moments like that,

Ben Whitelaw:

Nice.

Mike Masnick:

which was fun. But yeah, there's a ton of smart people. I had so many great conversations. I met a bunch of really smart people, got to hang out with a bunch of old friends I hadn't seen in a while. A lot of catching up, a lot of talking, a lot of good idea sharing. It's always really mind stimulating.

Ben Whitelaw:

Yeah. And you managed to squeeze in an interview with the CEO of Bluesky, Jay Graber. Um, you know, ding, ding, ding.

Mike Masnick:

Ding, ding, ding.

Ben Whitelaw:

Mike's on the board of Bluesky, we must declare. Yeah, I dunno if you declared that in the interview, or if it was just in the version I saw.

Mike Masnick:

Yeah, I mean, I didn't have an opportunity. I don't know if the intro, the CEO of South by did the intro, I don't even remember if they said that or not. It's a good point. But yeah, Jay was the main attraction. My role was just to throw some questions at her,

Ben Whitelaw:

Yeah.

Mike Masnick:

but it was a fun interview. It was on the main stage, and it seemed to be really popular. Apparently there was a really long line to get in. We got a lot of really good feedback from it, people really interested in what she had to say. The video of it is online on YouTube, if folks wanna see it. I think it was a really detailed exploration of Bluesky, what it's trying to do and why it's different than other social apps out there.

Ben Whitelaw:

Yeah, she does a really good job of unpacking some of the moderation features in particular that users can use to customize their experience. She uses the phrase "choose your own adventure", which I think is a nice summation of the direction in which platforms are moving, I would say, and obviously what makes Bluesky really interesting. And she had a great t-shirt on, which attracted a lot of attention.

Mike Masnick:

That was the news story. Yeah, that was the one that broke the news.

Ben Whitelaw:

Yeah. For those who didn't see, she was wearing a t-shirt that looked very similar to one that Mark Zuckerberg had worn before, but hers said "a world without Caesars".

Mike Masnick:

In Latin.

Ben Whitelaw:

in Latin.

Mike Masnick:

Right. Because the original, uh, Mark Zuckerberg apparently designed his own t-shirt. It wasn't just a t-shirt he wore; his t-shirt was a play on a Caesar statement, also in Latin. The original was effectively "Caesar or nothing", basically, "I am your king". And so Zuck's was "Zuck or nothing", in Latin, which is, again, Zuckerberg saying, "I am your king". And so, yes, Jay's t-shirt looked exactly the same, had Latin phrasing in the same font, except saying "a world without Caesars". It's fairly clever.

Ben Whitelaw:

Very clever, very clever. Nice to be saying something without saying anything at all.

Mike Masnick:

Yes. Yes. I think one of the media outlets said something along the lines of it being shitposting on stage without saying anything,

Ben Whitelaw:

Yeah,

Mike Masnick:

There's now tremendous demand for that t-shirt, and I believe that Bluesky is planning to sell it.

Ben Whitelaw:

Ah, okay. Okay. Yeah, I'll get my order in. One question I had was around one of the things she said about the kind of billionaire-proof element of Bluesky, and how the nature of it running on the protocol means that it's less likely to be bought by a billionaire. I've seen a bit of discussion about that afterwards. From your point of view, do you really think that a site like Bluesky is safe from being taken over by an Elon Musk type figure?

Mike Masnick:

Yeah, it's a good question. I mean, I have been using the term "billionaire resistant" rather than "billionaire proof",

Ben Whitelaw:

Yeah.

Mike Masnick:

and I will stand by that. So, you know, the argument is that, as Jay made clear, everything is open and replicable, and people are trying right now to replicate all different aspects of the Bluesky AT Protocol stack. We're seeing more and more of that happen every day, there's more coming, and the company is extremely excited about that. Unlike most companies, where other people trying to copy what you're doing and take your users would be a threat, Bluesky is very, very supportive of that. Part of that is what I've also referred to as the technological poison pill.

Ben Whitelaw:

Mm-hmm.

Mike Masnick:

Which is that if Bluesky, the company, does anything bad, and again, ding, ding, ding, just repeating, I'm on the board. But if they do anything bad or horrible, if I turn out to be a horrible board member and direct them to, not that I have the full power to do that, but, you know, tell them to do bad, horrible things that will be bad for their users, it is set up so that users can exit. Other people can build everything. You can bring your users, you can bring your content, and Bluesky will have no control over it. Therefore, that gives the company itself an incentive not to do those bad, awful things. I think the example Jay used was inserting ads between every post. But that also makes it less attractive for a billionaire to come in and take it over, because if they do, that same thing happens, right? So if an Elon comes in and takes it over, rather than everybody having to go re-form somewhere else, you can very quickly move to a different platform and bring all your content with you and bring all your followers with you, which we couldn't do when Elon took over Twitter. So it's resistant. But I mean, there's always a risk that a billionaire could do something bad, or that the company itself could do something bad. It's just designed to make that less likely, to align the incentives better with the users.
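To make that portability point concrete: Mike's argument maps onto the AT Protocol's public sync API, which lets anyone export a full, signed copy of an account's data repository. Here's a minimal sketch, assuming a user hosted on the main bsky.social PDS; the two XRPC endpoints shown are real parts of the protocol, while the handle is a made-up example:

```typescript
// Sketch: exporting a user's entire AT Protocol data repository.
// com.atproto.identity.resolveHandle and com.atproto.sync.getRepo are
// real, publicly documented XRPC endpoints; the handle is hypothetical.

const PDS = "https://bsky.social"; // the user's Personal Data Server

async function exportRepo(handle: string): Promise<Uint8Array> {
  // 1. Resolve the human-readable handle to a stable DID.
  const res = await fetch(
    `${PDS}/xrpc/com.atproto.identity.resolveHandle?handle=${handle}`
  );
  const { did } = await res.json();

  // 2. Download the whole signed repository as a CAR file. Because the
  // repo is self-certifying, any other host can import and verify it,
  // which is the "exit" that makes the network billionaire resistant.
  const repoRes = await fetch(
    `${PDS}/xrpc/com.atproto.sync.getRepo?did=${encodeURIComponent(did)}`
  );
  return new Uint8Array(await repoRes.arrayBuffer());
}

// Hypothetical usage:
// const car = await exportRepo("example.bsky.social");
```

Follows work the same way: the social graph lives in public records, so whatever host a departing user moves to can re-read those relationships.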

Ben Whitelaw:

Yeah. I will say that Jay is a very good speaker on these topics. She does a great job of explaining Bluesky for those who maybe aren't as clear on what it is and what it's built on. And I came away from it being pretty hopeful about the future of social media generally, which is something that hasn't happened for a while, Mike. So congratulations to you and Jay for inspiring a bit of hope at this time. Couple of things to mention, a few bits of admin. We've got a couple of projects that we're working on outside of Ctrl-Alt-Speech that are worth mentioning. You've got a new podcast, you mentioned it a couple of weeks back, about Section 230. I've got some bad news: you were referred to as a veteran journalist

Mike Masnick:

Oh,

Ben Whitelaw:

in the promotion of this podcast, and I just wanted to flag it to you. I think it's unfair.

Mike Masnick:

Yeah.

Ben Whitelaw:

Experienced, maybe, but

Mike Masnick:

Yeah, veteran, man. Take me out back and shoot me. This is bad. But yeah, the new podcast series, it's a limited series. I honestly forget how many episodes it is, it's either six or seven, I think. It's called Otherwise Objectionable, available wherever you get podcasts, and the first episode is out now. It's a documentary-style podcast. It's not this nonsense, Ben, where I just chat randomly. I'm really impressed with the production team and what they've done. There are amazing interviews in there. In the first episode we have Jeff Kosseff and Eric Goldman and Jessica Ian, and a bunch of other tech policy experts. Coming up soon, we have both of the co-authors of Section 230. We have the lawyers who were involved in a bunch of the early cases. It's a really, really fun podcast. I'm very excited and happy that it's out in the world.

Ben Whitelaw:

Nice. I've heard rumors that it's gonna be as big as Serial. That's what I'm hearing on the podcast

Mike Masnick:

Somebody was telling me that, yeah. We need to add a murder somewhere in it.

Ben Whitelaw:

I mean, some will claim that that's what is about to happen to Section 230.

Mike Masnick:

Yeah. Seriously.

Ben Whitelaw:

Um, and from my side, you might have heard that I did an event with a couple of great newsletter creators in January, an event we called Marked As Urgent. And we're doing another one of those in London at the end of March, in a couple of weeks' time. If folks are in London, and there's a big trust and safety conference going on, do come along. We'll include details in the show notes and also in tomorrow's Everything in Moderation. And then, yeah, we'll run into today's stories. At the end of today, we've got a great bonus chat with a great company called Resolver. Many Ctrl-Alt-Speech listeners will know that next week is the annual Game Developers Conference, and there's a lot of interest in player safety at the conference, as you'd expect. We're gonna talk to Karley Chadwick, who's the Head of Platform Trust and Safety Delivery at Resolver, about some of the trends that she's seeing in the gaming space at the moment. If you listen in, you'll hear me gently but firmly being put in my place about a couple of things, which is worth listening to. So.

Mike Masnick:

I don't put you in your place enough? Is that what you're saying?

Ben Whitelaw:

It wasn't a request, Mike, it wasn't a request. Obviously I wasn't here last week, Mike, but I did listen to the podcast, and you talked a lot about the risk that the current US administration poses from an internet safety perspective. That's where we're gonna start today's summary of stories, and you're gonna take us through a piece published in The Verge about the Take It Down Act and some of the threats that it poses.

Mike Masnick:

Yeah, so the Take It Down Act is this bill that came out last year. It almost got pushed into the final, you know, must-pass funding bill at the end of last year, and then basically at the very end it was stripped out. It is one of these laws, like many internet regulation laws, that on its face, if you just read what it's about, you're like, yeah, that makes sense, we should absolutely do that. And then when you get into the details you realize, oh wait, no, this would be really bad, we shouldn't do that. So it's one of those complicated things. The issue it addresses is what is commonly referred to within the space as NCII, or non-consensual intimate imagery, the thing that used to be referred to as revenge porn before people realized that's a horrible name for it. It's, generally, intimate imagery of someone who does not want it shared. That is a real problem, and it has been a real problem on the internet for a very long time. That kind of content is often weaponized, and it often creates real problems for people. And we've talked about it,

Ben Whitelaw:

Yeah, two weeks ago. Breeze Liu, the profile in Wired. Really, really tough stuff. Yeah.

Mike Masnick:

Exactly. And even at the end of that story, I think we mentioned that she was pushing for the Take It Down Act because she wanted legal support for this. What's happened in the last few weeks, which has made the law get a lot more attention, is, one, that it passed the Senate through unanimous consent, which means they didn't even have a vote. Somebody just asked, is anyone gonna protest this? And none of the senators stood up to protest it. So it has passed the Senate and is in the House. Mike Johnson, who is the Speaker of the House and has basic control over what will move, has said that he is interested in moving this bill, so it sounds like that is likely. The First Lady, Melania Trump, has been promoting the bill as well. And then in the State of the Union, or the joint address to Congress, last week, was it last week? Time makes no sense anymore. I think it was last week. Donald Trump also promoted it as one of the things he was pushing for. Though, as he is known to do, he said the thing that he's supposed to say on the teleprompter, and then he went a little bit off script. So in this case, he talked about it, then went off script immediately and said that he intended to use this law to take down content, because nobody has had a more unfair experience online. Nobody in the world, Ben, has been treated more unfairly online than Donald Trump.

Ben Whitelaw:

I agree.

Mike Masnick:

Yeah. Well, anyways, I wrote something about that where I was looking at it and saying, well, here we have the President of the United States admitting outright that he plans to abuse this law to take down content that he feels is unfair to him. That should be a sign that the law is not written in a way that avoids abuse, and that is the major problem. And that is the point raised in this really wonderful piece in The Verge by Adi Robertson, saying that the Take It Down Act isn't a law, it's a weapon. It is a weaponized bill that can easily be abused by any particular government, by people who want to take content offline. It doesn't have safeguards built into it. There are similar laws. The most striking similarity that a lot of people point to is the DMCA, in the US copyright context, where if you find infringing content on a website, you can notify that website. It's the notice-and-takedown process: if the site removes the content upon receiving a notice, it receives immunity, a safe harbor, so it can't be sued for infringement if it removes the content upon notice. It's more involved than that, because the DMCA also has a counter-notice option: whoever uploaded the original content, if they are noticed and their content is removed, can file a counter-notice, and the site can put that content back up. And if the copyright holder still feels it's a problem, they then have, I think it's 10 days, I forget now, to file a lawsuit, and the platform remains in a safe harbor. So they're protected as long as they follow that notice-and-takedown and counter-notice process. There is also a provision in the DMCA, which is very poorly written and rarely effective, that tries to limit fake notices on non-infringing material. It is written so poorly that it can almost never be used in an effective way. There are a few cases, really on the margins, but mostly it is ineffective, and we've seen that the process has been abused. There have been multiple studies on how the DMCA is used to take down content that is not infringing. Various reports, depending on the platform, have said that somewhere in the range of 50 to 60% of DMCA notices are false notices for non-infringing content. That seems like a problem, and it is already concerning. Now we move to the Take It Down Act, which is a similar sort of thing, but for NCII, and it has none of those already problematic and weak safeguards in the DMCA. There's no counter-notice set up. There's no punishment for sending false notices. And so it is designed in a manner that is very much set up for people to use it abusively, to take content down. And then the enforcement side of it is in many ways left up to the FTC, and we right now have an incredibly biased FTC, an FTC that has made it clear it wants to use its powers to punish certain companies almost entirely on an ideological, partisan basis. It has, you know, opened an investigation, and it is pretty clear that it is not going to do anything to punish X. It's unclear if it will punish Meta; that might depend on how nice Mark Zuckerberg is to the administration. But it might go after other sort of enemies of the Trump administration. So why in the world would anyone support this bill right now? I mean, I understand why the sort of MAGA world will, because it is a weapon that they plan to weaponize.
But the bill is co-sponsored by a Democrat, and you have a whole bunch of Democrats who are supporting it because they are reading the summary of the bill and the title of the bill and they think it's a good bill. So, you know, I'm happy that The Verge had this article laying out just how problematic the bill is. And I still think the fact that Donald Trump himself made clear, I don't think he even understood the bill, but that his first reaction to seeing what the bill was supposed to do was to talk about how he was gonna use it to take down criticism of himself, should set off alarm bells. So I don't understand why anyone has been supporting this bill. But there's a very good chance that this bill will become law in the very near future, and then it will likely be widely abused.
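Since the DMCA comparison carries most of the argument here, a toy sketch of the §512 notice-and-takedown flow Mike describes may help. This is a simplification for illustration only, not legal advice or a real compliance system; the 10-to-14-business-day window is the §512(g) figure he is half-remembering:

```typescript
// Toy model of the DMCA notice / counter-notice flow, to contrast with
// the Take It Down Act as described above. All names are illustrative.

type Status = "LIVE" | "REMOVED" | "RESTORED";

interface HostedItem {
  status: Status;
  counterNoticed: boolean;
}

// A takedown notice arrives: the platform removes promptly to keep its
// §512 safe harbor from infringement liability.
function onNotice(item: HostedItem): HostedItem {
  return { ...item, status: "REMOVED" };
}

// The uploader disputes the claim with a counter-notice.
function onCounterNotice(item: HostedItem): HostedItem {
  return { ...item, counterNoticed: true };
}

// §512(g): if the claimant does not file suit within roughly 10 to 14
// business days of the counter-notice, the platform may restore the
// content and stay within the safe harbor. The Take It Down Act, as
// described above, has no counter-notice step and no penalty for bogus
// notices, so content would simply stay at "REMOVED".
function afterWaitingPeriod(item: HostedItem, suitFiled: boolean): HostedItem {
  if (item.counterNoticed && !suitFiled) {
    return { ...item, status: "RESTORED" };
  }
  return item;
}
```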

Ben Whitelaw:

And what does that abuse look like, do you think, Mike? What are the potential implications across the platforms that are governed by it?

Mike Masnick:

I think there are two elements to be aware of. I mean, there are more, but within the time we have, let's do two. The first is just the ability of anyone to force any kind of content down. Because you can make these bogus claims, and because the risk of failing to take down content when you receive a notice is so high, platforms will be very quick to take down content. And again, like I said, with the DMCA we've seen that somewhere around 50 to 60% of notices are probably bogus. What we've learned is that there's lots of content, and everybody listening to this podcast knows this, there's lots of content online that someone doesn't like. Historically, you could complain, but the only legal mechanism to remove any kind of content, especially in the US, was copyright law. And therefore, anytime anyone didn't like any content, they would immediately jump to making a copyright claim, because it's just a legal process.

Ben Whitelaw:

Even if it didn't involve copyright.

Mike Masnick:

Yes. And often it had nothing to do with copyright. And people would even use it for legitimately bad content. I remember there were a bunch of stories about people seeing anti-vax content, which is dangerous, problematic content, but people would file copyright claims on it to try and take it down. But then also there was lots of stuff like, we've had DMCA notices filed against our articles a bunch. There was this guy who, for years, ran one of the first NCII-related sites, where people would send in intimate imagery of former partners or whoever, and he would post it to his website. He got sued, the FTC cracked down on him, and we wrote about all of that. He used a DMCA claim to try and take down our article. He also used a DMCA claim to try and take down the FTC settlement page against him. But that's what happens when you have a tool that allows people to take content down: they're going to use it. It's a legal tool, it looks scary, companies don't wanna take the risk and liability, so they'll often just take that content down. Now, giving them a tool that's even stronger than the DMCA, it's clear that a lot of abuse is gonna happen. So that's one side of the abuse. The second is what I was describing with the FTC. If you have a weaponized FTC that can enforce this law, then they get to pick and choose who they're going to enforce it against. And what has been made clear in this administration is that every enforcement action is only going against ideological enemies, not serving any sort of legitimate purpose. And therefore it will definitely be weaponized by this administration to go after perceived enemies all over the place. They'll just say that they didn't take stuff down.

Ben Whitelaw:

And what's the likelihood that it doesn't make it through the House? I mean, is there something being done to educate representatives about what could happen if it passes?

Mike Masnick:

I have no idea. I mean, civil society has definitely been speaking out about it. The Center for Democracy and Technology raised the alarm last year. They've been the loudest on it and the most thorough, and they're great, they know what they're talking about, and they've been making an effort. EFF has recently picked up on it as well and has been talking about it. Others have too. But it's a concern. The Republicans hold the House by a very, very narrow margin. The last bill that got a lot of attention like this was obviously KOSA, and eventually enough Republicans realized that it could be used for censorship that they sort of bailed on it. But that was before they knew they had the trifecta of holding the House, the Senate, and the administration, which now they do. So maybe they don't mind weaponizing it, a lot of them. I don't know if they have the votes. The real trick is making sure that none of the Democrats will vote for it. I assume that there is some effort going on on the Hill. I'm not privy to that kind of stuff, but I would hope that Democrats can be taught that this is a bill that will be weaponized by the Trump administration. Obviously, right now there are bigger issues with the funding of the government, which may shut down. Even though I still find it funny that the media is still reporting on the budget and the funding bills when we have Elon Musk shutting down the government no matter what happens with Congress. But that's an aside. That was last week's discussion.

Ben Whitelaw:

The same end by different means.

Mike Masnick:

Yeah, yeah. So, I don't know that it's immediate, because right now they're focused on that. But in theory, that could be solved today or tomorrow, and then they could try and get back to business. And I wish that more Democrats realized that it is not business as usual, and a bill like this is not what it says on the tin. It is very problematic in all sorts of ways. And giving the administration more power to punish websites for things they don't like is not something that anyone should be doing right now.

Ben Whitelaw:

Yeah. Thanks, Mike. That's a really, really helpful overview. I'm sure we will come back to this on future episodes of Ctrl-Alt-Speech. I'm gonna move us on now to a story that has been very loudly talked about on a bunch of different news networks this week. I think the Take It Down Act stuff has probably been bubbling under the surface a little bit and is likely to erupt, but this next story is one that has attracted a lot of attention, for reasons we'll discuss. This is a new book that's come out about Facebook. If you remember, a few years ago now, there was the whistleblowing of Frances Haugen. This is a big book serialization that's come out this week by a woman called Sarah Wynn-Williams, and the book is called Careless People. Now, I may have a lot of book guilt already. There are a lot of books I buy and never get round to reading. I'm gonna try and avoid this being one of them. It's attracted a lot of attention this week. She's given an interview to NBC News in which she explains what's in the book, and I'll talk a bit about who she is and what she says. She worked at Facebook from 2011 to 2017 as director of global public policy and worked within, essentially, Joel Kaplan's team. Joel Kaplan is the new Nick Clegg, for those who don't know, and I will forever refer to him as that. So he is the chief global public policy head, or whatever grandiose title he has now. And she worked within his team on public policy, primarily in Asia and Latin America, and got a lot of exposure to Mark Zuckerberg and Sheryl Sandberg and other members of the Facebook team. Her book, her memoir, is essentially the latest insight into the kind of machinations of Facebook as a company that controls large swaths of the globe's speech and access to the internet, frankly. She says a number of things in it, which have been variously reported on. She alleges sexual harassment against Kaplan, which Facebook says he was cleared of back in 2017. She goes into more detail about an SEC filing that she submitted last year about Facebook misleading investors. She talks a bit about how Zuckerberg was interested in moving into China, some of which has been reported before. And she also talks a bit about the culture of the company: she says various things about how Sandberg asked her to take flights during her pregnancy, even when she wasn't very sure about flying. And so you get a sense of what Facebook was like, primarily in that period between 2011 and 2017. Obviously a long time ago now, but there's been a fascinating reaction to the book. The book came out on Tuesday in the US; it actually came out today in the UK, so I've got it on order. It's one of those things, Mike, that says a lot about the company and the platform, and we try not to talk too much about specifics at companies because they change a lot over time, but I think it's still worth discussing. There's also been a bit of an update this week that made you feel it was worthy of a story in today's Ctrl-Alt-Speech.

Mike Masnick:

Yeah, I mean, the book sounds interesting, but I'm always a little skeptical of any of these sort of tell-alls from insiders. It often feels like they have an axe to grind. I haven't read it yet either, I'm gonna get it out of the library at some point, but some of the stories that have come out of the book have been reported before, and they're sort of reported here again in incendiary language. I've seen a number of former Facebook and Meta employees on social media saying that the incidents reported in the book that they were part of were not reported fairly or accurately. So I think there's some concern about that. It's worth reading these books and getting the perspective, but recognizing that it's from a single person's perspective and maybe not putting full weight into it. But the thing that really caught my attention this week was the fact that Meta basically tried to stop the publication of the book. This only came out yesterday, really. Apparently, as part of her exit agreements, there was a severance agreement that included a non-disparagement clause, and Meta is alleging that she violated that clause. And so, as a sign of Mark Zuckerberg's new commitment to free speech, they tried to enforce this against her and went to arbitration, basically making a claim against both the author, Sarah Wynn-Williams, and the publisher, which is an imprint of Macmillan. The arbitrator came out yesterday with a ruling noting two things. One, Macmillan, the publisher, did not sign the severance agreement with Meta and therefore is not bound by it, and therefore the arbitrator has no jurisdiction over the publisher at all. So they're sort of out of the story. But they did say that the publisher was the one who showed up; Sarah herself did not. And so they sent notices to her and told her again that she had to reply by this morning, Thursday morning, two days after the book came out in the US, I'll note. And she had ignored it. But over the weekend, as part of her promotional efforts for the book, she had gone on a podcast, "a popular podcast" is the description, and talked about how they were trying to shut her down and silence her, by talking about the arbitration claim. So the arbitrator was like, okay, well, now she clearly knows about it. In general, it's probably not a good idea to completely ignore an arbitration move against you on a contract, and she may be in violation of the contract. But the bigger thing is Meta making this effort in the first place, right? Like, there is this term, which I am associated with, called the Streisand Effect.

Ben Whitelaw:

I was gonna bring it up. I was gonna bring it up. It's a perfect example, no?

Mike Masnick:

Yeah, and the whole idea is that if you go legal to try and silence someone, it is only going to generate way more attention for that work. And, like, I was less interested in reading the book until I saw that Meta was trying to silence it. That's why I'm gonna reserve it at the library, because I would now like to read this book, even as I'm a little skeptical of some of the writing. But doing this just seems so counterproductive. Like, just let it play out. They responded by saying, this is old news and misleading. Just leave it; it'll disappear in two weeks, people are gonna forget about it. But doing this makes it look like, oh wait, there's something serious in this book that you're trying to silence, Mr. Free Speech, you know, returning to our free speech roots.

Ben Whitelaw:

Oh, it's delicious irony, isn't it? That the platform claiming it's returning to, you know, more speech and fewer mistakes is shutting down a publisher it doesn't even have an agreement with. It's textbook stuff.

Mike Masnick:

Yeah. So the arbitrator did come out with this thing saying, because she indicated that she was aware of the arbitration effort, that she is now banned from promoting the book and, to the extent possible, from distributing or selling the book. But that's the publisher's role, not hers; she's just not supposed to promote it. I don't see how that flies. I mean, it depends on the actual nature of the severance agreement and the non-disparagement clause in there. I think basically at this point she's being punished because she didn't show up for the arbitration hearing, so we'll see what happens there. But either way, none of that really helps Meta. Even if they technically win this arbitration, and maybe she'll have to pay up some money or something, it's just gonna draw that much more attention to the book.

Ben Whitelaw:

Yeah. I mean, at the end of the day, it's still a very large company trying to silence a memoir.

Mike Masnick:

It's, it's a terrible look.

Ben Whitelaw:

Yeah. Whatever clause is in whatever contract, it doesn't look great, but

Mike Masnick:

And the details here may matter a lot, but there have been efforts, especially in the last few years, in part because of the Me Too movement, to pass laws that basically say non-disparagement clauses around sexual harassment are not enforceable. And so there may be some elements of that in there, because, again, she did make accusations around sexual harassment in some form or another. So that kind of stuff might be exempted because of various laws. But again, I think most of those are state laws, and I'm not sure where she's located or which state laws apply to this.

Ben Whitelaw:

Yeah. We'll move on to other stories, but it is worth noting, at least, that I'm very cynical of the three, four, five people who suddenly come out of the woodwork to push back against one person's version of events. You know, it's very straightforward. We both work in the media, Mike; we know how straightforward it is for a company to call upon some of its many thousands of employees to push back against a

Mike Masnick:

yeah, I mean,

Ben Whitelaw:

version of events. So, again, I'm trying to keep an open mind at this point.

Mike Masnick:

Yeah, right. Yeah, yeah. And I'm not saying that the book is inaccurate or not. But I am saying that there are people I know and trust who weren't, you know, out there. I've spoken to a few people who are former Meta employees and certainly not Meta boosters,

Ben Whitelaw:

Okay.

Mike Masnick:

by any stretch of the imagination, and they were sort of skeptical of the book as well. So I have some skepticism about some of the stories in the book. I think it's one person's view, which is always interesting, but we should remember that it's one person's view.

Ben Whitelaw:

Yeah. Okay. Cool. Interesting stuff. I'm sure there'll be plenty more media coverage this week.

Mike Masnick:

One more quick point on this, because I actually do think it's important. One of the big stories that has come out about this is the story about the stuff in China. And I did wanna raise this, I meant to raise it earlier, but now that I'm thinking about it, we should definitely include it. One of the things that Kat and I talked about last week as an example, and this was before all the controversy with this book, was the example of Facebook and Vietnam and what sort of compromises they are willing to make. And the discussion that Kat had, which I think was really important, was that there is a nuanced discussion to be had: if we're trying to get into a country that doesn't have much freedom of speech, do we make certain compromises to enable at least some kinds of online speech? Is no speech better or worse than some, but more limited, speech? And I think a lot of the China story could be seen in that light. It sounds like internal discussions about what Facebook is okay with in order to try and get into China. China is obviously a very large market. It is presented in the book, and in the media reporting on the book, as if it is a very clear case of Facebook compromising its principles, which might be true. It may absolutely be true. But there is another possible reading of it: Facebook considering internally how far it is willing to go to try and get some openness, some ability for people in China to use Facebook to speak. Is that better than none at all? And the fact that no compromise, no final setup, was ever done suggests that these are internal brainstorming discussions, which does happen, and people can have a nuanced discussion about what trade-offs they're willing to make. But it could be that, I certainly wouldn't put it past Mark Zuckerberg to be terrible and evil and say, no, we're just gonna screw over users and allow the Chinese government to spy on everyone. But I, I

Ben Whitelaw:

Or whether they had the right people in the room, or the right information available, right? That's the other thing. Because Wynn-Williams is a New Zealander, she's a former diplomat. I'm sure she has a bunch of great skills, but is she qualified to make decisions about the way in which Meta moves into parts of Latin America or Asia? Probably not, you know, probably not. And this goes back to the culture point about how big companies based on the west coast of the US make decisions for the rest of the world. I think that's a thread that's been running for the last 15 years, essentially, and it keeps coming up. This is the latest one of those. Cool. We've got probably time for a few different stories, Mike, maybe three smaller stories to run through.

Mike Masnick:

and go quick.

Ben Whitelaw:

Yeah. Famous last words. Lead us off with the story about the Reddit rules check. We both had this on our lists, so I'm kind of excited about this one.

Mike Masnick:

Yeah, this is really cool. So Reddit has launched this new feature where, as you're posting content, you can do a rule check beforehand, which is, I assume, AI-driven. Before you post something, you can have it checked to see if the AI thinks you've violated the rules, rather than the punish-first approach, where you post something, then you get yelled at, your content is taken down, and you get all mad. It just gives you feedback and says, hey, this violates the rules, we probably aren't gonna allow you to post it, and therefore you can adjust. I'd be interested to see how this works in practice. This could be used for gaming the system, where you figure out how to walk right up to the line without getting in trouble, and all sorts of stuff. But as a feedback tool, and as a way of trying to train the user not to violate the rules, I think this is great. I think it's really amazing, and I'll be interested to see if other sites start to pick up on it, because it seems like a really good concept and a technical approach to teaching users what the rules are and what the limits of the rules are. So I think this is a really cool experiment. I'd love to see how it works out for Reddit. I can see areas where it will fail, but I'd love to see how it goes, and I'd love to see other sites trying something similar as well.
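Reddit hasn't published the internals, so purely to illustrate the feedback-before-posting pattern Mike is describing, here is a hypothetical sketch. Every function and field name in it is invented, and `classifyAgainstRules` stands in for whatever model Reddit actually runs:

```typescript
// Hypothetical pre-submission rule check: warn the author before posting
// instead of the usual post-then-remove ("punish first") flow.

interface RuleCheckResult {
  likelyViolation: boolean;
  rule?: string;       // which community rule the draft may break
  suggestion?: string; // feedback shown to the author
}

// Stand-in for a call to a moderation classifier; a real system would
// send the draft plus the subreddit's rules to a hosted model.
async function classifyAgainstRules(
  draft: string,
  rules: string[]
): Promise<RuleCheckResult> {
  return { likelyViolation: false }; // placeholder verdict
}

async function preSubmitCheck(draft: string, rules: string[]) {
  const result = await classifyAgainstRules(draft, rules);
  if (result.likelyViolation) {
    // Feedback first: nothing is removed, the user is prompted to edit.
    return { allowSubmit: false, message: result.suggestion };
  }
  return { allowSubmit: true };
}
```

The gaming risk Mike flags falls out of the same design: any boundary the checker reveals before posting is a boundary a determined user can probe.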

Ben Whitelaw:

Yeah, I feel like we've kind of jumped ahead here. We've talked about AI being used for moderation at almost the moderator level, you know, being used to enforce policies. But actually Reddit, which has a strong history of creating moderator tools and doing user engagement, has gone one step further here, which is really exciting.

Mike Masnick:

it's very cool. It's very cool.

Ben Whitelaw:

Really like it. The story that I picked out this week, Mike, is about the out-of-court dispute settlement (ODS) bodies. We've talked about these on the podcast in the past. It's a DSA-specific provision whereby users can now apply to have their content reviewed by a specific, certified dispute settlement body, which will work with the platforms to figure out whether the right rules were followed. You can essentially complain if you feel your content was taken down unnecessarily, or if you reported something and nothing was done. And ACE is Appeals Centre Europe. It's one of the five or six ODS bodies that have started reviewing content on different platforms. ACE works specifically with Facebook, YouTube and TikTok, and it has released some data, just a small data set, about some of the initial cases it's reviewed. What it said in a blog post, which we'll link to in the show notes, is that it's had 1,500 people submit appeals against decisions about their content in the last three months, since the start of the year. And so far it has gone through about 150 appeals, specifically related to Facebook. Every platform needs a different data agreement, and at the moment it only has an agreement with Facebook; it's gonna start to look at appeals from TikTok and YouTube shortly. And of those 150, about half, apparently, have been ruled against the original decision made by Facebook. So the ODS body has pushed back against the original decision to either leave the content up or to pull it down. And that's interesting because it suggests that a high proportion of decisions made by platforms aren't according to policy. Whether or not this stuff scales, we don't know. We're at a very early stage of this, but I think it's really interesting to see some of this data come out at an early stage, and I'm looking forward to seeing more of it, and also more data from the other ODS bodies as well.

Mike Masnick:

Yeah, no, I think it's cool, and I'm glad that we're getting the data. As a reminder, the rulings of these bodies are not binding, so we'll also have to see whether or not the platforms actually listen to them. But it's interesting to see. The other thing that I think is interesting is the 1,500 complaints. I'm sort of trying to do the math: does that build up to a real business? Because part of the idea here is that ACE, and all the other ones, have to be self-sustaining. And I'm not sure that's enough.

Ben Whitelaw:

It's a big question, what the throughput of these appeals is. Can the platforms promote them enough? Will they promote them enough to their users? And then will you have enough ODS bodies to process them? We don't know.

Mike Masnick:

Yeah, and then there's the speed with which they process them. So, 1,500 complaints in, they've done 141 decisions, which is less than 10%. I don't know. It'll be interesting to see all of this: how quickly they can do it, how many complaints are coming in, how closely related they are. But we're starting to see data, and that's fascinating.

Ben Whitelaw:

Yeah. One platform that's not bound by an ODS body, or isn't covered by one as yet, I believe, is Spotify. And they're the subject of the third small story we've got today, Mike.

Mike Masnick:

Yeah, this was an interesting story. There have been some interesting trust and safety questions around Spotify and what content is allowed on there, around podcasts and stuff. But this one is about Andrew Tate, who is just a horrible person. If you don't know who he is, consider yourself lucky.

Ben Whitelaw:

Don't, don't Google it.

Mike Masnick:

Yeah, I'll leave it at that. He is a terrible person, but he apparently set up a class on Spotify, and it was just as awful as you'd expect from an awful person. And the interesting thing here, I think, was that Spotify employees kind of revolted on this one. There was an internal Slack channel where people were calling out how terrible it was that Spotify was allowing Andrew Tate and his class to be on the platform. And so it's an interesting statement on employees within a company standing up and saying, hey, no, we don't want this on our platform, we're not comfortable working for a company that would enable this sort of awful content.

Ben Whitelaw:

Yeah, we saw that back in January as well: when Mark Zuckerberg made his announcement, we saw a lot of Meta employees revolt internally on Slack and, to an extent, publicly as well. But this is an example of that actually having some effect, and that content was taken down. So it's a reminder to trust and safety professionals: even though you potentially feel like you don't have a voice, these things can change. Let's round up our stories this week, Mike, with a tiny story that the BBC published this week about an interview with Roblox's CEO. We don't have loads of time to go into it, but it's basically a sit-down with Dave Baszucki, I think is how you pronounce it, and he talks about their approach to safety online and to child safety. Roblox is obviously used by millions of children online and often gets criticized for its approach to it, and to AI safety in particular. And what's interesting here is that he basically says that if parents don't think Roblox is safe, then they shouldn't be allowing their children to use it. I thought that was just a really interesting reaction to this question of child safety, which is something we come back to a lot, and I appreciated the honesty and the originality of the response: basically, if you aren't sure as a parent that your child is safe, then better to err on the side of safety, because you have to be sure yourself. So I thought that was an interesting line from that interview.

Mike Masnick:

Yeah, and I think some people are reacting negatively to this, saying, oh, well, he's not taking responsibility. But no, I mean, he's right to some extent, right? Parents are the sort of first line of defense for children's use of technology and devices and services. So saying, look, the parents have a role here, is, I think, an important statement, and I'm glad that he's willing to make it. I think a lot of tech CEOs, because they're being yelled at so much, sort of internalize the idea that the entirety of the responsibility has to be on them. But it can never be, right? I mean, responsibility has to be spread around to a number of people. For him to say, okay, look, if you don't trust me, don't use it, I think is a really honest statement, and I think it's a good one to hear.

Ben Whitelaw:

Yeah. I dunno if Baszucki or anybody from Roblox is gonna be at the GDC conference next week in San Francisco, but it's a neat segue into our bonus chat. For those who haven't been to the conference, and I'm yet to make my debut, it's a huge conference with some of the biggest games developers, where they talk about innovation, creativity and player safety. We talked about this with Karley Chadwick, the Head of Platform Trust and Safety Delivery at Resolver, which provides a range of risk intelligence and advisory services. And Resolver has actually been around since before Facebook, Mike. They're that old.

Mike Masnick:

Wow. Okay. I didn't realize that.

Ben Whitelaw:

Yeah. So, um, we get into that with Karley, and I hope listeners enjoy our bonus chat this week. Great to have you join us on Ctrl-Alt-Speech, Karley. Thanks very much for making the time. Before we get into the gaming questions, for those who aren't familiar, tell us a little bit about Resolver and what it does.

Karley Chadwick:

Of course. Thank you for having me, Ben. So, Resolver Trust and Safety has been going strong for about 20 years now. Actually, we're coming up to our 20th anniversary, which is really exciting. But we started in 2005 under a slightly different name: we were known as Crisp back in the day, and we were really set up with a mission around keeping children safe online. That started with our founder, Adam Hildreth, who, whilst working in a different venture, very quickly identified, unfortunately, how at risk children were from grooming and other predatory behavior online, and thus the Crisp dream came to life. So, like I said, it's been 20 years. We've been combating online abuse ever since, with that central aim still really being about keeping children safe when they are online. But that has since expanded out. So today we work with social media companies, search engines, NGO partners, regulators, just to name a few, to deliver on that mission. Our expertise now stretches a lot wider than just child safety. We are core online harm specialists in everything from that original child safety goal right through to hate speech, counter-terrorism, and suicide and self-harm. And we're doing all of that from our original base here in Leeds, which is where I'm speaking to you from now, and all of our other global offices, through that network of online technical intelligence and our threat analyst teams.

Ben Whitelaw:

Brilliant. So talk a little bit about what those threat analyst teams do, then. What role do they play, and what information are they providing to the platforms and services you work with?

Karley Chadwick:

Yeah, good question. One of my favorite things about Resolver is our diversity, and I think this has never been more apparent than with those threat analyst teams. So, our human intelligence: we have them working 24/7, and they're covering all manner of languages, global markets and risk spaces, and those people have got some of the most interesting backgrounds I've ever seen. We've got ex-military, ex-law enforcement, sociologists; we've got people who have come from academia, people who've studied psychology, linguistics, criminology, you name it. And they're all coming together to combine the knowledge they've got with our technical expertise to find all of those online harms that I just mentioned. And going back to that 24/7 aspect: risk is global. We all know that within the industry, but I think what's really important, and what's special about those threat analysts, is recognizing that risk is also localized. What may manifest as a risk on one platform in, say, the German-speaking market is not necessarily going to be the case in the Korean-speaking market. We can't rely on our understanding of hate speech in North America to be able to tackle the risk of hate speech in France. We really need a localized understanding of the nuance in that region to be able to most effectively and proactively monitor what those risks are, not just for the users who are in those spaces, but for those platforms and those companies as well. So they are really well placed to do that. They're employing all of those great skills, that behavioral analysis, that OSINT, geolocating, just to name a few, to find what is happening in that region, what's happening today, what might be happening tomorrow, to best protect people. And, God, that could be anything from the trajectory of neo-Nazi radicalization in Spain, to how the latest technology feature is going to be used to share CSAM, or how bad actor communities are going to exploit an upcoming major event, say an election. What's going to happen globally to impact those spaces?

Ben Whitelaw:

Mm, interesting. And we were talking before we started: you're a German speaker yourself, and you started as an analyst. What are the major events that you've worked on in the past that really spring to mind when you think about the threat analysis and threat detection space?

Karley Chadwick:

That's a really good question. And I think, unfortunately, in the field that we're in, there have been quite a few events over recent years that come to mind. COVID being one of them; the US election, the most recent one and the one prior to that. For me as a threat analyst, my very first, uh, incident, let's call it, which really brought home how fast-paced this industry is, was unfortunately Christchurch.

Ben Whitelaw:

Mm.

Karley Chadwick:

I say it in interviews when we recruit people: the very best thing about the world of trust and safety is that every day is different. You don't know what's gonna happen next month, next year; the risk is ever evolving. And that was never more the case for me than the morning of Christchurch. I remember coming into the office, as we all do, with my internal to-do list: I'm gonna finish that project, I'm gonna make a start researching this. And I walked into an office that was alive with activity in a way I'd never seen before. We had people literally running from desk to desk trying to make sense of what was going on. And it was horrific. From a linguistic standpoint, which is my background, it brought to light that exploitation element: what are people going to do, what tactics are they going to employ, to get content that really shouldn't be online, online. And with Christchurch, we were seeing, every 30 seconds, a reupload of the livestream making its way onto a platform. It was first uploaded in Indonesian, and then you'd see it being posted in Korean, and then it would make its way to Spain. That, for me, was one of those events. And unfortunately there have been a few more over the years, but that was the first one that really made me sit up straight and go, oh, okay, this is really serious now.
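The standard countermeasure to that kind of reupload wave is perceptual hash matching against a shared database of known terrorist content, the approach behind the GIFCT hash-sharing consortium that grew out of incidents like Christchurch. A very rough sketch of the matching step follows; the hash function here is a toy stand-in for real algorithms such as PDQ or TMK+PDQF:

```typescript
// Toy stand-in for a real perceptual hash: 64 bits derived from sampled
// byte values. Real algorithms survive re-encoding, cropping and minor
// edits far better; this only illustrates the pipeline.
function perceptualHash(frame: Uint8Array): bigint {
  let h = 0n;
  for (let i = 0; i < 64; i++) {
    const byte = frame[Math.floor((i * frame.length) / 64)] ?? 0;
    h = (h << 1n) | (byte > 127 ? 1n : 0n);
  }
  return h;
}

// Number of differing bits between two 64-bit hashes.
function hammingDistance(a: bigint, b: bigint): number {
  let x = a ^ b;
  let d = 0;
  while (x > 0n) {
    d += Number(x & 1n);
    x >>= 1n;
  }
  return d;
}

const knownBadHashes = new Set<bigint>(); // shared industry hash list

// Flag an upload if any sampled frame lands within a small Hamming
// distance of a known hash. Near-matching, rather than exact file
// hashing, is what gives platforms a chance against trivially
// re-encoded copies in the whack-a-mole Ben describes next.
function isLikelyReupload(frames: Uint8Array[], threshold = 8): boolean {
  return frames.some((frame) => {
    const h = perceptualHash(frame);
    for (const bad of knownBadHashes) {
      if (hammingDistance(h, bad) <= threshold) return true;
    }
    return false;
  });
}
```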

Ben Whitelaw:

I remember it was like playing whack-a-mole for lots of platforms, wasn't it? Really tough. So how do Resolver's capabilities in threat detection apply to gaming, then? You know, we've got thousands of people gathering in San Francisco this week for GDC, and you've done increasing amounts of work in the gaming space in recent years. What emerging threats have you seen on gaming platforms versus, perhaps, social media and marketplaces?

Karley Chadwick:

Oh, where do I start? I think it's important to start with the acknowledgement that they are, interestingly, quite similar. The threats that we see in gaming are, in a way, some of the evergreen risks that we've always been seeing on social media platforms. They are slightly tweaked, as you would expect them to be, to the opportunities that are available in gaming that might not necessarily exist anywhere else. But God, we could do many episodes talking about the swatting risks, predatory monetization, increased risk of ransomware, and harassment that we're seeing take place online. And it just goes to show how big the gamification of risk has become in, say, the past two, three years certainly. But gaming is one of those spaces that sits at the intersection of a lot of other risk areas, and it can be really insidious in that regard. Take a risk like hate speech. It's one of the first ones that comes to mind: when you ask someone what some of the more inappropriate things that happen in gaming are, people will always talk about hate speech. And it always starts with that friendly banter approach, because a big group of like-minded people have all come together. They all feel really safe, they're all there for the same reason, the same interest, and that's when the banter starts. Because unfortunately, in some of these spaces where these communities are gathering, they have the sense that they are less moderated. So they can get away with saying a little bit more that they know is inappropriate, and it goes unchecked, and it goes a little bit further from there. Those are the spaces where we find that banter situation quickly escalate. Then it becomes harassment, and then we see these risks very quickly morph and represent risks to child safety, suicide and self-harm, even terrorism. Which, for me, talking about gaming, radicalization's got to be up there as one of the biggest risks. Gaming's a natural conversation starter, which is a really positive thing. Like I said, we get people coming together around a game that they all like, or a trend within a game that they all like, you know, everyone remembers the Fortnite dances, and they get a community going in that regard. And I want to be clear, that's not necessarily a bad thing. One of the greatest things about social media in the past two decades has been the ability for millions of people worldwide to form those connections, those safe communities, to share experiences, to make friends. And that's a brilliant thing. But when that community gets centered around something that's not so positive, that's where we get into a really dangerous space. That's where, under the veil of sharing things that are like-minded to that community, we get those risks proliferating a lot more than we necessarily would've done.

Ben Whitelaw:

One thing I wanted to ask about was physically augmented spaces, you know, the kind of augmented reality, virtual reality. 'Cause again, these are new spaces that are emerging where harms differ in terms of their scale, in terms of their definition almost, and in how we deal with them. How do you think that changes the nature and severity of some of the harms, and also the work that Resolver does?

Karley Chadwick:

So these worlds, you know, their unique selling point is that they blend the digital with the physical, and for me that raises the stakes of the risks way higher. One of the biggest risks we see with these physically augmented worlds is harassment.

Ben Whitelaw:

Mm-hmm.

Karley Chadwick:

And there's this long-held belief by certain users of these spaces that what you say and what you do bear no consequence. 'Cause, you know, hey, it's online, right? What's the harm? How can you possibly punish me for this thing that was only said online? It doesn't count. Augmented harassment, unfortunately, we're seeing it become quite egregious: from virtual stalking, sexual harassment, right through to, unfortunately, sexual assault taking place between users of these spaces. And they're increasingly testing the parameters of what's possible within that world. And unfortunately, for me, the biggest piece that I don't think people are aware of is just how much information about yourself you are giving away when you're using augmented realities. We are seeing gamers increasingly use their home surroundings as part of the game experience, and they're sharing this with who they believe to be friends, other gamers, within these realities. And without meaning to, they're revealing where they live, where they are at that moment, where they go to work, where they go to school. And this is really opening users up to some of those risks that I mentioned earlier around swatting and other security threats, which can be really quite serious and can snowball really quite quickly. You know, back to that idea of it's not real life, what's the harm: this is arguably one of the factors that makes it hard to monitor, makes it hard to identify, because in these quasi-virtual, quasi-real experiences, where do you draw the line with freedom of expression? Where do you stop the creativity when it stops being creative? Certainly for a lot of these experiences, where people are making modifications to make it their own experience, it really can feel like every man for themselves, you know, a lawless experience. Like I said when we were chatting about gaming earlier, there are a lot of similarities between the risks happening in gaming and the risks taking place on non-gaming platforms or in other social media spaces. And unfortunately, I think, in how we resolve and monitor all of this, history does repeat itself. So whilst bad actors will constantly adapt their behavior, constantly change the way they act in the face of the environment they're in, we are going to see, and have seen, patterns in the risks that we first saw at the invention of social media, 15, 20 years ago. Users are getting to grips with how to use gaming, certainly how to use these AR experiences, and they're trying to push those boundaries between what is and isn't possible. And that line, you know, it really is blurred. So until we have that clear distinction, I think we're going to have a lot of users who are emboldened to take advantage of it.

Ben Whitelaw:

The multifaceted nature, and the constantly evolving nature, of these harms just really demonstrates the importance of the work that Resolver is doing, and will continue to do, hopefully, for another 20 years. So Karley, thanks so much for joining us today on Control Alt Speech. Really appreciate your time, and thanks for coming on.

Karley Chadwick:

Thank you so much.
