Ctrl-Alt-Speech

The UK Wants Us To Ask Your Age Before You Listen

Mike Masnick & Ben Whitelaw Season 1 Episode 66

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund and by our sponsor, the Digital Trust & Safety Partnership. In our bonus chat, Mike talks with DTSP Executive Director David Sullivan about their new Safe Framework Specification, an official ISO standard (available for free download) that will help everyone better understand best practices and concepts around online trust & safety work.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So once again, Mike, we are talking about AI on Ctrl-Alt-Speech, and

Mike Masnick:

no.

Ben Whitelaw:

who isn't right now, I'm afraid. And the platform that we're gonna be talking a bit about today is Hugging Face, which, uh, many of our listeners will know about. The Hugging Face prompt is, I'm not sure if it's designed for us, but I'm gonna pose it to you and you can give me what you think. On the homepage of Hugging Face's website, it says, share your work with the world and build your machine learning profile.

Mike Masnick:

Oh yes, because my machine learning profile is very big. Uh, now that's a fun one to try to respond to. I will say that I keep having conversations with people about productive uses of machine learning, AI, LLMs, and I have currently, as we speak, well, not at this very second, but right before we recorded, been once again debating with people on the platform known as Bluesky that, yes, thank you, machine learning, LLMs can be useful when used properly, but many people are using them poorly. So I will continue to share my work on where they are useful and when they are useful. But, uh, sometimes it's a little bit of a challenge. Some people are very skeptical.

Ben Whitelaw:

Okay, so, so go to Bluesky for your latest tips and tricks on, on using

Mike Masnick:

working on it. Uh, what about you? Uh, what, work do you have to share with the world about your ML profile?

Ben Whitelaw:

Well, no one wants to see my ML profile at all. It's non-existent. It's terrible. But you know, Ctrl-Alt-Speech is, I would say, us sharing our work with the world, and in that sense, I think this is a good place to start. We've got a lot to get through, so let's crack on today. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's July the 17th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund, and sponsored by the brilliant Digital Trust and Safety Partnership, an industry initiative designed to promote a safer and more trustworthy internet. This week we're talking about playing whack-a-mole with AI models, being whacked by the UK's controversial child safety measures, and being hit across the face by scams. Damn scammers. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm joined by an increasingly AI-literate Mike Masnick, who probably has a Hugging Face account, or is due to at any moment.

Mike Masnick:

I don't, I don't have a Hugging Face account, but I have been doing a lot of messing around with AI tools lately. I was very pleased in the last few days. I have been wrestling with a vibe coding tool for the last couple weeks, trying to get one feature to work properly. Um, the tool that I wrote about last month, this task management tool that I built.

Ben Whitelaw:

Yeah, talk a little bit about that, 'cause we've heard you've been doing this stuff, and people might not have read the piece. You've essentially created your own, like, productivity

Mike Masnick:

Yeah. And I keep, you know, sort of keep making it better. And for some reason I had one feature that I was trying to get to work, and it was bugging out on me a little bit and not quite working, but I did get it. It was, like, really exciting. Um, but yeah, I mean, I wrote this whole thing about it where it's, I like the idea of being able to control and use my own tools and mold them to my needs specifically. And so I built, it's a task management tool with a bunch of other features built in. I have it connected to my calendar and my email, so I can turn any email into a task, I can turn any calendar event into a task. I have a bunch of other sort of features in it, and it just sort of, like, manages my day now, and it's built exactly to my needs. And, like, you know, I played around with, like, every productivity tool ever invented, and all of them sort of required me to shift my brain to act in the way that whoever created it thinks, which wasn't exactly the way that my brain thinks. And now I have a tool that sort of matches the way I think. And, at least for me, I think it's made me a lot more productive.
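
To make the idea concrete, here is a minimal, hypothetical sketch of the kind of data model an "anything becomes a task" tool might use; the type names and the emailToTask helper are illustrative assumptions, not Mike's actual implementation.

```typescript
// Illustrative sketch only: not the actual tool described above.
type TaskSource = "email" | "calendar" | "manual";

interface Task {
  title: string;
  source: TaskSource;
  due?: Date;
  done: boolean;
}

interface Email {
  subject: string;
  from: string;
  receivedAt: Date;
}

// Normalize an incoming email into a task with a default follow-up deadline.
function emailToTask(email: Email, followUpDays = 1): Task {
  const due = new Date(email.receivedAt);
  due.setDate(due.getDate() + followUpDays);
  return {
    title: `Reply to ${email.from}: ${email.subject}`,
    source: "email",
    due,
    done: false,
  };
}
```

The point is the shape rather than the code: whatever arrives, an email or a calendar event, gets normalized into the same task record the rest of the tool works from.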

Ben Whitelaw:

Yeah, I mean, my desktop is a kind of graveyard of productivity tools, and, you know, my browser, that's where productivity tools go to die. I've just got them kind of stacked up one by one. I never use them. Has it made you more productive, though, and how do you know?

Mike Masnick:

Yeah. I don't know. I mean, yeah, I don't know for sure, but it is something that has helped me sort of organize my day, and I've found it useful for, like, keeping track of stuff. My issue is always, like, I'm working on a million things at once, as you well know, and it's like, stuff will sometimes fall through the cracks. And I think this tool is keeping me much, much more organized. Part of it is that it has a whole feature that sort of, like, forces me to slow down and plan out my day and, like, look through: what am I actually gonna do today? I have all these tasks out there, but which ones am I actually going to do today? And it sort of forces me to reflect on those things a little bit.

Ben Whitelaw:

Clever. When are you gonna be, uh, launching your giant software company?

Mike Masnick:

Well, this is the thing. It's like, it is designed for me and me alone, and a couple people have asked me, like, oh, that really sounds like exactly what I want, can I get an account? And I'm just like, no, I've turned that off. It's funny 'cause you use these vibe coding tools and they just naturally assume that you must be wanting to turn it into a big business, and so it has, like, a whole login and sign-up for an account. And I was like, no, we're turning that off.

Ben Whitelaw:

Yeah. Yeah. Yeah.

Mike Masnick:

It is just for me, and I'm not letting anyone else use it. And so, yeah, it's kind of cool. You can build your own, though.

Ben Whitelaw:

Maybe I will, maybe I will. This tool sounds like it will come in handy next week, where you have a pretty packed week because you're gonna be at TrustCon 2025, the major trust and safety professionals conference held in San Francisco. A lot of our listeners will be going; many folks will be traveling over the weekend. They might be listening to the podcast as they go. We hope people arrive safely and enjoy themselves. You'll be doing a bunch of sessions, but most notably...

Mike Masnick:

Yes. So if you're listening to this and you'll be at TrustCon, please come see Ctrl-Alt-Speech live. It's the final session of TrustCon, just like it was last year. It's Wednesday at four o'clock in Grand Ballroom A. It was a fun time last year, and it will be slightly less fun this year because, Ben, you will not be there. But I still promise it'll be fun. We'll have a really good time and a really good discussion, and, uh, yeah, we had a huge crowd last year and hopefully we'll have a huge crowd this year as well. And it's a great way to close out your TrustCon.

Ben Whitelaw:

Indeed, I want it to go well. I obviously want you to nail the hosting, Mike, as you always do, but I also want folks to go along and heckle, uh,

Mike Masnick:

That's,

Ben Whitelaw:

to make and to make it tricky. So,

Mike Masnick:

man.

Ben Whitelaw:

So, you know, I'll be honest, I'm split. I'm split between wanting it to go incredibly well, but

Mike Masnick:

They're gonna be yelling, you are not as good as Ben.

Ben Whitelaw:

Yeah. Yeah, exactly. Exactly. I mean, for people who aren't going to TrustCon, who don't have a ticket, do not worry. The recording will be on the feed next week; we will be putting that out as usual, with a really great bonus chat, actually. So it's all gonna be as last year: you'll be able to listen to it and hear the heckling and hear how well Mike does in my place. It's all good.

Mike Masnick:

There we go.

Ben Whitelaw:

Um, talking of bonus chats, we've got a really great bonus chat today with David Sullivan, the exec director of the Digital Trust and Safety Partnership. David tells us all about how the trust and safety framework has become an official international standard, what that means for trust and safety folks and companies who are interested in content moderation, and how it's gonna be helpful for practitioners and regulators. So he's gonna unpack all of that for us. He's actually doing a talk at TrustCon about this, so it's very much hot off the press, which is how we like it at Ctrl-Alt-Speech. It does mean we've got a lot to get through today, Mike, so I might be chivvying us along more than usual. And we're gonna start off... I think a lot of what we talk about on Ctrl-Alt-Speech is the kind of difference between what platforms say they do and what they actually enable, knowingly or unknowingly, and the kind of process of realization that companies go through when they discover that an unintended action by a user has taken place. And we are gonna be talking about the French-founded company Hugging Face, which has built a bit of a reputation as the nice, friendly face of AI, but has got itself into a bit of a pickle this week.

Mike Masnick:

Yeah, well, it's interesting, and the question is, like, whose pickle is it, I guess? Um, so there are a few things, right? And I think most people listening to this podcast will recognize that, as anyone will say, if you create a space online that allows people to upload or post any kind of content, anything that is user-generated content, you have a trust and safety issue. It doesn't matter how or what. People will figure out ways to abuse it and people will figure out ways to do bad things with it, and you sort of need to figure out what you wanna do with it. And I think a lot of the entrepreneurs in the space come into this thinking, like, we're just creating a platform, man, what people do with it is their own business, and don't necessarily think through all of those consequences. And so this, I think, is an example of that to some extent. And it actually starts with a different platform, not Hugging Face, but an AI platform called Civitai, which has raised a ton of funding and got a whole bunch of money as, like, a place where people could upload different AI models for generating imagery from text. And I think they raised money from Andreessen Horowitz, which is, like, the top of the top VC firms these days.

Ben Whitelaw:

Yeah. Can I just say, like, when I came across Civitai maybe, I don't know, a few months ago, I thought it was civet, as in the, like, the mammal, the kind of, the animal.

Mike Masnick:

Right.

Ben Whitelaw:

and I was like, that's an interesting name for an AI company. And then I realized that actually it's C-I-V-I-T-A-I, Civitai, and there is no, like, the founder doesn't have, like, a strong connection to civets and they never owned a civet, if,

Mike Masnick:

We don't know. I mean, but yeah, probably not, uh, but anyways. So Civitai has been getting criticized, reasonably, for years now. I mean, they've been around for a few years, and a lot of the models that were being uploaded to their service were actually used for generating non-consensual adult imagery of real people. And so there are already all sorts of issues, and there have been for years, with what's known as NCII, non-consensual intimate imagery, which often was around, like, people taking images of themselves, sending them to someone, and then having them get out or something like that. Now, the rise of AI generation tools, in particular image generation tools, unfortunately in many ways changed the game on that, in terms of making it so that you could create fake such images. Deepfakes, as we've said. And of course, because there are a lot of people in the world who, gosh, I was gonna say awful, but, you know, in some cases awful, and in a lot of cases don't think through the consequences of what they're doing or don't think about the harm that they may be doing to other people without realizing it, they ran to this platform generally to create naked images of people that they, you know, wanted to see naked, I guess.

Ben Whitelaw:

Yeah. And all of these kinds of models are named after celebrities, right? There's a lot of kind of targeting of celebrities. There's a lot of, a

Mike Masnick:

And so, and

Ben Whitelaw:

reveal public

Mike Masnick:

and, and in fact, different kinds of celebrities too. So there was definitely a lot, and obviously the majority of them being female

Ben Whitelaw:

Yeah.

Mike Masnick:

You know, men are awful, right? And so, you know, some of them are, like, the sort of world-famous Hollywood stars and stuff like that. A lot of them, though, were also, like, internet famous, so people who were popular, you know, within certain communities and stuff, which is, I mean, they're both terrible, but in some ways even worse. And so Civitai has come under criticism for years about this, and they've talked about their own sort of attempts to deal with it. Not well; I don't think they've ever handled it particularly well. But a few months ago they said that they were finally gonna ban all of these models. They didn't do it immediately, though; they sort of gave model makers the ability to download their models and gave them a few days before they started to cut them all off. And eventually they took down 50,000, or somewhere around 50,000, models from their service. And so this started with a story on 404 Media. There was a report done by these researchers who had been following the models on Civitai, and they sort of used this to figure out how many models were taken down. But they were also looking at how much of the imagery that was created on Civitai was non-consensual intimate imagery.

Ben Whitelaw:

Because they had played down how much NCII was on the platform, right?

Mike Masnick:

Yeah, they had put out a report at the end of 2023 saying that, I think, less than 20% of the content was what they would consider to be PG-13 or above. And so these researchers went to study that and said, like, no. And they even went back in time and kind of looked at where things were, and even in 2023, when all this was happening, it was probably a much larger percentage. Obviously, some of that is subjective, you know, what counts as PG-13, which, I should be clear, is an American rating, right? PG-13 is an American

Ben Whitelaw:

Yeah, I think we have this in the UK. Yeah. Yeah,

Mike Masnick:

I don't know what the UK movie rating system is

Ben Whitelaw:

No, I think it's, I think that's the same.

Mike Masnick:

Okay, cool. Uh, and so the idea is, like, you know, how graphic the imagery is and everything like that. But now they're saying that, like, on Civitai, you know, towards the end, it was up around, like, 80% of the imagery was this kind of imagery. All these models were taken down. But in the process of taking down all these models, these researchers sort of tracked which models were taken down. And then 404 Media had a separate story about Hugging Face, which was basically that a bunch of the people who were making and using these models re-uploaded them to Hugging Face. And so Hugging Face has about 5,000. So again, 50,000 were taken down from Civitai, but about one tenth of those have been re-uploaded to Hugging Face, because Hugging Face is also, it's different than Civitai, but it's basically a repository for different AI models, LLM models, and people can download them. If you're familiar with GitHub, it's sort of like a GitHub for AI. And there's different things that you can do with it there, but it's a repository. And so people took these models and, part of this was an organized effort. It wasn't just, like, randomly people did this, but rather a bunch of people who were users of Civitai set up a Discord and a group where they were discussing all of this stuff and made a concerted effort to upload the models to Hugging Face, sometimes using the same names, but often trying to hide the fact of what they were.

Ben Whitelaw:

Oh, okay.

Mike Masnick:

And there's actually, like, somewhere, I think, in their Discord or something, and this is again based on really good reporting from 404 Media, there's, like, a translation page where it's like, oh, if you're looking for a model that will produce naked images of this or that Hollywood star, go to this model, which is uploaded to Hugging Face under a name like "test model" or, you know, something benign, where it is not clear what it is if you're just looking at Hugging Face. So this presents a tricky trust and safety problem for something like Hugging Face, right? If you're just seeing the model being uploaded and it's just called, like, "test model" or something totally benign, how do you know that it's really a model that has been trained specifically to make non-consensual imagery of a particular person, a real person? 404 Media was able to, like, you know, work backwards: they had the list of all the models that were taken down from Civitai, they had access to the Discord, and they were able to figure out how these models were up there. But again, it represents this really interesting thing. One, in that, like, you would hope that Hugging Face was trying to block these models as well, and certainly should have been aware that when the Civitai models go down, people are gonna start uploading them elsewhere. But also, again, it gets to, like, the difficulty of all of these things when people are trying to deliberately hide it and rename it and being pointed there from elsewhere. It's not that people are going to Hugging Face and, like, searching for "I want to create naked images of so and so." It becomes a somewhat more difficult problem, but it also sort of suggests the kind of whack-a-mole nature that we've seen with other kinds of content, where, okay, this platform says this is no longer allowed here, and it goes away there, but it pops up somewhere else, often darker, often harder to control, often harder to follow and keep track of. It's the whole, like, dark web issue all over again that we've seen in other contexts.

Ben Whitelaw:

Yeah, and I wanna pick up on that, 'cause the Civitai CEO has previously talked about this exact thing, and that platform garnered a lot of users because other NSFW, not-safe-for-work, platforms changed their policies around what models were allowed, and so users flocked to Civitai. And so you have this shifting of users between different platforms in exactly the way you talk about. I mean, what are the ways we can get around this? Like, do we want platforms to be working more collaboratively to prevent this kind of safety issue, to prevent, you know, what is a huge and growing issue of NCII? Do we expect platforms to do that?

Mike Masnick:

I mean, it's tricky in lots of ways. And the 404 Media article, which again, like, you can't praise those guys enough 'cause they're so good, they sort of raised the issue where it's like, this is difficult in all sorts of ways, because there are times where it's, like, okay to have models that will produce images of real people. And so they talk about, like, parody images or, like, protest images, you know, for creating images of, like, a politician. There's always, with all these things, an element of: is it the model that's the problem, or is it the users that are the problem, or is it the platforms that are hosting these things? And as with anything, there's, like, some of all of those things, right? So, like, could you see a world in which you have some sort of, like, database that the platforms can check against, like a hash database of different models, and say, we've decided we don't want this particular model? Yeah, I mean, maybe that limits some stuff, but, like, people will adjust the models and re-upload, and it becomes very tricky to stop. You know,
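
For what it's worth, the hash-database idea Mike floats here would look a lot like the hash matching platforms already use for other known-bad content: fingerprint the uploaded model file and check it against a shared blocklist. Here is a minimal sketch of that check, assuming a hypothetical shared list of SHA-256 digests; the list and file path are made up for illustration.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hypothetical shared blocklist of SHA-256 digests of disallowed model files.
const blockedModelHashes = new Set<string>([
  // "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
]);

// Fingerprint an uploaded model file and check it against the blocklist.
// A production system would stream large model files rather than read them whole.
function isBlockedModel(filePath: string): boolean {
  const digest = createHash("sha256").update(readFileSync(filePath)).digest("hex");
  return blockedModelHashes.has(digest);
}
```

As Mike says, this only catches exact re-uploads: tweak the weights or re-save the file and the hash changes, which is why it would be at best a partial measure.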

Ben Whitelaw:

Yeah.

Mike Masnick:

As with everything that I talk about here, like, the answer is that it's always more complicated and there are no simple answers. And some of this is societal, you know, where it's like, it would be nice if we trained young men that there are probably more productive and societally appropriate ways to think about these things, um, as opposed to, you know, having them channel their desires, I guess, in this manner.

Ben Whitelaw:

That's the kind of comment, Michael, that will get you shouted at by tech users. Because remember last week, when you suggested media literacy as a kind of long-term solution, and they came back and said, we don't have time? Do we have time to retrain men? That's another podcast, that's a different week. But, yeah, it's a super interesting story and there's lots to it, and I think this is something we will end up seeing in other ways, that we will end up talking about the whack-a-mole.

Mike Masnick:

Yeah. And one thing too, and again, this is another point that 404 Media raised, which I also think is important: it is often true that adult content leads the way in internet technologies, and that's not necessarily a bad thing. And nothing in this is suggesting that adult content itself is inherently bad or problematic. There are places for that and there are uses for that. The main issue here is the non-consensual aspect of it, which I think is something that we as a society have been learning more about and understanding more recently, because it's mostly a recent problem. Not entirely, like, there are always stories going back for a long period of time, but it's something that more and more people are recognizing now as an issue. And as with any issue that sort of bubbles up recently, people are still sort of figuring out as a society how to deal with it and how to manage it. And we should be careful not to go too far and then turn that into something like "all adult content must be banned," because there are efforts to do that as well. And so the arguments here are not in support of that; rather, the consent part is really what's key here.

Ben Whitelaw:

Yeah, for sure. I mean, it's interesting that, you know, the idea of AI models and NCII as an emerging threat is only just surfacing, really. I feel like we're really at the start of that. But pornography is something that's been around a long time, and that kind of leads us neatly into the next story. It's something that is still being regulated, or sought to be regulated, by governments around the world. And here in the UK there is a big moment next week, where some new codes that are designed to keep children safe will come into force. So platforms will need to have adjusted their operations and will have to show Ofcom, the regulator, that they are abiding by these codes. I know all about this, Mike, because I spent my Sunday listening to the Ofcom CEO, Melanie Dawes, give an interview. Talk about a busman's holiday. Um, that's how I spend my weekends at the moment. But it's a really interesting interview and I think it's worth talking about today. You managed to watch it as well in the course of us prepping for today's podcast. So I wanna give listeners an overview of really who Melanie Dawes is, what she's about, and then we can go into what she said. So she is the CEO of Ofcom, as I mentioned; she's been in the role for four years, and she was appearing on the BBC's major political Sunday talk show, Sunday with Laura Kuenssberg, presented by Laura Kuenssberg, a very prominent UK political journalist. Um, the thing I wanna get across here is that this is a show that's listened to and watched by a lot of people. I didn't have the numbers for this year, but it has previously had 1.5 million viewers. So this is a big way that folks in the UK get their politics news, and the whole show was about online safety because of what's happening next week. Melanie Dawes was the main interviewee, and there was a panel of online safety experts as well. And she was asked by Laura Kuenssberg about what these new codes will mean and whether they will keep children safe in the way that I think lots of people want. There are a couple of things to this, which I'm keen to get your thoughts on: her kind of performance and her responses, which I actually think were pretty good, and then there is the coverage by the BBC itself and some of the questions. What was interesting really was how Melanie Dawes gave a really, I think, coherent and very, like, professional response to questions around online safety. I thought she used the opportunity really well to talk about some of the companies that have agreed to Ofcom's new codes and have agreed to, for example, put up age verification on their sites, PornHub being one of them. But she also pushed back at some of the suggestion that Ofcom, and the government more broadly, wasn't doing enough, which is something that child safety campaigners will often say, that the regulation hasn't gone far enough. There were some interesting questions around misinformation, which is a big topic in the UK, very much so since last year, when we had those Southport riots that we talked about on the podcast. And I think Dawes did a really good job of saying that actually, misinformation is a very difficult thing to regulate and we're gonna wait for regulation from ministers before doing anything with it, basically. Overall, I thought she gave a really strong interview, and I think that's notable because 
there is a lot of criticism of Ofcom for the regulation and for what platforms are going to be doing to comply with it. Over the next few weeks, we're gonna hear a lot more about that. But as far as people you kind of want to have at the top of an organization like a regulator, I thought, you know, she did pretty well. And she doesn't give a lot of interviews. The last time I could find her giving as extensive an interview as this was to a UK newspaper last year. She doesn't really come out and talk about this a lot, so it was a notable moment for UK online safety. And obviously, that deadline next week of the 25th of July for companies to adhere to these codes is gonna be a big moment in time for UK online safety generally, and I think it'll ripple much further than that. What did you think of her performance, before we get into the questions she was asked?

Mike Masnick:

Yeah, I mean, it's always tough to tell, right? Because, like, someone in her position is in kind of an impossible position, because she's just never gonna satisfy anyone, right? Like, she's sort of being told that she is in charge of making the internet safe. The internet is never going to be safe. Like, just inherently, like, this is the nature of society again. And so there were polished, media-trained politician answers to questions, some of which, you know, there were some that I thought she could have pushed back stronger on. I think she did push back on some and sort of pointed out that there was some misinformation about the policies and how they were being implemented and how Ofcom was treating it. You know, I think some of what's important to think about too is, like, Ofcom has always tried to position themselves as sort of, you know, I used to joke that they present themselves as the friendly regulator. You know, I don't think that's exactly true, but it's more like the reasonable regulator is the way they've positioned themselves. Whether or not that's true is a different story, but they've always been out there sort of trying to engage with all different stakeholders and saying, like, we're trying to understand, we're trying to balance the different stakes and the different interests and the different challenges here, and we're trying to find the path that works the best. And I think that's kind of what she showed in there, again, like, we're trying to do this. Now, that allows activists on one side or the other to say, oh, you're too much on the other side, therefore you're not doing enough. And I think she did navigate that. But again, like, I have my general complaints about the Online Safety Act and sort of how it's being implemented and all that kind of stuff.

Ben Whitelaw:

Yeah.

Mike Masnick:

You know, for her position, I think she did as well as could be expected.

Ben Whitelaw:

Yeah, we'll get into that, 'cause there are a couple of platforms that have made big announcements this week. I thought it's worth noting that actually, in an interview that I read with her, and this might make you like her more, she suggested that she was skeptical about Jonathan Haidt's book, The Anxious Generation, and I was like, that's, you know, similar to Mike. So I

Mike Masnick:

the way to my heart.

Ben Whitelaw:

I dunno if she was, yeah, if she was flirting with you from afar. I think there's also an interesting nugget: do you know how many hours a day she uses her smartphone?

Mike Masnick:

No, I have no idea.

Ben Whitelaw:

guess.

Mike Masnick:

Oh God.

Ben Whitelaw:

And she thinks this is bad, by the way.

Mike Masnick:

So it's gotta be a number that I'm gonna think is way too low. Uh, two hours. I

Ben Whitelaw:

Three,

Mike Masnick:

Three. Okay.

Ben Whitelaw:

Three hours. She uses her phone for three hours a day. And she says that that is bad. I would hate for her to see my usage stats.

Mike Masnick:

Yeah, seriously.

Ben Whitelaw:

Um, but you know, I thought that her responses to the questions around smartphone usage were pretty good, despite the fact that she clearly doesn't spend a lot of time on her phone. Let's get into the kind of platform announcements then. Bluesky made an announcement, which,

Mike Masnick:

Bell. Bell. Where's the bell?

Ben Whitelaw:

Mike is on the board of Bluesky, as many folks will know. We actually got a message from one of our listeners, shout out to Barry, who said that he wanted to hear the bell for this story. So we're not mentioning it because of that, but it was a useful nudge. Bluesky are going to adhere to these children's codes by having some form of age verification. I don't imagine you wanna say much about it 'cause you're not involved on a day-to-day basis, but what did you make of that?

Mike Masnick:

Yeah. And also we should note Reddit similarly announced that they're going to be enforcing it as well. Um, look, yes, I am not involved in day-to-day decisions. I'm on the board. I am associated with the company. Take everything I say as biased, if you want. I've made my position clear on age verification: I think it is a silly technology that doesn't work well. I think it has a whole bunch of problems. I think it doesn't solve the problems that people think it is solving. And I think that we're learning that as lots of different places around the world implement laws requiring it, we're seeing all of the problems with it, including the risks to privacy, all of which I've been extremely clear about. That said, there are laws that any company that wants to operate in certain countries has to comply with. And this is, like, an ongoing challenge of any internet company: figuring out, if you want to provide services in different countries, there are different requirements, and how you balance that and make those decisions is an ongoing set of trade-offs and debates and discussions. And so anyone who wants to operate in the UK has to do this. Bluesky announced their plans early; Reddit announced soon after. Within the next week, we're gonna hear from a bunch of others. As you mentioned, and as was mentioned in the interview with Ofcom, Aylo, which runs PornHub, is agreeing to do the same thing as well, which is interesting in that PornHub has very clearly pushed back in the US on almost every state. In the interview, she says that this is the only jurisdiction that they've agreed to comply with. That's not entirely true, 'cause they did agree to comply with Louisiana, because Louisiana had a very specific implementation that PornHub was willing to agree to. But every other state that has introduced age verification rules, they have not, which to some extent is, depending on how you look at it, a statement that Ofcom is being somewhat more reasonable in what they consider an acceptable level of age verification, or age estimation, or age assurance, which people will say are three different things. In reality, they all sort of blend together. The element that is interesting about the UK approach, and again, I'm not approving of this, is that the UK approach is effectively allowing companies to experiment and figure out what is most effective while, in theory, and again, I don't think there's a way to do this without causing some privacy risk, being the most privacy protective. And they're sort of willing to iterate and allow companies to iterate as that goes. That's better than many other places that are saying, like, you have to do this specific thing, you have to scan faces, you have to upload IDs, or whatever it might be. Ofcom is allowing different companies to try different things. I think we're gonna learn a lot from those experiments. Would I like it better if this was not a requirement and Bluesky had not put this in place? Probably. I think that would've been a better world. Um, but this is the world that we live in, and companies have to operate under the laws that they have.

Ben Whitelaw:

Yeah, and we don't have a lot of time to give to it today, but the European Commission has unveiled its guidelines for online platforms about how to protect children this week, which include a kind of age verification app of its own. So it's taking a slightly more concrete, guided approach to the process than Ofcom currently is. So that is notable. I'm sure we'll come back to that

Mike Masnick:

And the one quick thing I wanna say on that: when the EU Commission did that, they released this announcement where they said that the age assurance methods must be accurate, reliable, robust, non-intrusive, and non-discriminatory. That's an impossible collection of things. Like, there are no tools that hit all of those points. And I think this is what I'm always concerned with, with sort of the magic-wand thinking of, like, if we say this is the way the technology has to work, that's the way the technology has to work. And that's impossible. And I worry about mandates that require the impossible. So that's all I'll say on the EU Commission setup.

Ben Whitelaw:

Yeah, just briefly, I wanna talk a bit about the line of questioning that Melanie Dawes got from the BBC presenter. I was not very impressed with the level of questions and the insistence of Laura Kuenssberg in kind of suggesting that Ofcom just needed to block certain kinds of content. She kept talking about how parents didn't want to see particular kinds of dangerous challenges on platforms, or see, you know, nudity, and wasn't it just simpler for them to block it? Not really understanding that that is incredibly difficult and has a bunch of downsides. And she kept going back to it, and it made me think, you know, this is really why we do the podcast: to have open conversations about the intricacies and the challenges of balancing these different priorities, you know, of speech, of safety, of privacy. And, to your point about 404 Media being such a gold standard for this stuff, the BBC, for me, was a bit disappointing. Okay.

Mike Masnick:

Yeah. And, I'll just say like, this is the thing that people who don't have experience or knowledge or, don't dig deeply into this world, think like, oh, this must be easy. Just take down the bad stuff and have no understanding of the trade-offs that, come about with that. And so it's like, if you want no bad stuff online, you should turn off the internet. Like you should make the internet not exist, right? all of these things have trade-offs. If you, if you never want there to be car accidents, you should ban all cars, right?

Ben Whitelaw:

Yeah.

Mike Masnick:

But that has a cost to it. And so I thought those questions were very weak, but very standard; it's what you hear from the media all the time and from politicians all the time. And it's part of why we try and do what we do, me on Techdirt, you on Everything in Moderation, and both of us together here on Ctrl-Alt-Speech: to get across the idea that there are trade-offs and these are difficult questions with no easy answers. And every time some media talking head goes off about, like, well, just ban that stuff, you have to stop it, it's not a realistic solution, and it doesn't contribute productively to this discussion.

Ben Whitelaw:

Yeah, agreed. We are gonna race through, Mike, and I mean race through, the final three stories we've got today. We've talked about my TV viewing habits this week; we wanna talk about your TV watching habits this week, and you've been watching a documentary about sextortion, which sounds fascinating.

Mike Masnick:

Yeah. And this is also on UK TV. I had to VPN over to London to watch this. So

Ben Whitelaw:

The home of good online safety TV,

Mike Masnick:

Yes, yes. Uh, it is available on YouTube if you are coming from a London IP address, or if you have an account on Channel 4's website, which I don't, but I did see it on YouTube. So it's really interesting. We have talked about sextortion scams before and how tragic they are and the impact of them, how widespread they are, how they're happening, and the challenges of fighting them. And this documentary was fascinating. It's only 45 minutes long. The person who fronts it is Jordan Stephens, who's a music artist in the UK, fairly successful, well-known.

Ben Whitelaw:

Can I just recommend one of his songs?

Mike Masnick:

yeah, go for it.

Ben Whitelaw:

What's the name of it? It's called Mama Do the Hump, and it's an absolute banger. Rizzle Kicks. I was very surprised when he did a documentary about online safety, 'cause, uh, he's quite a funny guy and a bit of a joker. But that song, excellent track,

Mike Masnick:

Well.

Ben Whitelaw:

Please continue.

Mike Masnick:

Yeah, sure. What's interesting is what he did. You know, he heard about sextortion scammers and he partnered with people who understand how those scams work, and with some cybersecurity folks, and they set a honey trap, trying to get sextorted. And what they did was he got, with consent, a male model who gave him images that could be sent to one of these scammers. You know, 'cause they approach you, pretend they're a young woman, send their own, like, AI-generated stuff. But he got an actual male model to volunteer, or they probably paid him, I'm sure, and sent back things and then, you know, got scammed. And it's the same plot that we've talked about in past episodes, where as soon as the image is sent, the conversation switches, and it's like, I'm going to destroy your life, you have 10 seconds to send me $200. But part of what they did was they got fake gift cards, 'cause these guys always want gift cards; that's how they want you to send the money. They set up a fake website where, when you went to redeem the gift card, there's a little popup, and if you click the popup saying, like, you give it your location, the site would extract your location, like, down to one meter. Like, in the thing, the guy shows Jordan where he's sitting in his house, and Jordan's like, oh my gosh, that's really, really scary. And so they set up this honeypot trap, and they get a scammer, and they figure out where they are, and then he flies to Nigeria to try and go confront the scammer, and they go through this whole process. It's really fascinating, really interesting. They do confront that particular person, who denies it, but they keep collecting a bunch of evidence that it is almost certainly that guy. But he also does have conversations with a bunch of people who admit to being these scammers; they all wear masks and stuff. And then he also talks to the family of a victim in the UK who took his own life, and to a bunch of his friends. And it's fascinating because you see sort of all the different sides of this story. And the important part is that they really present the idea that people fall victim to these scams, and it's not the fault of the people who fall victim. And the most important thing is, one, for more people to know about the scam, so they recognize it when it's happening, but then also, if people become victims of it, to talk to someone about it and not think that the situation is hopeless. And so at one point there's, like, a soccer match in memory of this kid who ended up taking his own life. They talk to all of his friends, and his friends are saying, like, oh, we learned more about this, we understand, like, oh, I wish he had come and spoken to me and we would've helped, and all these kinds of things. And I think that was really important. And 'cause I went and watched it on YouTube, I saw there was another video that was recommended to me, the algorithm at work, uh, of Jordan also being interviewed by a TV presenter in the UK about this documentary. And that TV presenter was, like, banging on at him, like, oh, well, the message is, never speak to people you don't know online and never post, you know, naked pictures of yourself.
And Jordan was really good in pushing back and saying, look, people, humans are people, like, we make mistakes. More important, telling people just not to do anything is never gonna be useful. Like, yes, they should learn about it, and yes, everyone should be better about this stuff, but the real message has to be: don't think that you're trapped. If you get into one of these situations, if something bad happens, be willing to talk about it. And the more people talk about it... In the documentary at one point, too, he's, like, going around the streets talking to people: do you know about sextortion? And, like, a couple of people admitted to him, like, oh yeah, I was a victim of it. And he's like, we need more people to be talking about it. So I thought it was really well done, and I was really impressed by him in particular, in terms of how he handled the whole show and the discussion around it.
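
As an aside, the location trick described in the documentary sounds like nothing more exotic than the standard browser Geolocation API behind a consent popup: once the visitor clicks "allow", the page can read coordinates that, on a GPS-equipped phone, are often accurate to within a few meters. A rough sketch of that mechanism; the fake gift-card page itself is the documentary's setup and is not reproduced here.

```typescript
// If the visitor accepts the browser's location prompt, the page receives
// high-accuracy coordinates: on a phone with GPS, often within a few meters.
navigator.geolocation.getCurrentPosition(
  (position) => {
    const { latitude, longitude, accuracy } = position.coords;
    console.log(`lat ${latitude}, lon ${longitude} (accuracy ~${accuracy} m)`);
  },
  (error) => console.warn("Location request declined or failed:", error.message),
  { enableHighAccuracy: true }
);
```

The safety lesson is the one the documentary draws: that small permission popup does real work, and clicking through it on an unfamiliar site can hand over a surprisingly precise location.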

Ben Whitelaw:

Nice. We know how hard it is, you know, for people in documentaries and presenters. We've talked about it before, you on the John Oliver show providing advice and guidance. But making programs that people will watch and that leave the viewers with a better understanding of how online safety works is tough. So props to Channel 4 and the production company for that. This next story, Mike, is actually a continuation of a theme. This is a blog post from the company Block Party, which is a tool that I've used in the past myself, a really, really useful tool to help you understand what data you make available online, and it helps you kind of clamp that down so you are less at risk from scams and other things. And it's a slightly tragic tale, non-ironically, completely true, of the Block Party intern who was scammed at the very start of her internship by some very, very clever scammers who targeted her and her information, sent her an email, got her phone number, and then essentially did one of the classic gift card scams. And the blog post is a really honest and open explanation of how it worked. Basically, she admits that they took advantage of her wanting to make a really good impression at the company, of wanting to be seen as somebody who was helpful. And so when her boss, inverted commas, texted her to say, can you go out and get $5,000 worth of gift cards so that I can give them to the team, please send pictures and receipts, she did it, and she spent eight hours running around the city pulling all these together, only to find that it was not her boss texting her. It was, uh, a scammer. And you mentioned there the kind of need to talk about these things; I think this is a great example of that. I also think I was probably very liable to do this back in my internship days, like anybody would be, right? And the fact that scammers are targeting people in that moment is just what I find interesting. We sometimes think that scammers are targeting old people, folks with low IT literacy. No, these are smart young people who are being targeted at the moment where they feel somewhat vulnerable, at the start of a new job or role, and getting them to do something which, in other situations, they would have caught onto. So again, great to be talking about this. We haven't got the intern's name, but props to Block Party for writing about this. I'm a big fan.

Mike Masnick:

Yeah. And they were really clear. And in fact, like, you know, I know that Tracy Chou, the CEO and founder of Block Party, posted on social media where she said, like, her gut reaction on finding out that this happened with the intern was, oh my gosh, how could you do that? How could you be so stupid? But then realizing, like, no, this happens. And I think this is important, and it ties into our previous story as well. Like, the scammers are professionals. The scammers are trying to scam a hundred people a day or more. This is just an operation, and they're just working the numbers, and they have every trick in the book. And everyone as an individual, whether it's a sextortion scam or any kind of scam, as a target, like, you can be so good and catch and block 99.9%, but if you fail once, it can be hugely destructive. The problem is that there's this stigma attached to it, that if you fall for one of these scams, you must be stupid, and then you won't talk about it, you won't tell people. And often that's where we get to the situations of people dying by suicide

Ben Whitelaw:

Yeah.

Mike Masnick:

because they feel hopeless and they feel like, oh my gosh, how could I have been so stupid? And I think, and this is why this post was so great, we need to normalize the idea that, like, no, you're a victim here, right? It's not because you were stupid; it's because you were targeted by someone who's a professional, and all they do all day is try and figure out how to work on people's minds and trick them into doing these things. It is not the fault of anyone who falls for these things, and we really need to de-stigmatize the idea that falling for a scam means you're stupid. Everyone can fall for one. There are tons of people, there are PhDs, there are professors, all sorts of people fall for these scams. That's how it works. Everyone will fall for a scam at some point,

Ben Whitelaw:

Yeah. Agreed. And much like, you know, real-life crimes: you don't turn around and criticize people for being stupid if they get robbed; you provide them sympathy. That's very much the same approach we should take, I think, with scams. So a couple of really great public messages, I think, for people who might have been victims of scams or who know people that have been. Let's round up today, Mike, with, um, a little announcement that Wikipedia have made about a tool that they created. This is very cool. I didn't know much about this idea of the Wikipedia test, but they have taken an idea, which you're gonna tell us a bit more about, and developed it into a kind of public policy tool that folks can use, essentially, to check whether regulation, like the Online Safety Act here in the UK, would harm a site like Wikipedia, which is kind of a byword for, you know, openness and privacy-respecting technology and knowledge sharing, which is one of the best parts of the internet as far as I'm concerned. So this Wikipedia test is a set of questions that force public policy experts to think through: am I doing something here that is unintended or that's gonna harm users in ways I didn't intend? You have a bit more of the background, I hear

Mike Masnick:

Yeah, so I mean, this came out of discussions that have been happening among a bunch of people for years now, which was this recognition that almost all of the internet policies, including the ones we've been talking about today, were almost all written with just a few companies in mind. You know, Facebook, Meta, Google, Twitter, YouTube, that's what everyone is thinking about. But the internet is much bigger than that, and if we regulate the internet as if it were just these big companies, then that's all you're gonna be left with, and all these other platforms are going to have trouble. So, a while back, somebody had come up with this idea of the Wikipedia test, saying, like, test every regulation against Wikipedia. I had taken that and run with it and wrote a post basically saying, like, it should actually go beyond just Wikipedia, and said we should have a test suite of websites: any regulation that you have that touches the internet, you should run down this list of websites and say, how will it impact them? And so it included Wikipedia, it included things like GitHub, but it also included, like, small forums, like a knitting forum or a music forum, different things like that. And I included Techdirt in the list. Like, how would this law impact each of these?

Ben Whitelaw:

And we've seen a lot of them kind of fall foul of regulation, right? We talked a bit about that on the podcast before. So is this Wikipedia test kind of the outcome of those conversations?

Mike Masnick:

Yeah, so it's part of it. I mean, Wikipedia sort of took that and ran with it, and in their LinkedIn post about it, they thanked me and some others. What Wikipedia did here, which I think was really good, was they took that and said, well, what questions? You know, it's one thing to have that test suite; then the question is, what questions should you ask to judge whether or not this law will have a negative impact? And so I think it's really great. The one thing I will say, the reason I had always said it should be more than just Wikipedia, is because we've seen some regulatory bodies already sort of write Wikipedia out. Like, when Wikipedia raises this issue, they'll say, okay, it doesn't apply to nonprofits, right? Like, that's the easy way out. But that still captures a bunch of other things. So I do like this. I think we should look to apply this to a wider group of sites beyond just Wikipedia. But it's great that Wikipedia is doing this as well.

Ben Whitelaw:

And I imagine this is in part because of the ongoing, in-the-background conversation that is happening between Wikipedia and Ofcom, where Wikipedia is pushing back against the categorization that it has been given, and basically suggesting that, because of the Online Safety Act, it's gonna be difficult to operate in the way that it has always done, with volunteer moderators that follow certain procedures. So it feels like this is an ongoing conversation between regulators and platforms, in a sense. And great to see this as a new resource. It'll be fascinating to see if other platforms apply it

Mike Masnick:

Yeah.

Ben Whitelaw:

I look forward to seeing more.

Mike Masnick:

Yeah. And I hope that it does become useful. And, you know, this is always about this conversation between regulators and the platforms and how these things should work, which I think actually leads us really nicely into the bonus chat that I had just a little while ago with David Sullivan of the Digital Trust and Safety Partnership, talking about their new standard, which I think should be really useful in these discussions, the discussions between platforms and regulators and with practitioners, and in sort of trying to understand how you implement trust and safety in a way that is actually useful: creating a sort of straightforward language that everyone can use, so that everyone's on the same page and people understand stuff, and we don't have some of the problems that maybe I talked about earlier, with, like, TV presenters insisting, well, why don't you just make the bad stuff disappear? These are serious issues that you need to have serious, nuanced conversations about. And I think, as you'll hear in my conversation with David, this new standard helps, you know, move us forward along that path of being able to have serious, thoughtful, nuanced conversations about how do you make online spaces safe. David Sullivan, welcome back to Ctrl-Alt-Speech. We are glad to have you and the Digital Trust and Safety Partnership on as a sponsor for the bonus chat, and with something quite exciting. So this week, I think today, you have officially released the Safe Framework of trust and safety best practices as an approved ISO standard. So congratulations, first of all.

David Sullivan:

Thanks Mike. it's a pleasure to be back, and it's a pleasure to be back with this news in particular.

Mike Masnick:

So let's start out with the basics. What can you tell me about the framework?

David Sullivan:

So, the framework goes back to the formation and launch of the Digital Trust and Safety Partnership a little over four years ago. And as I think you and listeners to this podcast know better than anyone else, when it comes to trying to figure out how to have safe and trustworthy digital products and services, but not unduly constrain people's human rights, including the right to freedom of expression, this is a really challenging matter. And so our partnership really brought together companies providing all kinds of different digital services around, I think, a simple but powerful idea about how to deal with this challenge, which is that instead of trying to standardize what types of content or conduct should be allowed online, you standardize the practices that all companies are using to address whatever risks are particular to those products or services. So rather than trying to get companies to agree on what types of, you know, content should be promoted or demoted on their sites, or trying to agree on definitions of really contested things like hate speech or what have you, the idea is that all the companies are more or less using the same types of practices as part of their trust and safety operations, and let's clarify those. And so we do that with a framework that has five overarching commitments that all companies can make, which are around product development, product governance, which is, you know, the rules and community guidelines and what have you, enforcement of the product governance, improvement over time, and transparency with the public. So that's the framework. And we've built that out also to include a way of assessing how companies are implementing those practices. And all of this is, I think, aimed to be scalable and proportionate, so that companies can align with these best practices without expecting, you know, small startups to do the same things that a big company like Meta or Microsoft is doing. So that's the framework. And it's been publicly available in full for more than a year, but it's now been published by the International Organization for Standardization and the International Electrotechnical Commission as a capital-S International Standard.

Mike Masnick:

And so what does it mean for it to be a capital-S Standard? You know, both practically, and I think philosophically, why is that important?

David Sullivan:

So practically, what it means is that, in a vote by national standards bodies, and I can talk more about this if we like, that happened earlier this year, you had national standards bodies from around the world decisively approve this to be a standard from the committee that's responsible for information technology standards within ISO and IEC. So you've had national bodies around the world approve this document, so it's going to have global recognition, in ways that are similar to other international standards that folks working in trust and safety who come from the fields of information security, privacy, or AI may understand. So in those fields you have standards that are very well known and well developed, with many more bells and whistles than what we have published, like ISO 27001, which is the international standard for information security. And we hope that this standard can be a first step. It's really the first standard of its kind for trust and safety, and one that can then allow everyone around the world, whether you're companies, whether you are users of these services, whether you are regulators and policy makers, to use the same concepts, the same terms, and embrace a similar approach, while still allowing for that kind of diversity of products and services that we all want, 'cause that's what makes the internet work and work for everyone.

Mike Masnick:

Yeah, I mean, it's interesting because I think when people think of standards, they often think of sort of rigid rules: this is what everyone must do. And so I'm sort of curious, how do you balance that with trust and safety, where it's this constant iteration, and constantly people have to adjust, and companies are changing based on the realities of the world and sort of what's happening at that time? How do you turn that into a standard?

David Sullivan:

Yeah. So there's all kinds of standards out there. Um, standards are really for anything that is kind of measurable and repeatable; you can potentially have standards for that. And so oftentimes you think of standards as technical standards. A lot of, you know, the code that we use for the internet and all kinds of products and services is based around technical standards and specifications. But you also have standards that are really about frameworks and processes. And if you think about something like the NIST Cybersecurity Framework, for example, that's not a rigid technical specification. That is an interrelated set of concepts that allow everyone to get on the same page and talk about and understand how to do things, in a way where you can still have lots of variation. That's been the inspiration for our approach. That's why we have these five overarching commitments, and then we have examples of about 35 specific best practices that companies can use to implement those commitments. But which practices apply really depends. So for example, under our product governance commitment, we have a best practice around community governance, basically the idea of community self-regulation, the kinds of things that happen on services like Wikipedia or Reddit. That's not a best practice that's really applicable if you are a, you know, file sharing service, or an online word processor or something like that. So you need to have that variation, but then you also need to have a rigorous approach, and that's where our assessment methodology comes in, which really aligns with a lot of the things that companies are already doing, both as part of enterprise risk management and as part of regulatory compliance with things like the Digital Services Act and what have you, where you're identifying controls, documenting controls, and testing those things, and doing that in a way where the biggest companies do it as sophisticatedly and rigorously as you can get, while having processes that can also be used by smaller companies. Um, so that's how we approach that.

Mike Masnick:

I mean, is part of the effort here kind of a recognition that historically a lot of internet companies kind of had to keep reinventing the wheel every time? You know, like a new company would come along and sort of suddenly discover that they have to have some sort of safety practices in place, and sort of make it up on the fly, or kind of guess what other companies were doing. Is that, is this

David Sullivan:

Or, or hiring people from other companies and having them do it, and yes. So I think, you know, I was not present for the formation of DTSP. I came along as the first executive director after the organization launched. But I think that there was a question, when the companies came together to write up this framework, about: are we all doing the same thing? And the answer was yes, more or less. You know, there's differences, and there's big differences in the way that different trust and safety teams are structured depending on the size of companies and how they approach this stuff. But ultimately people are doing the same thing. And our goal, I think, really is for those five commitments to be kind of exhaustive, so that everything in trust and safety can fall into those five buckets. But underneath that, it's really up to the companies, and it depends on what type of product or service. And that's similar, like I said, to other types of international standards that are out there. This is not reinventing the wheel. There are a lot of standards that are really about concepts and guidelines and terminology. That's also one thing I wanted to specifically highlight: all international standards follow a common format. They have a scope, which is what is in the standard and what isn't in the standard, and then they have terms and definitions. Most international standards you have to pay for; you have to pay ISO, oftentimes a couple hundred Swiss francs, in order to download an individual end user license for the standard. Through the process by which our standard was approved, because we make the standard available on our website at no charge, the standard is also available from ISO at no charge. So that's unlike a lot of standards that are out there in the world, which will hopefully make this more accessible and usable by everyone. But also, on the ISO website, they have something called the Online Browsing Platform, where you can browse all of the international standards in the world and look for things like terms and definitions, and all of those are freely available through their platform. What you now have that you did not have a month ago is that if you search for trust and safety in the Online Browsing Platform, the definition from our standard, ISO/IEC 25389, is there. So we now have internationally agreed definitions for some of the terms in this space, based on what industry practitioners are doing, and that hopefully is going to help improve and inform the conversation between companies, governments, and NGOs, and all of the people talking about trust and safety, online safety, platform governance, in a way that I think can lead to more productive conversations going forward.

Mike Masnick:

Yeah. I mean, I think that's great. I think that's part of, you know, there's been so much effort over the last half decade to decade to recognize that some sort of standardization is important, just the language, even the term trust and safety. You know, that was sort of only, I think, really agreed to generally in the last five years, maybe. So it's really, really interesting. Just to close it out, for people listening to this, you know, obviously people listening to this are interested in online speech and trust and safety. What's the main thing that you hope they get out of looking at the standard?

David Sullivan:

So the first thing is that, yeah, this is, I think, a way to respond to this market need for some sort of standardization, without the kinds of measures that really could constrain, you know, both the variety of internet services that are out there and the different ways that people can use them to express themselves and organize and live their lives online. So hopefully that, you know, is the kind of bottom line: any company can use this, really internally, to benchmark their practices, to see how they're doing, to use this as part of regulatory compliance. You have companies who are members of DTSP who have been using this as part of their risk assessments for the Digital Services Act. You have companies that are not members of DTSP who have also used this as part of their risk assessments. It's also something that's been recognized by some of the other online safety regimes around the world, like Australia's. But it's also something that can be done purely voluntarily, because, hey, you might be saying inside a company, we want to know how we're doing, and this is a way of kind of benchmarking what you're doing and aligning with others in industry, to take this forward. The other thing is that companies can join our partnership and help to shape the future of this standard. So, all international standards need to be reviewed every five years. In our case, we're planning to do that review a lot sooner than that, because we've been thinking for a little while, hey, the landscape has changed in trust and safety since our best practices originally launched almost four years ago. The role of AI in trust and safety operations, for example, has changed a lot. We may need to add some best practices to think about that. And so we have been planning to do that with broad stakeholder engagement and public input opportunities, and we will be doing that going forward in the near future. But for companies specifically that have some kind of service where they're doing trust and safety: come on in and help us make sure that these standards are applicable and useful to you. So that's really, I think, one of the key messages here.

Mike Masnick:

Great. And so people can download this from the ISO website, and from the DTSP website as well. We'll put a link in the show notes.

David Sullivan:

That's right. Yeah. There are essentially identical versions, the ISO version and then the DTSP version, and you can get them either way at no cost to you.

Mike Masnick:

Excellent. Well, thank you so much. A fascinating project. I'm glad it's out in the world, and thank you for coming on the podcast to talk about it.

David Sullivan:

Thanks so much, Mike.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
