Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
I Bet You Think This Block is About You
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites (Techdirt)
- TikTok Has a Nazi Problem (Wired)
- NazTok: An organized neo-Nazi TikTok network is getting millions of views (Institute for Strategic Dialogue)
- How TikTok bots and AI have powered a resurgence in UK far-right violence (The Guardian)
- Senate Passes Child Online Safety Bill, Sending It to an Uncertain House Fate (New York Times)
- The teens lobbying against the Kids Online Safety Act (The Verge)
- Social Media Mishaps Aren’t Always Billionaire Election Stealing Plots (Techdirt)
- X suspends ‘White Dudes for Harris’ account after massive fundraiser (Washington Post)
- Why Won’t Google Auto-complete ‘Trump Assassination Attempt’? (Intelligencer)
- ‘Technical glitch’ is no longer an excuse (Everything in Moderation from 2020)
- A message to our Black community (TikTok from 2020)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Discord. In our Bonus Chat at the end of the episode, Mike speaks to Juliet Shen and Camille Francois about the Trust & Safety Tooling Consortium at Columbia School of International and Public Affairs, and the importance of open source tools for trust and safety.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So Mike, this might be a niche reference for some of our listeners, but I'm going to presume you know what I'm talking about. There's a Nostr client called Primal and it prompts users to say something. So I need you to say something.
Mike Masnick:All right, Ben. Well, what I'm going to say is I kind of miss having an audience screaming and cheering when we start the podcast. Last week, that was... that will really change things for us.
Ben Whitelaw:I know, I know. I've never been whooped or hollered so much in my whole entire life. Um,
Mike Masnick:So, uh, do you have something to say as well?
Ben Whitelaw:my, my say something is, are we really talking about Jim Jordan again, for God's sake? Hello, and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund, and by our new sponsor, Discord. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation. And I'm back, albeit only one on one, in our usual dank corners of our respective homes across the Atlantic with Mike Masnick.
Mike Masnick:Well, it's, uh, it is a different experience to be doing it this way than in front of a live audience. But, uh, you know, there's some comfort here being able to sit in my comfy chair and, uh,
Ben Whitelaw:Yeah.
Mike Masnick:do it this way. But, uh, it was fun. The live podcast was certainly fun. So I hope people enjoyed it if they were there, and, uh, if you weren't, I hope you'll listen to it. And we now have the YouTube video up as well, if people want to see it live and see what they missed. Uh,
Ben Whitelaw:attended the session. There was a couple of other sessions that were taking place at the same time at TrustCon. We think over about a hundred people, 120 people, maybe is a, is a hell of a lot of people. And, um, we were on stage and it was, it was great. It was a really great experience. So thanks to everyone who joined us. It is just the two of us now, Mike, there is no crowd, but we, we might, we've got to do what we've got to do, and that is rounding up this week's stories and analysis from the world of content moderation and online speech. We have some really cool stories to talk through. Today, before we get into that, we've got a great bonus chat sponsored by discord. At the end of today's podcast, we've got two really sharp minds in trust and safety. Juliet Shen and Camille Francois, who will be talking about the trust and safety tooling consortium at the Columbia school of international and public affairs and the importance of open source tooling, which is something that really came out strongly as one of the themes from TrustCon this year, so we're really lucky to have them. On the podcast this week. We're ready to roll though. You, you ready to hit this week's stories?
Mike Masnick:if we must,
Ben Whitelaw:Cool. So, I know you were burning the midnight oil on this first story, right? This is actually a tech story. Oh, I want to say
Mike Masnick:all right, let's go for it.
Ben Whitelaw:not. Um, that you have spotted, in some kind of U S legal documents. It's that man, Jim Jordan again. I'm still baffled by him and how he has a role in, in the U S political system, but take us through it.
Mike Masnick:Oh yeah. Yeah. That's a whole other story, the Jim Jordan, how he is a powerful person story, that, yeah, we're not going to get into right now. This story though, there's a little bit of other history here, before we get to the Jim Jordan part of it, which is that years ago, in, I think it was like 2018 or 2019... it was 2019, following the Christchurch mosque shootings, which obviously, you know, got tremendous news coverage, there was a lot of concern about social media and what social media was doing in response to certain content like that. A bunch of advertisers from the World Federation of Advertisers set up a program called GARM, which is the Global Alliance for Responsible Media. And the idea was that they would work together and share notes about how to advertise in a way that was brand safe, and to avoid having advertisements show up next to criminal terrorist content, problematic content, neo-Nazi content, things like that.
Ben Whitelaw:I remember it well.
Mike Masnick:Yeah, it was an attempt by advertisers to really push the conversation and push social media companies in particular to be better about not platforming some of this content, and making sure that it wasn't putting them, as advertisers on those platforms, into an unsafe brand situation where, you know, their ads show up next to horrible content. And so, you know, they've been out there sort of in the background, like it hasn't been a major thing, but they've done a lot of good work in pushing platforms to handle some of this content better. When Elon Musk took over Twitter, there was a little bit of back and forth between folks associated with GARM and Elon about how, there were concerns, obviously, I think well-founded concerns, about how the way that he was planning to manage it would present a brand safety risk for advertisers remaining on the platform. And we have since seen that advertising has dropped, depending on who you talk to, somewhere between 50 and 70%. It appears to continue to decline. The latest reports I've seen are that, you know, even now in 2024, the numbers keep going down, of advertisers on Twitter, or X, because they just don't feel it's safe. And so there was a back and forth where Elon got really mad at GARM, but then every time that the company, not Elon, but the company, wanted to present itself as catering to advertisers again, it would trot out the GARM standards and talk about how they were meeting the GARM standards, even as Elon was battling with the GARM leadership. And then, just last month, at the beginning of July, X announced excitedly, like they put out a press release saying that they were so excited that they were officially rejoining GARM. And
Ben Whitelaw:they left and then rejoined.
Mike Masnick:Twitter, pre-Elon, had been one of the members of GARM, with a bunch of the other social media companies and a bunch of the advertising companies. And then they left in a, you know, angry fit with Elon. But still, even after they left, you know, I remember there was a big advertising conference in 2023 where Twitter was desperately trying to sign up advertisers, and they kept talking about how they were obeying the GARM standards as proof that it was a safe place to advertise. And then, so this year they announced that they were rejoining GARM and they were excited and it was this big thing. And it was clearly just, it was done at the same time as this big advertising conference where they were trying to sign up advertisers, and I'm sure Linda Yaccarino had a lot to do with it. And we're like, you know, we're safe for advertisers. We're now a member, we're back in GARM. A week later, like one week after this, Jim Jordan, and here's where he comes in, released a report claiming that GARM was this antitrust violation, collusion among advertisers to try and censor conservative speech. And he made a big deal of it. And Elon follows Jim Jordan, because of course he does, and posted this angry tweet saying GARM is this criminal operation, we are going to sue them, and I hope that attorneys general start to investigate them for criminal behavior for convincing people not to advertise on Twitter.
Ben Whitelaw:Even though they joined a week before
Mike Masnick:a week earlier, excitedly rejoined
Ben Whitelaw:hand, right hand.
Mike Masnick:A week later, they are evil criminals. We are going to sue them. We hope there are criminal penalties put against them.
Ben Whitelaw:Interesting.
Mike Masnick:Notable.
Ben Whitelaw:that's, that's helpful background. I can't wait to hear what this story is.
Mike Masnick:So now the latest thing, that just came out last night as we're recording this, and why I was up late last night sorting through this and figuring it all out, is that Jim Jordan, as the chair of the House Judiciary Committee, which has some strong influence and powers, has sent letters to basically every top advertiser who is a member of GARM, and they released all of the letters. And so they're all basically the same. I focused on the first one, 'cause they're in order and it's to Adidas, you know, the shoe and clothing company, um, sportswear company, basically saying like, you horrible company, you were a part of GARM, reveal to us if you were colluding to boycott all sorts of conservative media sites. And it has the thing where it's like, yes, we know that you claim that you did this for brand safety, but the reasons that you don't want to advertise on these sites do not appear to us to be about brand safety. Uh, and so we need to know if you're colluding, and they're asking for all sorts of documents and who you talked to and how you planned all this stuff, and suggesting that it is some sort of illegal boycott.
Ben Whitelaw:Right.
Mike Masnick:Saying that not advertising on the media sites, both regular media and social media sites, that we, the Republicans, support is illegal, that you must advertise on these sites, is completely against what the First Amendment means and says. And obviously, like, there's no right to demand that someone advertise on your platform. If there were, Ben, you and I would be demanding that a lot of companies be sponsoring this podcast.
Ben Whitelaw:A hundred percent, a hundred percent. I mean, so let me get this straight. He's written a bunch of letters to Adidas, but also other big companies, American Express, BP, Pepsi, Verizon, saying, why have you not paid for advertising on the Joe Rogan podcast? Why have you not paid for advertising on The Daily Wire, I think it was one of the news outlets, on Breitbart, on Fox News? And he's suggesting that because there is no advertising relationship, that that amounts to some sort of antitrust collusion.
Mike Masnick:Exactly.
Ben Whitelaw:Wow.
Mike Masnick:Yes.
Ben Whitelaw:Okay. I mean, is, has he lost his mind? Is that
Mike Masnick:Well, I mean, there's a question as to whether or not Jim Jordan ever had a mind. Um, you know, clearly this is partisan politics, culture war stuff that, you know, just is designed to work people up and to work up, you know, his base of people who believe in it. And that happens to include Elon Musk, who retweeted the House Judiciary Committee's announcement of this with, uh, a popcorn emoji, as in, I am excited to watch what happens here. All these advertisers who refuse to advertise on X, oh, now they're going to get theirs. As if that is magically going to make them decide like, yeah, this is the platform I want to advertise on now.
Ben Whitelaw:Yeah. It's, it's odd, isn't it? I mean, the thing that's strange here is that Jim Jordan is going after the body that was set up to kind of counter the platforms' very holey, very piecemeal policies around advertising, right? Like, as you said right at the front, you know, this came about after a couple of years where it was found out that you could target 'Jew haters' on Facebook. That was like a category by which you could find and identify a group of users. There was also a Google parameter in the kind of advertising console where you could literally target people who were racist or who had shared racist speech, right? So GARM's coming off the back of a bunch of revelations about how advertising is targeted towards people, GARM is created. And yet Jordan thinks that the problem is not the way the platforms do advertising, which is still pretty ropey in a lot of cases; he's actually going after that organization. It feels backwards to me.
Mike Masnick:Well, you will, if you pay attention to U.S. politics, Ben, you will discover very quickly that an awful lot of the things that come out of Jim Jordan's mouth are positively backwards. I mean, Jim Jordan was also heavily responsible for making the Twitter Files into, like, a big legal issue, because he held a bunch of hearings in which what was revealed by Elon Musk to a bunch of, we can sort of call them reporters, I don't know, uh, was also completely backwards. I mean, they were taking information about the trust and safety practices of Twitter and representing them in a way that was the opposite of reality. And they did that repeatedly. So the fact that he's now doing that in the case of advertising and social media and regular media, and that it's backwards, you know, that fits his particular brand. Um, and so there is some element of this that is just like, this is the nature of U.S. politics right now. Uh,
Ben Whitelaw:I mean, what can we expect to happen, right? Because the letters ask for these companies to maintain all notes, discussions, meetings related to GARM or GARM members. It wants the companies to declare if they're part of working groups. What is Jim Jordan planning to do with this, and how is that going to affect how the platforms, perhaps, and companies behave with each other?
Mike Masnick:Yeah, it will be interesting, because you know, the follow-up to this will probably be some sort of subpoena. And there was another report this week about the number of subpoenas that Jim Jordan in particular has issued. He loves to do that, even though he's also, you know, been known for ignoring subpoenas when they come in his direction. So it's likely that'll happen. I would imagine that there will be hearings, that he will demand that some of the executives come and he'll get to berate them for not advertising and claim it's collusion, even though there's basically no chance that lawsuits will result from this, unless, you know, if Trump gets elected and he puts friends and partisans in place at the Justice Department, I could totally see them trying to launch an antitrust case against them. And so, how the companies are going to react, that will depend on how fearful they are of this actually turning into a legal issue. And I could see some of them just start to advertise on these platforms, just to have an excuse to say like, no, no, we are advertising on them, we're not boycotting them, just to avoid that sort of scrutiny.
Ben Whitelaw:Yeah.
Mike Masnick:But that's kind of ridiculous, and that alone should be seen for what it is: a politician browbeating companies into advertising on preferred media seems like a huge speech problem that everyone should be aware of. And I find it also just pretty ridiculous that Elon is cheering this on while at the same time going out and claiming he's a free speech absolutist, when this is a pretty clear violation of all basic fundamental free speech principles.
Ben Whitelaw:Yeah, a platform exec with somewhat dubious views on speech, especially when it concerns them. Uh, yeah, not the first time we've had that. It does sound like we're going to have to talk about Jim Jordan again in the future, which I'm a little bit... I've got, you know, difficult feelings about, but it sounds like he's going to provide lots of material for Ctrl-Alt-Speech in the future, Mike.
Mike Masnick:Unfortunately, yes.
Ben Whitelaw:Okay. Onto our next story now, which is a TikTok report, a piece of research that's been published by the Institute for Strategic Dialogue. It's an interesting, if not super new, look at how pro-Nazi ideas are spreading on the platform. It's been done by Nathan Docter, Guy Fiennes, and Kieran O'Connor. And it's essentially tracking the network of far-right and Nazi accounts and how they, and those ideas, proliferate across the platform. There's a few things to note. And I'm particularly interested in this because of what's happened in the UK recently, Mike, which I'll go into. But basically, there's a lot of this content that is masquerading as palatable content, and a lot of it's being created by generative AI. And they give some examples in the report, which quite clearly, when you look at them, are violative, but I guess from an operational standpoint, from a tooling point of view, are very hard to detect. There's parts in the report about how, you know, the For You page of the accounts that they set up very quickly becomes flooded with Nazi ideology and content and hate speech. And, you know, overall they find hundreds of accounts, around 200 they say, which doesn't sound like a lot really in the context of TikTok, but they get a significant number of views. They say 6.2 million, which I think is enough to cause some concern. The interesting part for me, I think, is the fact that when they reported these accounts, TikTok didn't do a lot with them. So within 24 hours, the reports were kind of untouched. The accounts were still live. And then a month later, they looked at it again, and Wired reported that about half of the 50 accounts had been taken down, so around 23. So the speed with which TikTok is reacting to this content and these accounts is pretty slow. I was looking up the guidelines just before we started, and TikTok make it very clear that violent and hateful organizations and individuals are not allowed on the platform, and if you report them, they're going to conduct a thorough review, including of off-platform behavior, which may result in an account ban. So again, not super new. We know that far-right ideologies and hate speech are common on platforms, not just TikTok. The thing for me this week, Mike, is the fact that in the UK there has been this really, um, impactful story happening in Southport in the north of England, where basically there was an attack on some young children by a 17-year-old, which has been used by extremist groups in the UK as a reason to basically riot, and we had a whole night in Southport where a mosque was attacked, police were attacked, several vans were burnt out, carnage really, and not the kind of thing you're used to seeing in the UK or anywhere, really. And there's been a couple of pieces off the back of that in which TikTok has been kind of highlighted as a reason why some of these groups have organized. So basically the themes of the report, we were seeing play out in real life, and we're starting to see how those accounts perhaps are leading to people organizing in real life. The extent to which that's true is up for discussion, but I think it's a really interesting topic that we wanted to talk about. So you've looked at the research, and you're slightly more skeptical of the scale of this stuff, it sounds like. Talk me through why.
Mike Masnick:Yeah, I mean, it's definitely interesting research, and I'm glad that it was conducted, and there's definitely things to learn from it. Some of the things that concern me a little bit are that TikTok is a very different platform than other platforms. And I think this has come up in other contexts as well, where views on TikTok in some ways mean less than views on other platforms. And so part of that is just the nature of the way that TikTok works, and the fact that it has the For You feed and that things just pop up, and it's often content that you're not directly following. You know, it goes into the algorithm and it shows up and people swipe by it and barely even watch it. And so even videos that aren't viral have a lot higher view counts on TikTok than on other platforms. And that leads some people, often in the media, to overestimate the impact and real reach of a video that shows up on TikTok. And so, you know, one of the things in this report is, some of the content that was found had over a hundred thousand views, which, without context, sounds like a lot, but in TikTok world is like a really, really small number of views. And again, it doesn't show how many people actually engaged with the content, how many people wanted to do something with it. So I'm always just a little hesitant, because we had a similar story last fall where one reporter had found a few videos on TikTok in which young people were quoting Osama bin Laden out of context and sort of taking these clips and saying, like, oh, you know, what's going viral on TikTok these days is the kids are supporting Osama bin Laden, and all this stuff, which was an exaggeration. And so, yes, some of that content did exist. It wasn't really going viral in the sense that we normally think of as content going viral; it's just sort of the nature of how views work on TikTok. And so I'm always a little hesitant to buy into this idea that any of this content has been that successful. And we all recognize that this content is really problematic and it's bad. And I understand why platforms should do something about it, and say they do, and sometimes will do. But actually deciding which content to action is, as people in trust and safety know, much harder than it seems from the outside. It's easy to say here are 50 neo-Nazi accounts, but for the platform itself to then take action, they have to see that it violates the rules in some way. And so then there's a whole effort of, like, looking at the content. Is it actually violating the rules? Off-platform behavior being a part of that, being able to look at that, yeah, that's great. But it also takes time, and you're dealing with a whole bunch of things at scale. And if videos are really not getting that many views, it is perhaps a less urgent thing; there may be lots of other things that trust and safety folks have to be working on that are more urgent and more concerning and getting more attention. And so I'm less concerned about that. That's not to say I'm not concerned about it, but it's easy to sort of take this story out of the larger context of all the things that are happening in trust and safety, and like, what is the actual impact and what are the things that they're looking at?
The thing that struck me as even more interesting about this, though, is the fact that in both of these stories, the study on neo-Nazis on TikTok and the story about Southport and the organizing, the real organizing and the real planning is actually being done on Telegram. And then they're planning how they're going to use TikTok as a vehicle to get this message out there. And so there's an interesting story there to pay attention to. And we've obviously talked about Telegram in the past and how little Telegram pays attention to any of this or cares about any of this and just allows all of this kind of behavior to happen. And so the thing that I find really interesting is that these bad actors, and I think it's very clear that, whether they're direct neo-Nazis or just stoking racism and bigotry, arguably it's the same people involved in both of these things, or very closely associated people, they're planning these ideas on Telegram, and they're using TikTok and some other platforms just as a vehicle for delivery.
Ben Whitelaw:And that's what I was thinking: how close are we here to seeing a kind of far-right ideology stack, right, for disseminating messages and organizing real-world protests, riots, however you want to call them? You know, you've got the generative AI for the content creation. You produce weird pictures of children, normally women, you know, girls, with a group of men behind them who look ominous and who are often ethnic minorities, to give the sense of, okay, it's a migration thing, these people are non-British, we need to kind of get rid of them. That's the first part. Then you've got the organizing of that content, the sharing of that content on Telegram, people asking folks in their network to like, share, kind of amplify the content that's been created. And then you've got TikTok, which is the dissemination part, and then you've probably got a bit more Telegram on top. It feels almost like the guides that we talked about in the sextortion cases. You know, it feels like we're quite close to seeing almost a step-by-step guide to how to organize real-world harm and hate speech, potentially. And that's probably not a new thing. I think there's probably groups on 4chan and other social platforms who've been doing that a long time. I guess this just hit home for me particularly because it's in the UK and it erupted so quickly and out of something that's so tragic.
Mike Masnick:I mean, the thing that I'm not as sure about is, it's quick and easy to sort of focus on the social media component of it. But historically, we certainly have examples of mobs gathering and mob violence. We've definitely seen examples where there are mobs that gather and come together quickly, and you can go back to the U.S. across the 20th century, where we've seen, often racist, mob violence, mobs of people gathering and attacking people, not based on social media. And I'm not saying social media isn't playing a role, because clearly it's being used here, but there's a part of me wondering, where do you draw the line between, is this a human societal problem versus a technology problem? And maybe there is an argument, and maybe there is evidence, that says that this is somehow worse, that this is enabling more people or quicker involvement. But I'm not sure that that's true when we look at historical violence as well, and the historical examples of mobs rising up, things like the Tulsa race riots in the U.S. and other lynchings across the South that were often instigated by white mobs against Black individuals who were falsely accused of things. And so it would be great if the world could stop that, for a variety of reasons. And it would be bad if technology is making that worse, but I want to be careful before we jump to the conclusion that it is entirely because of the technology that this is happening, when we do have historical examples of it happening pre-technology as well.
Ben Whitelaw:Right. I mean, again, I don't know for sure. My guess is that there'll be people within the group of rioters who didn't know each other until they met up in person, right? And that's probably because they found each other online. And so I think you're right. I'm not naive enough to think that these views and attitudes don't already exist, but the extent to which they're compounded by some of the ways the platforms behave and work is something that I continually come back to, unfortunately, which we won't fix or fully discuss in this
Mike Masnick:And I mean, it's interesting too, right, in that the Guardian article that talks about a lot of this in the UK talks about Tommy Robinson, who's behind a bunch of this. And the history there is that he did get banned from a whole bunch of platforms, basically all the different platforms, not Telegram. And then Elon did bring him back to Twitter after he'd been banned for four or five years, I think. Um, and even though he's banned from these other platforms, he's still able to instigate other people to then go on those platforms. These are really difficult problems to solve, and there's no easy answer to dealing with it. And so I do think that we as a society collectively should be thinking about these things and should be exploring these things. I want to be careful, though, about rushing to the idea that if only TikTok had banned this stuff faster, that bad stuff wouldn't have happened, because I'm not sure. I always worry about whether we're actually at the root of the problem, and if we're sort of just trying to solve a symptom rather than the root cause, it's just going to pop up again.
Ben Whitelaw:Right. And again, I'm not naive enough to think that there's only downsides to the platforms. There was a really amazing cleanup process that took place, which I was watching online: people, builders, coming and rebuilding the wall of the mosque that was taken down, people providing their services for free, like a real community spirit, which, again, was coordinated via social media. So you have to see it in both lights and try and be kind of even-handed with it, but it is going to be something that I hope there are researchers out there thinking about. I'm going to look into it myself and see if there's research that exists already, 'cause I feel like this is an interesting topic.
Mike Masnick:Yeah.
Ben Whitelaw:Great. So that covers off our kind of big stories for this week, Mike. We've got a couple of other really good ones. We definitely couldn't have done this episode without touching on KOSA, the Kids Online Safety Act, and its lesser-known companion act as well, which you're going to talk about. But basically, that got passed in the Senate this week, but it might be the end of the road,
Mike Masnick:For now. You know, there's never an end of the road, I think, on this stuff. But yeah, so KOSA was actually merged with another bill, COPPA 2.0, and they were merged together into KOSPA, the Kids Online Safety and Privacy Act.
Ben Whitelaw:not to be confused with the friendly ghost. Right. Right.
Mike Masnick:This was sort of known, you know, last week they voted to sort of move it forward. And then this week there was the vote in the Senate. It picked up even more co-sponsors. I think by the end it had somewhere between 75 and 80 co-sponsors, at which point, you know, co-sponsors are not going to vote against it. So it was definitely going to win. And then there were only three senators who voted against it. Ron Wyden, who has been very clear and very public on his concerns about it, and who's the author of Section 230 and has always been a very clear thinker on internet issues and how they could go wrong. The two that were maybe a little more surprising were Republicans: Rand Paul from Kentucky and Mike Lee from Utah. Mike Lee's concerns seemed strange, in that, because the law talks about the DSM, I forget exactly what DSM stands for, but like the official psychological standards, whatever it is, um, you know, there's sort of another culture war issue in terms of, is the DSM too woke, I think. And so, because it references the DSM, Mike Lee was against it. Rand Paul put out a 'Dear Colleague' letter, which gets sent to all the other senators, that was then leaked, and I wrote about it on Techdirt. It was very interesting. And I actually thought it was one of the best arguments I've seen against KOSA that's not culture war and not crazy partisan. It was just this really straightforward, clear explanation of the potential problems with KOSA. And in particular, it talked about things like, in KOSA, you have rules about trying to avoid things that might cause anxiety for children. And the letter points out, correctly, that climate change is a big concern for many people, including certainly young people, and it can cause anxiety. And there are plenty of stories and plenty of studies about kids having anxiety about climate change, for good reason. And
Ben Whitelaw:So you could have like climate denying organizations use KOSA potentially as a way to minimize information about it. Right.
Mike Masnick:Exactly. That's the concern that they would raise. The enforcement has to come from the FTC, so it depends on who's in charge of the FTC. But there is this argument then also that platforms themselves, just to avoid the fear of being accused of creating anxiety in children, might pull down information on climate change. That would be bad. And then the other thing that he points out in the letter is, around the idea that kids should not be exposed to certain kinds of advertising, he points out that you turn on the television and kids can see that kind of advertising already, and we don't freak out about that, in part because we sort of understand that the First Amendment in the U.S. allows that kind of thing. And so he had some really clear concerns on speech. And then what came out a few days later is that there's concerns in the House from the Republicans in the House. These may be culture war, Jim Jordan-related kinds of concerns: oh, the DSM is too woke. It's not entirely clear. But whatever those concerns are, it seems to be that the House will not move forward with KOSA for now. That could change, but it sounds like, especially as we approach election season, it seems unlikely that KOSA is going to move this year, and they'll probably have to come back with something new in the new session next year.
Ben Whitelaw:Right. And there was an adjacent piece, right, that we discussed before we started recording, that The Verge put out about, actually, 300 kids who showed up to demonstrate their opposition to KOSA, right? Which I think has a couple of really interesting threads in it.
Mike Masnick:Yeah. And the people who've been pushing KOSA have been presenting kids, often presenting parents, who were concerned about online stuff, not necessarily maybe as knowledgeable about the impact of different aspects of the way the law would work and whether or not it would actually protect kids. But there was a lot of, we have to do KOSA to protect these kids, and showing pictures of the kids, making Mark Zuckerberg stand up to the parents and apologize to them, all of these things. This was really interesting. It was organized by the ACLU. They brought in, I think it was about 300 kids, to go talk to people in Congress, in both the House and the Senate, and the kids were incredibly eloquent, incredibly thoughtful, presenting, like, this won't help, this will cause more harm. There's certainly a lot of concerns on the LGBTQ side of things that this will lead to the suppression of useful content. And it was really interesting to me that that happened right before the vote, and the same politicians that had trotted out the kids in support of KOSA seemed very eager to ignore the kids who were presenting the reasons why KOSA was dangerous and bad. And there's a whole thing about, you know, on both sides you could argue, should we be using kids as props in some sense, but actually listening to what kids are saying, and understanding, like, the kids that the ACLU brought together, these were thoughtful kids. The Verge had this great article about it. The New York Times also had a really good article looking at what the kids were saying. And it was really thoughtful, saying, this is adults wanting to look like they're helping rather than actually helping.
Ben Whitelaw:Yeah, I just want to read one out because I thought it was like super eloquent. This is a student who says as a Brown woman, I post a lot about immigration. I post about content related to who I am and what my identities are. And that is how I informed people around me about the inner workings of my identity and the inner workings of systems in America that may be hurting me. And who I am and what I stand for, like that's, get her on the podcast. I mean, just,
Mike Masnick:seriously.
Ben Whitelaw:I know that's incredible. And like, really, I think the idea that people like her and these children who have a stake in the future of the internet, it feels like they need to be more involved in some of the lawmaking and some of the policy making of platforms. Like. I'm really interested in those kinds of, governance projects where you're, getting a wider array of views and particularly from groups like children, like the LGBTQIA community who just aren't necessarily heard enough.
Mike Masnick:Yeah. Yeah,
Ben Whitelaw:but yeah, KOSA will be back. We will be talking about it
Mike Masnick:I'm sure.
Ben Whitelaw:until we're old and, and fed up of it. Um, so we, we, this is a story of my guy. I. Tector, I am a reader of Tector, naturally, uh,
Mike Masnick:Are you?
Ben Whitelaw:and often include it in the newsletter, but I read your post that was entitled, Social Media Mishaps Aren't Always Billionaire Election Stealing Plots. I want you to talk about, uh, a bit about this and then I want to disagree with you.
Mike Masnick:Excellent. It's about time we fought a little more, Ben. Yeah. So there have been a variety of stories in the last few weeks about different sorts of mishaps that happen on different platforms, and everyone, depending on which side it impacts, jumping to the conclusion that this is an attempt to suppress support for either Donald Trump or Kamala Harris, depending on which side you support. And so there was one where, right after the change in the Democratic ticket, and Kamala Harris became the candidate, they switched the Biden HQ account on X to be Harris HQ or Kamala HQ, I forget the exact one, but a whole bunch of people followed it. And so some people started to get an error message saying they were rate limited from following more users. And everyone jumped to the conclusion that this was Elon, who has come out as a very strong Trump supporter, putting his thumb on the scale by blocking people from following the Harris campaign, and therefore involved in election interference.
Ben Whitelaw:right.
Mike Masnick:On the flip side, you had stories about how, if you searched for "president Donald", it wasn't showing Donald Trump for some people. I checked it and it was for me, but Elon Musk did the search and it didn't show Donald Trump.
Ben Whitelaw:We're gonna need to have a bow every time his name is
Mike Masnick:Yeah. Seriously.
Ben Whitelaw:that's what we'll come back to that.
Mike Masnick:So then Elon, who, at the very same time as being accused of election interference for things that he was doing, is claiming, oh, Google is engaged in election interference, because when you search for "president Donald", Donald Trump isn't necessarily the first person showing up, though even for many people it is. There were other ones too, where there was an Intelligencer article about how, if you did a search on the assassination attempt, you weren't getting Donald Trump in the results, and people again were insisting that this is some sort of election interference. And my argument is that none of these are election interference. Almost all of them are just, you know, the way that content moderation tools work. Sometimes they make mistakes, and in some cases it's sort of understandable why they end up this way. The autocomplete on Google: they have rules in place to try and avoid crazy autocompletes around politicians, and so they limit those in some ways. With the Harris account on X, there are systems in place that Twitter had before. They're not perfect, they do have problems, but if you have an account that has, for example, recently changed its name and suddenly gets a whole bunch of traffic, that may be a sign that there's a problem. There was another one where X was also accused of causing problems, which was the White Dudes for Harris account, which organized a big Zoom call, and that got suspended. And again, there were potential reasons why that would normally happen: there were a few other accounts that had very similar names, it was confusing which was the real one, none of those accounts were officially associated with the Harris campaign, and they're all asking for money. These are things that, as a trust and safety person or a trust and safety system, you might look at and say, these are suspect and we should be careful with them and maybe temporarily suspend them until we can investigate. But everyone immediately jumps to their partisan corners and assumes that one or the other is election interference. And my whole thing was just, take a breath, take a deep breath, like, calm down. These things happen.
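[Editor's note, not from the episode: for listeners curious what "recently changed its name and suddenly gets a whole bunch of traffic" can look like as an actual rule, here is a minimal, hypothetical sketch of that kind of anomaly heuristic. All class names, thresholds, and the example account are invented for illustration; this is not how X or any specific platform implements it, just the general shape of a soft, reversible safeguard that queues things for human review.]

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration only; real platforms tune these
# against historical abuse data rather than hard-coding guesses.
RECENT_RENAME_WINDOW = timedelta(days=7)
FOLLOWER_SPIKE_MULTIPLIER = 20  # 20x the account's normal daily follow rate

@dataclass
class AccountActivity:
    handle: str
    renamed_at: datetime | None      # when the handle/display name last changed
    avg_daily_new_followers: float   # trailing baseline before today
    new_followers_today: int
    solicits_donations: bool         # e.g. fundraising links in bio or posts

def review_signals(acct: AccountActivity, now: datetime) -> list[str]:
    """Return the reasons (if any) this account should be queued for human review."""
    reasons = []
    if acct.renamed_at and now - acct.renamed_at < RECENT_RENAME_WINDOW:
        reasons.append("recently renamed")
    baseline = max(acct.avg_daily_new_followers, 1.0)
    if acct.new_followers_today > baseline * FOLLOWER_SPIKE_MULTIPLIER:
        reasons.append("sudden follower spike")
    if acct.solicits_donations and reasons:
        reasons.append("soliciting money while anomalous")
    return reasons

def apply_interim_action(acct: AccountActivity, reasons: list[str]) -> str:
    """Soft, reversible actions while a human looks at it; not a ban."""
    if len(reasons) >= 2:
        return f"temporarily rate-limit follows on @{acct.handle}; queue for review"
    if reasons:
        return f"queue @{acct.handle} for review, no user-visible action"
    return "no action"

# Example: an account renamed yesterday that jumps from ~500 to 40,000 new follows in a day.
acct = AccountActivity("CampaignHQ_example", datetime(2024, 7, 21),
                       avg_daily_new_followers=500, new_followers_today=40_000,
                       solicits_donations=False)
print(apply_interim_action(acct, review_signals(acct, datetime(2024, 7, 22))))
```

The point of the sketch is that a rule like this fires on legitimate accounts and malicious ones alike, which is why the resulting errors look like "glitches" from the outside rather than anyone's political decision.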
Ben Whitelaw:Mike, we can't take a breath or take a break with the internet. It's breathless.
Mike Masnick:Okay.
Ben Whitelaw:there's no engagement if we take a breath. Sure.
Mike Masnick:So, you know, to me, a lot of this is just the same thing we've seen for years. Every time there's a moderation decision that you disagree with, people jump to the conclusion that it was politically motivated, or that it was the CEO in charge deciding to personally get involved and say, I don't like this candidate, therefore I'm going to block people from following them. You know, if that was an attempt at election interference, it's not going to do anything. It's not like someone's not going to vote for Donald Trump, if they support Donald Trump, because they can't find him in the autocomplete. And it's not like someone's not going to vote for Kamala Harris if they suddenly can't follow the Kamala Harris account on Twitter.
Ben Whitelaw:But so, I think, let's park the general consumer understanding of how trust and safety works and how platforms work, because I think that's something that's too big to necessarily address here. Like, I agree about jumping straight to the conclusion that everything is a conspiracy and everyone's out to get them. What I don't like is the fact that we give platforms of any size a pass under this kind of idea of a technical glitch or an error. And I wrote something for Everything in Moderation back in 2020 about this, where I basically said it can no longer be an excuse in an era of trust and safety where tooling and AI and the technology are going to play an increasingly important role. I don't feel like we can allow this to be a get-out-of-jail-free card. It felt even in 2020 that it was being used, and there's a couple of examples I'll share, as a free pass, and as a kind of sweeping under the carpet of stuff that was a really legitimate concern. The big one, the reason why I wrote the newsletter at the time, was that, you might remember, during the Black Lives Matter protests, all hashtags, I think, related to Black Lives Matter basically registered zero. And so people were like, what the hell? Why has this happened? The difference then compared to now is that TikTok wrote this blog in which it explained why and apologized, because it recognized that actually that's a really big deal if you are a Black user of TikTok. It's a big deal when the George Floyd hashtag and the Black Lives Matter hashtag look like they're not populated at all. And we'll link to it in the show notes. I think it's a really interesting indicator of where we were and where we are now in terms of how platforms work. That was, for me, a good response to a technical issue. If something's going to happen, if your systems are not going to work in the way they're intended, that happens, but let's put your hand up and say why that was and explain it, because that's where people imagine that there's a conspiracy. And it's happened a bunch of other times as well. You know, there's an example on Facebook where a bunch of Tunisian journalists, of all people, got taken off the platform. We've seen it in the Palestine-Israel war as well. There's been a whole number of examples. And I remember Andy Stone, who was the kind of spokesperson at Meta, actually getting up a number of times and saying, it's a technical glitch. He almost became a bit of a meme for it. And I just, I'd hate for us to kind of give a pass to platforms. The ideal scenario is that they come out and say, this is what happened, this is why. They put some PR resource behind it. They do some engineering investigations. They make that public. Maybe this stuff is part of how platforms can be more transparent about their content moderation processes. You know, maybe that's something that in future will be regulated, the response to incidents like this. I don't know, but I don't want to give a pass where there shouldn't need to be one.
Mike Masnick:So I don't disagree with most of that. More transparency would be good. I think the more that the public can understand these things, the better. The problems are that that becomes a lot trickier than I think people realize, for a variety of reasons. One being that these things happen all the time. Which ones do you explain? Which ones do you not explain? If you have to explain every decision, that becomes overwhelming. Also, the people who are politically motivated or just maliciously motivated are going to claim it's all nonsense anyway, and then you're just fighting. Also, sometimes when you reveal that stuff, you are revealing useful information to malicious actors about how to get around the rules and how to game things. But also, it just leads to this back-and-forth fight where the companies will come out and say, this is why it happened, and people say, we don't believe you, you should have done better. But then you're missing all the other stuff, you know, where mistakes are being made. Like, yes, you can say, maybe if it reaches a certain threshold of media attention you should, but then that just encourages more media overreactions. I want there to be more transparency. I would love for there to be more transparency. It would be great if the companies explained this stuff and more people could understand it, and we had even better insight into how all this is happening. I just think it is a lot more difficult than just saying, dedicate some PR resources to this and explain it, because there are a whole bunch of competing factors that need to be thought about.
Ben Whitelaw:Yeah, no, I definitely agree with that. I mean, maybe there's a threshold of users that it affects, or content posts that it affects, and at that stage a percentage of users have come across this problem and therefore it merits some explanation. You know, in the DSA, obviously, there is this kind of feedback loop that the regulation is trying to develop: when you block or report somebody, you get a better understanding of why that is. This kind of speaks to that idea, that if you can't access the content for whatever reason, because it's just changed its name, or there's 140,000 people on the Zoom call and, I don't know, it crashes, there's a kind of follow-up.
Mike Masnick:Yeah. And again, I just think that that is much more difficult than people realize, the challenges of doing that. It sounds simple, unless... I mean, I've had conversations on these things with trust and safety people where it's like, it's not always so clear, and having that ready to go, and being able to explain it in a way that doesn't create more problems down the road and doesn't just lead to more and more angry people, is hard. There is a reason why oftentimes the best strategy is just to do your enforcement and let it stand.
Ben Whitelaw:Okay, we won't figure this one out either. It's been a podcast of loose ends, I would call them. Um, great. So that rounds up our stories for today. Mike, thanks so much as ever. Really great episode. I enjoyed that chat. We have another great chat ahead of us, the bonus chat sponsored by Discord this week, featuring those two really sharp minds in trust and safety, Mike. I haven't heard this chat yet, so I'm excited to hear how it went.
Mike Masnick:Yeah. Yeah. It's really great. People who've been in the trust and safety field should know both of these people: Juliet Shen, who has worked in trust and safety for a bunch of different companies, Google, Snapchat, Grindr, just a really well-known, excellent, thoughtful person in the field, and Camille Francois, who has worked at Niantic and had worked at Jigsaw and some other places. The two of them are just widely well-known folks in the field of trust and safety who are both now working at Columbia as part of this Trust and Safety Tooling Consortium. And they had a great panel at TrustCon last week on open source tooling, and that's really what our discussion is about. And we jump right in, with me asking Camille a question about why open source trust and safety tooling is so important, and it's a really fun discussion. So check it out.
Ben Whitelaw:Great. Can't wait.
Mike Masnick:All right. So Camille, I want to start with you and ask, why is it that open source tooling for trust and safety is so important these days?
Camille Francois:Thanks, Mike. The first thing I'll say is, um, as a field, we are maturing, we're growing. It was very clear at TrustCon. And there is now a fantastic set of literature, and there's a field around understanding trust and safety. And when we look at how we've built that field and what we know about our own practice, a lot of it for a long time has been very policy-focused. I understand that; I think some of that also comes from conversations with regulators, policymakers. But as a result, we don't have a lot of common vocabulary and useful frameworks to talk about the actual tools and technologies that really make trust and safety work. And so for now, we are playing with sort of a working model that helps describe that infrastructure, because in reality, unless you're very, very early on in your journey, most companies end up having a tapestry of different tools and systems sort of smacked together in order to have a robust safety infrastructure. And in this sort of tapestry, what we see is four different high-level functions that seem to resonate across the different teams that we're talking with. The first one is the set of tools and systems that you may use to do detection, to detect the problematic content. There's different types of that, right? You can have classification-based detection tools like machine learning models, you can have hash-based detection tools for content that's already known, you can have a series of things to get user signals, but this is sort of a bucket of tools and systems around detection. There's another bucket of tools and systems around review. Once you've detected something, this review tooling really helps you prioritize, make queues; it helps you center the wellness of the folks who are going to do review in many cases. They also help you report and comply with some of the legal obligations that may come from specific types of content like CSAM. There's a series of things around enforcement, right? Once something has been confirmed as being, for instance, violative content, you may want to take enforcement action on the user or the content. And finally, there's a series of tools around investigation, right? Sometimes you want to go a little bit deeper, maybe in your logs, and around this investigation tooling there are a lot of important features around privacy and security, making sure that we're also able to properly secure these tools as people use them for investigation. And so all this to say, you know, the reality of what we call the tooling infrastructure tends to be a lot of these complex systems woven together. And it's not that there have never been open source bricks of these systems; there have. But when we look at what exists right now, it's still very clear that there is no easy-entry, plug-and-play, interoperable, auditable set of tools that people can easily use when they enter that field. Another, simpler way to put it is that the bar for doing safety right, and even for doing basic moderation, remains really high, which we think is a very important thing to tackle right now because of, A, all the new entrants in the field; B, the fact that finally we have the maturity as a field to go and build that open source muscle that, for instance, cybersecurity has; and C, the fact that, as we've talked about, AI is changing some of the trust and safety field, because it gives us new tools, for instance LLM-based moderation, but also because it gives us new harms to think about and new players to onboard into our practices.
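[Editor's note, not from the episode: a minimal, hypothetical sketch of the four buckets Camille describes, detection, review, enforcement, and investigation, wired together as one pipeline. The class names, the keyword "classifier", and the auto-confirm rule are invented for illustration; real systems use trained models, shared industry hash databases, and far richer review tooling.]

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    text: str
    labels: list[str] = field(default_factory=list)

# 1. Detection: classification-based and hash-based signals (toy versions).
KNOWN_BAD_HASHES = {hashlib.sha256(b"known bad content").hexdigest()}

def detect(item: Item) -> list[str]:
    signals = []
    if hashlib.sha256(item.text.encode()).hexdigest() in KNOWN_BAD_HASHES:
        signals.append("hash_match")
    # Placeholder keyword check standing in for an ML classifier's output.
    if any(term in item.text.lower() for term in ("hateful_term_a", "hateful_term_b")):
        signals.append("classifier_hate")
    return signals

# 2. Review: prioritize a queue for human moderators.
def enqueue_for_review(flagged: list[tuple[Item, list[str]]]) -> list[Item]:
    # Hash matches of known-bad content go to the front of the queue.
    return [i for i, s in sorted(flagged, key=lambda p: "hash_match" not in p[1])]

# 3. Enforcement: act on confirmed violations (here, only hash matches auto-confirm).
def enforce(item: Item, signals: list[str]) -> str:
    return "remove_and_report" if "hash_match" in signals else "pending_human_decision"

# 4. Investigation: keep an audit trail a trusted investigator can query later.
AUDIT_LOG: list[dict] = []

def log_action(item: Item, signals: list[str], action: str) -> None:
    AUDIT_LOG.append({"item": item.item_id, "signals": signals, "action": action})

if __name__ == "__main__":
    items = [Item("1", "friendly cat video"), Item("2", "known bad content")]
    flagged = [(i, detect(i)) for i in items if detect(i)]
    for item in enqueue_for_review(flagged):
        signals = detect(item)
        log_action(item, signals, enforce(item, signals))
    print(AUDIT_LOG)
```

The point of the sketch is the seams: each bucket could in principle be a separate open source component, as long as the hand-offs, the signals, queues, actions, and audit records, are interoperable, which is exactly the gap Camille identifies.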
Mike Masnick:And to some extent, I think you may have hinted at the answer to this next question with that last answer. But, Juliet, I wanted to go to you and ask, you know, there's this new Trust and Safety Tooling Consortium at Columbia that you're both working on. What is that, for listeners who might not be aware of it? What exactly is this consortium?
Juliet Shen:Sure. So I'm really excited to talk a little bit more about this. The Tooling Consortium is really bringing together folks who are often not necessarily driving the strategy in some of these common existing trust and safety spaces, so really focusing on those individuals and teams that are doing the implementation. At this stage, the Tooling Consortium is being incubated and hosted at Columbia's School of International and Public Affairs. From there, we're doing a series of research efforts. So we're currently in the process of mapping the trust and safety tooling landscape; that index does not currently exist. So that serves as the baseline for us to start having those conversations of where the products are a little bit more overrepresented, and where the main remaining user pain points and gaps in the market are.
Mike Masnick:And, and, uh, I understand part of that is that you're, you're putting together this open source graveyard of trust and safety tools. Do you want to talk a little bit about that really quick?
Juliet Shen:Yes. Uh, so it's a graveyard. It's also an island of misfit toys, you know, pick your metaphor. But what we've noticed is that there are some very passionate individuals who really do believe in the open source philosophy and culture. Maybe it's from their past experience, maybe it's just a personal passion of theirs, but those people are the ones who have really driven a lot of these open source projects in the trust and safety field in years past. They may be hackathon projects that convert into a full open source project. They may be, you know, just a hackathon project. And those are really solving actual problems that practitioners and engineers face today, but we lack that pipeline of maintenance and development. So we see this little graveyard of wonderful technology, but sometimes those organizations themselves aren't even aware that they exist, because that knowledge has moved on as individuals change roles and change organizations as well. So we believe the consortium is uniquely positioned to kind of be that gatherer of knowledge, that historian of how things were developed, what worked, what hasn't worked, as well as that real list and index of all the different tools available today for organizations small and large.
Camille Francois:I think what I like most about some of the work that Jonathan's already done over at Columbia is seeing that a lot of people show up on a regular basis thinking, hey, new idea, so many other people would need this tool and this technology, I'm going to open source it. We see everybody throwing spaghetti at the wall but always hitting the same corner. There are some specific good ideas that get had every few months. And I think at the end of the day, this is what we're most focused on: as a field, how do we get the ability, the community, and the structure to build on top of each other's work? Something that comes through again and again and again when we talk to the builders, the engineers, the PMs, the designers about their careers in trust and safety is the story of, yeah, I have built, you know, 30 percent of a functioning rules engine at five different companies, and I would love to not have to pick this up from scratch. We are hearing so loudly and so clearly that there's a lot of wheel reinventing when it comes to that tooling stack, simply because people aren't able to just take a strong foundation that's interoperable and then build from there, right, to focus on the thing that is most important for their business, for their users, for their formats, and to really get faster to the place that's more tailored and more innovative.
Mike Masnick:So one thing that I've been thinking about regarding open source trust and safety is, and I know that some of what's happening with the consortium came out of the effort from the Atlantic Council over the last few years to look at the trust and safety space, and I had attended one of the round table discussions specifically around open source tooling. One of the things that came out of that discussion was that it felt like a lot of people were talking about how every different platform or every different service had very, very different needs when it came to trust and safety tooling. And so there was a part of me that worried that open source might not work as well for this area because the needs are so different. So I'm kind of curious as to your take on that, because I was worried that, you know, maybe this is a spot where open source wouldn't really make sense.
Camille Francois:Yeah, it's a great question. And let's assume that you're onto something true, right? That there's an element of truth in this. I think this explains how we have built the landscape to date: we were very big on thinking that every single harm type is so specific that it needs its own organization, and every single company is so specific that it's going to have to grow its own fully homegrown infrastructure. We've done a lot of that, and I think we're also seeing the limits of this model. I don't believe in a one-size-fits-all infrastructure for everything. But I also think that right now you have a series of tools and technologies that are housed in harm silos, right? So, for instance, when you're a newer entrant in the field, say you're a small or medium-sized organization, you do have a fairly hard regulatory bar for how to think about safety and content moderation. Even, for instance, if you have an app, you're also going to have to comply with rules from the app stores. And just piecing together what you need to build something that's fairly basic infrastructure is going to have you talk to GIFCT about violent extremism, and then it's going to have you talk to the Tech Coalition about a different set of hashes, and then maybe, because you think about a specific type of content, you also want to engage with a third organization around NCII. At the end of the day, what you really have is a tapestry of different orgs that each have different parts of the puzzle. And while that might be very justified from a policy perspective, from a basic tooling perspective you also have this need to make sure that all those different formats, all those different databases, all those different pieces of the technological puzzle are able to be interoperable. And again, the idea is to make sure that it becomes easier to enter the field, and it becomes easier to focus faster on the piece of the puzzle that's going to be unique to you, right? I've had this conundrum at Niantic because, of course, we're building AR games. If you talk to the folks who are in very specific formats, they all share the same story, which is that there are new challenges in safety and moderation, right? There are areas that we know are the next frontier of the tools we need to build and the technologies we need to develop. We just need to make sure that we can all get faster to them, and not spend all this time on the parts of the tooling that, frankly, go across those different harm types and tend to be the fairly basic set from which to build.
Juliet Shen:Yeah, jumping on top of that, I think there are definitely smaller pieces that also have really good potential for open source. For instance, there are fantastic web components that are open source that can really help with these tools, because at the end of the day they are web apps, and there are many, many libraries and wrappers out there that can repackage existing tools or services and make them more interoperable. So those are some of the pieces where, even outside of the trust and safety specific context, there's a lot of potential to bring great value and efficiency to the engineering team just through using open source.
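As a loose illustration of that wrapper point, here is a hypothetical Python sketch of a thin common interface that makes different detection backends interchangeable. Both backends and all names are made up for the example, not references to real services.

from typing import Protocol


class TextClassifier(Protocol):
    def score(self, text: str) -> float:
        """Return a 0-1 likelihood that the text violates policy."""
        ...


class KeywordBackend:
    """A simple blocklist backend."""
    def __init__(self, blocked_terms: set):
        self.blocked_terms = blocked_terms

    def score(self, text: str) -> float:
        words = set(text.lower().split())
        return 1.0 if words & self.blocked_terms else 0.0


class StubModelBackend:
    """Placeholder for an ML or LLM-based moderation call."""
    def score(self, text: str) -> float:
        return 0.5  # a real backend would call a model here


def moderate(text: str, classifier: TextClassifier, threshold: float = 0.8) -> bool:
    """True means 'send to review'; the caller doesn't care which backend it is."""
    return classifier.score(text) >= threshold


print(moderate("totally fine post", KeywordBackend({"badword"})))  # -> False

The value of the wrapper is that tooling written against the shared interface does not have to change when a platform swaps a keyword list for a model.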
Camille Francois:We also see, I think, some shared infrastructure that's already emerging across harm types, right? We've spent quite a bit of time looking into ThreatExchange, a tool that was initially developed by Meta and which is open source. That's an interesting case of a cybersecurity tool, really, that was repurposed for trust and safety, and a lot of different innovative, strong programs across the industry run on ThreatExchange, including the Tech Coalition's new Lantern, which is a signal sharing program for child safety. So I think we also see that having strong infrastructure on which to build as a community allows us again to get faster to innovation, which is really what we're focused on, right? It should get easier to do basic safety things. It should be easier for new entrants in the field to understand how to even start building that safety infrastructure. And it should be faster for all the rest of the innovators who've been in the field for longer to focus on the unique areas where they can push the needle for the rest of the field.
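ThreatExchange has its own API and data formats, which the following does not reproduce. As a rough illustration only, here is a hypothetical Python sketch of the signal sharing idea behind programs like ThreatExchange or Lantern: participants contribute hashes of known violating content, and others match against the shared set without the content itself ever being exchanged.

import hashlib
from typing import Optional


class SharedSignalSet:
    """A made-up, in-memory stand-in for a cross-industry signal sharing program."""
    def __init__(self):
        self.signals = {}  # digest -> label, e.g. "ncii", "violent_extremism"

    def contribute(self, content: bytes, label: str) -> str:
        """A member reports content by sharing only its digest and a label."""
        digest = hashlib.sha256(content).hexdigest()
        self.signals[digest] = label
        return digest

    def match(self, content: bytes) -> Optional[str]:
        """Return the label if this exact content has been reported by any member."""
        return self.signals.get(hashlib.sha256(content).hexdigest())


# Usage: one platform contributes a signal, another checks uploads against it.
shared = SharedSignalSet()
shared.contribute(b"known-violating-bytes", "ncii")
print(shared.match(b"known-violating-bytes"))  # -> "ncii"
print(shared.match(b"ordinary upload"))        # -> None

Exact hashes only catch byte-identical files; real programs typically also rely on perceptual hashes (for example PDQ for images, which ships with the open source ThreatExchange tooling) so that near-duplicates match as well.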
Juliet Shen:And there are ways we can potentially borrow from AI groups as well, where they've published these open source recipes, collections of scripts or prompts. We see that as a potential for the consortium too. There may be recipes of different building blocks, or different libraries of open source software, tailored for whether you're a UGC e-commerce platform or a live streaming and live chat platform, different recipe blocks that we can then publish and make accessible.
Camille Francois:Cookbooks!
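To make the recipe and cookbook idea Juliet describes concrete, here is a hypothetical sketch of what such a recipe could look like: a simple declarative mapping from platform type to building blocks. The block names are placeholders, not references to real open source projects or anything the consortium has published.

# Hypothetical "recipe" catalog: which building blocks a given kind of
# platform might assemble. All entries are illustrative placeholders.
RECIPES = {
    "ugc_ecommerce": [
        "listing_text_classifier",   # scam / counterfeit detection on listings
        "image_hash_matcher",        # known-bad image detection
        "seller_reputation_signals",
        "review_queue_ui",
    ],
    "live_streaming_chat": [
        "realtime_chat_classifier",  # latency-sensitive text moderation
        "rate_limiter",
        "user_report_intake",
        "moderator_escalation_queue",
    ],
}


def recipe_for(platform_type: str) -> list:
    """Look up the suggested building blocks for a platform type."""
    return RECIPES.get(platform_type, [])


print(recipe_for("live_streaming_chat"))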
Mike Masnick:Great. And so for people who are trying to follow the work that you guys are doing, where should they be looking?
Juliet Shen:Well, I'm very excited that we are going to be launching our initial contact-us site, called trustandsafety.tools. That's going live at the end of this week. You can also find us through Columbia SIPA's page, and we will also have an online presence through LinkedIn, Bluesky, and Threads, wherever you get your real-time information.
Camille Francois:And if you are the in-person type, our broader team, which includes Eli Sugarman, Yoel Roth, and Dave Willner, just came back from TrustCon. We were very grateful that everybody seemed very ready to engage with the topic of open source trust and safety tools; we had a full room for the dedicated panel on it. Our next conference stop is of course going to be the Trust and Safety Research Conference, so if you sit on the researcher side, either in a corporation, in an organization, or in academia, connect with us ahead of the conference in September.
Mike Masnick:Great. Well, thanks to the both of you, obviously, for the work that you're doing and for coming on the podcast to talk about it. So thank you very much.
Camille Francois:Thanks for having us, Mike.
Juliet Shen:Thank you, Mike.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.