Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Watch Out, AI is Getting More Persuasive
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Alice Hunsberger, VP of Trust and Safety and Content Moderation at PartnerHero and former Global Head of Customer Experience at Grindr. Together they cover:
- Measuring the Persuasiveness of Language Models (Anthropic)
- Our Approach to Labeling AI-Generated Content and Manipulated Media (Meta)
- Survey: New Laws Mandate Access To Social Media Data, But Obstacles Remain (Tech Policy Press)
- Brazil Might Regulate All Social Media After Clash With Elon Musk (Guardian)
- X automatically changed 'Twitter' to 'X' in users' posts, breaking legit URLs (Mashable)
- LinkedIn starts verifying recruiters to stop scams (Axios)
- Content policy is basically astrology? (T&S Insider from Everything in Moderation)
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:In the somewhat offish, perhaps slightly disinterested words of Threads: what's new, Alice?
Alice Hunsberger:Hey! What's new for me is pretending to be a journalist. I'm fairly an old hand at regurgitating links that other people might find interesting, but actually adding commentary, talking about it and analyzing it has been a new adventure for me, and doing it on a podcast episode is entirely new. So we'll see how it goes.
Ben Whitelaw:Nice. Well, what's new with me is that I have a new co-host, in the form of you. So it's all changed here at Ctrl-Alt-Speech, but only temporarily, and we'll talk more about that in a second. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm editor and founder of Everything in Moderation, and I'm joined today by a very new face. It's not Mike Masnick, who is on holiday with his family this week. It's Alice Hunsberger. Welcome.
Alice Hunsberger:Ben. Thanks for having me.
Ben Whitelaw:How are you doing?
Alice Hunsberger:I'm good.
Ben Whitelaw:Really nice to have you here. For listeners who don't know Alice, she is the VP of Trust and Safety and Content Moderation at PartnerHero, and used to be VP and Global Head of Customer Experience at Grindr. So you've got vast, vast experience, and you were the ultimate choice for us when we knew that Mike was going to be away and we wanted somebody to step in. Tell us about yourself.
Alice Hunsberger:Well, thank you. "Tell us about yourself" is very broad; I thought we were supposed to be talking about what's happening in the news? I've been working in trust and safety professionally for 14 years or so, since 2010. I started out as a content moderator and worked my way up to VP, and I'm obsessed with all things trust and safety. So I have a lot of fun reading what's in the news and thinking about it, but that is not my day job, unlike Mike, so I'll do my best.
Ben Whitelaw:You're very well equipped. Alice, we also owe a lot to you for this podcast, because you actually put Mike and me in touch at a conference last year, right?
Alice Hunsberger:Yeah, we were all at TrustCon together. It was my first time going and my first time meeting people in person, and as an avid follower of both you and your newsletter and Mike and his site, I was fangirling meeting both of you, like, ah, people I read every single day. And we were at an after-party somewhere and you both were standing next to each other but not talking. So I was like, hey, do you guys know each other? Like, do you know what each other's faces look like? And introduced you. So I would like to take credit for this podcast existing, and therefore it's only right that I'm the first stand-in host.
Ben Whitelaw:Yeah. Credit, or blame, one of the two.
Alice Hunsberger:Sure, I guess. Email me privately and let me know your thoughts if you're listening to this.
Ben Whitelaw:Yeah. Also, you've made it sound much more awkward than it was. I remember being very suave in my approach to Mike. Maybe it was much more awkward than it was in my head, but I thought I handled myself very well.
Alice Hunsberger:Of course you did. No, I'm the awkward one. I don't want to insinuate that anybody else was awkward, but I get awkward in social situations, so.
Ben Whitelaw:No, you did a great job, and it's lovely to have you here hosting. It's also been great that you've recently started writing a Monday newsletter for Everything in Moderation, which is called Trust and Safety Insider: a kind of nuts-and-bolts guide to how trust and safety works, from somebody who's worked inside platforms for many, many years. How have you found writing that newsletter?
Alice Hunsberger:It's been great. It's been really good. I have really enjoyed taking time to write every week and think about my own perspective. I get a lot of random messages on LinkedIn from people who are like, help, how do I think about this thing? Or what do I do? So it's been really good to share that a little more broadly. And then the best part is that you're a really good editor, so you make my writing better. When I think about all the stuff that I put out on LinkedIn while I'm drinking coffee first thing in the morning and not thinking too hard, versus actual posts that I've put a lot of thought into and that you edit, the quality is much higher. So that's been good too.
Ben Whitelaw:Thank you. I didn't pay Alice to say that, or anyone. And also, imagine if I made it worse; that would be terrible. I would need to get out of my industry pretty much ASAP. So good to hear that that is the case.
Alice Hunsberger:We've been getting good feedback on it too, haven't we? Like, people
Ben Whitelaw:Yeah.
Alice Hunsberger:seem to enjoy it.
Ben Whitelaw:Yeah, lots of great responses from people. And we'll include an example in today's show notes so that listeners of the podcast can take a look if they haven't read Alice's great opinions each week already. Which is, I think, a good time to crack on with today's episode and get into the stories from this week. Alice has a vast network of people that she follows and speaks to, so she's bringing a couple of really interesting stories today. Whilst the lineup is different and Mike is not here, the format remains the same: Alice and I will look at two bigger stories this week, go in depth on those for ten minutes or so, and then we'll skittle through the rest of the interesting stories. As ever, they will all be linked in the show notes so that you can go and have a read for yourself. Before we crack on: we don't have a sponsor interview this week, so there is no bonus chat, but if you'd like to join us on the podcast and talk a bit about trust and safety with Mike and me in the future, please do email us at sponsorship@ctrlaltspeech.com (that's C T R L A L T speech dot com), or go to our very attractive new website that our producer Leigh has built, which is definitely worth visiting even if you're not going to commit to being a sponsor. Great work, Leigh. So, onto our first story then. Advance warning for people: there's a fair bit of AI today. And Alice, you picked a story that is super, super interesting, about the persuasiveness of AI models, right?
Alice Hunsberger:Yeah, I'm glad you put the content warning in there. It's funny, it feels like more and more everybody's job is turning into thinking about AI: what the effects of AI are, how it's going to change things. But especially for trust and safety people, everybody I talk to who's in a leadership position in trust and safety is all consumed with AI, both in terms of how can we use it to moderate, how can we use it for good, but then also how AI is going to accelerate some of the bad content we're seeing on the internet today. And so the link that I brought today is about Anthropic. They just put out a paper on measuring model persuasiveness. They looked at all of the Claude models they've put out, chose issues that they wanted the model to try to persuade somebody to agree with, and then took an argument from a human and an argument from each AI model and had people evaluate how much each example changed their mind. It's a good example of trying to look at actual ways that AI can be used for harm, especially this year, the mega year of elections, when lots of people are going to try to persuade other people to vote for the right candidate or the right cause. One of the things I thought was most interesting: they tried four different types of persuasiveness with the models. There were rhetorical and emotional appeals, which worked, but not as effectively. The two that were most effective were logical reasoning and providing evidence, but they didn't specify whether that evidence had to be true or not. So if you could just make up deceptive information and call it evidence, that was the most effective of all of them.
Ben Whitelaw:Like, presenting stuff as evidence was enough.
Alice Hunsberger:Yeah. And so it sort of proved that people don't necessarily verify whether things are true or not; they take something presented to them with confidence as accurate.
Ben Whitelaw:Sounds like some people I know.
Alice Hunsberger:Well, yeah, and it's super sketchy, because now you can just fabricate any kind of fact that seems like it might persuade somebody. So it's definitely something to keep an eye out for. And I think it's a good study in showing the need to teach people media literacy, fact-checking, being skeptical, and just generally not taking random arguments at their word.
Ben Whitelaw:Yeah, no, you're right. And it's very much true to form for Anthropic to do this kind of research; the founders, for those who don't know, left OpenAI because they wanted to build a more trustworthy kind of model, right? So it's great that this research is happening, that we're starting to see some quite advanced thinking around the building of these models. One thing I was thinking about as I read the paper was: do the kinds of findings that you've brought to us here help bad actors in terms of being able to create prompts and content that actually cause people harm? Do you think this kind of research could inadvertently be used to cause users issues?
Alice Hunsberger:Oh, you mean, are bad actors also reading the Anthropic paper and
Ben Whitelaw:I mean, I'm sure they are.
Alice Hunsberger:There's a line I bolded that says the deceptive strategy, which allowed the model to fabricate information, was found to be the most persuasive. And people were like, okay, cool, I've got a roadmap now for...
Ben Whitelaw:I'm on the right track.
Alice Hunsberger:It definitely is, for sure. That said, I think it's likely that bad actors are doing this kind of studying and research on their own anyway. Probably not through scholarly papers with all sorts of references and specific parameters, but for sure they're A/B testing different types of arguments on real people, seeing what the results are and learning from that. And so if we know that people are looking at how these models can be used for bad, then we have to spend time understanding that ourselves too, so that we have a chance at combating it, or putting up guardrails, or somehow inoculating people against believing misinformation, or whatever strategy people choose.
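For readers of the transcript who want to picture the setup Alice describes, here is a minimal, hypothetical sketch of how a persuasiveness score of this kind might be computed: participants rate agreement with a claim before and after reading an argument, and persuasiveness is the average shift. The function names, the 1-7 agreement scale, and the example numbers are illustrative assumptions, not Anthropic's actual code or data.

```python
# Hypothetical sketch of a persuasiveness metric: mean shift in agreement ratings
# (1-7 Likert scale) after participants read a human- or model-written argument.
from statistics import mean

def persuasiveness(before: list[int], after: list[int]) -> float:
    """Mean change in agreement after reading an argument."""
    return mean(a - b for a, b in zip(after, before))

# Made-up ratings for one claim, four participants each:
human_argument = persuasiveness(before=[2, 3, 4, 3], after=[3, 4, 4, 4])   # +0.75
model_argument = persuasiveness(before=[2, 3, 4, 3], after=[4, 5, 4, 5])   # +1.50

print(f"human-written argument: {human_argument:+.2f}")
print(f"model-written argument: {model_argument:+.2f}")
```

In the paper itself, the condition Alice highlights, where the model was allowed to fabricate "evidence", produced the largest shifts of all.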
Ben Whitelaw:So how do you take this research and use it in your work at the moment? What would you do with the outputs from this research, and how would you start to think about implementing it in your role, or in previous roles? Maybe some of our listeners are thinking about how to do that.
Alice Hunsberger:Good question. One thing it made me think through was the importance of citations and of credible websites, and of being able to understand what a credible website is. The compounding of misinformation tactics like this with search results becoming more terrible these days, with an AI-generated summary at the top of the results that comes from whatever website, whether it's actually factually correct or not, super scares me. Because it could be that somebody gets a message trying to persuade them to change their mind that makes up some completely random facts, and then creates a bunch of bogus news websites that also make up those facts. So then when somebody Googles, hey, is this random fact true? they'll get a little AI summary at the top that says, sure, this is true, because, you know,
Ben Whitelaw:We found it. Yeah. It exists.
Alice Hunsberger:fakenewsreporter.com also cites it. So I think the answer is labeling information as much as possible, providing citations, teaching people what trustworthy news sources look like, teaching people how to be skeptical. Even if you're a platform that doesn't deal in news per se, it's always good to have campaigns that teach these skills. And then really, honestly, I think the solution is teaching kids in schools, so that as they grow up with this kind of access to information everywhere, but with it becoming harder and harder to verify, they have research skills.
Ben Whitelaw:And this is only going to get harder for the kids of the future, right? So it's an issue now, but what the paper says, which I found really interesting, is that the models Anthropic has built are getting more persuasive. It has two kinds of models, of course: the compact models, which are getting more persuasive over time, and Claude itself, where the latest version is magnificently called Claude 3 Opus, which is such a wicked name for something that can do so much harm. It's several points more persuasive on the chart in the research than Claude 1.3. So if you follow that logic, future models are going to get more persuasive, and the distinction between generative AI and humans in their ability to persuade people is going to get much more indistinct. And so your point about being able to find ways of unpicking the differences is going to be really key, which I think takes us neatly onto our next story, which actually came out on the back of last week, on Friday: the announcement by Meta on how it is approaching the labeling of AI-generated content. You mentioned labeling there, Alice, and I know you've got some views on whether labeling works and its ability to drive the behaviors we want, but just to give an overview for people who didn't see this story: basically, Meta is changing its policies around AI-generated content on the back of an Oversight Board recommendation, the kind of pseudo Supreme Court that is independent but funded by Meta. The recommendation suggested that Meta needs to broaden its policy around AI content, which used to be very narrow: it used to be that only generative AI that had made somebody seem to say something they hadn't would be pulled down from the platform. Then a case came forward where a video of Joe Biden had been manipulated to make it seem like he was inappropriately touching his granddaughter, and that was referred to the Oversight Board, which basically said the policy was too narrow and needed to be broadened. Meta have gone away and taken a look, and they are now agreeing that it is the wrong approach and broadening how they view generative AI on the back of that. Their response is to start labeling content, so you will see a lot more of a little label on Meta platforms that says "Made with AI". That label will appear if a user indicates through the upload process that content was made by AI, but Meta also has some clever technical ways of distinguishing AI content from other content. So this is a big change for Meta: we're going to see lots of changes on the front end and some changes to the upload process. What are your thoughts on this, Alice? How confident are you that this is a step in the right direction?
Alice Hunsberger:It's definitely a step in the right direction. There are two separate threads going on here. One is: are we able to accurately detect and label all instances of some kind of manipulated media, be it AI-generated or not? And that's more of an existential question now, because AI is making it much more possible, but, you know, we've had Photoshop for a gazillion years.
Ben Whitelaw:What are your Photoshop skills like, out of interest?
Alice Hunsberger:Oh, mine are non-existent, which is why AI makes it very easy for people like me to now create images that I couldn't before. But I think it does speak to the fact that every platform has had some kind of policy on manipulated media forever, even before AI. Trust and safety people have been dealing with this issue, but the scope of it is bigger now and the possibilities are more endless. And so the question is: for major news stories like this Biden example, we'll be able to label it and say, hey, this isn't true. But that's not necessarily going to be possible for fake images of non-celebrities; they might not get the same kind of labeling or detection. So that's one part. The second part of this decision is around making sure that the intent and the harms of manipulated media are considered, as well as just whether an image is real or not. The recommendation from the Oversight Board also said: don't remove manipulated media completely if there's no harm being created. If somebody's posting something in good fun, even if it's significantly altered and could mislead somebody in some way, you also have to think about the repercussions of removing it, because there's free speech, satire, parody, art, all kinds of reasons why somebody might upload a photo of a celebrity that's not true, that aren't necessarily going to harm democracy or some other major issue. So they're trying to separate the harms caused by manipulated media from its mere proliferation, and to label and pay attention to those cases where it really is going to make a difference.
Ben Whitelaw:Right. And to what extent is that the right approach? Because I remember, back during COVID, there was a real spate of platforms that started using labeling to identify false or misleading posts and to try to separate those two things, the proliferation versus the harm. And, I guess to your point earlier, the onus was on the user to decide for themselves whether something was actually true or not. I don't know how successful that was, but do you feel like this return to labeling is a first step, and a good one, in helping people understand what is AI media and what isn't?
Alice Hunsberger:Yes and no. The reason why it's good is exactly what I was talking about earlier with our other story: the more that people recognize the ways that AI can be used, and the ways that AI can be used to manipulate you, the more skeptical people are going to be. So if they're browsing on Facebook and they see a gazillion labels that say, hey, this is probably trying to mislead you, hey, this is also probably trying to mislead you, hey, this thing might be trying to mislead you as well, then they're going to start getting skeptical, and I think that's a good thing. The bad part is that if people get too used to relying on labels to tell them whether something's true or not, then if something isn't labeled yet, they may be even more likely to believe it's true, because it doesn't have a label, so they won't be skeptical about that one. And so I worry that it won't be a complete solution, especially when people are just lazily scrolling and aren't necessarily there to apply all of their critical thinking skills. But I don't know; it's definitely better than not labeling at all, or not saying anything at all and just leaving people to guess completely. So I'm in favor of it generally. I just think this is the beginning of the conversation about how to treat things like this, not the end.
Ben Whitelaw:Yeah, and one thing that the announcement made clear is that certain warnings and labels will appear more prominently for certain types of harm as well. We didn't get to see what that looks like just yet, but apparently they will grade the labeling to make it more prominent, to make it more visible in feeds and in contexts where it's important for the user to see it, which I think is good. The devil will be in the detail of what that actually looks like. And then the other thing that I think will be interesting to see is the extent to which other platforms follow suit here. Meta is working with other platforms to create common technical standards that help identify AI-generated media, and so the fact that they've done this suggests that perhaps others will follow suit. So I'm fully expecting other platforms to also say that they're going to start labeling more content, which I think brings the efficacy of labeling back into view again, because it's somewhat fallen off the radar, certainly mine, and is less in fashion with some platforms than it was a few years ago. So we might see a rise of labeling once more. Those are our two stories, and we've drawn the line between AI persuasiveness and AI policy: how the models are changing and how the platforms are changing in response to those models. We're now going to whizz through a bunch of stories that are interesting but which didn't quite make it into the top slots for today's podcast. Where do you want to start, Alice?
Alice Hunsberger:Let's see, I have one that I will try to talk about quickly.
Ben Whitelaw:The ultimate challenge.
Alice Hunsberger:Well, I listened to your podcast last week, in preparation for being on it this week, and you and Mike both talked about, like, I don't know if we'll be able to talk about this quickly. Now I'm in the seat, and I'm like, oh my god, I'm looking at all
Ben Whitelaw:the words
Alice Hunsberger:I have no idea.
Ben Whitelaw:out.
Alice Hunsberger:Okay, let me see what I can do. There's a pretty cool study by Mark Scott that was published in Tech Policy Press. He was a fellow at Brown's Information Futures Lab and spent his fellowship talking to people about public access to social media data, specifically researcher access and API access, so that people can study what's actually going on at social media companies and then put it out in the public eye, make recommendations, discuss it, and understand more about what's going on behind closed doors. The interesting thing, and why I thought it was noteworthy, was that he talked to a wide variety of folks: regulators, public health officials, people working at social media platforms, so trust and safety executives, which is what I've done in my career, and then also academics and civil society. And I love that he included trust and safety pros in this survey, and that they were just as frustrated about lack of access as the researchers were. A couple of the trust and safety executives, who are not named, said they wanted to work with researchers, wanted to send over data, wanted to learn more about how they could be doing better, but the legal departments at their companies prevented them from doing so, over the worry that something bad about the platform is going to get out there and make them look bad or open them up to liability. Of course that's a valid concern, but also, why are we stopping groups of people who care about making the internet safer from collaborating with each other? It's not good. So I thought that was really interesting. A lot of the time people look at tech companies with one broad stroke, as though everybody working inside a tech company has the same agenda, and I thought this was a really good example of where trust and safety executives are often on the same side as regulators and academics, and perhaps it's legal departments or overall company policy that's preventing the collaboration.
Ben Whitelaw:Definitely. And Mark, who is also Politico's chief technology correspondent, if you recognize the name, also talked about how regulation, particularly in the EU, was starting to chip away at some of those, I guess, more conservative instincts within platforms not to share data, and noted that data access was improving in some places. A few years ago Twitter reduced the amount of data that researchers could see via their API, and Reddit did the same; it's been talked about at length. And it does seem, according to Mark and the interviews he's done, that that is shifting back the other way, partly as a result of regulation coming out of Brussels, which is really interesting to note. So great summary, Alice, I think you did a good job. That was short and sweet. I'm going to have a job keeping up with that. My short story that I thought is interesting and worth reading is a Musk-related story, unfortunately, but interestingly, from Brazil. A Brazilian Supreme Court judge has basically ordered an inquiry into Musk after the X/Twitter owner decided to ignore, or turn down, a decision by the Supreme Court judge to have some accounts banned. Now, we don't know exactly which accounts the judge wanted taken down, but this judge is investigating fake news and hate speech that happened during the government of the former Brazilian president, Jair Bolsonaro, and which culminated in huge violence last year, on January the 8th, after Bolsonaro was not re-elected. And this is not the first time that the judge has gone after a tech platform: he ordered an investigation into some Google executives and some Telegram executives, and there's a real war going on between Brazilian government officials and tech platforms. It really seems like the tech platforms aren't backing down at all. You might remember that Google put up a warning on its home page to users in Brazil about a law that was coming, known as the fake news law, PL 2630, basically warning that this law was going to change internet freedoms for people in Brazil and trying to mobilize people to talk to their local politicians about it. So for several years now the government and tech platforms have been at loggerheads with each other, and this is the latest example of that, which is, I think, really interesting. There's a really good piece that I've linked to in the newsletter today, and we'll also put it in the show notes, by an Oversight Board member and lawyer called Ronaldo Lemos, who says that basically Musk and lots of other big tech companies have much more influence than we'd like to realize. So there's a really interesting ongoing battle there, and Brazil is a massive market with a huge growing population of people coming online, so that will continue, I think. Did you have a big user group in Brazil at the companies you've worked at previously, Alice? What was it like at Grindr in Brazil?
Alice Hunsberger:Yeah, Grindr has a lot of people in Brazil, so it's definitely an area that we paid attention to. I also think this is a really good example of how and why you should never believe companies when they say that they're politically neutral, or will only follow the law, or, you know, Elon Musk when he says he believes in free speech, but selectively. Because companies this big can't help but get involved in political issues, even if they're trying hard not to. They're geopolitical entities in and of themselves, and they have diplomats and lawyers and foreign policy agendas, just like countries do. And so this is really a good example of a company trying to say that they're neutral in some ways, but obviously having a specific agenda, and that's true, I think, of any really major platform.
Ben Whitelaw:Yeah. And that's not to say that the Brazilian government has its heart in the right place, because there has been criticism from a lot of civil society organizations about the fake news bill and its possible impact on Brazilians' human rights. So it's interesting: you could either see Musk as this kind of anti-government, freewheeling tech billionaire who's trying to serve his own needs in the course of this reaction to the Supreme Court judgment, or you could see him, and you'd probably have to squint, as somebody who is pro human rights and is combating legislation in a country which doesn't have a great record of putting together solid tech regulation over the last few years. Your view on Musk is probably tainted by whichever way you've thought about him in the past. He's done some really good work in Turkey recently to try to push back against a government bill that was put forward. He has also restricted data and stymied efforts to make research data available, back to your point earlier, in relation to government takedowns, so that's no longer available for us to go through. But he is bringing these very public fights against governments in a way that feels a bit more Musky, a bit more "it's all about me". It's kind of theatrical, rather than systematic and data-led. So listeners can take their pick on where they sit on the fence about Musk.
Alice Hunsberger:I had one more link that isn't big enough to talk about in depth, but it's a really good example of perhaps not being systematic and data-led. This week on iOS, Twitter slash X users saw that links to twitter.com got displayed as links to x.com instead, because they're trying to get people to refer to it as X, but the actual URL itself, the hyperlink, remained in its original state. So you could do something like register a URL such as netflitwitter.com; it would be displayed as a link to netflix.com, but you click on it and it actually takes you to netflitwitter.com, which is perfect for phishing scams, because it looks like the URL is legit: it says netflix.com, you click on it, they've cloned the Netflix website, they ask you to log in, and they get all your information. So that got reversed, but I think it's another really good example of why you need a really robust trust and safety team, and why you need people who think like scammers, and have spent years working to fight scammers, in the room when you're making product decisions. Because it's super easy to say, hey, let's just make anything that says Twitter show as X instead, because that'll be cool for marketing or brand awareness. And if you don't have a trust and safety person in the room to say, no, don't do it, then this happens.
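As an aside for readers of the transcript, the bug Alice describes came from rewriting only the displayed text of a link while leaving its destination untouched. Here is a minimal, hypothetical sketch of that failure mode, assuming a naive string replacement; this is illustrative only, not X's actual code.

```python
# Hypothetical illustration of the failure mode: the visible link text has "twitter.com"
# swapped for "x.com", but the underlying destination is left untouched.

def naive_display_rewrite(url: str) -> str:
    """Return the text shown to the user for a link, with 'twitter.com' displayed as 'x.com'."""
    return url.replace("twitter.com", "x.com")

actual_destination = "https://netflitwitter.com/login"      # a scammer-registered domain
displayed_text = naive_display_rewrite(actual_destination)  # "https://netflix.com/login"

print(displayed_text)       # looks like a legitimate Netflix URL to the reader...
print(actual_destination)   # ...but the click still goes to netflitwitter.com
```

The fix X shipped was simply to stop rewriting the display text; the safer design is to never show link text that differs from the resolved destination.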
Ben Whitelaw:Yeah. Mike and I talked on the podcast last week about the new head of safety, Kylie McRoberts, and the fact that in her first week she's had to deal with something that weird and that odd. The forcing of people to use x.com is in some ways proof that Elon is still running up the content moderation learning curve, which is Mike's favorite thing to talk about.
Alice Hunsberger:Yeah, that's great.
Ben Whitelaw:Just on that story, I love the fact that users on X slash Twitter started registering the domains in order to stop people from being phished. I thought that was amazing.
Alice Hunsberger:It was great. It was so great. Yeah, I love it.
Ben Whitelaw:All is good in the world.
Alice Hunsberger:And that's the kind of good Twitter energy that people enjoy: everybody getting behind a good project like that. So it was very, very cool to see.
Ben Whitelaw:I really like that. And then the other story that I want to bring listeners is, again, one we talked a bit about last week, about verification. LinkedIn has started verifying recruiters to stop scams on the site. Over the last year or so it has tried to push people to verify their authenticity by providing work email addresses or ID through a system called CLEAR, and it has now rolled that out to recruiters as well. That addresses an issue, and I've had some of these myself, where people get in touch saying they've got a great job offer lined up and try to talk to you about how great your experience is and how they've got a perfect role for you, only for you to find out that those people do not exist, which must happen to a lot of people. So LinkedIn has started to address that, again through verification, which I think is good. It points to a broader trend of trying to ensure people are who they say they are, which I think is only a good thing.
Alice Hunsberger:I don't think I realized that they didn't have verification for recruiters.
Ben Whitelaw:Yeah, I know, it's kind of funny that they started with users and didn't start with recruiters, because I feel like the incentives for recruiters are much higher: they are often reaching out to people to offer them jobs, and people who are open to work are particularly vulnerable in some senses, right? So it's maybe a bit of a backwards way around, but they got there eventually, and we'll see how that pans out. Awesome. That brings us to the end of today's Ctrl-Alt-Speech. We've whizzed through two big stories and a host of other little ones as well. Alice, how did you find today?
Alice Hunsberger:It...
Ben Whitelaw:Did it live up to expectations?
Alice Hunsberger:Yes! Please invite me back, pending reviews of this episode. But any time either of you are out, I'd be happy to step in. I can pretend to be you next time; I can try to channel the British accent I used to have growing up. See if I can do it.
Ben Whitelaw:I'd love to hear that. And I think you did a great job as a journalist. I think you've got a career in the media in the making.
Alice Hunsberger:Yeah, we just need more people to subscribe to our newsletter, and then I can just do that full time.
Ben Whitelaw:Exactly, exactly. So listeners, do subscribe to Everything in Moderation to read Alice's work. Also subscribe to Ctrl-Alt-Speech via the podcast platform you listen on. We mentioned last week that we were going to talk about some of the reviews that have come in. We've had a smattering since last Friday, and we wanted to save those until Mike was back in the chair, so we're going to look at them next week. You've got a chance, while he comes back from his holiday in Texas, to leave us even more positive comments, and we'll definitely go through them at the start of next week's episode. So thanks, Alice, for your time today, and we'll see you all soon.
Alice Hunsberger:Thanks, Ben.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C T R L alt speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.