Ctrl-Alt-Speech

Deepfake It Till You Make It

May 24, 2024 Mike Masnick & Ben Whitelaw Season 1 Episode 11

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor TaskUs, a leading company in the trust and safety field providing a range of platform integrity and digital safety solutions. In our Bonus Chat at the end of the episode, TaskUs SVP of Global Offerings Phil Tomlinson tells us about his time at the Trust and Safety Professional Association summit in Dublin, his key takeaways from the event, and the trust and safety lessons learned from well-designed conference lanyards.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.


Ben Whitelaw:

So since you've been recording Ctrl-Alt-Speech, Mike, we've been opening each episode by borrowing a prompt from a social media platform or app. And, you know, we've had the big names, we've had Twitter, we've had Facebook. But this is the weirdest yet. Okay, I wanted to warn you. The heads up has been given. So, are you ready?

Mike Masnick:

Yeah, sure. Hit me with it.

Ben Whitelaw:

Mike, I want you to tell me what you really, really want. And that is courtesy of Airtasker, the app for DIY outsourcing. That's not me.

Mike Masnick:

I thought that was, I thought that was courtesy of the Spice Girls. Uh, what, what I want is to know when it is that you and I will be replaced by AI deepfakes of ourselves, and we can sit back and relax while the AI deepfakes discuss the news of the week.

Ben Whitelaw:

I'd love that. I'd love that. We've, maybe we should be training somebody on the podcast recordings to do that.

Mike Masnick:

There, I'm sure there are companies out there who would be more than willing to do that. So, but what about you? What, what is it that you really want?

Ben Whitelaw:

I, I really, really want our listeners to rate and review Ctrl-Alt-Speech. That's the only thing that makes me happy. We've had some great reviews over the last couple of weeks, and we've got some great comments on Techdirt as well, which have been fantastic. But if you are enjoying the podcast, here's your reminder once again to rate and review us wherever you get podcasts. And, uh, thanks for listening. Hello, and welcome to Ctrl-Alt-Speech, your weekly roundup of major stories about online speech, content moderation and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by this week's sponsor, TaskUs. They're a leading company in the trust and safety field and provide a range of platform integrity and digital safety solutions, and we'll hear about them a bit more later. My name is Ben Whitelaw and I'm the founder and editor of Everything in Moderation. And I'm with Mike Masnick, as ever. How are you doing, Mike?

Mike Masnick:

I am doing okay. Uh, it is

Ben Whitelaw:

still still in, still in human form, I

Mike Masnick:

still in human form. I am not an AI just yet, but, uh, we'll see. We'll see. Uh, no promises for the future.

Ben Whitelaw:

I'm surprised nobody's tried to AI deepfake you yet, I'll be honest.

Mike Masnick:

I am not that interesting.

Ben Whitelaw:

But you know, having, having, you know, Mike Masnick say something controversial about speech would probably fly around the internet pretty

Mike Masnick:

I, I don't, I don't think so. There was someone who once tried to create a, uh, a fake Twitter profile of me, which, uh, used the name Mike Nick. Uh, and it lasted for like three tweets and nobody paid attention to it. And then the guy gave up. So,

Ben Whitelaw:

Uh, I would... was it just not believable? What was the problem?

Mike Masnick:

I mean, it was, you know, it was attempting to make fun of things that I say. It was sort of trying to be like a, you know, parody of me, but like, I'm not that interesting to parody. Like, there's not that much there, you know? And plus, whoever was doing it, I guess, you know, their heart just really wasn't in it.

Ben Whitelaw:

Yeah. Yeah, you've really got to care about those things, haven't you, to continue them. Um, okay, well, fingers crossed that doesn't happen again, not in AI form either. We've got a great episode planned, Mike. We've got a range of stories that we've been prepping ahead of time, and we've also got a great bonus chat with TaskUs' Phil Tomlinson about his time at the Trust and Safety Professional Association summit in Dublin recently, which we mentioned on last week's podcast. He's going to give us his key takeaways at the end of today's episode, and also going to share some trust and safety learnings pegged to his conference lanyard, which is worth waiting for.

Mike Masnick:

As someone who is obsessed with conference lanyards and, and how to do them right, I have not heard the interview yet. I am looking forward to finding out what we can learn about trust and safety from conference lanyards.

Ben Whitelaw:

Yeah, it's an original take and Phil really knows his stuff, so it's worth hanging around for. So let's get into them, Mike. We've got a whole range of stories and platforms and issues to be getting through today. We're going to start with a kind of new association, new coalition, really, of platforms that both of us spotted in our reading this week. Talk to us about this new initiative to counter pig butchering.

Mike Masnick:

Yeah, this is really fascinating. And I assume our listeners have probably heard of this concept of pig butchering, which is the scam that has become really popular and a big deal. There have been a bunch of articles about it over the last few months and year. It's an online scam, often a romance-related scam. The reason it's called pig butchering is because it takes place over a fairly long period of time. It's not a one-off scam. It's where people try and fake-befriend you and sort of build up a relationship and trust, potentially a romantic relationship, and sort of drag you along for many, many months and get closer and closer, and then eventually start trying to get money out of the person. There are a number of different forms of this. And there's been a lot of really interesting reporting on the fact that most of the people involved in these scams are often in Southeast Asia, are often victims of human trafficking, and are effectively prisoners, locked up in compounds and forced to scam to avoid being beaten, or worse. It's a really horrifying situation all around. But it's become a really big deal. And it's also one that's really difficult to track down and stop. Oftentimes it involves multiple different platforms. A lot of them start with just randomly texting different numbers on different platforms, whether it's regular text or WhatsApp or whatever, and then sometimes they'll move to other platforms to try and be more secure and hidden. It often involves cryptocurrency. There are some that involve dating sites. There are a number of different ways that it occurs, and it's tricky to track down in part because it's using all of these different platforms, and obviously the cryptocurrency element is fairly tricky as well.
One of the common pig butchering scams is less about romance, though there may still be a romance component to it because it's a motivating factor for some people. It often involves telling people about some great investment. That often involves cryptocurrency, and will lead people to download a fake crypto exchange app, where they get you to put money into it and trade, and for a while it looks like you're making wonderful returns, and then eventually the app disappears or all your money disappears in some form or another.

Ben Whitelaw:

So if people... I've had some people get in touch via WhatsApp. You know, you get those random spammy messages. Is that what we're getting? A kind of, potentially, pig butchering

Mike Masnick:

Yeah, almost certainly most of them are. Usually, you know, they often start with, like, a very friendly, "Hey, how's it going?" Or, like, pretending that they know you. And sometimes they'll use a name, or sometimes they'll use some sort of weird thing, like, "Hey, can't come by to paint the house today, but you know, I'll be there tomorrow." Just something to get you to respond. The whole idea is it's a really friendly come-on. Then what they hope is that some people, being nice, will respond and say, "Hey, sorry, wrong number." And the person will be like, "Oh, I'm sorry," and then, you know, begin some sort of conversation that then leads into... you know, it's sort of preying on the fact that some people out in the world, thankfully, are nice and are trying to be helpful in this situation. A lot of the pig butchering scam is really built on preying on people for being nice, which is kind of, you know, horrific in its own way.

Ben Whitelaw:

Yeah. So what are we seeing this week? What's the link-up between these companies?

Mike Masnick:

What's really interesting is that basically there's this coalition that's been formed called Tech Against Scams, and it involves a bunch of different companies, including Meta, Coinbase, Match, and a few others, especially in the cryptocurrency space, a few other crypto exchanges, that are all saying that they're going to start to work together to try and stop these scams. And this is really important for all the reasons that I was describing, in that a lot of these involve a number of different platforms, and tracking down who is using them and recognizing that the scam is happening across these platforms is a lot more difficult. But when these firms work together, hopefully the plan is that they can exchange certain information that allows them to track down when this is happening and to prevent the scam from actually reaching fruition.

Ben Whitelaw:

Interesting. So this is kind of part of a trend we're seeing, really, where platforms are starting to recognize that harms exist across platforms and proliferate very quickly across platforms in some cases. What do you hope that this will allow the platforms to do that they weren't able to do before? Do we know exactly what the link-up will entail?

Mike Masnick:

I mean, it'll depend on the details, but, you know, we've definitely seen this in the past, where Meta and Twitter would exchange information on political disinformation and trolling groups by large state actors, where you had Iran and Russia and China basically trying to do what was referred to as coordinated inauthentic behavior. Um, and it was helpful when the different platforms could share notes and details and data on who was doing what, and sort of realize, like, "Oh, okay. This thing that we weren't sure was part of a larger campaign, based on information from other apps we were able to figure out that it was associated with this campaign, and therefore deal with it." Just that larger view into the information, rather than having it separated out into the separate silos of each of the different companies, is really useful. And so I assume that the thinking here is something similar, where they can start to share information if they're seeing a pattern of activity that they know they lose track of once it moves off of one app onto another. I'm sure this happens quite a bit, where something maybe starts on Match, you know, if they're going for the romance angle, then moves to WhatsApp for the private conversation, and then there'll be some sort of cryptocurrency component that comes in later. In the past, these companies might have had that one little view, which is maybe not enough to do anything, but now, when they can share and follow the path of what's happening, they might be able to intervene and shut down the accounts and stop the scam from actually going through.

Ben Whitelaw:

Yeah, so they kind of have the full journey essentially mapped, in some of the companies involved at least. You did mention that there were a couple that aren't in this group that, you know, we might expect to be there.

Mike Masnick:

Yeah. I mean, the main one, the concern, is Tether. Tether is a cryptocurrency which is ostensibly tethered to the US dollar. So the price is, you know, one Tether to one US dollar. There are some concerns about how real that is, and there's always been some concern about Tether being a little bit on the questionable side of things. But often a lot of the pig butchering scams do rely on Tether, and there's been a bunch of reporting on this. It's not entirely clear why that is. I think it might have to do with the one-to-one peg between the dollar and Tether, that it just feels easier for users and a little more comfortable than some other cryptocurrencies. But I know that has been a really big part of a lot of the pig butchering scams, and Tether is not a part of this coalition. And I think that is kind of a notable omission.

Ben Whitelaw:

Yeah, I mean, presuming this does go well and that there is some impact from working together in this way, hopefully you will see other companies join the fold. And this, as you say, does feel like it's on the slightly more informal side of these link-ups, where it's kind of based on informal sharing of information. There are obviously more formalized groups, such as GIFCT and StopNCII, for non-consensual intimate image abuse, where some of those groups that are formed out of nonprofits report more widely the information that they're hearing, and there's a bit more transparency around that. But I think both of these examples are positive and right, and reflect the fact that if you're an inauthentic actor or somebody who's looking to enact a scam, you're going to work across platforms, and the way that these companies are thinking about it is clearly in line with that.

Mike Masnick:

Yeah. And I think it'll be interesting to see how well this works, and what they're willing to talk about publicly as time goes on, and whether or not other groups join. And hopefully we start to see some success stories. I'm sure the scammers will start to, you know, modify their tactics, cause that's what scammers do. But I think it's a really interesting start, and we'll see sort of what else comes out of this.

Ben Whitelaw:

Brilliant. Awesome. Thanks, Mike. Our second big story today is another story from India. Last week we talked about the banning of independent media outlets by YouTube, which was super interesting for its own reasons. This week, Wired has done a really great long read on deepfakes within the political election there. And again, the election is a mammoth event in India and is coming to a close in the next couple of weeks. And what this piece outlines, really, is the way that political parties are using AI to essentially produce deepfake videos to reach a larger audience and to encourage people to vote. It's a really, really interesting piece. And they're doing it for a number of reasons. They're doing it because there's a number of different languages across India, 22, I think, in total. And also because of the way that campaigning works there, and the large geography that the politicians are having to cover across states, which makes it really difficult to reach people en masse. And so this is really the first election, obviously, where we're seeing AI being used. The piece outlines one particular young guy, a 31-year-old called Devendra Singh Jaden, who really is a kind of one-man AI expert. He's helping politicians create deepfakes of themselves. He's recording them speaking and creating images of themselves that they can also use for calls to help encourage people to vote. And so that's really fascinating in itself. The scale of it is unbelievable as well. There's a note in the piece about how 50 million calls were made to Indian voters in the two months before voting started. So basically, voters will get calls where a politician who has had their voice cloned is essentially speaking to them about why they should vote and why they should vote for that party. So it's a huge amount of people receiving calls. I mean, not in the context, really, of the Indian population, but the number is really significant.
And there are a number of different companies doing this. Some companies are doing 25 million calls in the space of just a few weeks. So they're getting through an awful lot of calls and reaching an awful lot of people. There's a couple of other parts to it, I think, which are really interesting. The psychology of this type of work is fascinating as well. So this young guy, who owns this company called Polymath Synthetic Media, was saying that actually people love hearing from deepfaked politicians, because they feel they never get to see representatives, and they feel like politicians should reach out to them. Now, he might say this because he's in the game of producing deepfakes, but he was saying that actually there's something kind of unique about hearing from a politician, even if it's not them saying it live. It's essentially them having gone to some effort to reach out and to convince them to vote. And so that's really interesting, the psychology of it as well. But essentially it's a huge industry that's emerging in India alone, and we can expect to see this happening elsewhere, I think.

Mike Masnick:

Yeah. I mean, the thing that struck me about this is that, for all the talk, you know, if you speak to anyone about deepfakes and politics, all of the discussion for the past few years has been about faking something bad. You know, picking an opponent or someone that somebody doesn't want elected, and creating a fake video of them saying something terrible. We had the story on last week's podcast of the claimed deepfake video, which turned out to be real. There are all these concerns about deepfakes being used in a negative context. And what struck me about this was it was a case of politicians deepfaking themselves in order to effectively give themselves more range. And just like, you know, I'm looking forward to the deepfake Mike replacing me on this podcast so I can spend more time... I don't know, doing something else. What else would I do with my time? I have no idea. But, uh, you know, it was really interesting to see that. And some of it is just, you know, doing these calls where it's clearly not the politician calling the person, but it does feel more personalized. In some cases, it was not clear that recipients of these calls were aware that it was a deepfake they were talking to. They were just providing a message, and maybe the extent of the true fakery was just adding in the person's name. So calling and saying, "Hey Ben, I really hope you'll vote for me," you know, and just sort of presenting a message. There is some experimentation with actually being able to talk to the deepfake politicians. In the Wired article, they sort of bury it pretty far down that that does not work very well. And anyone who has played with generative AI these days probably gets a sense of how badly those things will go. But here it was even, like, you know, it would get stuck in a loop. It would keep repeating the same line over and over again.

Ben Whitelaw:

Who wants to speak to a politician really?

Mike Masnick:

Well, yeah, you never know. But, um, it just struck me as a really interesting use of it. Whereas everybody is really afraid of how it would be used negatively, here we're seeing it at least attempted to be used in a way that allows politicians to communicate better with the voting populace. And, you know, it sort of reminded me of the famous stories of how different politicians over time embraced a new medium to get the word out. So, you know, radio with FDR and then television with JFK in the US. These examples are sort of canonical: the candidate that learns how to embrace the new medium in a positive way, to better connect with the citizenry, makes a real difference. And so it struck me as just a really different kind of story, from the "oh no, AI deepfakes bad" to the "oh, hey, here's a way to connect with voters."

Ben Whitelaw:

Right, but the other thing that the piece mentions is that some of the things that the politicians were saying, obviously, were not necessarily true or factually correct, right? So there is that problem of, like, politicians, we hope, saying things that are true and abiding by certain standards, but that's not necessarily the

Mike Masnick:

yeah, but politicians lie all the time. I don't, I don't think there's a, there's a,

Ben Whitelaw:

I mean, some more than most, right? There's a whole theory about misinformation coming from the top and from elected officials. And so I'm less comfortable with the idea that this is something that we should be comfortable with. Like,

Mike Masnick:

I, I'm not saying that anyone should be comfortable with it, but I'm saying it's happening, right? And it's going to happen. And it's worth recognizing how people are going to use this, because I assume this is only going to increase. It is a way to better connect with certain voters, and therefore, whether it's good or bad is a wholly separate question. The reality is that it is happening, and we should be thinking about it. And it should change our thinking away from just thinking about AI deepfakes as this thing that was going to be used to create fake negative videos, because people are going to come up with more and more creative ways to use it. And some of them may be really compelling. And some of them may really work, in terms of getting out the vote or convincing voters of certain things or connecting with voters. There's all sorts of things that we're going to see. And this is sort of the first big article that I've seen that begins to grapple with some of those issues and some of those questions, in terms of how this appears and how it will be used, beyond just this idea, which we haven't really seen that much of, that I'm going to make the politician that I don't like, or that I'm running against, look terrible by faking a video.

Ben Whitelaw:

Yeah. I mean, talk to us, Mike, about how it's playing out in the US, cause there was the whole piece around robocalls in relation to the primaries, right? Which is a similar kind of trend. Do you expect to see this happening in the US election later this year? And what are the kind of rules around that?

Mike Masnick:

I mean, it is a big and kind of open question. Just this week, a couple of things did happen in the US related to this. The person who was behind the fake Biden robocalls, the ones telling people not to vote, was indicted, and relatively quickly will be facing jail time, some pretty serious consequences for doing that. That guy had sort of come out very publicly after it all happened. And the FCC announced that it is looking at putting in place rules for political advertising using AI, having to declare that AI is involved. There are questions about whether or not the FCC has that authority; it might not. There are questions about whether or not those rules would pass First Amendment muster; they might not. But, you know, it's certainly something that is now being talked about in the US. Whether or not candidates are going to use the technology in this way, I'm not as sure. We are also dealing with, certainly at the presidential level, not necessarily the most technologically sophisticated individuals in either of the major parties. But, you know, we'll see. They both have teams of people who might try and experiment with stuff, and we will see where that leads. But right now, it's not clear that there's anything legally that can be done. There's a lot of talk about it, but there would certainly be First Amendment challenges on a lot of the legal side in the US.

Ben Whitelaw:

Okay. I mean, I noted in this Wired piece, again, how this AI consultant working with politicians had actually been contacted by a number of different organizations in Canada and the US as well to do work for them, and had received contacts from, you know, burner accounts on different social platforms asking for, you know, kind of deepfakes about politicians, not by politicians.

Mike Masnick:

Right, the more traditional one that people expected. This guy was approached to do that, and he claims that he wouldn't be in that business, that there were some scandals and that was not his interest. Obviously, that is still a concern, and we will still see plenty of stories about that. And that relates to another story that we wanted to talk about as part of this this week, which was that Nick Clegg at Meta has come out and said that they haven't seen a sort of massive influx of deepfakes in political contexts. They were talking about the early elections this year, the Taiwanese one being the main one, where there were a lot of concerns about how China in particular might use AI deepfakes to try and influence that election. And they didn't really see it, or at least not at a systemic level. Um, and I think the quote was that he said it was "a manageable amount." You know, I think some of the concern was that these kinds of things would flood the systems and overwhelm them, and Clegg is claiming that Meta, at least, has not seen it at that overwhelming level.

Ben Whitelaw:

Yeah. And if we're kind of joining these two stories together, as you say, if Nick Clegg and Meta are only looking for category A of the deepfake stories, which, as you rightly point out, has probably not transpired in the way we thought, then that's one thing. But there is maybe this whole other kind of category of deepfake stories which he is not referring to when he says that there has not been a significant volume

Mike Masnick:

yeah.

Ben Whitelaw:

of these videos, right? It's almost like if you kind of ignore, or you're not looking in the right place, you're not going to see the thing that you need to see.

Mike Masnick:

Well, the question is, how concerned are we, or is Meta, about the kind of deepfake that is talked about in the Wired piece? Is that a problem? I mean, again, most of those were sort of one-to-one communications that might be happening over WhatsApp. In fact, they probably are happening over WhatsApp in a lot of cases. But is that something that Meta is concerned about, where it's a politician deepfaking themselves on their own behalf, as opposed to negatively presenting disinformation about, uh, an opposing candidate? It's kind of a different question than the one that everyone's been thinking about, and the one that I think Clegg is referring to here. But again, you know, Meta has put in place some policies where they do want to be aware of AI-generated content and properly label it. So there are some concerns there. And also, it is still early in this technology, and, you know, maybe saying, "Ah, it hasn't been that big of a deal" is kind of foreshadowing that it may be coming still.

Ben Whitelaw:

And we won't know until it really hits us, right? There's a sense of kind of "everything will be fine until it's not." I think that's what I took from Clegg's comments this week. Brilliant. So, a really interesting piece from India via Wired, and thanks for talking us through that, Mike. Let's go on to our other stories this week. We've got a few really nice ones, um, including this first story, which actually hasn't been fully reported out, but is something that you spotted on Bluesky about how they're thinking about spam. Talk us through what you've seen.

Mike Masnick:

Yeah. This was really interesting. Aaron Rodericks, who is the head of trust and safety for Bluesky, formerly worked in integrity at Twitter. Famously fired by Elon Musk, uh, like so many people. Um, there was a bit of a controversy that happened on Bluesky in terms of labeling a certain user as spamming. Without getting too deep into the weeds, basically this user appears to have been someone in Gaza who was looking for funds to try and get out of Gaza, for obvious reasons, and was posting replies to a whole bunch of different accounts, basically linking them to, I can't remember if it was actually a GoFundMe or the equivalent of a GoFundMe, but basically saying, like, "Help me get out of Gaza." And it was legitimately spam-like behavior. It was out of context. It had nothing to do with who they were responding to. They were responding to many, many people. There are a bunch of things about it that would tick any box for spam-like behavior, and therefore Bluesky labeled the account and the posts as spam. And some people got very mad at that, because some people were saying, you know, given the context of the message, should this be considered spam? It shouldn't be, and Bluesky should remove the label. So Aaron posted a thread, which I thought was really interesting and very thoughtful, and laid out transparently, like, "This is how we think about this issue," saying that this happens, but we have to judge it based on the actual behavior, not the content, and not the details of what it is that they're spamming about. If it is spammy behavior, we have to label it spam. Otherwise, we're in this position where we're constantly making exceptions, we're constantly having to judge the value and the righteousness of the spam. Uh, and that is not a workable solution.
I thought it was fascinating for a variety of reasons, in part just the fact that Aaron was willing to come on the site and explain all this publicly, because I honestly can't recall another platform being that public and that thorough in their explanation. Normally, when there's some sort of trust and safety or content moderation controversy, you get a sort of, you know, worked-over, PR-approved statement that says nothing, that doesn't really provide much transparency. People still get mad. People got very, very angry at Aaron, I think unfairly.

Ben Whitelaw:

Mm.

Mike Masnick:

You know, I think it was actually a really useful thing for him to do to explain it. I'm not saying everybody got angry; lots of people really did respect it and pointed out, like, it's really valuable to have the head of trust and safety come out and explain the thinking on this, the nuances and trade-offs, and the reasons why any organization really has to judge things based on behavior rather than the actual content. And so I just thought it was a really great thing. I think it is predictable, but unfortunate, that a lot of people still got really angry at them and came up with all sorts of excuses. This happens in the trust and safety field all the time: people saying, well, if your rules lead to this, your rules are wrong. And it's like, if you were in Aaron's shoes, this is the same decision almost anyone would make. Otherwise your platform is going to be a mess.
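To make the "behavior, not content" point concrete, here's a minimal, hypothetical sketch of the kind of heuristic being described: it looks only at posting patterns (reply volume, duplicated text, link density) and never inspects what the message actually says. Every field name and threshold here is invented for illustration; this is not Bluesky's actual system.

```python
from dataclasses import dataclass

@dataclass
class ReplyActivity:
    """Aggregated posting behavior for one account (fields are illustrative)."""
    replies_last_hour: int       # how many replies the account sent recently
    distinct_recipients: int     # how many different accounts it replied to
    duplicate_text_ratio: float  # 0..1, share of replies with near-identical text
    replies_with_links: float    # 0..1, share of replies containing a link

def looks_like_spam(a: ReplyActivity) -> bool:
    # Deliberately content-blind: we never read the message text itself,
    # only how it is being sent. Thresholds are made up for the example.
    high_volume = a.replies_last_hour >= 20 and a.distinct_recipients >= 15
    repetitive = a.duplicate_text_ratio >= 0.8
    link_heavy = a.replies_with_links >= 0.8
    return high_volume and repetitive and link_heavy
```

On this logic, a sympathetic fundraising appeal and a crypto scam that behave identically get the same label, which is exactly the trade-off being defended: the classifier has no input through which "righteousness" could even be considered.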

Ben Whitelaw:

Yeah, I mean, he's really, really honest. He basically writes through the trade-offs that he's trying to make, right, and unpacks his thinking, and that's really impressive, obviously, in really difficult circumstances. He even admits at one point that he doesn't have a good answer for how you allow somebody in this situation to somehow bypass the rules that have been set up, and that he doesn't want to create a system for that. There are a few alternatives, though. Some people suggested in the replies to his thread that spam labeling, or other labeling, could have a time limit: something that is obviously relevant to this user's situation now, but maybe shouldn't apply in, I don't know, 48 hours or a week or a month or whatever it might be. Do you think there's scope there? Do you think there are still ways of designing a system like that?

Mike Masnick:

There are lots of interesting things, and I actually do think that Bluesky is really open to them. A lot of what they're doing is about user controls, and I imagine that tools for that kind of thing will happen. Already, within Bluesky, you have control over your own moderation tools and settings, and that includes how you deal with things that are labeled in different ways. So you could set up your account to handle spam in different ways: whether it's blocked entirely, or hidden, or shown to you. If you disagree with the labeling there, you can treat it differently, and that's part of the control that something like Bluesky gives you. And of course there are alternative apps beginning to pop up that let you experience Bluesky content not through the Bluesky app, and they can set up different rules as well. If one of those apps decides, we're going to make an exception, this kind of spam we don't consider spam, they can effectively ignore the label that the Bluesky moderation team is putting on this kind of content. So something like a timed moderation setup, I think, would be really cool. It's not there now, but it wouldn't surprise me if it comes at some point.
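The architecture Mike is describing, where the service emits labels and each user or alternative client decides what to do with them, can be sketched roughly as follows. The record shape, the preference options, and the expiry field are simplified stand-ins invented for this example; Bluesky's real labeling system differs, and timed labels are only a listener suggestion at this point.

```python
from datetime import datetime, timedelta
from typing import Optional

def make_label(value: str, ttl_hours: Optional[int] = None) -> dict:
    """Create a label record. Labels annotate content; clients enforce them.

    This is an illustrative shape, not the real AT Protocol label schema,
    and the optional expiry models the hypothetical "timed label" idea.
    """
    expires = datetime.utcnow() + timedelta(hours=ttl_hours) if ttl_hours else None
    return {"value": value, "expires_at": expires}

def visibility(label: dict, prefs: dict, now: datetime) -> str:
    """Map a label to an action using one user's (or one client's) preferences.

    prefs maps a label value (e.g. "spam") to "hide", "warn", or "show".
    An expired timed label simply stops applying.
    """
    if label["expires_at"] is not None and now >= label["expires_at"]:
        return "show"
    return prefs.get(label["value"], "show")
```

The point of the design is visible in the second function: the same "spam" label can mean hide for one user, warn for another, and nothing at all for a third-party client that passes in its own preferences, without the moderation team ever changing the label itself.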

Ben Whitelaw:

Yeah, I mean, I suppose it's all complexity to add to an already quite complex system. Even as somebody who thinks he knows a fair bit about moderation, I haven't gone in and really customized my feeds and the way my moderation works on Bluesky, because it's kind of a big thing to set up, right? You need to think through what it is you want to see and what you don't want to see. And

Mike Masnick:

It's not, it's not that hard. Play around with it. It's fun. It's fun.

Ben Whitelaw:

Yeah, I feel like they could onboard it better, but I think Aaron talking it through in this way is a really helpful thing, and, um, fair play to him for doing that.

Mike Masnick:

I really appreciate the transparency and the thoroughness of the transparency as well. But let's, move on to the next story.

Ben Whitelaw:

Yeah. So this is a story I was reading this week, published in Lawfare, about Spamouflage, which, if folks don't know, is where Chinese operatives post via thousands of different accounts promoting the Chinese government. It's often used to harass Chinese dissidents outside the country. This has been something that has been noted since 2019; it's something that Graphika stumbled across in some of its work, and Meta last year did a big report in which it unveiled that it took down almost 9,000 accounts linked to the Chinese state, which were posting essentially mis- and disinformation. Now, Spamouflage historically has not been very successful: it's not very targeted, not very high quality, and the accounts tend to promote each other. But this new report, based on some work from Elise Thomas, who works for the Institute for Strategic Dialogue, says that these Spamouflage accounts are starting to become more sophisticated, and the way they're doing that is basically hitching their cart to Donald Trump and presenting themselves as MAGA-supporting accounts. So she notes three or four different accounts that have changed their username, changed their banner, and started to tweet more like, essentially, Republican supporters, and they're building up larger audiences and being retweeted by people with serious reach and clout on platforms, including Alex Jones in some places. So this is a really interesting evolution of something that's been there for a while, but which is starting to become more sophisticated in the run-up to the elections later this year. And what we don't know is really the scale of it.
Elise mentions that it's quite hard to know how many accounts there are and how linked they are; they move a lot across platforms, and it's not just on Meta or the major platforms where they exist. They're on smaller websites and forums as well, so understanding the sheer size of it is difficult. But yeah, I thought this was a really interesting read and something I'd recommend listeners delving into.

Mike Masnick:

Yeah, I thought it was interesting. I also thought the timing was interesting: it came out like three weeks after Wired had a big article that basically asked why China is so bad at disinformation, talking about how unsuccessful these accounts have been. There's a quote in the Wired piece that effectively said they were at the level that Russia was at in 2014, so sort of ten years behind Russia, which is generally seen as being much better at the online disinformation game. And it's interesting that now, with this Lawfare report, it's basically saying they've moved up to maybe 2016 in terms of creating MAGA-supporting accounts, generating nonsense online, worming their way into MAGA circles, and getting retweets and stuff like that. It's an interesting evolution. I don't find it that surprising; you assume that sooner or later the Chinese trolls are going to figure this stuff out. But it is interesting to see how that's advanced in just the last few years.

Ben Whitelaw:

Yeah, a couple of elections behind, but making ground, it seems. Okay, let's move swiftly on to a story that is super interesting, Mike, and touches on issues that you cover a lot on Techdirt week in, week out: a TikTok block.

Mike Masnick:

Yes, but a different one.

Ben Whitelaw:

Yeah, not that one.

Mike Masnick:

Yeah. So this was interesting in that there's a set of islands in the Pacific, New Caledonia, which is controlled by France, and there's some controversy happening there because they've changed the law to allow people who have lived in New Caledonia for, I think it's a decade, to begin to vote. And so folks there, especially people who have been advocating for the independence of New Caledonia, are worried that this will further entrench their being beholden to France. There have been protests and some violence, and it's been kind of a mess. And the response was that suddenly TikTok was blocked across New Caledonia. So this is not about worries about China having access to information, or about propaganda or anything. This was very much a: huh, there's a bunch of people who are mad, and it feels like they are organizing and spreading information about protests via TikTok, and the easiest way to deal with that is just to shut down access to TikTok across all of New Caledonia, which is horrifying for a lot of reasons. There have certainly been other cases of apps being shut down during protests, and concerns were raised then too, but this is a pretty big step for a supposedly Western liberal democracy: to say, gosh, this is kind of a mess, we're just going to block this app, and we're not really going to explain why. I mean, a lot of this is assumption that it's connected to the protests and the violence. But it's a big step, and I think a lot of people are raising concerns about it.

Ben Whitelaw:

Yeah. I remember when Nigeria blocked Twitter during protests a few years ago, and I wonder what folks in France would have said about that at the time, right? France, a country where speech and liberty are of the utmost importance. It is funny how things do shift very quickly. And this isn't something the DSA covers; it doesn't apply to the French overseas territories, so there's a jurisdictional issue there, and it will be interesting to see how that shapes up. But this is right in the midst of some really interesting regulatory changes in the EU, and people should be concerned about this, I think. Okay, we will shift now to our last story of today's news roundup. We've got a story from X, slash, Twitter, and I wanted to quickly flag the beef that's been happening in Australia recently with the eSafety Commissioner there. Linda Yaccarino was actually on stage today talking about this and has called the Australian government out for overreaching over this whole episode. But essentially, just to unpack it for listeners: they might remember, we mentioned on the podcast a few weeks ago that the eSafety Commissioner tried to have Twitter take down a stabbing video, a series of videos on the platform, after an event in Wakeley, in a part of Australia, where a bishop was stabbed. After that happened, they removed a bunch of URLs, and Twitter geo-blocked the content there. But Twitter then fought back, saying they didn't think a global takedown should be required and that people should be allowed to decide for themselves. So the eSafety Commissioner went to the Federal Court in Australia and applied for an injunction. Last week, an extension of that injunction was applied for, and it was rejected.
So what has happened is that this content is now banned in Australia but available to everyone else, at this point in time. And so you have this ongoing standoff between these platforms and this regulator in Australia, which, well, it's ongoing, right?

Mike Masnick:

Yeah, and I think it's worth clarifying some of the details here. This is a common thing: when a country says certain content is illegal, a lot of platforms will geo-block it, so just that territory, country, whatever, has the content blocked. And that is considered the best practice, for a variety of good reasons. The problem here was that the Australian eSafety Commissioner came in and said that geo-blocking was not good enough, because Australians could use VPNs and get around it. Which is always the case, right? You can always get around geo-blocking with VPNs. And that is a really major concern, because effectively what they're saying in that case is: we should be able to block content globally. And there are all sorts of jurisdictional questions there. I've talked about this before; one of the earliest stories on Techdirt, 27 years ago, was about the jurisdictional questions of the internet. This is a thing that will never go away, because the internet is global, even though there have been elements of fracturing of that global internet. In fact, one of our first podcasts asked, is the internet still global? But there is this big question of how you deal with that kind of thing, and there have been other cases. There was a big one fought between Canada and the US that really started a decade ago, with Equustek, a company that was mad at a former partner or former employee who was selling similar stuff that potentially violated its IP. They wanted Google to block access for everyone. Google was willing to geo-block it, but said a global block would go too far. That went back and forth in the courts in both Canada and the US, and it got to the point where the US said you can't force Google to block it globally, and Canada said, yes, we can.
And it kind of ended there. It's possible that a trade agreement between the US and Canada that came out later effectively wiped that out, but nobody's really tested it. There was concern that it would lead to a bunch of folks rushing to Canada to try and get global injunctions, and that didn't really happen. But now there is concern about the same situation here in Australia. If Australia thinks it has the right to issue global injunctions, I personally think that would be really problematic: you don't want one country to be able to determine the speech rules for the entire globe. So this is a case where, while I'm certainly willing to criticize the former Twitter for lots of different things, including its very hypocritical stance on a lot of speech issues, this is one where I think it's totally right. It should be fighting any kind of global injunction, and it was correct to stand its ground. And so far it looks like, after a little bit of time, the Australian courts agree with ex-Twitter. Or X, or Twitter, or whatever. I guess it's so tough to say X.
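For listeners unfamiliar with the mechanics: geo-blocking is usually just a per-request country check before serving a flagged item, which is also why a VPN defeats it, since the block keys off the apparent origin of the request, not the person. A rough, hypothetical sketch (the takedown table and function names are invented for illustration, not any platform's real implementation):

```python
# A per-jurisdiction takedown table: content ID -> set of country codes
# where that item must not be served. Purely illustrative data.
GEO_BLOCKS = {
    "post/123": {"AU"},   # e.g. blocked in Australia only
}

def can_serve(content_id: str, request_country: str) -> bool:
    """Return False if this content is geo-blocked for the requester's country.

    request_country comes from IP geolocation of the incoming request, which
    is exactly why routing through a VPN exit node in another country
    bypasses the block.
    """
    blocked_in = GEO_BLOCKS.get(content_id, set())
    return request_country not in blocked_in
```

A global injunction, by contrast, would mean deleting the entry's country set entirely and refusing the content to every request, regardless of origin, which is the jurisdictional leap being fought over here.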

Ben Whitelaw:

It's X now,

Mike Masnick:

Uh, I cannot bring myself to agree to that, but I think this is one where they definitely made the right decision. It was the right call to fight it, and I'm glad that the Australian court shot down the eSafety Commissioner.

Ben Whitelaw:

Yeah, it's X, slash, Twitter as the good guy for once, it seems. Who'd have thought it? Brilliant. Thank you, Mike, thanks for clarifying the situation in Australia there; that was really helpful. So we've now got a bonus chat with TaskUs, where Phil Tomlinson, the SVP of Global Offerings, and I spoke earlier this week about his trip to Dublin for the TSPA EMEA Summit. He's going to walk through his key takeaways and his thoughts on how collaboration in the trust and safety industry is taking shape. Enjoy, and thanks for listening. So, we talked in last week's Ctrl-Alt-Speech podcast about the flurry of trust and safety events happening at the moment, and one such event was the TSPA's EMEA Summit in Dublin, the Trust and Safety Professional Association's big European summit. Now, unfortunately neither Mike nor I could make it, even though I'm just across the water in London, but we're very lucky to have a very respected industry figure here to share his key takeaways from the event: Phil Tomlinson. Great to have you here.

Phil Tomlinson:

Ben, good to see you again, my friend. Thank you so much for having me.

Ben Whitelaw:

How's things?

Phil Tomlinson:

Things are good. As you were saying, I was at the TSPA event in Dublin last Friday: the power of collaboration, with a focus on EMEA, Europe, the Middle East and Africa. I have to say, as someone who's attended, I think, four or five of the events the TSPA have put on so far, pretty much all of them except the APAC summit last year, I think this was my favorite. And I'll tell you why, a couple of reasons. Obviously, a home game is always nice. I'm based in Dublin, so I didn't have to get on a 12-hour flight to be there; from a personal perspective, that was nice. But joking aside, the level of thought that went into the programming, the level of thought the TSPA put into the flow... it was a little bit like Glastonbury music festival: do I go to this, or do I go to that? You kind of have to make some trade-offs. And I think the quality of the representation from what's known as the Global South, and just really the insight and the nuance and the voices that were highlighted, was a real treat. So yeah, well done to the TSPA, Charlotte and Maggie and the team. I think they played a blinder.

Ben Whitelaw:

Brilliant, sounds fantastic. I'm now even more jealous I couldn't make it. So, you obviously went to a lot of sessions and had to choose between tracks. Tell us what your key themes were, your main takeaways from the summit.

Phil Tomlinson:

Yeah. So I think maybe I'll do that by highlighting a couple of the sessions that I really enjoyed. I attended a session looking at cybersecurity and online harm in the context of the MENA region, the Middle East and North Africa, focusing on the Arabic-speaking countries in that region, given by an ex-colleague of mine at Twitter. He did a fantastic job. He spent time at Twitter, then at Meta, focusing on these issues at these large platforms, and has since gone on to create his own consultancy. Fascinatingly, he sees himself as a bridge between public and private. As you probably know, there have been tensions and misunderstandings between the governments of Middle Eastern countries and these large platforms for years, sometimes culminating in the platforms being temporarily shut down or banned, right? He's acting as a liaison or, as he says, a fixer, which I love, between local government representatives, local law enforcement, and these platforms: someone who comes from that world, who's fluent in not just the language but the cultural nuance of how these societies operate, and who also worked for the best part of a decade on the platform side. He's in this unique position, and it was just a fascinating talk, showing the power of collaboration and also the need to bring in folks who can act as integrators between these two worlds. These platforms take a very global policy approach, which I completely understand. I've been on the platform side; it's very hard to make policies for hundreds of different countries or languages or whatever it might be.
So he takes the view that we need this API between these two worlds, in order to empower the people and the voices in those regions, stay on the right side of the law and the right side of norms and culture, and also allow the platforms to scale what they want to do in those regions. That was a fascinating session; really enjoyed it. Another one that really struck me was a panel discussion focusing on the difficulties of multilingual support in a trust and safety context. You always hear about these long-tail languages, or lesser-spoken languages in a region, and it's widely known that certain languages don't get the kind of support that they should at a platform level. This was a group of folks drawn from industry, plus someone from Ofcom, who spoke about how platforms are trying to adapt, both by hiring folks with those backgrounds and by thinking about the role of AI, particularly generative AI, in a world where you need to support 150, 170 languages as a platform, right?

Ben Whitelaw:

And is that something that's evolved in your thinking? How is TaskUs thinking about the language piece in particular? Because you work across all countries, and for different kinds of platforms within that.

Phil Tomlinson:

Yeah, exactly. That resonated for me as someone who operates an outsourcing business where we have a presence around the world: for example, we have a European language hub in Thessaloniki in Greece, our APAC language hub out of Kuala Lumpur in Malaysia, and we also have sites in Taiwan and other places where we deliver multilingual support. And it's hard, right? It's hard to scale those kinds of teams, and it's sometimes hard to find the talent that you need in those markets. I think TaskUs is really laser-focused on our employer value prop, on being an attractive place to work in those markets: how do we bring in folks who have the language skills, who understand the socio-political landscape of the markets they're supporting? Do they need a visa? Do they need relocation assistance? Do they need other enablements in order to come and join us in Thessaloniki or KL or some other location? So that's a big focus for us, particularly this year in EMEA, where we've seen a lot of demand for multilingual services: the core European languages, as well as Nordics, Arabic, Turkish, Russian, Ukrainian. It's been a fascinating journey, and one that we're continually trying to improve.

Ben Whitelaw:

Yeah, and is that prompted by regulation, basically, insisting that the number of moderators matches the number of speakers of that language in a market? Is that how it works?

Phil Tomlinson:

I think, in part, absolutely right. We've seen the DSA come into effect for the second tranche of companies below the VLOPs, I think in February of this year. Traditionally, the demand for high-scale language support has come from the large enterprise platforms. We're now seeing that demand filter down to what I would call the mid-market, or the smaller companies (they're not small by any means; these are multi-billion-dollar companies), but these are companies that are beginning to invest in support ecosystems that go beyond just English and a handful of other major languages. And that's where someone like TaskUs really does add value, right? We can offer them up to 30 languages, in locations that, as I said, are both attractive and places we can deliver high quality from, and, to your point, meet the surging demand, the peaks and troughs, that come from an industry that's becoming more and more regulated.

Ben Whitelaw:

Yeah, makes a lot of sense. Mike and I have talked in previous podcasts about the Arabic language question as well; we did a podcast on Meta's banning of the word "shaheed", and that's a really interesting, obviously very current, region of the world. Going back to the Dublin summit, I wondered what you were taking away from it as practical things to implement, for somebody like myself who unfortunately couldn't make it. Where are you thinking about using the nuggets from that conference in your day-to-day work?

Phil Tomlinson:

Yeah, it's a great question. Going back to the overarching theme of the power of collaboration, I certainly see scope for multiple parties. Think about large enterprise platforms, think about startups: as you know yourself, there's a wealth of startups that have come up in the last 24 months around the problem of trust and safety, either developing niche boutique solutions, building middleware-type tooling, or everything in between. So certainly I think there are opportunities for companies like TaskUs to partner with some of these really smart startups out there. And in fact, we've done that: we recently launched a partnership with an AI data-labeling platform out of London called V7, and we've been working with the good folks over at Cinder for a while. There's a lot of that collaboration, as platforms see that no one company is going to solve everything. This is too big and broad an issue, and if there's a company out there saying, hey, we have everything, I'd be questioning that, right? We want to specialize in things we know we can do well, bring in partners, and then over time build our capabilities that way. So the first thing I would say is: let's pull down the veil, the fog of war of competition, and say, okay, how do we collaborate? How do we augment and complement each other in the complex ecosystem that is trust and safety? The other thing that really struck me was the power of giving a platform to voices that don't always have a say in these conversations. Like I said, it was amazing to see folks from Myanmar and Sub-Saharan Africa and North Africa and the Middle East speaking very prominently, and with great authority, on stuff that, frankly, was eye-opening to me. I've been in the industry
almost 20 years, and I learned a whole bunch of stuff on Friday. To that end, at the main TrustCon event in July, taking place in San Francisco (I think this is the third one), TaskUs has been a marquee sponsor for the previous two years. This year we're taking a slightly different approach, and we're going to put in place a kind of scholarship. I call it a scholarship; it's more like a travel support program, where we're going to pay for folks from around the world who may not otherwise have a chance to come to San Francisco, given the airfare and three nights in one of the most expensive cities in the world, to attend the conference. We're working with the TSPA to facilitate bringing people into the conference so that they can make their voices heard, but also network, find jobs, find synergy, find compatriots, in a world where you can sometimes feel very alone and isolated, right?

Ben Whitelaw:

That's really nice to hear. Like you say, one company can't win outright; it's one of those rising-tide-lifts-all-boats situations, and it's really nice to hear that TaskUs is thinking in that way. And in terms of TrustCon, that is a conference I'm hoping to be at. Mike and I will be doing a live recording of Ctrl-Alt-Speech there, so I'm going to plug that. But what are you most looking forward to about that event? Are there particular sessions you've got in your head that you're definitely going to attend, or ones that TaskUs is putting on?

Phil Tomlinson:

Yeah, so I haven't yet mapped out exactly. I'm trying to think... I'm not even sure if the,

Ben Whitelaw:

it's not public, I

Phil Tomlinson:

I don't think the agenda has been announced yet, so I hope to attend a bunch of things once it is. I can tell you that TaskUs has put a number of proposals on the table for the event, and we're pretty confident some of those will be accepted. And in addition to sponsoring folks from all around the world to come and attend, like I said, lifting all boats, we're hoping to have an organic, grassroots presence, where we'll have TaskUs employees and affiliates speaking with authority on the various stages and in the rooms. But we'll have to hold off until the agenda is formally announced; I don't want to be a spoiler here.

Ben Whitelaw:

Great. But people, if they're listening, they can come and find you at TrustCon. They can come and find your team. They can talk to you about some of the issues we've talked about. And, um, you'll be around

Phil Tomlinson:

Absolutely, I'll be there. I've got my flights booked already; I can't wait. It's one of my favorite weeks of the year. I heard someone say on Friday that it's a little bit like a school reunion, right? If you've been around trust and safety long enough, you see the same faces, the same people. So it feels a little bit like a high school reunion, except you actually like these people.

Ben Whitelaw:

No one bullied you.

Phil Tomlinson:

No, absolutely, no one bullied you. The other thing I would say: props to the TSPA. This is a small thing, but I want to call it out. At last year's TrustCon in San Francisco, I gave them some feedback, I thought very nicely worded feedback, saying, hey, I don't like the lanyards. That's one of my big bugbears of conferences. If you've been to as many as I have, you get this lanyard with your name on it, and ten seconds later it's flipped the other way around, and all anyone can see is the name of the conference and maybe the company you work for. For someone who's very bad at remembering names, this is a nightmare. I'm like, oh, there's a face I know, I should know their name, but the lanyard is not helping me out. So I said to the TSPA, hey, we should think about a solution for this. On Friday, I get there, and they had the best lanyards I've ever seen: double-anchored, name on both sides, big bold letters. I know I'm sort of joking a little by saying this, but for me it highlights the attention to detail the TSPA has put into these events. As trust and safety professionals, you realize small changes make big downstream impacts, and the fact that me, and anyone else who was there, didn't have to squint, or pretend you knew someone's name, or wait to be introduced and have that awkward moment... I actually think there's probably five, ten, fifteen percent more engagement that that simple lanyard change is driving. As trust and safety professionals, we're used to living in the detail, living in that nuance, and there was a nice little allegory there of how folks are taking a very iterative, human-centric approach to just, you know, a conference.

Ben Whitelaw:

Yeah, I love that. I'm looking forward to wearing my lanyard in San Francisco already. That's a teaser in the most positive sense.

Phil Tomlinson:

Yeah, look out for the lanyards. They're top notch.

Ben Whitelaw:

Awesome, Phil. Thanks very much for joining us today, and thanks for taking the time to unpack what you learned from the TSPA Dublin event. Looking forward to seeing you, and many other listeners, in San Francisco in not very long now, in about six weeks' time.

Phil Tomlinson:

Yeah. Thank you for having me, Ben. It's always a pleasure. And I look forward to seeing you over in San Francisco. If not before.

Ben Whitelaw:

Brilliant. Take care. Thank you

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.