Ctrl-Alt-Speech

An Appeal a Day Keeps the Censor Away

Mike Masnick & Ben Whitelaw Season 1 Episode 32

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So you and I, Mike, we're a long way from university and college, right? It's been a few years, but if we were back studying, we might be using the secret crash app, Fizz Social. It's a new one. You might not have heard about it. And if we were using Fizz Social, we might be asked, when prompted, "What's fizzing?" I don't think either of us would have any credibility if we responded to that in any other situation, but Ctrl-Alt-Speech is fine.

Mike Masnick:

All right, all right. Within this context, well, since that's a made-up word and it means absolutely nothing, I have no idea, even within the context of this social app, which apparently is hot on some college campuses, what's fizzing. I can make up what it means.

Ben Whitelaw:

Yeah. Please do.

Mike Masnick:

And so, I'm going to say I had to travel this week and, went to a conference and I'm still a bit jet lagged. So right now my brain is fizzing. What about you, Ben? What's, what's fizzing with you?

Ben Whitelaw:

What's fizzing is probably the Discord server of, uh, that AI companion app. I'm guessing that's fizzing, um, with a lot of concerned users, which we're going to talk about in today's episode. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. My name's Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm back with Mike Masnick, freshly returned from a weekend in Bologna. How are you doing, Mike?

Mike Masnick:

I'm doing well. I enjoyed it, I got to go to the INHOPE Summit this week and gave a keynote talk there, which was really, really interesting. A lot of discussion on child safety issues, and, uh, I really enjoyed talking to folks there and hearing the latest on that and, you know, keeping up on all these things that are obviously very important subjects.

Ben Whitelaw:

Great. Awesome. I mean, I was just stuffing my face with pizza and pasta, um, and then listening to you and Kate debate away last week. It was a great episode. Um, she didn't hold back.

Mike Masnick:

Yeah, and I enjoyed that. And then the week before was you and Kat, which I thought was really interesting. So this is the first time we've been back together, you and I, podcasting. So it's, uh, it's nice to have the official crew back together for this week.

Ben Whitelaw:

Yeah, exactly. In, uh, the Britain-US pairing in its original form. Since we last got together, Mike, we've had some reviews. People have heeded our call for some feedback, and most of it's been nice. Um, I've got to say, did you have a favorite from the reviews that have come through?

Mike Masnick:

Yeah, there are a bunch of really nice ones. It's always good to hear good reviews. And so I liked this one that came in a few weeks ago from someone who gave us five stars. Always nice. We appreciate that. But you know, honest feedback is important. It said, "I'm not in the trust and safety industry," and I think that's nice to hear, because this is not just a podcast for people in the trust and safety industry. It's supposed to be for anyone who's interested in issues around online speech. And it says, so, not in the trust and safety industry, "but the discussion is easy to follow and interesting for anyone with an interest in the topic." Exactly what we're looking for. And noted: "I particularly appreciate that Ben and Mike are bringing in several women experts for a non tech bro perspective." Does that mean we're the tech bros here?

Ben Whitelaw:

I think, no, I think it means we're also not tech bros: neither women nor tech bros.

Mike Masnick:

Yes, yes. But, um, no, I mean, I think that is actually one of the things that we do think is important: bringing in all different perspectives, certainly not just the tech bro perspective, not just the US perspective, and really having broad, thoughtful discussions on these topics. And I hope we're succeeding. And so it's nice to have, uh, reviews that suggest we're hitting that target that we're aiming for.

Ben Whitelaw:

Yeah, definitely. We also spotted a post from a listener, and I want to know what listeners do while they're listening to the podcast, Mike. This is my new thing. Okay. So we've got a, uh, a skeet, a post on Bluesky, that said "doing laundry and catching up with Ctrl-Alt-Speech." Okay. So I was like, oh, that's kind of nice. You know, I can see how that would work. Then I also got a text from a listener who I happen to know personally, and she said that she listened to the podcast while doing laundry. And now I'm just wondering how many people listen to Ctrl-Alt-Speech while doing laundry and what that means.

Mike Masnick:

That is a good question. Is there some sort of nexus between laundry and listening to us ramble on about online speech?

Ben Whitelaw:

Yeah. Maybe it's that in the hour that we talk, you can get most of the laundry done: washing, folding, ironing. You know, maybe that's it. Maybe that's it. And you know, now I want to know what everyone else does. Okay. So if you're listening to this and you want to tell us what you do while you're listening to Ctrl-Alt-Speech, drop us an email at podcast@ctrlaltspeech.com or send us a message on one of the various platforms. Maybe we'll make this a new feature. Might be fun.

Mike Masnick:

See, now I'm trying to think of what I do, because I do listen to every episode. Uh, it's, you know, it's painful to listen to yourself, as many people know, but I find that it's probably important to making me a better podcaster. But yeah, I mean, I try and sneak in podcast listening whenever I can: doing dishes, going for a walk, driving places, probably doing laundry too. I'm sure that's happened.

Ben Whitelaw:

Yeah. Okay. Well, maybe that's the thing. Boring tasks plus Ctrl-Alt-Speech equals slightly less boring.

Mike Masnick:

There we go. There we go.

Ben Whitelaw:

Great. Let's crack on with the stories today. We've got some really interesting stories. They kind of knit together neatly, which is quite enjoyable; when we look at each week's stories, we're just trying to figure out what the connections are, what dots we can join, and I think there are some nice linkages today. So let's start, Mike, with this: it's been a while since we've talked, and it's been a while since I've heard about the mad lawsuits that have been filed in US states. So it's appropriate that we're back together and there's been a story this week.

Mike Masnick:

13 states and the District of Columbia all filed lawsuits against TikTok that they all worked on together, but it was a little bit different than normal in that each of them filed the lawsuits individually within their own states and the District of Columbia. Uh, and so they filed them in local courts rather than federal court. Most of the lawsuits of this nature have been filed in federal court. And each of them filed their individual lawsuits, but they all obviously worked together, because it's all based on the same basic thing. So there are little differences between each one, because they're all relying on state laws that say slightly different things, but each basically says TikTok is bad for kids' mental health. They're very, very light on the details within the lawsuits, within the complaints. They sort of mention the scrolling and the challenges that happen on TikTok, and claim that TikTok was trying to get kids to use the app more and therefore this is some sort of consumer harm. I did look at all 14 of the filings. Texas also filed one that was slightly different, so you could say there were 15 lawsuits filed in the last week against TikTok, all within state courts. And it just feels to me they're all based on this idea, this narrative, that TikTok is bad for kids' mental health, and therefore we can sue about it. And that's it. Like, they don't really have details that support that. They have vibes. TikTok bad for kids, therefore we can sue. It feels very, very weak to me.

Ben Whitelaw:

And so this is based upon teens and children suffering mental health issues as a result of using it, right? Like, as a result of excessive use. Is that right?

Mike Masnick:

I mean, those are the claims that they make, but not clearly and not directly. They can't directly tie it to TikTok; they sort of generalize around it. And they basically just say that these various features that TikTok has are why we have these problems, but they make a whole bunch of jumps and assumptions that are not clearly supported. You know, and we've talked about this before, but not clearly supported by the data. So they just point to the For You feed, autoplay, endless scroll, ephemeral content. They talk about, you know, Stories and TikTok Live, which are relatively new features whose effects we don't even know yet. Push notifications, like, every app today has push notifications. And then also they say likes and comments. So here we are. We just started out this podcast asking people to comment. Are we negatively impacting the mental health of our listeners when we do that?

Ben Whitelaw:

I mean, it's bad enough we ask them to listen, Mike, let alone comment. Yeah.

Mike Masnick:

It just feels really, really weak. And, you know, there are all these other things where it's like, yes, you can see some areas where maybe some things can lead, for some kids, to certain problems. I talked about beauty filters. But they also just cite things that are not really proven. So they mention the Surgeon General's idea for health warnings, which we've talked about, which is not based on any science. They talk about teen mental health being in a crisis, which we've also talked about, where the evidence does not show that it's directly caused by these apps, and the lawsuits just sort of assume that it is. They also compare it to substance addiction. They say that it's just like being addicted to drugs. And we know that's not true. We know that there are different issues there. And yet the lawsuit, really the lawsuits, plural, read as though they sort of took the media narrative and assumed that all of it is true, and they just don't seem to feel that they need to go any further in terms of actually proving the details. It's just this assumption out there: because some kids, and nobody denies this, some kids have struggled with mental health and have also used social media. And again, most of the evidence suggests that for most kids, it is situations where they're not getting the mental health support that they need, and therefore they turn to social media and use that more, which is not a great situation by any means, but it's not that the apps themselves are causing it. But the assumption is then flipped, and there's this larger assumption that the apps are inherently harmful, and therefore the people who made the apps should be held liable for this inherent harm that goes across it.

Ben Whitelaw:

I mean, I feel like I remember there was a Guardian article from about 18 months ago that had a headline something along the lines of "we just know so little about what TikTok does to our mental health," and I don't think we've actually got any further than that. And I think these lawsuits kind of demonstrate that in a way, don't they? Like, there's so little robust evidence as to its effects on populations in general, let alone children.

Mike Masnick:

Yeah, but I don't think that's true, right? I mean, I do think that people have been studying this now, and the studies are coming back not showing any inherent problems with it. Some kids definitely struggle, but there are a number of other factors, and that's where the evidence is. And I think actually that takes us to a point where I want to transition to this other piece that I think is really important and that I want people to read if they haven't yet, which is from danah boyd on her Substack. And hopefully most of the people listening to this know about danah. If you don't, she has been the leading researcher in this field going back many, many years. You know, she wrote a book a decade ago about kids and social media, and really has spent tons of time talking to kids, talking to social media companies, talking to parents, doing all this research and actually understanding this stuff, and she has a much deeper, more thoughtful conception of all of this. And she wrote a piece called "Risks vs. Harms" that I think is really important for people to read in thinking about this, and I wish that the people who filed these state lawsuits had read it. And the point that she's making, as she says in it, is that there's a difference between the question "does social media harm teenagers?" and "can social media be risky for teenagers?"

Ben Whitelaw:

Mm

Mike Masnick:

And it's undeniable that there are certain situations in which it can be risky for teenagers, but that doesn't mean that it is inherently harmful for teenagers. And she goes on to talk about thinking about risks and harms as two different things, and that often people conflate these things. And as she notes, there are all sorts of things that we do that have some level of risk involved, and we do things to try and judge that risk, determine how safe we feel, and then minimize the risk. So going outside and walking down the street and crossing a street is potentially risky. You could get hit by a car. You could get, you know, I don't know, exposure to the sun and get skin cancer. Like, there are all sorts of risks, and we do things to try and minimize them. If you're a child, we hold your hand as we cross the street. You know, eventually as you grow older, we teach you how to look both ways and do things to minimize the risk, but there is still some level of risk. But we don't then say, like, we need to sue the streets or sue the cars for existing because every once in a while a car does hit someone. She also compares it in this piece, which I think is really interesting, to skiing, a favorite activity of mine, which I also know has some inherent risks involved and is not the safest of activities. Like, you can definitely get really hurt, and lots of people do. And yet we're not talking about passing laws to shut down ski resorts. What we do is we teach people. We teach people how to do this. We teach them how to minimize the risk, and we still expect that for some percentage of people there is going to be some harm. They will fall on the wrong side of that risk. And...

Ben Whitelaw:

I mean, it is a really interesting piece, and I included it in Everything in Moderation this week as well. I think it's a great read; definitely go and read this, everyone. And she makes the point that risk reduction is about socializing the risks, people understanding what they are, educating people around them, and people having agency to decide for themselves, which obviously in the real world we've done pretty well. We haven't done that as well online. And I wondered, danah kind of says that we will get to the point where people will understand how to manage their risks. When do you think that will be? Because, you know, the internet has been around a long time. A lot of people have spent a lot of time on the internet in various guises, trying to navigate the intricacies of online speech. So when do you think that will happen? Um, and what's going to make us get to that point?

Mike Masnick:

I think, I mean, it is happening. It's just that, with something new, it always takes a little bit of time. And this is the argument I've been making for a long time, which is that historically societies figure these things out. And a lot of that is that socialization of the risks, and understanding how to handle them and how to minimize them. And we're seeing more and more evidence of that. You know, people get mad when I talk about things like media literacy, but that's really what we're talking about here: if you have a better understanding of these things, you learn how to handle it. You learn how to minimize the risk for yourself. You learn how to take precautions. That doesn't mean, and danah makes this really clear in her piece, that there aren't still ways to talk about these things on the design side. She goes deep into the questions of design, because so many of the regulatory approaches here are focused on design, effectively: we have to force the companies to design their apps and services in ways that are inherently safer. And there are reasons to think about that and how to do better design stuff. But some of the harms are not really from the design. A lot of the risk is from the speech itself. And that's where the problems come in. And we see this with, like, you know, in California they had the Age Appropriate Design Code, which was a design code, but again, it wasn't really, and that's why the courts rejected it as unconstitutional, because they saw that it was really about regulating speech under the guise of regulating design. And so you don't design your way out of these things. You can do some things to help minimize stuff. You know, getting back to the street analogy, you design crosswalks and you design traffic lights and signals that help. And, you know, there are other things that people do. There are some places that now have, like, flags: if you're crossing the street, you grab a flag so you can, you know... so there are some design things and it...

Ben Whitelaw:

Wait, where do they have flags?

Mike Masnick:

Oh, you haven't seen this? This is fantastic. I've seen it in a few places where they have, on poles on either side of a crosswalk, a little bucket with little orange flags. And so you pick up the flag as you cross the street and you sort of wave it so that cars will see and know. I've seen it in a few places. I'm trying to think, because it's not near me, but I've definitely seen it in a few places. I think I saw it in Wyoming recently when I was on my trip this summer. We saw it in Wyoming, but I've seen it in a few other places as well.

Ben Whitelaw:

Sorry to stop you in your flow, but I just... where

Mike Masnick:

Yeah, yeah,

Ben Whitelaw:

is this?

Mike Masnick:

It's... I've seen it in a few places. I think I also saw a video of one they had in, like, Tennessee somewhere, but I didn't see that one in person. But I've seen it in a few places.

Ben Whitelaw:

But the point is,

Mike Masnick:

The point is, it's like a design way of trying to make things more safe, but a lot of this is just sort of educational. And this is a story we hadn't planned on talking about, but I'll just throw it in as well: of all the bills, I criticized a whole bunch of these bills in California, but they recently passed a media literacy bill, specifically about AI and technology. And I think that kind of stuff is really important, where we're teaching kids how to use these things properly. And that doesn't mean banning them entirely. It's saying, like, learn how to use it appropriately so that you are minimizing those risks and you're taking precautions and you're making smarter decisions yourself, rather than expecting the company to magically protect you from all potential harms.

Ben Whitelaw:

Yeah. I mean, those two stories for me indicate once again that we're still in the problem definition phase. You know, we still really don't have consensus as to what the problem is, broadly speaking. danah's been writing about it for years, and yet this idea of risk versus harm is still something that is being talked about and refined and kind of...

Mike Masnick:

Well,

Ben Whitelaw:

...thought through. And that, for me, is an issue. And I wonder, when do we get to the consensus that we need to then start to figure out what happens?

Mike Masnick:

Well, I think part of the issue is that when something does go wrong, it's horrible and often traumatic. And we have all of these examples of traumatic things, of kids dying, right? I mean, the worst possible thing. And so that leads people to want to take action, and that is understandable. And often they want to find something to blame, and social media, because it's new, becomes a really easy target, and because everybody uses it. But I think that the really useful thing about danah's piece, while it may not be new, is that it's just a really good way to reframe the thinking about this. When we're talking about these things, are we talking about an inherent harm? Is it something where we have designed a system where almost everybody who goes through it is likely to be harmed, or where the risk level is so high that everybody should realize there is a problem? Or is it one where there is just this inherent risk, and we have not done enough to train people how to use their own agency to reduce the risk? If we start to think through this lens of, are we talking about an inherent harm, or are we talking about levels of risk that can be managed by individuals, it paints a different kind of picture, a different prism to look at these things through, and a much more useful one. So what I think is useful here is at least using this as a framing device in how we talk about these issues of risks and harms and how to move forward from that.

Ben Whitelaw:

Yeah. And how to identify as a kind of responsible risk taker on the internet as well, because risk, as you say, isn't always a bad thing. Okay. We've covered a few stories there. Let's shift now to, I guess, a succession of platform stories that demonstrate that, really, the risks are there and they're being felt by users and dealt with by platforms, not always with the best outcomes. And this is a Verge story that we both noted this week, which basically noted the kind of oddities of moderation on Instagram and Threads recently. It's kind of pulled together a few examples across the two Meta platforms and asked the question, like, what the hell's going on with moderation on these two platforms? So just to give you a sense of what's going on, you might've seen some posts on various platforms about this yourself as a listener: people saying "I'm having my account taken down because Instagram thinks I'm under 13," or "my Threads post has been taken down because I've said a word and the AI moderation has kind of decided that that's not acceptable." Some really odd examples, and actually a couple of the Verge reporters have been caught up in this themselves, so I think that's kind of where the story came about. The most egregious one I thought was interesting was a kind of social media consultant figure called Matt Navarra, who posted a BBC News article about Tom Brady, the NFL player, falling for an AI hoax. That post was flagged, he got downranked, and obviously he then talked about it and posted about it. So basically lots of weird things going on. And, you know, an indication, I think, that increasingly platforms are using AI and automated moderation to do the work. That's kind of all well and good, and in some sense we also noted that TikTok this week announced that they're going to be moving much more of their moderation to be automated. And so you have this kind of situation where AI moderation is not working particularly well right now on the bigger platforms, where the resources are available and they've got much more experience of doing it, and more platforms are going to go down that route, and basically users aren't able to contest the decisions that this moderation is making. And it made me think about an issue that I had myself. And we talked about this briefly, but when Everything in Moderation got blacklisted on LinkedIn, I told you about this, didn't I? Um, very weird situation where essentially, one day I rocked up, tried to paste the link to the newsletter, and I was told that the URL... LinkedIn thought it was malware and that it was dangerous to other users. And I was...

Mike Masnick:

from an informational standpoint.

Ben Whitelaw:

Yeah, I mean, you know, it's a subjective thing, I guess. And so I ended up having a long process of trying to regain the ability to post on LinkedIn. And just to kind of explain what that looked like, because this is happening many, many times over, and I hadn't quite grasped how difficult it was until I was on the receiving end: I ended up having to speak to LinkedIn. I filed a report. I had a six or seven email back-and-forth with a LinkedIn representative who said, it's all fine, the preview's appearing on LinkedIn as it would normally do if you post a URL, you can go ahead and do this. To which I replied, actually, it's not working. They said, have you checked, uh, the SSL certificate of your website? You know, have you done X, Y, and Z? And I said, I've checked all of the above, you know, it's all above board. It's still not clear to me why my website, which is ironically about content moderation and online speech, let's not forget, is being prevented from being posted. And I was sent on this complete round-the-houses trip. They sent me to a website, VirusTotal. Don't know if you've heard of VirusTotal. Has it come across your radar?

Mike Masnick:

Never.

Ben Whitelaw:

It's a kind of Spanish cybersecurity firm that was founded years ago and bought by Google. And it kind of aggregates lots of different signals, antivirus and cyber signals, and then packages them up, and that allows companies like LinkedIn to buy them and be able to kind of bake that into their moderation processes. They said, actually, it's not us. We haven't decided that your website is an issue. Go and speak to this company called CyRadar, who are one of the signals, one of the inputs, for the service. And so I then go to this company, CyRadar, which is a similar kind of cybersecurity company based in Hanoi. And they confirmed that it was a false positive, but were very slow to reply. And I ended up having to go on Facebook Messenger, Mike, to find a pretty much defunct Facebook account of this company, CyRadar, to ask them to unblacklist me and to allow me to post the newsletter again. And literally about three hours later, I was able to post the Everything in Moderation URL again. And it was all kind of automated moderation. Like, it was all AI moderation, models that were doing the work in the background. And it made me realize just how helpless users can be when they don't have the ability to contest these issues. And the fact that we're seeing a bit of a trend of this in the past week or so is definitely worth...

Mike Masnick:

Yeah. And it's interesting to me on a few different levels. You know, last year we released two different trust and safety related video games, and one of them, Moderator Mayhem, is one where you're a frontline moderator. And we added a level, and we almost did this a little bit as a joke, we added a level where the company that you are working for turns on its AI moderation system and promises you that the appeals that reach you should be at a higher level now, because the AI will pre-vet them. And when that happens, you start getting the dumbest possible AI decisions, just, you know, a whole bunch of things that misunderstand words. And I found it funny because, you know, one of the things that came up in the Instagram and Threads moderation thing was the use of the word cracker, which was originally used in the context of Cracker Jacks, a very popular snack. I don't know if it's popular in the UK, but very...

Ben Whitelaw:

I was going to ask you, are Cracker Jacks nice? They look delicious.

Mike Masnick:

Oh yeah, they're wonderful. It's a little bit of an old-timey kind of, uh, snack, but you know, it had an association with baseball, and the song "Take Me Out to the Ball Game" actually mentions Cracker Jacks. It's, you know, caramel and popcorn and peanuts and stuff. It's...

Ben Whitelaw:

What's not to like?

Mike Masnick:

Yeah, it's very tasty. But anyway, somebody had mentioned that and got banned, and it was because of the use of the word cracker, which in certain contexts can be a slur. And then anyone who was saying cracker in any context was getting suspended or banned. And that's the kind of thing that we put into Moderator Mayhem as a joke: exactly that kind of word that has two meanings, and in one context might be problematic, but in many contexts is not. And we kind of thought that was a joke. And I almost felt bad, because when that game came out, the AI stuff was getting more and more popular, and we were sort of joking about, like, AI is going to be terrible at content moderation. And then, since then, we've seen examples of AI moderation actually being, you know, pretty compelling and pretty good. And so I was like, oh, maybe we went too far with that in Moderator Mayhem. And now it's like, no, it looks like we didn't. But then, of all things, you would think that Meta would have a better sense. Like, I could see this happening for some smaller company that doesn't fully realize, but you would think that Meta, with all the trust and safety expertise that they have and the AI expertise that they have, would never get to that level. But it does, and that is a really sort of interesting thing to think about. As we move to a world where, no matter what, AI is going to become more and more important within the trust and safety world and within the content moderation world, the fact that Meta, at this stage of the game, is making such ridiculous looking mistakes... And we can say, like, I talk about it all the time, the impossibility of doing content moderation well: you are always going to make some kinds of mistakes. But these kinds of mistakes are so embarrassing and so ridiculous. Because the other one, which we didn't even talk about as much, which was crazy, and which you sort of mentioned a little bit, was that this Verge reporter was told that their account was being taken down, their account that they've had for a long time, since before, they said, Facebook bought Instagram, which would suggest how old they are. They were told that they were under 13 and their account was being suspended. And they actually uploaded their ID as part of the appeal to show they're over 13, and the appeal was rejected. And you're just like, the systems are out of control at this point.

Ben Whitelaw:

yeah, yeah.

Mike Masnick:

It's, you know... this is not just a case of these being hard calls. These are the AI just making blatantly wrong calls.

Ben Whitelaw:

Yeah. And we've talked about the kind of response that platforms usually come out with once a technical issue like this happens, and it's usually, "this was a technical glitch, we've now fixed it." Like, somebody at some point will probably recognize that one of the many systems, much like in my example with Everything in Moderation, wasn't working as it should and has been adjusted as a result, and things kind of went back to normal. Does it give users any more confidence that this is not going to happen again? I don't think so. You know, the worst part about it for me is that in the examples I mentioned, and in my own example, it relies on the user having experience and understanding and personal connections to fix it. That's the real worry, right? You know, I was able to reach out to somebody at LinkedIn who I knew worked on the trust and safety team. Matt Navarra, the kind of social media consultant guy, he tagged Adam Mosseri, the head of Instagram and Threads, who replied to him within a few hours, if not less, and said, we're working on it. And then, you know, a bunch of time later, it was fixed. Like, if you're not somebody who knows what to do... and in the comment thread underneath Matt's post, there's a whole bunch of people who've had their accounts taken down for very innocuous things. If you don't know what to do or where to go, you are just stuck. And...

Mike Masnick:

Yeah, and I think that is a concern, but it also is more complicated than that in some ways, because these are the cases where there are legitimate concerns that a mistake was made. The problem is that even when decisions are made that are legitimate, or have a legitimate reason behind them, people still get upset. This is part of the impossibility of content moderation at scale. I am now experiencing some of that myself. You know, as we have mentioned, I'm now a board member at Bluesky, which I think some people think gives me way more power than I actually have. There are some people who have been moderated on Bluesky and had labels applied to their account, and I get skeets in my direction constantly, like, "Mike, fix this. How come I am getting labeled as being intolerant?" And it's like, it's not my decision to make. So I can understand why it becomes really difficult. Like, yes, you're right, in your case you were able to reach out to someone, or, with the Verge article, they can write an article and that will tend to get attention. And that feels problematic, but there are also people who are going to complain with less than legitimate complaints. And so it's tough to work out a system where only the good complaints get through and not the, uh, less legitimate complaints.

Ben Whitelaw:

Sure. It sounds like you're making out that I deserved it. Are you saying...

Mike Masnick:

Well, you know, you, you might, I mean,

Ben Whitelaw:

I wrote,

Mike Masnick:

go,

Ben Whitelaw:

I think I wrote about Pornhub. I think that was what my problem was.

Mike Masnick:

that, that, that could be it. That could be it.

Ben Whitelaw:

Legitimately. Uh, so yeah, I mean, this leads us really neatly onto our next story, which, um, was a big story this week and is a kind of perfect other side of the coin to the story we've just talked through, about the Appeals Centre Europe. Mike, tell us what it is and what's happened.

Mike Masnick:

So this is big news. This is sort of an offshoot of the Oversight Board, which we've talked about before, which was originally put together by Meta to handle appeals on, you know, higher level decisions. And obviously we've talked a lot about the Oversight Board over the years. If you are listening to this, you probably know who the Oversight Board is. But they have launched this new thing, which is being funded through a $15 million grant from the Oversight Board Trust but is then, in theory, independent, called the Appeals Centre Europe, which can be shortened to ACE, A-C-E. And it is one of these appeals boards that is being set up under the DSA. And we've talked about the DSA a bunch, but one of the interesting components of the DSA is that it does require tech companies to allow for there to be third party appeals boards, these sort of alternative dispute resolution kinds of systems out there. And in the last few months, we've seen a few of these different services launch. And the idea is that if you are in one of the situations that we just talked about, around a moderation system that you feel has gone wrong and done you wrong, you can appeal not to the company but to a third party board that will review it and then make a decision. And so ACE, Appeals Centre Europe, is the newest and probably the one that is getting the most attention, because it has that connection to the Oversight Board. Though, interestingly, it is not just going to be for Meta properties. It will also cover YouTube and TikTok. And they have a system where, if you feel that you were wronged by one of those, you can appeal. You have to pay five euros to appeal, but if you win your appeal, you will get your five euros back. And the companies, I think, are paying something like 90 euros per issue as well. Um, so some of the interesting things here are that it is a separate appeals board, it is a way to challenge some of these decisions and have an outside party look at them, and, this is important, it is not binding on any of the platforms. So, you know, LinkedIn is not one of these, but I know some of the other providers out there are handling LinkedIn complaints. So in theory, you could have gone to one of them and complained about your Everything in Moderation links being blocked. They could look it over, say, this is ridiculous, and send their decision to the company, and then the company looks it over. They are obligated to consider it in good faith and then decide, but they are not bound by it. So they can choose not to follow these decisions.

Ben Whitelaw:

Yep.

Mike Masnick:

I was going to say it's,

Ben Whitelaw:

Right? Like, this is a kind of big new marketplace for user appeals, and like a massive shift.

Mike Masnick:

It is a big change, and it is a change that has developed under the law. And so there's been a lot of talk about the Oversight Board over the last few years, and this idea of having a different kind of dispute resolution mechanism for social media, and it's been an interesting experiment, but this makes it much broader, right? The complaint with the Oversight Board was that it has that sort of connotation, and I know people hate this, of being like the Supreme Court, that it's taking up the big issues, the big sort of policy kinds of things. Whereas under the DSA, these are supposed to be about smaller issues. You know, the kinds of things like, hey, I can't share Everything in Moderation anymore. Hey, Instagram says I'm under 13, even though I've been on their platform for 14 years.

Ben Whitelaw:

Yeah.

Mike Masnick:

These kinds of decisions where you can have an outside party look at it, make a decision, and send something that the company will actually pay attention to. So you don't have to do what we do, which is, like, find somebody that you know to complain to, who will pass you on to somebody else. And so it's creating this process. I have no idea how it's going to work. And the interesting thing here is that there are now a number of these different bodies, and there will probably be more coming. I think we were saying before, there's already four of them announced.

Ben Whitelaw:

This is the fourth, yeah.

Mike Masnick:

And there's like a competition element there too, because apparently the way it works is that you as a user can choose which of these boards you go with, and then you can't switch; you know, once you've chosen, that's who you've chosen for this particular thing. So there's this sort of competition element. It's this multi-sided market in an interesting way where, you know, you can complain to different boards, and the companies will maybe take some boards more seriously than others. I think it's just a really fascinating experiment. We don't know; there are all sorts of reasons why it might not work. And I think there'll probably be some hiccups along the way, and we'll definitely see some of the challenges, but I think it's a really interesting approach and one really worth observing and seeing what it leads to. And, I hope, being able to iterate and take lessons from it, and maybe something much better can come out of this. You know, one of the complaints that I've had for years in talking about the big tech companies, Google and Facebook and all of them, is that they don't do customer service well. They're not designed to take customer complaints. It's very difficult. Like, who do you call? You can sort of send a message off into the ether and never get a response. Whereas this is a way of sort of forcing that on them and creating an interesting sort of marketplace for it. And so I'm really fascinated to see how this plays out.

Ben Whitelaw:

Yeah. I mean, this kind of mechanism is used in the EU in other areas, right, in the kind of consumer world: if the service or product you're being provided is unsatisfactory, there are processes in place for that. So it's kind of been taken from that world and applied to the DSA in what is known as Article 21. Alice Hunsberger, who writes Trust & Safety Insider, the kind of Monday newsletter that comes out under the Everything in Moderation brand, you know, she's really excited by this, because it's the biggest shift in user appeals really since the report button. It's been a decade since the report button happened. We've seen some innovation in terms of reporting flows, and the data being gathered through the reporting process becoming more robust and more actionable for trust and safety teams, but this is like a whole new level. And Alice was imagining, and I'll share the piece in the show notes, a world in which, you know, ODS bodies specialize in different harms and in different types of complaint. Right? So she thought about an example of hate speech against the LGBTQ community: you might go to a particular ODS in the EU that's particularly good at that. And, you know, of the four that have launched so far, one is in Ireland, that's the ACE body that you mentioned, Mike; there's also one in Hungary; there's one in Malta; there's one in Germany as well. Essentially, you can find the ODS body that's going to be able to deliver you the outcome that you most want, right? And it's not binding, but we expect people to gravitate towards the bodies that have the best kind of response rate, that can say they've managed to enforce some sort of change or impact on the platforms. This is a really interesting kind of new mechanism that the platforms are going to have to deal with.

Mike Masnick:

Yeah, we're going to have to see, right? And, you know, one of the things, and I mentioned this to you before we started recording, I have this degree that I got when I went to college in industrial and labor relations. And one of the things that we studied there was arbitration, because that's an alternative dispute resolution mechanism that is very popular in a lot of different contexts. And in theory, when you explain the arbitration system, it's a way of avoiding a very expensive legal challenge. There are some things that are really appealing about it. And it allows for a setup where there's a human being who will listen to both sides and try and make a balanced decision. In practice, it proved to be a lot more complicated because of some very twisted incentives that were structured in. And what happened, especially in the early two thousands, was that companies started abusing the arbitration process, because they were often the ones bringing cases; you know, companies would bring thousands of arbitration cases and individuals would bring one, and the arbitrators tended to be historically biased towards the companies, and arbitration actually became a way for companies to get away with really bad behavior and have an arbitrator sort of rubber stamp it. And so there are some things set up with the way these dispute resolution systems are structured

Ben Whitelaw:

Yeah.

Mike Masnick:

to try and avoid that, I think. But it's really tough to predict how these actually play out and what the incentive structures are and the different gamesmanship that is going to happen. And there's a whole bunch of sort of like different variables and game theory that has to go on here. And I don't think we'll know exactly how well these work for some time. But. still fascinating and still like the opportunity for us to learn something really interesting and potentially lead to. Useful systems for dealing with, just outright mistakes. it's worthwhile to have that experiment happen.

Ben Whitelaw:

Yeah. And to your point, the likelihood of bad faith appeals, much like the folks who are getting in touch with you, uh, on Bluesky and asking for an appeal, for whatever reason, is likely to be high. And how these bodies sift through those, and how platforms deal with those, is an ongoing question. But yeah, super fascinating. So, two sides of that kind of appeals question in two stories there. Cool. So those were the kind of bigger stories this week, Mike. Let's do one smaller story each before we wrap up. You had a couple that you were tying together, including a particularly egregious example of a user who definitely deserves a ban.

Mike Masnick:

Yeah.

Ben Whitelaw:

No question about it.

Mike Masnick:

Yeah, so there's actually two stories here, and I'll do them both really quickly. There is something about, and I've sort of joked about, the content moderation learning curve that every new social media platform has to learn. And that is: you have all these new social media platforms pop up and say, we are the free speech platform, we're not going to do any moderation at all. Of course, very quickly they learn that doesn't work.

Ben Whitelaw:

Hmm.

Mike Masnick:

And so here were two examples of that with two different platforms. The one that you were sort of alluding to was this platform Kick, which sort of set itself up as a kind of video streaming platform that was not going to be aggressive in banning people; you know, they want you to go do whatever crazy thing you want to do. And this guy, who seems like a real jerk, you know, the kind of person who would hear that and say, yeah, that's the platform for me, uh, Jack Doherty, was driving his McLaren, you know, super expensive supercar, at crazy speeds and live streaming and doing crazy stuff and checking his phone and, like, responding to people while he was doing that, and then crashed it on the live stream. And Kick finally decided that that was enough to ban him. So there's all sorts of stuff there, like, are you encouraging people to do stupid stuff, and there have been concerns about that with, like, Snapchat and other stuff as well. But I thought it was interesting that Kick actually banned him after saying, basically, we barely ban anyone. And it's like, well, you do have a limit, right? When somebody crashes a supercar on your stream, it sort of shows you that, yeah, you need trust and safety, you need content moderation. And especially if you claim that you're not going to do that, you're going to attract people like this, who are going to just create problems for you. The second, sort of related, story was Truth Social, which obviously is Donald Trump's social media platform, which these days is not so much a social media platform as, like, the place where he just posts whatever nonsense comes to his mind. Um, but there are a lot of users on it. And there was this investigative piece at Gizmodo where they had filed Freedom of Information Act requests with the government to find out if people had complained about things related to Truth Social, and they found a whole bunch of people are getting scammed. So scammers are going on Truth Social, perhaps recognizing that the rabid followers of Donald Trump may not be the most discerning and careful people, that they are easy marks, and they are being scammed out of tons of money. It's romance scams, it's crypto investment scams, it's all sorts of stuff. And apparently they're really easy marks, and Truth Social has very little in the way of trust and safety. They do have some, but very little, and they apparently have not done a very good job of finding and stopping these scams. And Trump's, you know, most rabid followers on the platform are getting taken for tons of money.

Ben Whitelaw:

Yeah. And in danah boyd's words, they've not been socialized or educated quite sufficiently to understand the risks at play.

Mike Masnick:

Yes, I would say that. But you know, these are both examples of young social media platforms that assume that they don't have to do much moderation and don't have to do much trust and safety work. And you quickly see that when you do that, you get scams and jerks and terrible people on your platform. And that's what happens.

Ben Whitelaw:

Yeah, no, exactly. Um, the final story of today's episode, Mike, follows a similar theme, you'll be unsurprised to know. Some people might've seen this on 404 Media. It's a story about a virtual companion site that uses AI to make mostly young men feel kind of less lonely, and that was hacked. Okay. And it raises a lot of questions about, yeah, the moderation of small sites and apps. And actually it places in sharp focus, I'd say, the dangers around open sourcing large language models, which are used by platforms like this one to create products. So what's happened is that the data of all of these users has been leaked, and it includes the prompts that they put into the virtual companion app. And a lot of that included kind of sexual fantasies, very personal information, and some of it included references to underage children and prompts to create child sexual abuse material. Per the 404 Media report, there are some 30,000 occurrences of the term "13 year old," which is really shocking. And, you know, you might say, okay, this is a hack, we were never meant to see that data. But actually, the founder of this site, this product Muah, M-U-A-H dot AI, has been very forthcoming with the claim that his app is very well moderated. So he's given interviews. He's talked about the fact that prompts are moderated, that they've implemented content moderation to prevent inappropriate, questionable content on the platform, and that the language models are trained to avoid sensitive topics. That's clearly not the case at all, based upon the information from this hack, right? Like, it's very clear. And this is a tool built with a kind of assortment of open source tools, and it made me think: we're thinking about OpenAI, we're thinking about Anthropic, the kind of big large language models that are going to have commercial models laid on top of them that users will be able to use for their own benefit. But there's all this open source stuff that we're seeing being used for reasons that don't make me feel very good about it. And actually, the kind of follow-up to this that I noticed as we were researching today's podcast: some of that data is now being used, and people are being sextorted off the back of it. So we have people who've used that app, who put in prompts, whose email addresses are linked to that data, and they're now being sextorted, forced to provide money and access to company data and company services of the employers that they work for. So, you know, harm perpetuating harm all over the place, really. And again, definitely, the founder of that app needs to go up your content moderation learning curve, Mike, because he's very far behind.

Mike Masnick:

Yeah. I mean, there are a few different things here. Like, the question around open source models and stuff is a much bigger discussion, not one I think we can have in sort of the quick story version, because there are a lot of big questions around that. The thing here, what I think this shows, is that there are a lot of people who just claim that they're doing content moderation and trust and safety work, and that, you know, safety is important to them, and a lot of them are not being entirely honest about that. And I think that's what clearly happened here: you make those claims publicly... This is, since we started this episode off talking about the tech bros, this is a common thing that happens with tech bros, where they will tell you what you want to hear. And they will say, of course we're doing this safely, and of course we've put these safeguards in place, and of course we've thought about this stuff. And the reality is that they don't care and they haven't thought about it. And this is what happens: they get hacked and all their data is leaked. And hopefully that means this app is dead in the water and this guy's got to go find a more responsible job. Um, but this is also where media literacy comes in, too: do you believe this guy when he says that? And, uh, there are probably lots of other red flags associated with the way this app was designed and the way it was pitched and the way it was marketed that should have made you question whether or not you would trust him to have actually built the app safely.

Ben Whitelaw:

Yeah, indeed. All of those stories kind of remind me that we have a long way to go in the quest for some consensus around what the problem is with online safety and speech in general. But I feel like we've done a good job today of trying to flesh those issues out, and, uh, it's nice to be back in the chair with you. And...

Mike Masnick:

Yeah. Always good. Always good. We really enjoyed our guest hosts, who have been excellent, I think. Um, but it is always good to be sitting across from you again and to have a discussion. I think, you know, the interesting thing about today's episode was that there was this through line between all of the different stories, and that they all were sort of connected in how this space remains a really challenging space with lots of interesting things happening.

Ben Whitelaw:

Yeah. And at this point, I only hope our listeners have finished their laundry. Um, but they...

Mike Masnick:

I hope you folded your clothes carefully. Do not leave them all crumpled up. Okay.

Ben Whitelaw:

Indeed. Indeed. We're a pro-ironing, non-crease podcast. Says the man who never does any ironing. Um, that's a great note to wrap up on today, Mike. If you enjoyed today's episode, drop us a review on the platform where you most listen. And thanks again for joining us. We'll speak to you next week.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
