Ctrl-Alt-Speech

Moderating Politics & Politicizing Moderation

June 14, 2024 · Season 1 Episode 14 · Mike Masnick & Ben Whitelaw

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

- Meta's letter telling Maryland legislators it won't sue over the state's child safety law, and the rift that exposes with NetChoice (The Washington Post)
- Amnesty International's new report on platforms suppressing abortion-related information since the Dobbs decision
- The dissolution of the Stanford Internet Observatory (Platformer)
- OpenAI's threat report on influence operations using its tools (MIT Technology Review)
- The collapse of Koo, India's homegrown Twitter rival (Rest of World)
- Fizz, the anonymous campus app that has once again turned toxic (The Wall Street Journal)

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. 

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.


Ben Whitelaw:

So to mark the fact that X slash Twitter has this week hidden likes for the first time, Mike, we're going to take a week off from our usual platform status opening and instead ask, what are you liking this week?

Mike Masnick:

Well, since you can't track my likes anymore, you now need to ask. Uh, I like the fact that the Supreme Court has not yet broken the internet. I was up early this morning wondering if they were going to rule in the Murthy and NetChoice cases, and... not yet. So, uh, I'm in the clear. What about you?

Ben Whitelaw:

Maybe, maybe next

Mike Masnick:

Yes. Well, probably, almost certainly next week. Uh, what are you liking this week, Ben?

Ben Whitelaw:

Well, nothing as nefarious as Elon might be thinking that I'm liking, that's for sure. I'm liking being back in the chair of Ctrl-Alt-Speech. I had a nice few days off last week. Thank you for taking the helm, and I really enjoyed listening to you and Yoel riff on last week's stories. So great to be back. We've got a packed agenda. Let's get to it. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. My name is Ben Whitelaw. I'm back with Mike Masnick this week. Mike, your choice of co-hosts there, your Rolodex is pretty impressive. How did you get Yoel?

Mike Masnick:

I just asked. It was pretty straightforward. I mean, obviously you and I both know that there are a lot of really great, really smart people in the world of trust and safety, and it's wonderful. We're very lucky that when one of us is out, we're able to call in actual experts to take our places.

Ben Whitelaw:

Indeed, indeed.

Mike Masnick:

It was a good discussion. So.

Ben Whitelaw:

Yeah. Between Alice and Alex and now Yoel, we've had a really great selection of co-hosts. If you are interested in being a co-host on the podcast, get in touch with us; we'd love to hear from you. And we'll be getting more co-hosts and more guests on over the course of coming shows as we, uh, consolidate our position and kind of get used to doing this weekly podcast. We've got lots to get on with, Mike. This has not been a quiet week, has it?

Mike Masnick:

No, no. We were looking through what stories we're going to do this week and realizing, uh, we keep a spreadsheet going, and I had to add a bunch of extra lines to the spreadsheet to actually put in the stories, not all of which we're able to cover. But, uh, it was a busy week.

Ben Whitelaw:

Yeah, it is a busy week. We're going to start with a big story that you identified, which kind of tees up a couple of big stories that we know are coming. Talk us through the story you picked this week.

Mike Masnick:

Yeah, so this was something in the Washington Post, in their, um, tech newsletter, in which they had seen a letter that was sent by Meta's lawyers to legislators in Maryland. So Maryland recently passed an online child safety bill. It is similar in some ways to a bunch of other online child safety bills that have been making the rounds, most similar to the ones in California, including the ones that have already been declared unconstitutional. And the legislators in Maryland had sent a sort of nasty letter, frankly, to the big tech companies saying, do not interfere with the will of the Maryland legislature and do not sue to block these laws. And Meta sent back a letter, or their lawyers sent back a letter, saying, uh, we have no problem with this law and we have no intention to sue to block it. And that raised a bunch of eyebrows, because NetChoice, which is one of the trade groups that Meta is a member of, and many other companies are members of, has been the main trade group that has sued to block a lot of these laws. There are a few other trade groups involved as well. And NetChoice has been very vocal about the problems that they see in the Maryland law, certainly suggesting that they were planning to sue to block it as well. And so the fact that Meta was effectively coming out and saying, we have no intention to block this, while NetChoice is saying otherwise, raised some eyebrows. And I thought it was actually kind of an important thing to dig into, because there is this assumption that NetChoice and the other trade groups are just doing the bidding of big tech, that they are the voice of Meta and Google. And this suggests that the story is perhaps a little bit more nuanced than that and a little bit different.

Ben Whitelaw:

Right. So before we dig into it, can you give us a potted history of NetChoice, basically? Because before I started to cover the topic of content moderation and online speech, they weren't very well known to me, and there'll be some listeners who probably aren't super familiar. Give us the kind of 30,000-foot view of what they do and also why they're relevant.

Mike Masnick:

Yeah, sure. So there are a number of different trade groups that represent different coalitions of different tech companies, and NetChoice is one of them, and they have been around for quite some time. And they were sort of always positioned as being the center-right coalition trade group, in which they have a lot of connections and relationships with American politicians on the center right. So they had connections to mostly Republican politicians and generally took what I would consider to be traditional, historical Republican positions on free markets and less regulation and things like that. And they had traditionally done sort of standard lobbying and advocacy efforts, talking to politicians, making them aware of, you know, the potential concerns that they had, again, generally coming from sort of center-right types of positions. And then more recently, really within the last few years, they had built up a sort of litigation arm as well and had been willing to be fairly aggressive in filing these lawsuits. And part of what is important, and what sort of comes out in this particular story, is that NetChoice gets to make decisions on their own. They are not decisions made by their membership. And there are many different trade groups, some of which require approval from all or most or some of their members; each of them have different setups in terms of how they're structured. But NetChoice in particular, somewhat famously, though a lot of people don't realize this, is very independent in terms of what they can do. They don't have to seek out approval from all of their members on any of the actions that they take. And that has freed

Ben Whitelaw:

Mm-hmm.

Mike Masnick:

them up to be fairly aggressive in their litigation posture, even if their members don't agree. Now, that leads to lots of questions: if their members continue to not agree, or vehemently not agree, will they continue to be members? And will they continue to pay membership dues, which is how the trade organizations exist? Um, and so, you know, this story suggests that there's some stuff going on behind the scenes, that NetChoice's actions are not, they're not speaking for Meta. Now, to some extent, I think that's good, because the narrative is always that anything that NetChoice and some of the other trade groups are doing is really, you know, they're sort of front groups, or shills, depending on how you frame it; people are going to say negative stuff. You know, NetChoice has always struck me as true believers in what they believe in. It is not based on Meta telling them to do this; it is because they think these laws are legitimately unconstitutional and problematic. And this is kind of demonstrating, like, hey, maybe that's true. You know, the sort of cynical approach of saying, oh yeah, sure, they say that, but really they're just doing Meta's bidding, this revelation that they are in disagreement with Meta suggests that is not true. But it does raise concerns to some extent, because we had seen some of this before. Famously, there was a trade group in the 2010s timeframe called the Internet Association, which was sort of designed to be the big trade group of all the big internet companies, and they fell apart and completely dissolved, uh, three or four years ago, in part because the different companies had very different priorities. The largest of the large tech companies began to realize, like, we should maybe start embracing regulation because it acts as an anti-competitive moat for us, and some of the smaller companies were like, hey, wait, this is going to handicap us and we don't want that. And so I think that helped lead to the dissolution of the Internet Association, which kind of dissolved, you know, pretty famously and pretty suddenly. And so I worry that that same sort of thing might happen. I hope it doesn't, because I actually think NetChoice does really good work in a lot of these lawsuits. But, you know, there's nothing in there that says that they're feuding or in dispute with one another, but the fact that Meta would send a letter saying they wouldn't oppose these laws, while NetChoice is very vocally saying they plan to, raises some questions about where all that's going.

Ben Whitelaw:

Yeah, sure. And so a couple of things to say here. As well as Meta slash Facebook, they have a number of different big tech members, right? So they've got Amazon, they've got Google, they've got Lyft, they've got Nextdoor, PayPal, uh, Waymo and X. So it's a pretty broad church, which is interesting in itself, right? It's not just the speech platforms; they have a whole bunch of companies in there under their remit. And then the other point to make is that NetChoice is taking these two cases to the Supreme Court, right, the ones in Florida and Texas, which are the ones that we're expecting next week. So the timing is very, very interesting from that perspective.

Mike Masnick:

Yes. Yeah. And so, the Supreme Court cases, which are normally referred to as the NetChoice cases, actually involve two trade groups, NetChoice and CCIA, which is another trade group that represents another batch of tech companies, some overlapping, some not. And so, yeah, I mean, it's interesting that this comes a week before, you know, there was potential that the NetChoice cases were going to come out this week. So the fact that we're seeing this sort of little crack from behind the scenes of where they might disagree. I would assume, I have no idea, but I would assume that Meta is in support of the NetChoice position on the Supreme Court cases, because the Texas and Florida laws are absolutely crazy. You know, the letter here was specific to child safety laws, where Meta was saying effectively, like, we're fine with any child safety laws you pass, we're not going to complain about them, um, which is different from the Florida and Texas laws, which are around content moderation more broadly. And I think that's where some of the disagreements may be occurring, in terms of NetChoice again sort of taking a more traditional free market approach, maybe opposed to kind of any internet-related regulations, whereas Meta's position, as a large company that is doing a lot of stuff on child safety anyway, may be saying child safety laws are fine, but we draw the line at the content moderation ones.

Ben Whitelaw:

Just in terms of what the implications of this rift are, do you have a sense of where it might go?

Mike Masnick:

I don't know. I mean, that's what made this kind of interesting, right? Because it certainly brought me back to this idea of what happened with the Internet Association. There were a bunch of reasons why the Internet Association fell apart, and I had written an article years ago when it happened, based on conversations with some of the people behind the scenes, about where the Internet Association broke down. And part of it stemmed from disputes, again, over whether or not to support certain laws. In that case, you know, the Internet Association had come out in favor of SESTA/FOSTA, which was the sort of one and only law that changed Section 230. You know, it had been a mostly unified front from the internet industry opposing SESTA/FOSTA, uh, for good reason. And since the law did pass, we've seen that it has been kind of a disaster and has created real harms and done a lot of damage and not done very much to help anyone, so I think that that was the right position. But it was really Meta, again, who first broke ranks and said, actually, no, we support this. And Sheryl Sandberg came out very publicly and said that Meta, then Facebook, was in full support of SESTA/FOSTA, and then the Internet Association also came out in support. And I think a lot of their members, especially the smaller members, were very, very upset about that. And so it led to that organization going away. I hope that doesn't happen with NetChoice, because that would be very sad, because I think the organization is really good.

Ben Whitelaw:

Could NetChoice or any industry association exist without the funding of the platforms?

Mike Masnick:

That would probably be very, very difficult. You know, I have no direct insight into their financial situation or how it works, but the assumption is that the majority of the funding is certainly coming from the larger platforms, because that's kind of how this works.

Ben Whitelaw:

Yeah, no, it makes sense. Other industries are definitely the same. So, yeah, a really interesting spot, and, um, particularly in terms of next week's expected outcomes from those cases, an interesting tee-up for that story too. Let's move on, then, and still talk about the implications of US Supreme Court decisions on platforms. That's my best segue between those two stories. Let's delve into a new report from Amnesty International, which looks at the way that platforms have responded to the overturning of Roe v. Wade. So it's a brand new report that really unpacks and charts the sharp rise in abortion-related posts being removed, being limited, being suppressed. The report is not super comprehensive, it's not trying to quantify this, but it gives examples of where nonprofits and organizations looking to provide information to women around abortion health, and where clinics are available, have had platforms, I guess because of unease about what they're allowed to say, remove some of that content. So for me it basically unpacks the difficulty of treading this line, of how to deal with this kind of regulation, basically. And there are some really striking examples that I wanted to pull out, where, as a user, you can imagine how difficult this must be, to have information hidden or suppressed when you're trying to share vital health information with people that follow you. So there are examples from some nonprofits where they basically had content taken down because it was deemed to be illegal activities and regulated goods, so it fell under this policy that Meta has, which feels like a real stretch. Some other content was marked as sensitive and hidden behind a kind of fuzzy wall. There were lots of graphics and charts and maps that were, again, deemed to be unsuitable for users to look at. And it just really brought home the difficulty of being able to moderate this content at scale. There was one user, I don't think it was an organization, who attempted to navigate and understand where the platform's line was and simply posted "abortion pills can be mailed." Right? And if you think of it just in those five words, it doesn't necessarily go against any policies. You know, it's not an attempt to buy or sell or trade pharmaceutical goods, which is one of the policies it could go against. It's not something that is looking to gift pharmaceutical goods, which would be against policy. It's not asking for those goods or drugs. And yet that was taken down. So the kind of trickiness that comes with posting around the topic of abortion, since that big landmark Supreme Court judgment two years ago, is really brought home in this report. And it really kind of hit me in the gut, basically, the difficulties platforms have clearly had in trying to figure out what people should be allowed to say and what they can't.

Mike Masnick:

Yeah. And I think it really highlights how all of these debates come back to these same issues, which is that so much of this stuff is subjective in terms of how you look at it. And it's easy for people to say platforms need to be better about taking down bad information without recognizing the fact that everyone disagrees over what is bad information, right? And so, you know, what comes clear here is that it's not just that the platforms are struggling with this, but that politicians are sort of leaping into the fray and pressuring and threatening the platforms if they have any information on their platform that is about, for example, abortion pills. And there was another Supreme Court case this week that tossed out a case that was trying to ban abortion pills nationwide. It was based on an absolutely crazy legal theory, so crazy that only the Fifth Circuit would agree to it. The Supreme Court said that's a step too far, but they sort of wrote that decision in a way that tees up that there will be more challenges to abortion pills in the future, maybe done in a way that might be more effective. And so you have this situation, and this comes up in other areas of potential issue, including, to go back to our previous story, child safety issues, right? So there's a debate in Congress about KOSA, uh, the Kids Online Safety Act, that we've spoken about before, which is still on the table, and there's plenty of discussion about that coming to the floor potentially very soon. And there had been some discussion about how that itself might impact abortion information. If that is considered dangerous information, could states use KOSA? Could the state attorneys general, could an FTC that is anti-abortion, use KOSA as a way to try and get that information blocked? And what we're seeing, and we saw this with SESTA/FOSTA also, is the platforms are going to take the path of least resistance. And sometimes that means, you know, with SESTA/FOSTA, it meant taking down LGBTQ content that someone might accuse, you know, ridiculously, of being related to things that might violate SESTA/FOSTA. It was why different platforms started removing adult content, because they were afraid of the SESTA/FOSTA things. When you get to things like the Dobbs decision and potentially KOSA, that could lead to abortion information being removed. And what we're seeing, what the Amnesty report says, is like, yeah, that's already happening, because the companies are going to try and avoid liability. So what you get is the suppression of information.

Ben Whitelaw:

Yeah. And that was my big takeaway: the fact that over-moderation is going to increase, right? Because platforms have to take the path of least resistance, as you say. And there was a really fascinating example from the report in which a representative of a service, I think called Hey Jane, said that somebody from TikTok had told them that, yes, the reason why their account content was pulled down and why they were suspended was over-moderation, and basically they were sorry that it happened, and they didn't really have any other kind of explanation for it. And, you know, this idea of over-moderation happening, then there being an apology, if you're lucky, about why it happened, but then no kind of repercussions or further outcomes for that, is just going to be a cycle that I think more people are going to have to understand and get on board with, because the scale of this kind of content is impossible for platforms to deal with, with any kind of meaningful nuance and care. It's, I guess, AIs, LLMs, being able to spot content and pull it down, and if somebody gets in touch, then, okay, then we might need to re...

Mike Masnick:

Yeah, but it's, you know, it's even more than that. It's not just the inability to deal with the nuance. It's that the legal threat makes the nuance less important, honestly. Because, you know, once you have the legal threat, there's a much bigger cost to leaving up content that might create legal liability, and therefore the incentive is to over-remove just to avoid the legal liability, because the worst that happens when you remove content that should be left up is that people get upset. And, like, I mean, legitimately people get harmed, right? This is the lesson of SESTA/FOSTA: people get harmed, but the liability does not fall on the companies, and therefore they're going to remove that content. And so we're seeing that in the abortion context already. I think it gets worse if KOSA passes and some of the other kids' online safety legislation passes. We now have this data, or at least these anecdotes, which I think are really valuable, about what's happening in the abortion context just because of Dobbs already, without there being direct legislation around intermediary liability on it. But, you know, we know how this plays out, and it looks like it's going to continue to play out that way.

Ben Whitelaw:

No, indeed. Yeah, so I think this was a real eye-opener for me. It doesn't bode well, I think, in terms of, like you say, some of the big areas of content moderation that for a long time platforms have tried to tread a line on. But yeah, it was interesting to see it all laid out in one report. Okay. So I did my best there, across two stories, Mike, to really give a sense that I understood the US legal system. Um, a couple of other stories that we'll cover now are a bit more comfortable territory for me. We're going to start by looking at a story that broke basically just in the last 24 hours or so, about the Stanford Internet Observatory. And there's been a real kind of collective outpouring about this news, hasn't there?

Mike Masnick:

Yeah. And so there have been some rumors and whispers and stuff, but Platformer had a story that just came out this week saying that the Stanford Internet Observatory is being shut down, basically being dissolved. And I assume a lot of people listening to this podcast probably know of the Stanford Internet Observatory. I hope you do. It's been around for a while now and has done a bunch of really good work in terms of observing the internet, I guess. Uh, and...

Ben Whitelaw:

True to its name.

Mike Masnick:

Yes. Uh, you know, looking at and doing real, useful research on things around disinformation and how things spread online and all of that kind of work. But in part because of that, they had become a real target, a ridiculous target, of just disinformation and outright nonsense and propaganda from some of the worst disinformation peddlers around. They turned it around and sort of said that the Stanford Internet Observatory were like leading censors, or were leading the censorship-industrial complex, and a whole bunch of nonsense that anyone who knew anything knew was not actually happening. But within Congress, those sort of disinformation peddlers had a big supporter in Jim Jordan, who is head of the House Judiciary Committee and had also set up this separate subcommittee of the House Judiciary Committee on the weaponization of the government, which is...

Ben Whitelaw:

What a name.

Mike Masnick:

Yeah. It was sort of designed, like, the official scope of it was supposed to be, you know, looking into how the government had been weaponized and stopping it. But the reality is that that committee has been the most weaponized committee around. It has subpoenaed and dragged before it a whole bunch of disinformation researchers, trust and safety professionals, just a whole bunch of people, and then has continued to hold ridiculous, misleading, terrible hearings that have just spread tremendous amounts of disinformation. They've also selectively leaked and released content from depositions and closed-door hearings. Because they have subpoena power, they've gotten access to all sorts of emails and records, and, again, very selectively released these things and really tried to create an incredibly misleading, damaging portrait of a bunch of different organizations, including the Internet Observatory. And unfortunately it feels a little like Stanford may have just decided that this was too much trouble, and effectively they've let go of some of the people and they're sort of dissolving the entity as a thing.

Ben Whitelaw:

Yeah, no. So Renée DiResta, who many of you will know, her contract has not been renewed, and she has an interesting-looking book out at the moment.

Mike Masnick:

Yeah, it just came out this week. It's very good. I have not finished it; I'm about three quarters of the way through it as we speak, and it's wonderful. It is a really worthwhile read. Uh, Renée is, at some point, we haven't scheduled it yet, but at some point coming on my other podcast, the Techdirt Podcast, to talk about it. It's a really, really good book. And one of the lessons in that book is that, you know, a lot of people think that when disinformation peddlers go around and spread a bunch of nonsense, your best bet is to ignore it. And she is making the point that, no, that doesn't work, because they're building their own universe and their own cast of characters, and they're bringing in more and more people who, if you don't push back in some way, will believe that what these people are saying is true when it's not. And it feels a little bit, unfortunately, like Stanford has succumbed to that and has sort of taken the wrong lesson from the Stanford Internet Observatory, which is, like, you sort of have to fight back sometimes, right? I mean, everything is context-specific, but sometimes you have to fight back against nonsense, and Stanford is sort of caving to it. And you would think that an organization that is that big and has that much power behind it would be willing to stand up to nonsense, but it appears that maybe they're not. I will say that it does appear that a lot of the initiatives from the Stanford Internet Observatory are going to continue; they're just going to be rehoused within different parts of Stanford. And so I think important things like the Journal of Online Trust and Safety, which is fantastic, are going to continue. The Trust and Safety Research Conference, I think that's going to continue. And so I'm hoping that most of the important work that is being done continues, but it's going to be through other parts of the university.

Ben Whitelaw:

Yeah. And hopefully Moderated Content, which is a great podcast done by Alex Stamos and Evelyn Douek, which both Mike and I listen to regularly. Um, so yeah, it is a bit of a shame. Hopefully its impact will still continue in lots of other ways. As you say, we talked just recently about its NCMEC research, which is incredibly well done and really brought something new to the space. Um, but it is frustrating. I personally feel like, with this, and potentially if the NetChoice cases don't go the way that we want them to next week, we're at a kind of crossroads, right? It feels like, between all of these, I don't know, many signals, there's almost the need to restate the importance of trust and safety again, and to kind of fight back against this politicization of moderating content, which, as you say, Jim Jordan and his kind of funny subcommittee have led the charge on. This is a real blow, I think, in lots of ways. And there's a piece that talks about this wider trend in the Columbia Journalism Review, which we'll link to in the show notes as well.

Mike Masnick:

Yeah. And the one thing I'll say about that, and some of this comes out in the CJR piece, but I also think this is important, is that there are efforts within the space that are less credible. So, like, that one talks about the Hamilton 68 report, which I think a lot of people in the space sort of recognized early on was kind of nonsense. And, you know, I don't want to be mean about it, but it was making these claims that didn't seem fully based in reality. And so that allows some of the nonsense peddlers to look at that and say, this was nonsense, and it was, but then they use that to paint with a broad brush the idea that the entire space is nonsense. And so you have this in lots of spaces, where there are people doing good work and people doing less good work, and it's easy to take the examples of the people doing less good work and then use that to paint with a broad brush that everybody in the space is full of nonsense.

Ben Whitelaw:

True. But the counterpoint here is that even where the work is good, as is the case with the Stanford Internet Observatory, you still run the risk of the narrative becoming the thing, right? The narrative kind of supersedes the work. So you can do the best work you want, and we can kind of debate how good the work is, but if the narrative is conducted through the Twitter Files that Elon Musk kind of oversees, and through Jim Jordan and what he's doing there, then unfortunately that's what people are going to perceive to be true. And obviously through media as well that doesn't understand these issues and reports on things as being more blunt than they actually are. So I would say that, yes, not all initiatives are at the level of the Stanford Internet Observatory, but probably, to Renée's point in her book, we need to combat this stuff proactively, because no one else is going to do it for us.

Mike Masnick:

Yeah. Yeah. And I think it's challenging because, I mean, this happens in lots of spaces, where some people do good work, some people do less good work, and it's easy for people to find an example of bad work being done and use that, you know. So the nonsense is always sort of wrapped around a grain of truth, and that's often what has happened here. So, like, the Hamilton 68 report, for example, has been a key central piece of a lot of the nonsense from the Twitter Files, even though what really came out in the Twitter Files was that Twitter people looked at the Hamilton 68 report and were like, this is garbage, we're not going to pay attention to it. So it would be nice if the sort of more mainstream media were willing to be more careful too, because they held up the same Hamilton 68 report as being gospel when most people who knew anything were like, yeah, it's not so good. Um, but that has allowed the nonsense peddlers to now come in and say, oh, well, everybody bought into the Hamilton 68 report, and that allowed the government to censor content, which is, you know, a bunch of nonsense. And so, again, some of this just comes back to basic literacy and media literacy, and understanding who are credible players in the space and who are not. And, you know, the willingness of bad actors to take less-than-credible actors and promote them as if they were credible just creates an ongoing mess.

Ben Whitelaw:

So we could go more into this, and I think we will in future episodes; I think this is a really interesting thread to pull at. But yeah, unfortunately the Stanford Internet Observatory will not be existing in its current form. Hopefully we'll hear more from Renée on the Techdirt Podcast in the coming weeks. We've got a couple of other really interesting stories. One I was really interested in, and a great segue actually, is this piece from MIT Technology Review, co-authored by Renée DiResta, look at that, talking about OpenAI's recent report on actors that are using its platform to spread misinformation. And this is something I'd missed; it came out at the end of May. They found that there was a host of actors from China, Iran and Israel who were using the platform to generate content at scale that was being used across platforms to sometimes effectively create misinformation, but otherwise, as we've talked about on the podcast before, just kind of muddy things and create noise. And the piece is interesting because Renée and Joshua A. Goldstein say that actually the report is pretty good, basically saying that the impact of this stuff is not that great, and that it's on the right track in terms of addressing some of the concerns that we all have around generative AI and misinformation, particularly around elections. So it's good to have their, I think, quite sage response to the OpenAI report, and the fact that actually maybe bad actors aren't doing the damage that we thought they were. I know, Mike, you had a thought on the OpenAI report and its similarities to some of Facebook's reporting, recently and previously as well.

Mike Masnick:

Yeah, I mean, I thought the OpenAI report was interesting. And it sort of takes me back a little bit to the history of transparency reports within the internet space, which I've followed closely over the years. Internet companies sort of created this whole concept of transparency reports going back, uh, you know, a decade or more at this point, more than a decade, certainly. And, you know, they started in terms of revealing things around takedown demands, there'd be like copyright takedown demands and revealing how many they were getting, government requests for information, law enforcement requests, things like that. And then some of the companies started to do things around threats and threat intelligence and covert operations and coordinated inauthentic behavior using these different platforms. And Meta really sort of led the way on that and has released threat reports going back quite a while. And the OpenAI report, if you just opened it, looks like the Meta report. It's almost scary. It's possible it's even using the same font. It's really, really close in how it looks.

Ben Whitelaw:

Hmm.

Mike Masnick:

But, you know, it struck me as interesting in that I wasn't expecting it, because those sort of threat reports tended to come from, you know, social media companies, companies that were hosting the content as well. And OpenAI is different in that they're generating content and not really hosting it in the same way; the content is usually then, you know, moved elsewhere. And so it actually took me by surprise that they would release this kind of report, but in a good way, right? I mean, it is a good sign that OpenAI is actually thinking through this stuff and thinking, like, oh, hey, our technology is going to be misused and we should be transparent about when we see that. Because we don't see that much outside of the social media space, where Meta is willing to do that, Google has done that, Twitter used to do that, uh, no longer does. And so having OpenAI being upfront about those kinds of things, and where the technology is being misused, is actually a really valuable thing. And I hope it catches on. Again, the transparency reports started with, uh, Twitter did it, Google did it, and then other companies began to catch on and do it as well.

Ben Whitelaw:

Yeah.

Mike Masnick:

It'd be great if the other AI companies began to do this kind of thing also.

Ben Whitelaw:

Yeah. Two things here. I mean, the first is that I know that there are a number of different people working at OpenAI who used to work on the policy side at Facebook, and there's an element here, I guess, of, I'm going to do what I know worked, or what I thought worked. And so that's probably partly why we're seeing similar things. The other thing is: to what extent is there just no other precedent for OpenAI? You know, there's no other generative AI platform that's doing transparency reports, right? So it kind of makes sense to pick the biggest or the best platform that's done this stuff in the past, and that probably is Facebook. What do you think about that? Is it because there are no other examples to go on, to an extent?

Mike Masnick:

Maybe. I mean, it certainly wouldn't surprise me that as these things develop, they'll begin to diverge a little bit. There is some usefulness in having them be similar, because you can sort of compare the different things, or even begin to track how bad actors are making use of both platforms, perhaps together. And so having similar, related threat reports coming out could be useful. But I also think that OpenAI will begin to diverge as they discover new ways in which their platforms are being abused. But, you know, on the whole, I think it is a really useful thing for them to be open and transparent about it. It allows others to recognize, you know, the smaller companies in the space may not even be as aware to look for these kinds of things. And so if OpenAI is publicly talking about it, hopefully it'll lead to more of these other companies at least looking for it, even if they're not doing transparency reporting themselves. So hopefully it'll get there. I think it's definitely a good sign. It's something that should be encouraged, but we'll see how it develops. You know, this is sort of early on in the process, and as with transparency reporting for the social media companies, it's changed, you know, the nature of it, the style of it, the things that are included, even how it's broken down, has changed over time. And I imagine that's going to happen with OpenAI as well.

Ben Whitelaw:

Yeah. Okay. Great. Talking of social platforms who have done transparency, and smaller ones as well, this is a neat segue once again to our next story, around Koo, the Indian Twitter clone, who for a while did transparency reports, although I've just had a little look and they're not as comprehensive or as well archived as I would hope they would be. But we maybe have an explanation why, in this piece from Rest of World that you found.

Mike Masnick:

Yeah, this was really interesting. So Koo, which is K-O-O, I think, you know, we've done a lot of US stories this week, so we're finally getting out of the US. We'd like to make sure that we're a little bit more international each week. Koo was the sort of big success story out of India. And I had actually done a report about two years ago on global internet regulations and their impact on investment. And it was interesting, because Koo became a part of that report. What we had found was that almost everywhere there were really strict internet regulations, investment in internet startups went down dramatically. And so there was a clear correlation there, except in India. India was the one exception. And when we went to look at why, you know, India put in place very strict internet regulations, which they used to beat up on Twitter in particular, somewhat famously, but investment had actually gone up. Most of that investment, though, had gone into two startups. One was a big e-commerce startup that also sort of somewhat collapsed in scandal at some point. And the other was Koo. They had raised a huge amount of money and were presented as this huge Indian success story. And it was a Twitter clone. It looked like Twitter. It even had, I think, a bird as the logo. It was just sort of different colors, a yellow-themed app as opposed to a blue-themed app.

Ben Whitelaw:

And it was founded by Indians, right? So it was kind of, you know, like Twitter for India.

Mike Masnick:

Exactly. And they sort of played up the language support, you know, there are many, many different languages within India, and they played up the language support. And it was seen as a success, though even at the time when I was looking at it, about two years ago, part of the talk about it, and some of the concerns, were that they had successfully recruited some members of the Indian government and the BJP party, which just recently sort of won its reelection, though with less support than maybe they were expecting. And so there were some people saying, like, oh, it's only successful because some top BJP politicians have supported it. So there were some concerns there, but it was seen as this sort of massive success, and it was going to sort of take over India, especially at a time when Modi was really mad at Twitter. Twitter wasn't blocking content that he wanted blocked, and they were fighting him in court, and there was like a raid on Twitter's offices in India. But Koo was seen as the success story. I hadn't really kept up with it, though. I had signed up for an account way back then when I was doing that research,

Ben Whitelaw:

Oh yeah.

Mike Masnick:

which I probably still have, but, like, I haven't done anything with it, and I hadn't really followed it. And then Rest of World has this report basically saying Koo is falling apart and really falling on hard times. And it highlights a few different things. One, which surprised me, was that they never really had that many users. They raised a lot of money, which is what caught my attention, because at the time I was looking at data on investments, and they had told this story of having a lot of users. It looks like maybe they never really had that many users, and without that sort of core user base, it was tough for the app to grow. And then since then, the sort of usage, the daily usage, has really collapsed.

Ben Whitelaw:

Yeah. It seems like they started off with about 60 million downloads, according to this report, and as of last year had around 7 million active users, which has, as you say, completely collapsed to just a couple of million. And so they're now struggling to raise funds and pay people's wages. It's not a great look, is it?

Mike Masnick:

And, you know, it's suggested in the report that they also had some trouble finding direction. They sort of kept trying to pivot, and they were going to focus on monetization, and they had some sort of weird monetization schemes when it was tougher to find avenues for investment. And so I think they went through a lot of struggles that a lot of companies have. But it also somewhat reminds me of another story that I had seen a month or two ago, when there was discussion of the US ban of TikTok and what would happen, because India famously banned TikTok, over some disputes around what content was being allowed on TikTok and some disputes with China over how certain border areas work. So people had looked at what happened there, and there had been a bunch of India-based TikTok clones that had popped up, similar to Koo, that had been briefly successful. But none of them lasted very long. Eventually all the traffic basically went to Instagram. And so the conclusion was that, you know, everything moved over to Instagram eventually. And so there is something to scale, right? I mean, I think that's what comes out of this: building an India-only clone maybe doesn't get you the scale. When people are using some of these apps, they're looking for global scale, not national-level scale, even in a country as big as India.

Ben Whitelaw:

And innovation as well. You know, there was the line in there about how text-based posts aren't enough anymore; you need to have video, you need to have something that's new and adds something to the market, and it didn't. And so I think that's where it's kind of dissipated a little. It links neatly on to another fledgling app, actually, that we're starting to see rise up, and it's our final story of this week, which is Fizz. I'm not on it. I don't know if you have an account on Fizz, Mike?

Mike Masnick:

I do not have an account on Fizz.

Ben Whitelaw:

I presumed that was the case, mainly because it's targeted at high school kids. And there's this piece in the Wall Street Journal that I thought was fascinating about how basically this brand new app, another anonymous messaging app targeted at schools and colleges, has basically once again turned toxic. It looks at the way that two Stanford students have created this app designed to allow you to dunk on your peers, on students and also teachers. And it zooms in on a couple of schools where it's really gone very wrong. People have started to post pictures of one another, sometimes explicit. They've started to spread rumors about people's sexual orientation. They've started to bully one another. There's some really horrible stuff in there about students feeling unable to go five minutes without looking at the app because they're wondering if there might be some information on there about them. And my reflection on this is, like, have we not learned anything from the last 20 years? Two Stanford-educated guys drop out of college to create an app that's designed to spread gossip about your peers; does that not ring any bells? I also found that Fizz have hired a trust and safety expert, you know, they've got a head of trust and safety, which is great. But I looked a bit more into the details, and it's, I think, somebody who worked at McKinsey as a business analyst who has now been appointed head of trust and safety of this school gossip app with several hundred thousand users, which is causing havoc among schools across the US. I don't know. Is it time to pack up our bags, Mike? What is this all for?

Mike Masnick:

You know, yeah, those who have not used Yik Yak are doomed to repeat it, over and over again. And, I mean, it goes back before that, right? The fact is, not everybody has spent decades, uh, paying attention to these things like we do. And so everybody has these same ideas, especially young people who don't remember the earlier apps, or remember them more fondly than they should, and will just kind of keep recreating it. And, you know, we've had rumor apps and systems in the past, before that. The reality is that high school kids are kind of terrible that way sometimes. And I kind of feel like this sort of app is going to keep popping up, and even if there isn't an app specifically dedicated to it, kids are going to use other apps to sort of recreate it themselves. It's just sort of the nature of kids and how they act. That's not to say it's good. It's obviously problematic in lots of ways. But yes, the story is kind of astounding in that you could replace the names of the company and just, you know, have written the same story five years ago, probably 10 years ago, uh, 15 years ago. This sort of thing just keeps popping up. And I think that is because kids will use this kind of app, you know, the new hot app is the thing that kids are going to use. And sometimes high school kids are unfairly mean and bullying, and we're not going to fix that with...

Ben Whitelaw:

No, but, you know, the kind of transference of liability, when you create an app like that, to the kids and to the parents and the headmasters who are having to pick up the pieces once, I don't know, some photo gets shared on that app because you don't have a very robust trust and safety policy, is the bit that gets me riled. And I think I need to go and have a drink. We might be talking about Fizz in 20 years, Mike, on, you know, episode 671 of Ctrl-Alt-Speech. Who knows?

Mike Masnick:

Well, we'll see. Uh, but yeah, I mean, it's an interesting article, just because we're seeing this kind of thing happening again. And yeah, you would hope that people will learn. But the fact is, uh, we spend a lot of time thinking about this and knowing the history, and kids don't. And so they're going to keep coming up with...

Ben Whitelaw:

Yeah. And the importance of careful online moderation needs to be restated time and time again, as we always say. So I will go and take a bit of a break. I need to have a lie-down, maybe have my blood pressure taken. Um, it's not a nice way to end the episode, but it's great to be back. Thanks for a great episode, some fascinating stuff there. And if you're listening and you enjoyed today's episode, feel free to rate and review us wherever you listen. Thanks for joining us once again, and we'll see you next week.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.