Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Won't Someone Please Think of the Adults?
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- EU Explores Whether Telegram Falls Under Strict New Content Law (Bloomberg)
- Too Small to Police, Too Big to Ignore: Telegram Is the App Dividing Europe (Bloomberg)
- NIST Reports First Results From Age Estimation Software Evaluation (NIST)
- Supersharers of fake news on Twitter (Science)
- Digital town square? Nextdoor's offline contexts and online discourse (JQD:DM)
- The first social media babies are adults now. Some are pushing for laws to protect kids from their parents’ oversharing (CNN)
- TikTok offered an extraordinary deal. The U.S. government took a pass. (Washington Post)
- Q&A: Ireland’s Digital Services Coordinator On 100 Days of the DSA (Tech Policy Press)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So I know we're not in a car, Mike, and I'm not a taxi driver, to channel Uber's opening prompt. When you go onto the app, where to today?
Mike Masnick:Uh,
Ben Whitelaw:They're getting sillier and sillier, aren't they?
Mike Masnick:Yes, yeah, we're not that far into this podcast yet. We're going to have to figure out more prompts. But it seems that we are heading all over the world. We were just spending a bunch of time talking about what stories we wanted to cover, and they seem to be from all over. It feels like today we're going to be spending a fair bit of time in Europe, uh, looking at a bunch of different stories related to Europe, but also a few other places around the world. Uh, and where are you headed to today, Ben?
Ben Whitelaw:Well, not necessarily today, but in July, I'll be heading over to your part of the world. We're going to be doing our live episode of Ctrl-Alt-Speech at TrustCon 2024. We found out this week.
Mike Masnick:Woo hoo!
Ben Whitelaw:So, uh, yeah, I'm coming over to you. You're coming over to me. It's a globe-trotting episode today. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm riding shotgun today with Mr. Mike Masnick. Uh, I hope you're paying your fare, Mike. I won't have anybody jumping out today.
Mike Masnick:Well, the question is, what am I going to rate you at the end of this ride? How many stars?
Ben Whitelaw:which is a content moderation issue in itself, isn't it?
Mike Masnick:Oh yeah. Yeah. There are so many things. Uh, there was a discussion I had recently with someone about how, at least in the U.S., I don't know if this is true elsewhere, but if you ride an Uber, it's basically rate five stars or destroy someone's life. Like, you know, if it's anything less than five, they know that they're in trouble, but...
Ben Whitelaw:There have been some horror stories as well. Yeah. We have the same thing here, you know, like drivers chasing passengers to get them to re-rate their journey, because it has that kind of effect. It's kind of...
Mike Masnick:Yeah. Yeah. But let's just talk real quick about, uh, we're doing a live Ctrl-Alt-Speech at TrustCon. So for anyone who is listening to this, and hopefully many of you listening to this, we'll be at TrustCon in San Francisco at the end of July this year. Uh, TrustCon is an amazing conference. If you have not already planned to go, you might want to consider it. I have been to both of the TrustCons that have happened in the past; this will be the third one. Uh, and I am greatly looking forward to it, but we are going to be recording a live version that, if you are attending the conference, you can attend.
Ben Whitelaw:Yeah, it's very exciting. We've got a couple of great panelists who will be joining us. We'll say more about them another time in the run-up to the event. Yeah, so our first time on the big stage at TrustCon together. And yeah, the benefit as well is that people who can't come to TrustCon will be able to listen to it on the podcast feed as usual. So we'll be recording the session and we'll be putting it in the feed as an episode. So you'll hear it either way, if you're subscribed and following along.
Mike Masnick:But if you're there, you can cheer along and have your cheering be heard on the podcast.
Ben Whitelaw:Yeah. Yeah. We're wondering whether to have questions, aren't we? There's a literal moderation issue there.
Mike Masnick:Yeah. Yeah. We'll have to see, you know; that adds a level of complexity around how well it can be recorded if we're doing questions, making sure people are speaking into mics. We'll see. We'll figure it out. We have time.
Ben Whitelaw:Exciting stuff, brilliant. So let's crack on with today's stories, Mike. Um, as you say, we've got a bunch from around the world, but, you know, you have identified one related to Europe and to the DSA.
Mike Masnick:Yes. So it's kind of interesting in a few different ways, and there are a few different variations of the story, which is that the EU is beginning to explore Telegram, and Telegram is an interesting app. We've mentioned it on the podcast before. I think very early on in the podcast, we had a discussion where the founder of Telegram had given an interview, which he almost never does, and talked about how small a team they have and how few trust and safety people they have on staff. And yet Telegram comes up in all sorts of stories. Recently there was the discussion about the study on NCMEC and child sexual abuse material and the reporting on it, and the fact that Telegram does not report anything to NCMEC.
Ben Whitelaw:It's everywhere and it's nowhere.
Mike Masnick:It's everywhere and it's nowhere. It has also, as Bloomberg is reporting, become a central source of mis- and disinformation, often pro-Russian propaganda, anti-Ukraine propaganda, and there are increasing concerns about all of that and how that works. Now, as anyone listening to this will know, the DSA has been in existence now for a little while, and part of what the DSA is supposed to do, for better or for worse, and people know I have my concerns about how the DSA is set up and what its impact on speech is and will be in the future, but part of the DSA is supposed to deal with and hopefully slow down the spread of mis- and disinformation. And the way that Telegram has so far gotten around that is to claim that it is too small, that it is beneath the threshold, just barely beneath the threshold, to qualify. And so it sounds as though the EU has decided, like, hey, maybe we should look into that, and so is beginning an investigation to determine whether or not it really has the 41 million users that Telegram claims, which puts it just nicely beneath the 45 million threshold that the EU requires to become a VLOP, a very large online platform, under the DSA. And so there are two related and interesting stories, both in Bloomberg this week. One about, yeah, I love the title on this one, "Too Small to Police, Too Big to Ignore," which, you know, also raises some questions about, obviously, how truthful Telegram is being, but also this idea of having thresholds for determining these things. There is a natural sense to it: well, you know, you don't want to apply the same onerous compliance rules to small companies as you do to big ones, but where do you set those thresholds and how do you deal with them? And there are stories in other contexts, it doesn't necessarily apply here, but in other contexts where threshold laws create very weird things where people sort of bend over backwards to stay under the threshold so that they don't have to comply with certain regulations. I forget the exact example, but I remember somebody talking about some threshold law that applies only to companies with 50 employees, and wherever that was, there were like thousands of companies that had 49 employees.
Ben Whitelaw:right right
Mike Masnick:You know, so in this case, it feels like Telegram is perhaps misrepresenting their actual user base. Um, given how prevalent it is, 41 million feels a little low.
Ben Whitelaw:Yeah, I mean, it has 700 million monthly active users globally, right? So, you know, I'm not a massive Telegram user, but the fact that the EU only accounts for 41 million of those does feel a little off. And what we know about messaging apps in really big countries in the world, like China and India, does suggest to me that that figure feels a bit low. I agree.
Mike Masnick:Yeah. So it'll be interesting to see what happens. But then also, I mean, even beyond what happens around the investigation as to how many users they have and whether or not they're a VLOP, even if they do qualify as a VLOP, I'm sort of curious what happens. I mean, Telegram have made it clear that they're not particularly inclined to do anything, uh, on any of this stuff. And that's sort of their attitude and their position. And it would be an interesting situation to see what happens next. How does that standoff play out?
Ben Whitelaw:Yeah. What's your kind of prediction? Do you think that they would pull out of a bloc like the EU if they were suddenly forced to have processes in place, as per the DSA? Like, to Dubai?
Mike Masnick:I'm curious what happens, what the EU does, if Telegram is like, yeah, well, too bad. Like, you know, do what you want. Like, we're not based in the EU. You know, stop us if you can. Because it sort of feels like that's where this seems likely to head. You know, I guess Telegram is technically... where are they based? Dubai?
Ben Whitelaw:Yeah.
Mike Masnick:And so I'm not entirely sure what happens, and that's sort of been their attitude towards, you know, other laws, like laws in the U.S. Again, they don't report to NCMEC, and, as a non-U.S. company, do they have to report to NCMEC? Maybe not. But it was a little surprising that they just completely ignore that, and they seem to ignore lots of other laws as well. So my sense is they would just ignore the EU, and potentially even mock the EU, which, uh, wouldn't surprise me.
Ben Whitelaw:Yeah, I do know that they have, um, all platforms need to have a kind of legal representative in the bloc, so their legal representative is in Belgium.
Mike Masnick:Mm
Ben Whitelaw:And so the Belgian regulator has the unenviable task of kind of overseeing Telegram on behalf of the EU, and it has come out and said previously, in the last month or so, how difficult it is and how problematic it is to try and wrangle with a platform that basically has no presence in the country, really. So that's the closest we get to Telegram from an EU perspective. But yeah, it's super difficult. And what's interesting for me is the way that this call for an audit of the number of users Telegram has, which is obviously what Bloomberg has reported, came about: the Estonian prime minister called for this to happen, and obviously Estonia is super susceptible to Russian disinformation given its position in the EU. So it's basically a call from the top to try and have Telegram designated as a VLOP, of which there are 24 now; a new one was added just today, Temu, the Chinese shopping app. So it's interesting that we're seeing more of these big platforms coming under EU supervision, and also people calling for platforms to be added. Um, and I wonder what that will lead to, really. We still haven't got an investigation into Telegram from the EU yet, but there are five, I think, open at the moment. So yeah, Telegram is really interesting. You're right. And we'll have to see what that yields.
Mike Masnick:Yeah, it is. It's just kind of interesting what happens when you have a fairly large platform that simply doesn't care. And I think a lot of people thought that maybe Elon Musk and X was that platform. But I think Telegram is the much more interesting one in terms of their willingness to just not care about the laws.
Ben Whitelaw:Yeah. Yeah. What can you do? Okay, great. Thanks, Mike. In terms of our other big story for today, I picked out a new report from the National Institute of Standards and Technology, which is something that sounds quite dry. Um, I don't want listeners to turn off, so I'm going to try and explain it as quickly as I can. This is a really interesting new report, actually, by what is essentially a kind of standards agency in the U.S. They do a bunch of things related to creating regulatory frameworks for everything from cybersecurity to astrophysics; you know, they provide industries and markets with the kind of confidence to invest in platforms and technology. And so basically NIST have done their first report in a decade into age estimation and age verification. They've looked at six software systems that estimate age from photos. Now, that's interesting to us, obviously, because age estimation is used by platforms to provide access to different types of content, different types of experiences. In the last few years, it's become really top of mind for platforms as they're trying to ensure that the right sets of users see the right stuff, and Instagram famously have brought in age estimation to ensure that users above a certain age can use the platform. So this report is really timely. And there is this other side, where age estimation is potentially thought to be an issue for privacy and for where data is stored, and we're not entirely clear as an industry what the ramifications of that data being kept are. But this report is really the first time that we've seen comparisons across different age estimation software, at least for a decade or so, and obviously there's been a huge change in computing capabilities and artificial intelligence in that time. Just briefly, on what the report says: firstly, it distinguishes between facial recognition and age estimation, which previously got kind of lumped into the same bucket. Facial recognition is where you're identifying a particular person; age estimation is where you're looking at facial features and trying to ascertain age or certain characteristics. And the way they do that is by looking at 11 million photos across a bunch of government databases, including visa applications and border crossings and things like that. And the results are really interesting. You and I were talking about this before we started: basically, NIST doesn't think much of the software that it has assessed. Um, it says that there is a real range in performance of the algorithms that it assesses, and there's huge room for improvement. And the scores that it gives the different software providers range quite a lot, and it goes into quite a lot of detail. So, for example, Yoti, which is used by Instagram and other social platforms, has got a score of two for people between the ages of six and 70. So Yoti is only out by around two years when it's assessing a photo of somebody within that age range. And yes, it's slightly higher for other providers that have less of an interest for us in the online safety space. And, yeah, basically we're starting to see this age estimation area take shape. And, you know, I know you've talked a lot, Mike, in the past about how there are concerns about age estimation and whether or not we know enough about the platforms that are using it.
And what did you think about the report?
Mike Masnick:Yeah, I thought it was interesting. Obviously, age estimation is a really big and popular topic right now. It's certainly being mentioned in a lot of different regulatory proposals, requiring age estimation as a tool. Some people think that's better than full age verification. I have a lot of concerns about the actual quality of the tech, which I think to some extent the NIST report supports, but also about the privacy implications of it. The idea of having your face scanned to use these services is, you know, potentially problematic in a number of different ways. That said, it's interesting. NIST is a well-respected and very serious agency of the government. The work that they do tends to be pretty thorough and rigorous. And so I appreciate the analysis here, and the fact that they basically note that the technology is still pretty weak. Um, they say it certainly has improved since 2014, which is the last time they studied it, and that seems pretty obvious. You would expect, especially with everything that has happened in the last 10 years, um, that this particular technology would get better, and you would also expect that it is likely to continue to improve. You know, the report also did note that error rates were different for different types of people. So, for example, it was much better at age estimation for male faces than female faces, and they're not entirely sure why that is and how that happened. It could be an issue with the training data; it could be an issue with a bunch of other things in terms of the way the algorithm is structured. Um, there are a whole bunch of questions there. You know, I think it's useful that this is being studied. I worry a little bit that, as this continues, people will sort of look to this, even though it notes the limitations of the software, as a justification: okay, there's like a NIST qualification process here, or something along those lines. You know, for the algorithms that they studied, NIST had put out a request for different companies to submit their algorithms to be tested. So it was only tested on companies that voluntarily submitted their algorithm to NIST to study, and they studied six of them who had voluntarily done it. So there are questions of, are those six the state of the art? You know, what does it mean if you don't submit? There are questions about, if age estimation starts being required by law, then the companies that don't submit to NIST, are they sort of blocked from being used? Or is that no longer considered a best practice? We don't know exactly where this is going to go, but it is interesting to see. I think it's good that an organization like NIST is actually looking at this and looking at the quality of these tools, but it'll be interesting to see sort of how it plays out. I did find it interesting that, you know, you mentioned Yoti, and Yoti has a press release about it that sort of presents this as vindicating their software. And I'm not sure that NIST is really doing that, because NIST definitely makes it clear that the technology still has a long way to go to improve. And, you know, Yoti may have done well compared to the other five algorithms that were submitted in certain categories, but I think that this technology is still at a pretty nascent stage, and there's still a long way to go.
Ben Whitelaw:Yeah, I agree. And there are questions for me about the specific nature of online safety as well. So the database that NIST is using to provide these scores and these benchmarks isn't specific to an online safety context, you know. So there are questions really about whether there are different circumstances or contexts in which somebody might verify themselves to use an app, which aren't really covered in the way they've produced the data. And the other thing is that the thresholds for that kind of work might be different. So the report is very clear that it's not about defining thresholds; different uses of this technology will require different age thresholds. It feels like for a social platform trying to ensure that teens don't use the app, or maybe a site that has mature adult content, that tolerance has to be much smaller, because even if it's only two years out, that could be somebody who's really quite young accessing a porn site. So I don't know if it's quite as bespoke as it needs to be for online safety. But it's still a big deal that they've decided to do this report 10 years after the last version.
Mike Masnick:Yeah. Yeah. No, I think it was important. I mean, a few weeks ago we had talked about, I think this was actually when Alex was here instead of you, about, you know, OnlyFans doing age estimation and being accused of setting the level of it at a different age than they had claimed. And so the claim was that the age estimation software had to predict that you were over 25, I think, but they'd really set it to 21, which was allowing in younger users. And so that error rate becomes really, really important in terms of how this is handled. And there are also other questions about what you do if somebody fails an age estimation check: does that then switch to verification? Does that person automatically get blocked? Are there other remedies or other situations that are taken into account? These are all things that I think people are still figuring out, what the best practices are and how they're going to be used. But it is interesting, and I think important work, to see, you know, NIST beginning to explore these things.
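To make the error-rate and buffer discussion above a bit more concrete, here is a minimal Python sketch of the two ideas in play: a mean-absolute-error style score like the one Ben describes from the NIST report (Yoti's roughly two-year figure), and a hypothetical age gate that adds a buffer on top of the minimum age and falls back to a stronger check when the estimate doesn't clear it. The numbers, function names, and buffer policy are illustrative assumptions, not NIST's methodology or any vendor's actual implementation.

```python
# Illustrative sketch only: not NIST's methodology or any vendor's API.
# It shows (1) a mean-absolute-error style score like the one discussed above,
# and (2) why a ~2-year typical error pushes platforms to buffer their age gates.

def mean_absolute_error(true_ages, estimated_ages):
    """Average absolute difference between real and estimated ages, in years."""
    assert len(true_ages) == len(estimated_ages)
    return sum(abs(t - e) for t, e in zip(true_ages, estimated_ages)) / len(true_ages)

def gate_decision(estimated_age, min_age=18, buffer_years=7):
    """Hypothetical gate: admit only if the estimate clears the minimum age
    plus a safety buffer; otherwise fall back to a stronger check."""
    if estimated_age >= min_age + buffer_years:
        return "admit"
    return "fall back to age verification"  # e.g. a document-based check

if __name__ == "__main__":
    true_ages = [17, 24, 31, 45, 68]
    estimated = [19, 22, 33, 44, 65]  # pretend model output
    print(mean_absolute_error(true_ages, estimated))  # 2.0 years, like the score discussed
    print(gate_decision(23))  # below 18 + 7 buffer -> falls back to verification
    print(gate_decision(26))  # clears the buffer -> admitted
```

The buffer is what the OnlyFans example turns on: with a typical error of a couple of years, the difference between requiring an estimate of 25 versus 21 before admitting 18-plus users materially changes how many underage users slip through.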
Ben Whitelaw:Yeah, agreed. And the wild thing, in a way, which is only really clear from reading the Yoti press release, is just how extensively this technology is used now, right? You know, Yoti itself is used across Instagram, Facebook Dating, OnlyFans, TikTok, Yubo, Epic Games, it kind of boasts in this press release. And I don't think anybody would really know that; most people have never heard of Yoti. And platforms who have used that technology to provide a kind of check have not, to this day, had a NIST report to really rely on. So they've obviously got the assurances they needed some other way to be able to integrate that into their platforms. Um, they'll probably be feeling better now that this report is out, and I'm sure others will too. But it's interesting to note that this has happened without such a report, and it will probably therefore really increase the adoption of this tech.
Mike Masnick:Yeah. I mean, you know, the thing I'll note is that, contrary to maybe the mainstream narrative that some people believe, a lot of platforms out there are trying to make sure that they are safe, that they are age-appropriate in terms of what they allow. And that is a huge challenge and raises a whole bunch of questions, some of which we've discussed here. And so, you know, the fact that these companies have been testing these kinds of technologies and making use of them already is not that surprising if you spend time in the space, but maybe is surprising if you believe the sort of mainstream news narrative that the companies don't care and want to addict kids and destroy their lives, or whatever the narrative is. So...
Ben Whitelaw:Before we move on, to what extent do you think the kind of privacy questions are, I don't know, answered by a report like this? So the age estimation issues, are they lesser than the facial recognition ones, if you were to rank them?
Mike Masnick:Yeah. I mean, if you're going to rank it on a spectrum, age estimation is somewhat less concerning. But, I mean, lots of people point out that age estimation and age verification often do go hand in hand, and so you get to that point where, if you're in that fuzzy zone where it's not clear, then that leads to age verification. There are also concerns about when you start to mandate age estimation. You know, because it's being mandated, if there's a failure, then what, right? So say you're only supposed to let in people over the age of 18 and you use age estimation, and it fails and lets in someone who is 16, then what is the liability situation there? And if there is still liability because you messed up, does that really turn into age verification? Because the company doesn't want to be liable and doesn't want to risk that, so they're going to switch to the more extreme and more privacy-intrusive version, age verification. And therefore you end up with the real privacy concern, even if the age estimation by itself doesn't feel as privacy-invasive as full age verification.
Ben Whitelaw:Yeah. And, but there aren't a lot of alternatives, right? This is the kind of thing that platforms are having to battle with.
Mike Masnick:Yeah. Yeah. This is what everyone is trying to figure out now. And it's, it's a really difficult question. And as with so much in the world of trust and safety, there are trade offs, right? And no matter what you do, there are trade offs and understanding those trade offs and not thinking that there's an easy solution is, is an important part of this.
Ben Whitelaw:Yeah, so yeah, we'll link to the report in the show notes. Definitely worth reading that directly, and then we'll post some of the reaction to that report as well, because it's all interesting as a package. Okay, Mike, we've got a story next that I think will be interesting over the, uh, dinner table, and also when you next speak to your mom on the phone or via email or whatever medium you speak to them on. This is the news that, maybe unsurprisingly, misinformation is mostly spread by older women.
Mike Masnick:Yeah.
Ben Whitelaw:Tell us more.
Mike Masnick:This is a fascinating new study that just came out in Science, you know, one of the major, major science journals out there: "Supersharers of fake news on Twitter." And so it was looking at that, and there is this sort of, again, mainstream assumption that mis- and disinformation spreaders are, you know, young dudes, I think, is the way it goes. Or, I mean, there are all these concerns about teenagers not being able to tell real information from false information. And yet, for a long time, lots of people have said, like, no, you know, the real problem is really old people, often on Facebook, who just don't know any better and believe everything that they see and just don't have the sort of media literacy that younger people are hopefully being taught. And this study sort of supports that, in an interesting way, and found that the biggest sharers of false information, they call them supersharers, tend to be older, middle-aged women, often living in somewhat wealthier areas in the U.S., often in Arizona, Florida and Texas. Um, I'll maybe leave aside commentary about people who might live in those states. Um, but that has become a major vector of the sharing of false information. And this, you know, it's kind of surprising, because so much of the conversation around disinformation doesn't focus on that particular demographic, and yet that's what the study says is where a lot of this is happening.
Ben Whitelaw:Just unpack for us, Mike, how are they able to know that it was women? What's the methodology that allowed them to do that?
Mike Masnick:Now you're going deep into my knowledge, 'cause I read this a few days ago. Um, okay. So they had a panel of 664,000 registered U.S. voters who were active on Twitter during the 2020 election, and then they identified supersharers, the people that accounted for 80 percent of the fake news content being shared on the platform. And then they were able to sort of track who was who based on the data that they had. Um, and so I hadn't gotten that deep into the methodology beforehand, so maybe there are some questions around that, but, yeah, from what I'm seeing, I mean, it looks legit, but I could go deeper into...
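As a rough illustration of the "accounted for 80 percent" framing Mike describes, here is a minimal Python sketch of one way to identify the smallest group of accounts responsible for a given share of fake-news posts in a panel. The toy data and the cutoff logic are assumptions for illustration only; the Science paper's actual panel construction and supersharer definition are more involved.

```python
# Illustrative sketch: find the smallest set of accounts whose shares add up
# to 80% of all fake-news shares in a panel. Toy data; the actual study's
# panel construction and definitions are more involved.

def find_supersharers(share_counts, coverage=0.80):
    """share_counts: dict mapping account -> number of fake-news shares.
    Returns the highest-volume accounts that together reach `coverage`."""
    total = sum(share_counts.values())
    ranked = sorted(share_counts.items(), key=lambda kv: kv[1], reverse=True)
    picked, running = [], 0
    for account, count in ranked:
        if running >= coverage * total:
            break
        picked.append(account)
        running += count
    return picked

if __name__ == "__main__":
    panel = {"a": 950, "b": 800, "c": 30, "d": 20, "e": 10, "f": 5}
    print(find_supersharers(panel))  # ['a', 'b'] -- two accounts cover >80% of 1,815 shares
```

On this kind of ranking, a handful of very active accounts can dominate the totals, which is the dynamic the study's "supersharer" label is pointing at.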
Ben Whitelaw:Yeah. Okay. I mean, that's definitely interesting. It's kind of counterintuitive, right? The idea that it's older women that are the vectors of this stuff.
Mike Masnick:Yeah, I mean, I don't know if it's counterintuitive, but it's certainly not the mainstream picture that people think about, um, and it is kind of interesting. And I thought it was interesting in conjunction with the other study this week, which is not the same, but the study that came out of NYU and the University of Michigan looking at Nextdoor. Yeah, so much of the conversation about social media and disinformation and all that kind of stuff focuses on Twitter, Facebook, Instagram, TikTok. And there are other platforms, and Nextdoor is a really interesting one. Certainly in some ways a controversial one, one that there's a lot of discussion around. I think anyone who uses Nextdoor, you know, has this experience where it's an interesting platform because, you know, basically you can only interact with people who are in your nearby geographic vicinity. It creates a very different experience, and there is no sort of general Nextdoor, and yet it often feels that the experience that people have on Nextdoor is very, very similar, where they often are horrified by their neighbors, where, you know, it seems like a place to raise perhaps unjust concerns...
Ben Whitelaw:Yeah. Yeah. Yeah. We were saying beforehand. It's like, it's always a message that pops up saying there's a man walking suspiciously down the street.
Mike Masnick:Yeah.
Ben Whitelaw:Or, you know, some sort of grainy picture of a door cam, which
Mike Masnick:Yes.
Ben Whitelaw:could be anything
Mike Masnick:There are a lot of
Ben Whitelaw:best view.
Mike Masnick:there. There are also, you know, certainly useful aspects of Nextdoor, in terms of, like, you know, needing to find a plumber or a babysitter or something along those lines. But there are concerns. And so it's interesting that, you know, it's just a platform that hasn't been as well studied, and NYU was studying it, and found some similarities to this other report that we were talking about: that it is, you know, more widely used in sort of suburban, more affluent neighborhoods, which matches a little bit with the disinformation super-spreaders. Uh, you know, I'm not saying there's a one-to-one overlap, and they're looking at two different things, but it is interesting to think about. The normal conception of the concerns that people have about social media is not often looking at local communities and local sharing, and who is doing that kind of sharing. You know, somebody had just pointed out to me recently that Nextdoor is like the prime example of why real names are not a solution to the problems of social media, because people on Nextdoor use their real names and are often horrible.
Ben Whitelaw:Yeah.
Mike Masnick:Um, and so it is interesting to begin to think about different ways in which, you know, what communities there are, how they're intermediated by social media, who are the people who spread different kinds of information in different ways, and just beginning to see these interesting studies starting to come out.
Ben Whitelaw:Yeah, definitely. And, you know, it's funny to think that the cause of misinformation is kind of high-income urbanites with lots of time on their hands and, I think I've said it on the podcast before, twitchy fingers. You know, people worried about what things mean for them, that's kind of a thread that spans across those two pieces. So...
Mike Masnick:yeah, yeah,
Ben Whitelaw:Watch out if you have neighbors like that.
Mike Masnick:We all have neighbors like that, unfortunately.
Ben Whitelaw:You just don't know them yet
Mike Masnick:yeah.
Ben Whitelaw:Um, okay, great. That's really interesting, Mike. Thanks for digesting those pieces of research. Um, you know, talking of, I guess, real-world dynamics playing out online, I was really interested by this piece that CNN published this week about how social media babies, kids who have been captured online, had photos taken of them and pictures posted of them on their parents' profiles, are kind of rising up against this trend and urging lawmakers to protect their rights. This is something that had completely passed me by. It's never something that happened to me, and I don't know if I'd ever do this if I had kids. But basically there's a tranche of kid influencers who are trying to get laws passed in various U.S. states against the monetization of kids' lives online. It's called sharenting, I didn't know this, um, so parents who share online. And there's a couple of laws that have been passed; one in Illinois is coming into effect in July, and a few others are in train as well. Basically, this piece outlines the impact of parents who share lots of photos of their kids, and there are some quite difficult-to-read stories in there about how kids were bullied because photos were posted of them and seen by classmates, or got anxiety because teachers saw stuff that was posted by parents and treated the kids differently. Basically, a very kind of niche online harm, I guess. And it's obviously really difficult for those kids. My reaction really was, like, isn't this a kind of parenting question rather than an online platform question? You know, parents who aren't necessarily aware that sharing photos in this kind of way, without the right settings and without the right privacy, is going to compromise their kids' health. Like, that's a kind of no-brainer to me. Now, maybe that's because I'm of a different age. But it's really interesting to see these stories unpacked for us. I dunno if you, Mike, have ever shared pictures of your kids. Do you think you're at risk of, you know, being taken to the cleaners when your kids are 18?
Mike Masnick:Uh, yeah, so I am very careful about that, in that I do not share publicly very much information about my family at all. To the point that, like, do I even mention that? So, you know, I've always been very careful. That's not to say that that's the right thing for everybody to do, but it's sort of the decision that I made as a, you know, semi-public person: that it is not my decision to make for my kids. It's their decision to make when they're at the appropriate age to make that decision. I will share them in private groups, uh, you know, with family, so that family can keep up on stuff, but I do not share information publicly. And that, you know, that's a choice. And so, you know, it's funny to me, this story really actually ties back a little bit to the last story about, you know, older people who just don't have the media literacy understanding, in that case about sharing mis- and disinformation, and in this case about when it is appropriate and when it is not to share information about their kids, or oversharing. Because I think it is perfectly fine in lots of cases for parents to share pictures of their kids. They're proud of their kids; they want to show stuff. I think where the issue comes in, and where that CNN article really gets deep, is when it's, you know, a parent who is trying to be an influencer, who is going to a different level, and they're doing it for their own benefit, not for their child's benefit or their family's benefit. There are lots of cases where I think it's totally appropriate to share: your kid wins a dance contest and you want to share a photo of them with the award, or they graduate, or whatever it is. I think that's natural, and that's just being a proud parent. Where the issue comes in, and what I think is highlighted in this article, is that these parents are doing it for themselves, uh, and not for their kid. And, you know, I'm making a judgment there and maybe that's unfair, but, reading through this, this is oversharing, and it is entirely about the parent rather than the kid. And that is, I think, in many ways actively harmful. And that's sort of what is coming up in terms of the laws that are coming out. I think those laws are really just kind of matching child actor laws. Like, there were a whole bunch of stories many years ago about child actors where their parents, basically, again for the parents' benefit, were taking all the money that the child actor was earning, and then when the child becomes of age, they're kind of broke and left on their own; the parents made all the money and are often terrible about it. And so there were these laws passed saying that if you're making money, a significant portion of that money has to go to the child, maybe in a trust fund or something that becomes available to them at an older age. And that's what the Illinois law is like; it's sort of matching that kind of thing, but for online parent influencers who are sharing stories about their kids. You know, and again, like, I'm certainly not in a position to tell anyone else how to parent, but there are these questions of, like, how much of this really is on the parents versus how much of it, you know, and some people claim that, like, the nature of social media today encourages people to overshare, and that leads into this.
Um, I tend to think, you know, there is some level of common sense that should be applied. And I understand that you don't have to take a class to become a parent; there are no requirements to become a parent. Um, and so different people have different approaches to it. But I hope that this is the kind of thing that parents will start to learn about: what they feel is appropriate, what sorts of rights their children have to privacy, and how they want to handle that. And I hope that a lot of the issues that we talk about, including around disinformation, are ones that, as the older generation gets older and the younger generation is taught these things and educated on media literacy and privacy, maybe society itself begins to take care of some of these issues.
Ben Whitelaw:Yeah. No, certainly, it's a really interesting little piece. I think it definitely opened my eyes to some of the potential misuse of platforms, even beyond the ones that we know about and talk about a lot.
Mike Masnick:Yeah.
Ben Whitelaw:Okay. So that covers off, I guess, the offline-versus-online little tranche of stories. Talking of how parents monetize their children on a platform, that leads kind of neatly onto TikTok and the story you spotted in the Washington Post.
Mike Masnick:Yeah, yeah. Uh, I was going to say, is TikTok a situation where kids monetize their parents?
Ben Whitelaw:monetize themselves. Maybe.
Mike Masnick:Yeah. Yeah. Well, I have seen some accounts of, like, younger people turning their parents into content; it's sort of the reverse. Uh, yeah, I'm not going to... that's beside the point. This particular story was interesting. Obviously there have been all these discussions around the TikTok ban in the U.S., and there's the legal fight that is currently progressing and was accelerated this week; I think the court set the timing for how that is going to play out, and hopefully they're going to go through that relatively quickly. So it'll be interesting to follow that. But the Washington Post was reporting on some of the details of the deal that TikTok apparently offered the U.S. government. Some of this I'm pretty sure I'd heard before, so I'm not sure all of it is new, but it was talking about, like, Project Texas, which we've spoken about in the past, sort of taken to another level, where TikTok effectively offered to give the U.S. government almost full control over TikTok. I mean, it would allow them to determine who was hired in certain key roles and certain, you know, oversight roles. They also offered to give the U.S. government effectively an off switch: if they felt that TikTok was not following these rules, they could turn the app off. And it struck me as, like, that's crazy for a bunch of different reasons. You know, the article presented it with this idea that the U.S. government wanted to control TikTok and was worried about it, so why wouldn't it take this deal? Because basically TikTok was offering it everything it could have possibly wanted. And the way it's presented in the article is that the U.S. government just ghosted TikTok. They just stopped replying. They were having these conversations, TikTok offered this up, and the U.S. government just, like, stopped responding. Uh, the thing that struck me that was not in the article was that this would be terrible for a wide variety of reasons, including that, as soon as you give the government that level of control and it's some sort of official deal, there's an issue that we've spoken about in the past, the state action doctrine, which is that companies can moderate freely and do what they want because they are not state actors. They are private actors, and they can make those decisions; that's part of their own First Amendment editorial discretion. As soon as they are state actors, then they have to abide by the First Amendment, meaning they cannot moderate in most cases. And so I think if the U.S. had taken this deal and TikTok had gone by it, there would be very credible lawsuits, not like the Murthy lawsuit, which, as we've discussed, was not credible at all, but there would be very credible lawsuits claiming that TikTok was now a state actor and therefore could only moderate based on the First Amendment. And so I almost wonder if the reason the U.S. rejected this deal was that some lawyer there was like, this would be the opposite of what we want, because it would actually limit what TikTok could do in terms of its moderation capabilities. But it's not mentioned in the story at all, and it just struck me as a really odd omission.
Ben Whitelaw:That's really interesting. So, this idea of the deal going dead, do we have any more information about, like, why that was? Or do you hear anything on the grapevine?
Mike Masnick:It's unclear. I mean, this was part of the discussion with CFIUS, which is the agency that regulates sort of foreign ownership of companies in the U.S., and they had been negotiating with TikTok for a while. It came out a little bit in the lawsuit when TikTok filed it; they said, you know, look, we've been negotiating. This begins to get into the First Amendment weeds, but there are questions of, under different types of scrutiny under the First Amendment, is this the least problematic way of dealing with it? You know, uh, were there other options that would have less of an impact on speech? And so I think part of TikTok's argument is, like, we were having negotiations on other ways, short of banning the app entirely or, you know, requiring divestiture, that were less intrusive versions of this. But it's unclear exactly why the U.S. stopped having that conversation. I mean, a lot of this around the whole TikTok ban is just fuzzy, because nobody wants to talk about the history of what's actually happening. There are all these vague concerns about national security or propaganda. And again, there are different situations based on that: if it's a national security concern, you handle that in one way; if it's a concern about propaganda, you handle that in another way, and in fact the First Amendment sort of prevents you from handling that. And so it's unclear exactly what the reasoning was, what the thinking was. I feel like this particular story was probably, um, leaked from the TikTok side, uh, to sort of present it like, hey, look, we tried to give the government everything that they wanted and they didn't take it. But my guess is that the U.S. government probably looked at that, and I'm sure they had lawyers who said, like, wait, this will never work, because this would just create a huge different kind of headache.
Ben Whitelaw:Yeah, it kind of casts the ban in a slightly new light on both sides.
Mike Masnick:yes.
Ben Whitelaw:It kind of maybe explains why the US government pursued a ban, because this alternative obviously has massive holes and massive reasons for not being pursued. And, you know, also on the TikTok side, like you mentioned, it kind of suggests there were other options on the table which were ignored. So it's clearly all in the run-up to the decision on the ban happening, right? So we'll probably see a bit more of this.
Mike Masnick:Yeah. And I mean, obviously there are other things that could have happened in between these two different options; it's not just ban or give the U.S. total control. You know, I think TikTok is really trying to present it as, like, hey, look, we were willing to go and do whatever you wanted, whatever would make the U.S. happy, short of being banned or short of divesting. And so, you know, it'll be interesting to see how this plays out, whether or not this impacts the lawsuit at all. But it did strike me as kind of an extraordinary idea that the company even offered this. And it does raise other questions, which did come up a little bit in the Murthy case, around, like, Meta saying, around COVID, we're just going to accept what the CDC says is mis- and disinformation. Um, and that's one thing that begins to toe this interesting line of: as a private company, you can say we are deciding to rely on the experts, and the experts happen to be government, but at what point does that switch over to, like, now the government is deciding who gets blocked and who doesn't? That's a really interesting and kind of meaty legal question, and it's not clear that there's a totally clear answer on that. Though just yesterday the U.S. Supreme Court came out with a ruling in the Vullo versus NRA case. Not going to get too deep in the weeds on that; it was heard on the same day as the Murthy case. And there the unanimous Supreme Court said that when there's coercion by the government to try and get a platform to moderate in a certain way, well, this wasn't moderation, but when there's coercion by the government to get third parties to act in a certain way based on disfavoring certain speech, in this case it was a New York official trying to get insurance companies not to work with the NRA and effectively threatening investigations of the insurance companies if they didn't cut business ties to the NRA, it basically said, like, yeah, there are clear threats and coercion there. But what happens in the case where the company is acting willingly? You know, what if you take that same case and the insurance company is going to that same elected official or regulator and saying, like, hey, tell us who not to work with? Then what? Like, is that still a First Amendment violation or what? And so I think that's sort of a big question that's coming that we'll have to deal with.
Ben Whitelaw:Yeah. Interesting. Yeah, good to have TikTok back on the agenda after a few weeks of not being mentioned. It's always there lurking in the background. And, uh, to wrap up today's episode, our final story that we wanted to talk through is actually another DSA story, so we're bookending today's episode with some more DSA. Tech Policy Press have done a really nice series of articles a hundred days on from February the 17th, which was the day when the DSA became applicable for all online intermediaries in the EU. And there's a whole number of articles that I'd recommend going away and reading, but the piece I really enjoyed and thought was really interesting was an interview with Ireland's digital services coordinator. We've talked about DSCs before; they're the kind of role appointed basically as a coordinator between the national regulator and the EU, to be a bit of a bridge between the two. And this guy, John Evans, from Ireland's new media regulator, the Coimisiún na Meán, which is, uh, hopefully a pronunciation that will please my Irish ancestors, talks about scaling up the regulator from 40 to what will be 200 people at the end of the year. He talks about splitting between broadcast as one of the focuses, but also having a large contingent of people focusing on online safety, and basically some of the challenges in doing that, some of the teething issues it has had. Lots of countries have had issues with finding DSCs, which we've talked about before; Ireland was one of the first. And I think the interesting thing here is the fact that he's doing an interview at all. You know, the DSC is a relatively controversial position in some senses, because you kind of become the lightning rod for potentially any issue of speech in the country. Ireland had a rough time during COVID, when basically a number of alt-right and right-of-center media organizations went after government attempts to try and police COVID information online. And so I was always thinking that DSCs might actually try and stay a bit more below the radar, so the fact that this interview is happening is really interesting. It's quite pleasing in a way as well. Ireland's obviously famous for being the European headquarters of lots of big platforms; I think 13 of the 23 VLOPs, prior to the newest one joining today, are headquartered in Dublin, so it has a, you know, potentially very powerful regulator within the bloc. And, you know, this John Evans also mentions the kind of dynamics at play between the EU and the national regulators. There's a really interesting quote about how platforms have been really willing to talk to him and his team; however, when the EU opens up an investigation, things change and the dynamics change, which...
Mike Masnick:Really?
Ben Whitelaw:isn't that surprising. Um, I would have been surprised if it didn't happen, but it kind of explains the complexities of, I guess, regulating at these two levels. And, um, yeah, it's just a bit of a look behind the scenes at how the DSA is playing out.
Mike Masnick:Yeah, I mean, the thing that struck me as most interesting was that last point, right? You know, so much of the framing around the DSA, and we talked to people who are involved in the process, and regulators in Europe, and even some of the companies, was this framing that this was not supposed to be a U.S.-style regulation where it's all hammer, right? It's all enforcement and it's all threats and do-this-or-else. The DSA was supposed to be this more cooperative experience, in which the regulators were going to work with the companies and find solutions: you know, we're going to work together on this and find the best way to do it and figure out the best practices and come to some agreement. Which always struck me as a little idealistic and not particularly realistic. And I think a lot of the big companies certainly recognize, like, there's no way around this system, you know, other than Telegram, as we discussed earlier; we have to buy into the system. And so the fact that they were happy to, I don't know about happy to, but they were, you know, very communicative with the DSCs early on and talking to them is not that surprising. But then the second the folks in Brussels open the big investigation, which could have massive, massive consequences, they clam up. And of course that's going to happen. That's what any lawyer at these companies is going to tell them: once the investigation starts, like, shut up, you're only going to make things worse for yourselves, and let the lawyers take over, effectively. And so it'll be interesting to see what the longer-term impact is. I hope that the DSCs continue to talk about it and we find out more information, that there is some transparency here. I think that's great, because, you know, the model was that this was supposed to be a collaboration between regulators and the companies to get towards better solutions. And if it turns into an antagonistic investigation-and-punishment regime instead, I think that will suggest the DSA was a failure for what people were hoping for originally. Um, so, but we'll have to see. It's still super early, obviously.
Ben Whitelaw:Yeah, definitely. I think there's a risk of losing sight of the fact that both regulators and platforms are actually aiming for the same thing, which is a safe kind of online space. And...
Mike Masnick:I'm not sure people in Brussels realize that, but that's a, it's a little bit of commentary there.
Ben Whitelaw:Yeah, getting in at the last there, Mike, as usual, with your DSA critique. Um, okay. Well, we've run out of time today. We've done a whole bunch of, I'd say, medium-sized stories, rather than our two big and the rest small. But I think we've covered a whole bunch of ground, and yeah, a really broad conversation. Hopefully listeners took a lot out of that. That was this week's news and analysis; we'll wrap up there. Thanks very much, everyone, for taking the time. The taxi stops here, Mike. You're going to have to get out. Uh...
Mike Masnick:And now we get to the rating point.
Ben Whitelaw:we'll see you next week.
Mike Masnick:All right, see ya.
Ben Whitelaw:Bye bye.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.