Ctrl-Alt-Speech

Digital Oligarchs Gunning for Europe (DOGE)

Mike Masnick & Ben Whitelaw Season 1 Episode 46
Ben Whitelaw:

So Mike, I'm guessing you've probably used at least one of the big Chinese platforms, the marketplaces that everyone is using nowadays. Shein or Temu, I don't know. Maybe, is that mic stand from one of those platforms?

Mike Masnick:

No, no, no. The mic stand is not from one of those. I have used the original one of those, which is AliExpress. Uh,

Ben Whitelaw:

Oh yeah.

Mike Masnick:

purchased many things from AliExpress, but I have not gotten into the Temu, Shein elements of this world.

Ben Whitelaw:

Okay. Well, you don't need to anymore, actually, because Amazon, you may have seen, late last year introduced Amazon Haul, which is essentially the same thing. Super cheap goods from China, you know, at your convenience, basically. And I thought I'd use their user prompt, their homepage prompt, at the start of today's Ctrl-Alt-Speech podcast. Amazon Haul prompts you to find new faves for way less.

Mike Masnick:

Well, given how costly the folks ripping apart our government are, I'm hoping that we can find a new governmental system, for way less, for everything that is possibly being, uh, ripped apart in our government, so that we might have a functioning US federal government.

Ben Whitelaw:

That seems like a good trade. I'm not sure if Amazon Haul has that, but

Mike Masnick:

Yeah. What, what about you? What, uh, what new fave do you want?

Ben Whitelaw:

Well, I'm actually in need of a new hat, a new hat. It's pretty cold over here in the UK. So I'm going to buy myself a new fave from Amazon Haul, maybe one with the logo that says Ctrl-Alt-Speech, we make sense of the chaos.

Mike Masnick:

There we go.

Ben Whitelaw:

Hello, and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's February the 6th, 2025, and this week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm with Mike Masnick, editor and founder of Techdirt, who has had a hell of a week.

Mike Masnick:

Yeah. I mean, you know, there's a lot of stuff going on.

Ben Whitelaw:

Yeah, yeah, I think that's fair to say, Mike. I mean, not a politics podcast, but we should kind of address the elephant in the room, which is the fact that Elon Musk and others in his immediate circle are getting access to the federal government in the States and doing crazy shit with it.

Mike Masnick:

Yeah. Yeah. Which has been interesting. And, you know, one of the interesting things is that, because so much of what is happening is sort of Elon Musk driven, the folks who spent a lot of time following what he did with Twitter in 2022, 2023 and so forth seem to recognize what's actually happening much faster, and with much more clarity and depth, than political reporters who are treating this as sort of an ordinary transition in power. And, you know, I think the best reporting on it has really come from Wired, which is a famous, old-school technology magazine, and some of the other tech reporters as well actually have a deeper, better understanding of Elon Musk, the way he works, what it is that he's doing, and how incredibly crazy it is. Because, as folks who followed this stuff will remember, I mean, he bought Twitter, and he had every right to do this, but he went in and just quickly made it clear that he had no care and no intellectual curiosity to learn how things worked. He just assumed that he knew how it must work, was wrong on almost all of those things, but made a whole bunch of very, very quick decisions, fired a ton of people, broke systems, ripped out servers, and just assumed this was all stupid, that nobody who was there before was smart, they were all, you know, woke idiots, and therefore we can just rip stuff up and nothing will go wrong. And of course, lots of stuff went wrong, and they lost all their revenue, and a whole bunch of other problems cropped up. And he's using the exact same playbook on the US government. And that is absolutely terrifying for a long list of reasons.

Ben Whitelaw:

Yeah. I mean, you wrote about it this week on Techdirt, the kind of Twitter deconstruction playbook. I mean, we could spend hours and hours talking about just that single story. We're going to try and branch out a little bit, because it has been covered so extensively, but just talk a little bit about what it's been like writing on Techdirt this week, because you have seen this huge increase in readers, and we've seen a bunch of people come to old episodes of the podcast, and last week's episode with Renée DiResta was one of our largest yet. So just talk a little bit about what that's been like from your own perspective.

Mike Masnick:

Yeah. I mean, it's been, it's been overwhelming and crazy. I'm working on less sleep than usual and just trying to keep up with everything. I mean, there's way more stuff than anyone can possibly keep up with. Even larger newsrooms will have trouble keeping up with this stuff. But it's been fairly overwhelming, sort of trying to figure out which of the things the traditional media is having trouble putting into context, and figuring out, you know, can I take that and put it into context, and speak to experts and find out what's really going on, and then figure out how to explain it in a way that makes sense. And that has been a lot of my week. And it's somewhat terrifying, just because we've seen how this plays out in a case where it doesn't really matter, right? You know, with just a second-tier social media network, an important one, an important one for voice. And we've sort of seen how that opened things up and allowed for things like Bluesky, again, I'm on the board, disclaimer, blah, blah, blah, but also Mastodon and everything else. We've seen how that enabled those things to spring up, which was interesting. But what is the Bluesky of the United States, right? You don't have the same exit path that you do with a social network. And there are a whole bunch of really important things. And the fact that we're seeing, over and over again, the same thing, where Musk and his team insist they know what's going on, and don't understand it, and have no interest in learning why things are actually done the way they are. And, you know, it sort of culminated with one of his, he has this 25-year-old kid who worked for him, who was given, first of all, terrifyingly enough, access to the Treasury, a nonpolitical thing. It was run by a guy who had worked there for 40 years, and who was, as somebody in the former Trump administration said, you know,
at no point did I have any sense of what his politics were, because his job there is to be the sober person in the room. That person got pushed out, and instead this 25-year-old kid who worked for Elon Musk was given full access to the system that handles US government payments. Six trillion dollars go through it, and he was apparently given write access to it. It's this old system that is, you know, using COBOL code, and the kid was making changes to it without testing it. There are all these terrifying things that could go wrong. The system is set for migration this week. Nobody knows how that's going to work. This morning, a court stopped them and said that they have to have read-only access, which is already terrifying enough. So he can't write to it anymore, and I've seen reporting saying that the write access has been turned off. But this is terrifying. It's not Twitter, right? You could break Twitter and the world will move on. You break the entire United States, and the world becomes a very different place.

Ben Whitelaw:

And you're right, there are eerie parallels between what he did at Twitter/X, you know, going in, firing the whole trust and safety team, getting rid of the kind of adults in the room, and what he's doing here within the kind of Department of Government Efficiency. And this will probably be out of date by the time this podcast goes out, but like, what are you thinking will happen in the next kind of seven days? Like, by the time we next record, what are you kind of expecting will happen?

Mike Masnick:

It's just more craziness, right? I mean, we don't know, right? I mean, the other reporting that came out last night was that another, you know, Musk employee, a kid, was basically put in charge of controlling the systems from NOAA, which, you know, handles oceanic and weather stuff and all of these things. And then everybody who works for NOAA was given an order not to communicate with any foreign national, which, like, that's what they do, right? They have to communicate with other people about weather patterns and things that are happening. Like, are there hurricanes in the Caribbean? Is there a typhoon in the Pacific? All of these things, that's their job. And you have these kids who think they know everything. I mean, at one point, you know, some idiot on Twitter, X, whatever, was saying, oh, Elon Musk is so brilliant, the first thing he did was come in and say, like, well, we have to see where the payments are going, no one has ever thought of that. What do you mean, no one's ever thought of that? Of course people have thought of that. All of these systems are in place. They just act as if everybody else must be an idiot and they are the only ones who think of these things. And the reality is, they don't understand all of the different reasons why things were done. You know, is there waste in government? Of course. Right? Is there fraud and abuse in government? Absolutely. But there are reasons why systems are in place, and if you're going to change them, you have to understand them first. And there's no interest in that. They're not coming in to try and find out why these things are in place. The only interest they have, and again, there's this 19-year-old kid who recently graduated high school who was going in and demanding people explain their jobs, not to learn from them, but to see whether he could fire them, basically. I mean, the whole thing is just, it's so scary.

Ben Whitelaw:

Yeah. There's something very apt about the United States not knowing, figuratively and in real terms, when a typhoon or a hurricane is going to hit the country because of an edict that says you can't speak to foreign workers abroad about this stuff. I think that is insane. I would also just, you know, shout out the media organizations, as you say, that have been doing this work this week. I mean, we cover a lot of their work, we analyze a lot of the reporting that comes out of the outlets that you've mentioned, and obviously you do a lot of writing yourself on this, Mike. So I think it's worth shouting out those guys for covering this in a really crazy time. And also, we spent the best part of two hours prior to recording trying to figure out what stories to cover and how to make sense of it for listeners. So it's been a crazy week or two, and it's likely to be like that for a while. So that, I think, probably is a point where we can dive into a part of the US government, Mike, that we've touched on before, and somebody we've come back to time and time again on Ctrl-Alt-Speech. Before we do that, did you know that Jim Jordan is a two-time wrestling champion?

Mike Masnick:

Unfortunately, I do know that

Ben Whitelaw:

Did you? I was reading his Wikipedia and I was like, this guy's got range.

Mike Masnick:

Well, the history though, right? Like, you know, before he was elected to Congress, he was a wrestling coach in Ohio, but there's a huge scandal of abuse involving his wrestlers that, it has been reported repeatedly, he looked the other way on and was aware of. And these are, like, one of the scandals about Jim Jordan is how, under his leadership as a coach, there were some sexual abuse scandals that he never took responsibility for. So yeah, he's, you know,

Ben Whitelaw:

Okay, okay. There's a pattern. Let's say there's a pattern. Well,

Mike Masnick:

yeah,

Ben Whitelaw:

He's in the news this week again for a letter that he sent to the European Commission's Henna Virkkunen, who is the kind of tech boss of the EU and took over from Thierry Breton last year. This is one of a number of letters that he's sent the EU over the last six months or so, Mike, but it represents a bit of a change in tack. Talk us through kind of what you read about the letter.

Mike Masnick:

I mean, you know, I don't know. Is it a change in tack? I don't know. I mean, it basically is, you know, screaming at the EU about the DSA and how it is a censorship bill. And I mean, if you've listened to me on this podcast and on Techdirt for the last few years, there's some stuff in here that, you know, I am concerned about the DSA and how it can be used for censorship, and I've called out Thierry Breton and his attempts to use it. And so some of that is repeated here, but this is a fairly aggressive letter, which is also sort of in Jim Jordan's style. You know, he talks about how the DSA requires social media platforms to have systematic processes to remove misleading or deceptive content, including so-called disinformation, even when such content is not illegal, and says that though nominally applicable only to EU speech, the DSA as written may limit or restrict Americans' constitutionally protected speech in the United States. And so he's sort of calling out these things, and basically, in a very accusatory way, saying, like, if anything DSA-related leads to the suppression of American speech, then the US Congress might take action in some form.

Ben Whitelaw:

Yeah. He calls for a briefing as part of the letter, doesn't he? So he says to Virkkunen, you know, we need a sit-down for you to talk about what the DSA means for us, and we need it by the 14th of February, which I think is, you know, romantic. Yeah.

Mike Masnick:

A Valentine's Day, uh, meeting.

Ben Whitelaw:

Yeah, maybe we can kiss and make up. Um, but yeah, there's a kind of, like, we need to be told what the plan is. I think that's for me where there's a slight change. It's, you know, it's on the front foot. It's trying to demand, to seize control. And I think, in the way that prior to Trump being elected, there was maybe slightly less of, uh, of backing from the

Mike Masnick:

Yeah, and I actually do think that element is really important, right? So before, even though in the House the Republicans were the majority and therefore had power within the House, that power was limited to what the House had, right? They didn't have power in the Senate, and they certainly didn't have power in the executive branch. Now Republicans have all three, and so the nature of this letter is sort of reflecting that, which is that the whole of the US government may actually listen to Jim Jordan, rather than him just sort of making noise and being obnoxious in the way that he is normally obnoxious. Now, there was another thing, which is not in the letter itself, that is interesting. We talked about the sort of Politico article that alerted us to the letter. There's another Politico article, which I actually think might have been interesting if they'd combined the two, which says that Musk and Jim Jordan have had a mind meld, that the two of them are working very closely together. And so suddenly you start to put this letter into a slightly different context, which is: is Jim Jordan doing this on behalf of the US citizenry? Or is he doing this on behalf of one Elon Musk, who is, as we have just discussed, now sort of running large elements of the government while also running a social media platform that is being targeted by the EU and the DSA? And to whose advantage is this? And, you know, there are all sorts of issues about conflicts of interest, but this article talks about how Elon and Jim Jordan are buddy-buddy and are constantly working with each other now.

Ben Whitelaw:

Interesting. I hadn't seen that. That's a really interesting,

Mike Masnick:

I actually saw it, like, literally as we started recording, and I was skimming it during the intro here.

Ben Whitelaw:

That's how fresh this podcast is. And obviously, you know, last year we talked at various points about Jim Jordan and Elon Musk and GARM, and, you know, the reports that his subcommittee wrote about the kind of awful effects of GARM and, again, the industrial complex that was surrounding the advertisers taking away their spending from Twitter. And so there are continuations of a theme, but there was also a kind of emboldening, I think, of a message here, which is: the DSA, we're not happy, we need to be told what's what. The irony about all of that, Mike, is that in the very same week, Joel Kaplan, who is the new global policy chief at Meta, and obviously a Republican in terms of his political leanings, the new Nick Clegg, was in Brussels, essentially defining how EU speech should work. You know, he was on a livestream in Brussels and was declaring to a bunch of Brussels wonks how the announcement that Meta made earlier this year, around changing the policies on the platform, getting rid of fact-checking, adding community notes, was also going to be rolled out in the EU, he predicts in 2026, and that he was ready to work with regulators in the EU on how that would work. So the timing is very funny. You have Kaplan, on behalf of Meta, in Brussels saying, this is how we're going to do things, we expect this to be happening in the next 18 months. And then we also have Jim Jordan requesting more information about the DSA because he feels the EU is infringing on, you know, the way that US companies work. What did you make of the Kaplan stuff?

Mike Masnick:

Yeah. I mean, it is a massive step up in terms of the aggressiveness of Meta on the EU, right? I mean, we've had years now where Meta has been, I honestly think, too compliant with the EU on stuff. Right? I mean, there have been different things, and pushback, and fines, but for the most part, Meta has been really willing to kind of bend the knee to the EU when the EU says, we're going to regulate social media in these ways. The notable thing is, media reporting on it has always set it up as, like, oh, the EU is cracking down on Meta, but Meta has been sort of a happy participant, and really sort of embraced the DSA approach to regulation, because they had indicated that they could handle it, and they sort of recognized that smaller competitors would have a more difficult time. And in fact, you know, within the US context, Meta had been really kind of going around to both the federal government and various state houses and kind of hinting at the fact, like, look, if you pass a law that's kind of the same thing as the DSA, we're not going to complain about it. And the unspoken part of that was, like, look, we're already complying with the EU rules, we'll comply with the US ones, and we know that it'll cause trouble for upstart competitors. And that's the thing that Meta has been scared of, right? Because their growth had sort of stalled out, and you saw the rise of things like TikTok, that came in and really took Meta by surprise. So their move on the political front had been, let's embrace regulations, because that makes it harder for upstarts to compete. And so this is a big change. And I think this also came up in the Joe Rogan and Mark Zuckerberg interview, talking about the EU, with Zuckerberg saying, like, any kind of foreign regulation is the equivalent of a tariff, and we expect the US government to basically have our backs and protect and defend domestic industry. Which is a very different way of looking at it
than the way, you know, the tech industry has looked at international trade around technology and internet issues forever, right? Since the beginning of the internet industry. And so they're looking at it as industrial policy and something to be fought over. And so Joel Kaplan then going to the EU and basically saying this, he doesn't do that if he doesn't think that the whole of the US government under Donald Trump has his back. And we've had a couple of, like, weird skirmishes around tariffs, uh, in the US in the last week as well, that, that

Ben Whitelaw:

I've been, I've been following.

Mike Masnick:

Yeah, it's just, you know, again, sort of outside the purview of this podcast, but, you know, among those things, Trump did suggest that he's also looking at tariffs on the EU. And so it's all sort of part of this negotiation. And Kaplan and Meta seem to think, like, well, we can go to the EU, make our demands, and we'll have Donald Trump and Elon Musk and Jim Jordan to effectively back us up. And so, for all their talk over the last four years about how perfectly fine and perfectly in compliance they were with the DSA and the DMA and related laws in the EU, I think they just see this as: look, we have this very stupid, very brash, willing-to-smash-and-break-things person in the White House right now. Let's take advantage of that and see if we can smash and break all these other things in a way that just favors us. And this is how we're seeing it play out.

Ben Whitelaw:

Yeah. I mean, I can almost hear the European listeners of Ctrl-Alt-Speech say, hey, actually, the EU is 27 states and almost 500 million people. Naturally, Meta is going to have to kind of negotiate in the way that it has done in the past, you know? So I think there's that. But you're right, there has been a compliance there, to use a term that Daphne Keller has been using increasingly, because there are competitive benefits to doing so. And that has seemingly changed. The other thing that was interesting to me was a note in the reporting on Kaplan's comments about the code of conduct, which we've talked about in the last couple of weeks as well. You know, he was referring to the AI Act, the official legislation that was passed last year and came into play this year. There's apparently a code of conduct that sits alongside that, which is additional voluntary obligations that the companies can sign up to, and he's basically said that we're not going to sign up to those. And it made me think about the fact that we have the hate speech code of conduct and the disinformation code of conduct, one of which has already been subsumed into the DSA, the other of which is likely to be subsumed into the DSA, and how, again, the relationships are becoming much more tense, much less about voluntary agreements and signing up to stuff willingly, with the hammer of regulation being used instead.

Mike Masnick:

Yeah. This is all uncharted territory, in lots of ways. And obviously, part of my role on this podcast is to be the critic of EU regulations, because I have my concerns about EU regulations. And so I'm in this weird position where I'm reading some of this stuff, and there's this kernel of truth within the complaints that the American companies have with regulations, complaints that I have made about these same regulations. And so there is this truth, but they are not doing this for honest and good purposes, right? This is just, like, a smash job, right? We're just going to break these things because we have the opportunity to. And honestly, I'm torn, because, like, would I like these laws to be better? Absolutely. Do I have concerns about how these laws work? A hundred percent. I've been calling them out since the DSA was first being discussed, and those concerns still stand. But I am perhaps equally, maybe more, concerned about what happens if the EU caves to these demands, because that will embolden them to go further and do more and make things even worse. And so again, this is part of my distress at this moment, which is, like, there are reasonable arguments behind all this, but they're going about dealing with it in the most destructive, damaging, horrific ways possible, which will have long-term, potentially extraordinarily negative impact. And you can take any of these quotes or comments from Kaplan out of context, and I might agree with it and say, yeah, some of these EU regulations are really problematic. But the intention here is basically to destroy any sort of regulatory state, any sort of administrative state. And where that leads is extraordinarily dangerous.

Ben Whitelaw:

Yeah. And we'll hopefully get a readout next week of this briefing that Jim Jordan has asked for, and we'll know a bit more then. But I also am concerned about if Virkkunen and the EU do cave. It seems crazy that we're going to talk about Meta again. A friend of mine, Mike, was at the event that we ran last week in London, and she was saying that right now feels like a kind of Meta dead zone. We've just been talking about it forever. And in some ways, today's kind of second big story is a continuation of that theme. You and Renée talked about it last week as well, but it's the reporting we've seen come out this week around advertisers and their feelings towards the platform in the wake of Mark Zuckerberg's announcement. A couple of new stories, from Digiday and from Marketing Week, essentially saying that within the kind of advertising ecosystem, there's been really no discernible impact on spending by major players. And yeah, you touched on it last week, they're unlikely to say so because of the risk of being targeted. But we also saw in Meta's earnings call last week that they have also seen no change in advertiser spending in the short time since that announcement was made. At the same time, there's an interesting kind of sub-thread in a couple of those stories, and in a story from the Wall Street Journal, around some advertisers going back to X, slash Twitter. So the Wall Street Journal reports that Amazon has started to raise its ad spending on X in recent months, and a few larger brands, who decided that X was kind of persona non grata for a while, are considering once again spending on the platform. And I wondered what you thought were maybe the kind of drivers of this, Mike. You know, there's a cynical take, which I imagine you might give us. Um, but where

Mike Masnick:

Cynical? No,

Ben Whitelaw:

where do you see these shifts happening, and when do you think we'll know fully whether, you know, brand safety is still, you know, alive and well?

Mike Masnick:

Look, so much of this is just sort of marketing, right? I mean, it's for show. Brand safety is still a thing. Brands matter, right? And if something bad is happening with the brand that harms the bottom line, every company will react that way. The major difference, and this is what we discussed last week with Renée, is that companies aren't going to talk about it. And so it's not a surprise to me that companies are taking a wait-and-see approach. I mean, you and I talked about this last month at some point: ROI on Meta ads is way better than anything ads ever were on X, and so they're important as a part of strategy. And so I think a lot of companies and a lot of advertisers are concerned about this, but it was one of these things where it's like, well, let's not make a big show of this, because that leads to attacks from Jim Jordan and Elon Musk, and potential lawsuits, and all this stuff. So let's take a wait-and-see approach. Let's see how this actually plays out. We don't need to make a big stand. The sort of public sentiment at the moment feels like they don't want us taking a stand on this. In the past, when we did take a stand, it was because public sentiment was in one direction, and now it's in another. So I sort of understand the kind of wait and see. But if there is actual brand safety stuff, I think that companies will focus on their bottom line, and if it is damaging to their brand to be advertising on a certain platform, they will start to move away. Some of that will just be general public sentiment. That is a different story than the X story, this story of advertisers potentially returning to X. Now, there have been similar stories for over a year. Every few months, there's a major publication that writes a story saying such-and-such advertisers are returning to X, and it is touted, often by Elon himself and his fans, as, aha, see, everything's coming back to normal. The details are always less than enthusiastic about it.
There are often examples of them returning at much lower volumes, five percent of what they used to spend, ten percent of what they used to spend. And, you know, obviously the overriding issue right now is the fact that it is all just to get into the good graces of the person who is currently running the US government, which is Elon Musk. And so, like, Jeff Bezos has done a bunch of things with the Washington Post lately to signal that they're going to be much friendlier to the Trump administration. So another way to indicate that, and to give a public signal of that, is to say, oh yeah, we're going to bring Amazon back to advertising on X. We don't know how much, we don't know how involved, but sure, let's do it. And I think that is what other companies are doing too. It's like, well, right now Elon Musk has so much power. If we don't want to be in the crosshairs, if we don't want to get sued by him, maybe the easiest thing is to take some advertisements out. Now, this is, again, horrifying in general if you think about free speech and the fact that people are effectively being coerced into giving money to the world's richest man, who is also running the government at the same time. There are all sorts of reasons to be terrified by that. You know, when you put it that way, it seems pretty bad, right? It's problematic on a variety of levels. But I think the reasons why advertisers are doing what they're doing on Meta versus X are two different things. One of them is sort of currying favor, and the other is kind of, you know, this platform has been okay for us, let's wait and see if it actually turns as bad as it could. But if there are moments of actual attacks on brands or brand safety problems, then I think companies will act the way that they always act.

Ben Whitelaw:

Yeah, talking of lawsuits and bringing claims against advertisers: only this week, Musk and X brought a whole bunch of new advertiser brands into a filing that they'd, uh, submitted last year. And so we've got new conglomerates and corporates, Shell, Nestlé, Palmolive, Lego, who've been added to this as well. And again, there's a kind of, I guess, willingness to roll out this playbook for whatever brands he sees fit, to kind of force them onto the platform.

Mike Masnick:

Yeah, it was interesting. I mean, it's, the same lawsuit as from last year. The one that, that effectively brought down Garm. it's just been expanded. It's this, their proposed second amended complaint. and so the court still has to accept it and add in these new defendants. but that'll probably happen. now remember this case was originally filed with a very friendly judge, Reed O'Connor, who was sort of willing to bend over backwards, but he had recused himself from the case because he held stock in one of the advertising defendants. So it is now in front of a different judge. Uh, who is not known as being quite so partisan, let's say. and so, the case may go, you know, a little bit more seriously than other cases, but we'll have to see, it would still roll up to the fifth circuit, which is crazy as it gets there, it was interesting to me that among the companies that were added here were. Twitch, which is owned by Amazon. We were just talking about Amazon and Jeff Bezos and caving. So it'll be interesting to see how that plays out if Twitch stays in the lawsuit, but also Pinterest. And so to me, both Twitch and Pinterest. are in some ways competitive to x and the idea that they should be forced to then advertise on x Seems really problematic from a whole wide variety of things And in fact I was sort of confused by their inclusion in the lawsuit. And even the way it was written, like, they're talking about how much they normally advertise and how much they spend on promoting themselves, but is that always on a competing platform? I don't, I don't know. The whole thing just felt a little weird. Like maybe this was like a backdoor way to attack. competitors for social media attention. and so it'll be interesting. It is still just, it is a ridiculous lawsuit in so many ways. I, I need to stress that because I feel like we sort of skipped over that part. 
I know we talked about it last year: the idea that choosing not to advertise on a platform represents an illegal boycott is absolute nonsense. There are Supreme Court rulings on record about how boycotts around things like this are speech. They are a form of speech. They are protected by the First Amendment. The kinds of boycotts that are illegal are ones done for anti-competitive purposes, which maybe is why he's adding Twitch and Pinterest, to try and claim that there's an anti-competitive element to it. But that is just obviously ridiculous and silly. And so we'll have to see. I'm hopeful that, before a more reasonable judge, quick work is made of this case and it gets dismissed quickly. But then of course it'll be appealed to the Fifth Circuit, where anything goes.

Ben Whitelaw:

Yeah. Okay, so we'll see how this pans out. I, for one, wonder to what extent we're going to see a big brand safety event like we did in the mid-2010s; that's what I'm wondering is in the kind of near future. If you remember, and you would be very privy to this, the big YouTube story of 2016, 2017 was terrorist videos, ISIS videos, being uploaded onto YouTube and other platforms and having large brands advertised next to them in a way that was completely unsuitable. That led to massive changes in how the platforms vetted advertising and tried to provide a kind of brand-safe environment. It feels like we're going down a similar route, like there's going to be a similar kind of event, but it does rely, as you mentioned, on media reporting it in a way that is nuanced and recognizes that this is happening. I wonder, and worry actually, that it's going to get lost in the noise of everything else; that this is no longer something we can expect from platforms, that if you want to advertise in a way that allows your brand to sit next to content that is not egregious, not harmful, and not offensive to users, we've kind of lost sight that that's even possible. So yeah, it goes back to your point earlier about how the reporting from the likes of 404 and other tech sites who really do good work here is going to be so key over the next few months, I think.

Mike Masnick:

Yeah. I mean, we'll have to see, right? The whole thing with advertising and brand safety is that at times it did get overheated. I think most users recognize that when advertising appears next to any particular piece of content, there's an algorithmic component to it. It's not like AT&T is choosing to advertise next to pro-Nazi content or whatever. And so the actual impact on brands may be overstated, but there is this general sense of: if an entire platform is supporting the fascist takeover of the United States, do you want to be helping to fund that? And that is the type of thing that could boomerang. You know, I remember going way back to the early 2000s, when there was this big freakout over a programmatically delivered advertisement for a set of steak knives or something showing up, I think it was on the New York Post, next to an article about a stabbing

Ben Whitelaw:

right.

Mike Masnick:

and that was like the beginning of this: oh, when we have these algorithmically driven advertisements, they might show up in a way that feels inappropriate. And then the media can mock it, and then the companies get all worried, 'cause they're like, we don't want to be responding to media requests about how come our knife ad is showing up next to a stabbing. So those kinds of things will happen. It will depend on the media, but the media has a lot of other things to focus on that are probably more pressing than whose ads are showing up where. So we'll see how that plays out and where things go. I will note, and this is a very, very small sample size and probably not indicative of anything, but right down the hall from my office, sort of across the way, there had been a brand safety company, and I noticed recently that it's gone. Maybe they moved. I have no idea.

Ben Whitelaw:

They've tripled. They've trebled in size.

Mike Masnick:

Perhaps. Perhaps. I just noticed that they were no longer on my floor in my office building.

Ben Whitelaw:

Right. All right. Yeah, I mean, this will be an ongoing conversation we have, I think. And there are big brand safety summits coming up in the next couple of months as well. So I would love to hear from listeners if they are attending those, or if they're in the industry and want to share their thoughts. 'Cause it's on a bit of a knife edge as to where it goes.

Mike Masnick:

Yeah, and I'm sort of curious, given the way that Musk and Jordan have attacked the whole concept of brand safety, what that conference is going to be like. How big is that conference, or are people going to be afraid to go and attend? Because it feels like, similar to the way the whole concept of trust and safety has been negatively attacked, so too will the space of brand safety be. And that seems like a concern.

Ben Whitelaw:

Yeah. Okay, let's move on to a story that has been kind of bubbling along in the background the last few weeks, but that we haven't necessarily talked about actively on the pod. This is the story of DeepSeek, which many of our listeners will have heard about: a new model that's been trained at a fraction of the cost of some of the other big general LLMs. And what we're seeing this week is a couple of parts to the DeepSeek story, Mike, but you picked out a story from 404 about a possible lawsuit being filed against people who download DeepSeek. Which would include you, I think.

Mike Masnick:

Well, that depends on how you define all of this. And it's not a lawsuit, just to clarify. Senator Josh Hawley, who's problematic in all sorts of ways as well, introduced a law that would effectively extend a ban on imports and exports to cover Chinese-related AI, which would include DeepSeek code. Now, the question is, what does that actually cover? And that becomes a lot more complex, because what is DeepSeek, actually? What everybody is talking about with DeepSeek is this model, and the model is open source. But DeepSeek itself, as a company, also offers a hosted model, just like ChatGPT or Google Gemini or whatever, that you can go log into. Because the code is downloadable and you can run it, there are lots of people running it on their own laptops, if you have a powerful enough laptop, which is actually not that demanding, and there are lots of people hosting versions of the DeepSeek code: they downloaded it and then created their own hosted version of the same model, or perhaps a slightly adjusted model. Because it's open-sourced, you can take it, adjust some of the weights, and mess around with it. I know Microsoft recently announced that they're launching a version of it, and I have played around with two different US-hosted versions, rather than the Chinese-hosted version. So it's unclear to me what happens under this law if it passes. And again, it's just been introduced; the likelihood of it actually going anywhere, I don't know. But it would criminalize the downloading of designated Chinese AI, so you could, in theory, go to jail for using open-source code, or even just for downloading it, which is very problematic.

Ben Whitelaw:

Yeah, 20 years you could be imprisoned for, or fined not more than a million dollars. Which is nice, that they cap it at a million.

Mike Masnick:

And, speaking of 20 years, it has actually been more than 20 years since we fought some of these battles around import/export controls on software code, over encryption. In the 1990s, the US under the Clinton administration tried to ban the export of high-grade encryption software, saying it was a form of a weapon, and effectively tried to block it. And the court said no, and basically gave us one of the first rulings indicating that software code is speech, and that under the First Amendment you can't limit it in this way. I think that kind of fight, and those kinds of rulings, come back into question if this passes. But at the same time, we're living in the wake of the ruling on TikTok, which we have talked about, where suddenly the Supreme Court sort of said the First Amendment doesn't matter as much when we're talking about China. So how does this play out? If this becomes law, does the Supreme Court say it's okay? And then what does that mean? Because it's been downloaded so many times, and Microsoft is offering it, and multiple other services are offering a version of DeepSeek.

Ben Whitelaw:

mm

Mike Masnick:

And then on top of all that, we didn't really talk about DeepSeek itself in terms of how it works. I mean, you mentioned how it was trained in a way that was much cheaper. It's being referred to by lots of people as sort of a Sputnik moment, this sort of realization. That might be a bit of an exaggeration, but it is a wake-up call for the belief that only the companies behind the so-called frontier models, you know, the big models from OpenAI, Anthropic, Google, Facebook's Llama, can afford to actually do the training, which costs hundreds of millions of dollars. Whereas DeepSeek is basically saying: we could do this with cheaper GPUs, because the US already bans export of the most powerful GPUs, and we can make this model a lot simpler and a lot lighter. Now, I'll note, just as an aside, because it'll come up no matter what among commenters: OpenAI effectively claims that part of DeepSeek's secret sauce is that they distilled an OpenAI model. I'm not going to get deep into what distillation is, but it is an important element of how this model was trained. We know that they used distillation; they admitted that. The question is whether they did the distillation on OpenAI's model, and then what does that mean? Does that violate a contract? All those things we're going to leave aside. But the model itself is very lightweight, very cheap to train, and incredibly powerful in terms of what it does. And as I mentioned, I've been playing around with it, and it's a really good AI model. It's very impressive. It has limitations: the default model has somewhat famously been directly coded, in a very obvious, clunky, and awkward manner, to not talk about things like Tiananmen

Ben Whitelaw:

hmm.

Mike Masnick:

Taiwan sovereignty. And some of the people who have downloaded versions and run them locally have figured out how to strip those restrictions out and get the model to actually comment on reality properly. But there are some concerns there. And then obviously there are some other concerns as well. I think you found a report on a study of DeepSeek. Did you want to talk about that?
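[Editor's note: since distillation gets name-checked above but not defined, here is a rough sketch of the core idea: a small "student" model is trained to match a larger "teacher" model's full output distribution, not just its top answers. This is a generic illustration of a distillation loss, not anything DeepSeek or OpenAI has published; the logits and temperature values are made up for the example.]

```python
import numpy as np

def softened_probs(logits, temperature):
    """Softmax with a temperature; higher T spreads probability mass,
    exposing more of the teacher's 'dark knowledge' about wrong answers."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Minimizing this pushes the student to mimic the teacher's whole
    output distribution, not merely its single most likely token."""
    p = softened_probs(teacher_logits, temperature)
    q = softened_probs(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

# A student whose logits already resemble the teacher's incurs a
# much smaller loss than one that disagrees.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.2, 0.4]
far_student = [0.5, 4.0, 1.0]
```

In a real training loop this loss (or a cross-entropy variant) would be computed per output token and backpropagated through the student while the teacher's weights stay frozen.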

Ben Whitelaw:

Yeah, there's been a wide conversation about how safe this model is, when, you know, it's been trained, like you say, so quickly and so cheaply compared to other frontier models. And there's a piece of analysis that was put out by a company called Enkrypt AI, which calls itself, like they all do, the leading security and compliance platform, and...

Mike Masnick:

leading. We're the leading podcast,

Ben Whitelaw:

Exactly, exactly. They should come and sponsor an episode of Control Alt Speech. But they did a kind of red-teaming analysis and found that the model was much more likely to generate harmful content than OpenAI's o1 model. It threw out lots of bias and discrimination; there was cybersecurity risk, with it spitting out malicious code and viruses much more readily than some of the other models tested in the past; and it was much more likely to produce responses that were toxic, that contained profanity, hate speech, and kind of extremist narratives. So again, the trade-off between producing lightweight models that are very agile and can do things very quickly, and the safety side, which obviously some of the big frontier models have had to focus on because of the companies they're coming out of, is an interesting dynamic that we're seeing, and I think we'll continue to see it play out. It's worth going to have a read.

Mike Masnick:

Yeah. And the fact that the model doesn't have as many safety things built in didn't strike me as surprising. I sort of expected that, right? It's a cheaper model, and the company is not commercializing DeepSeek right now in any way. In fact, that's raised a bunch of questions about what the real purpose of this is. The company's CEO has come out and expressed, like, traditional open-source, helping-the-world sentiments: we just want to make this available to everybody. People question how truthful that is. There's the argument, because it's attached to a hedge fund, that maybe they were doing this to short sell a bunch of American companies that may have lost a lot of money when DeepSeek suddenly became popular. There are all sorts of conspiracy theories; who knows what the reality is. But they have less incentive to actually spend the time on building in safety aspects. And I think the important thing to recognize there is that this is the future, right? We're going to see more lightweight but very effective models, and there is going to be less incentive for them to have the safety training. So the thing I would say to folks listening who are interested in safety specifically is that relying on the model makers to build in safety is probably not going to work, and we have to start thinking about safety at different levels of the stack, in terms of who's actually implementing and using these models. As I mentioned, with the DeepSeek code, Microsoft is hosting a version of it and others are as well, so we might need to start looking at who is hosting these models and who's providing access to them, rather than the model makers themselves, for the safety stuff. But on top of that, we also need to get more literacy among users, right?
I mean, this is where it always comes back to, which is that, at the end of the day, the users of these products need to understand the safety implications of what they're doing. And we may be overcorrecting when we expect other intermediaries to stand in the way and be the safety cops.
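[Editor's note: the "safety at different levels of the stack" idea above can be sketched as a wrapper that a host puts around whatever model it serves. This is a deliberately minimal, hypothetical illustration: the blocklist, stub model, and policy message are invented, and real hosts would use trained classifiers rather than substring matching.]

```python
def unsafe(text, blocked_phrases):
    """Toy output filter: flag generations containing blocked phrases.
    Stands in for whatever moderation check a host actually runs."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in blocked_phrases)

def hosted_generate(model, prompt, blocked_phrases):
    """Host-side wrapper: the safety check lives at the hosting layer,
    so it applies no matter which open-weights model is plugged in
    underneath, including one trained with no safety tuning at all."""
    raw = model(prompt)
    if unsafe(raw, blocked_phrases):
        return "[response withheld by host policy]"
    return raw

# Stub standing in for any locally hosted open-weights model.
def stub_model(prompt):
    return "Here is how to build a computer virus step by step..."

BLOCKED = ["computer virus", "malware payload"]
```

The design point is that the filter is owned by the host, not the model maker, so swapping a different open-weights model underneath doesn't change the safety behavior users see.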

Ben Whitelaw:

Yeah. I mean, how does open source play into that? Because so many of the models are open source: Meta's Llama is open source, DeepSeek is. You're kind of suggesting that part of the problem is that when somebody takes the code and hosts it themselves, they don't have the necessary awareness of the safety challenges. So how does the open-source element play out against the safety element?

Mike Masnick:

That's a trickier question to answer than you would like, right? Open source in the AI world is slightly different from open source elsewhere. With open-source AI models, we still don't know the training data. It's not like open-source code in most contexts, where you get access to all of the underlying code; with the AI models, it's really the weights that you're getting access to. So you can implement it yourself, but you still don't know how it was trained, and what biases were built in you still have to discover separately. Then you can adjust and change some of the weights yourself, maybe deal with the safety aspect, or you can add an overlay or filters or other things along those lines to deal with the safety questions. But you're dealing with multiple different things, right? The underlying training is still a secret; this is why OpenAI came out and accused DeepSeek of basically building off of their own work. So again, it's just a question of where and how you insert safety-related features. It could be in multiple places, and how that's done, we'll see. Whereas really, at this point, you have Llama and Meta, who are trying to build in safety, or at least were; maybe now, with the new super-enlightened Mark Zuckerberg, they will stop those practices. And you had other open-source models. There was Mistral, which came out of France,

Ben Whitelaw:

Mm hmm.

Mike Masnick:

though they've moved from open source to a more closed-source approach as well. So this is all sort of in flux right now, but we'll see. Just the fact that DeepSeek was able to do what they did means it's almost certain that we're going to see more open-source models released that are powerful, that are way cheaper than the existing frontier models, and that are going to lead to some changes in how the market is viewed. And there will be tremendous temptation to make use of these models if they're much cheaper to use. So where the safety aspects come in is going to be a big deal and something people need to think about.

Ben Whitelaw:

Yeah. I'm waiting for the story, you know, the first story where DeepSeek is used for some sort of giant, I don't know, attack on the US federal government. It's...

Mike Masnick:

something like that will happen. Absolutely.

Ben Whitelaw:

Yeah. Okay, Mike, we've probably got time to mention one more story very briefly, which is one that I stumbled across, and it takes us back into Europe and to the regulatory regime there. This is a story in which Apple is very kindly warning us about the perils of allowing other app stores onto the iPhone. There's an app store called AltStore PAL, which is actually an open-source app, and it's designed for kind of sideloading other apps onto the iPhone. Apple gave a story to Bloomberg this week in which it said it was worried about the fact that AltStore has got a porn app; users can now use AltStore to download a porn app which aggregates lots of pornography from various different sites. And Apple is kind of suggesting that this is a problem. For context: over the last few years, Apple's come under fire under the Digital Markets Act, with the suggestion that Apple has a dominance in terms of the marketplace for apps, and only last June an investigation was opened into Apple's dominance of the marketplace. So this is essentially Apple trying to push back against that a little bit. It's a slightly cynical attempt to make everyone worried about the DMA, and it suggests that users are going to become less safe if Apple is forced to allow these kinds of sideloading apps. So again, a little bit naughty from Apple, suggesting that this is going to open up a whole range of harms to users in the EU. What do you think about it, Mike?

Mike Masnick:

Uh, it's nonsense. I mean, I have all sorts of criticisms of the EU regulatory regime, as I've talked about. This is not it. This is Apple on the fainting couch: oh my goodness, somebody could install a porn app. You know what? You have a browser on your phone. You can access all sorts of pornography if you want. This is not the concern. As for the fact that you have alternative app stores, I don't know why Apple's making a big deal of this. On Google Android, you can have alternative app stores. They have existed for a long time. None of these fears have ever been proven true. Most people don't use them. Most people are not going to alternative app stores to download dangerous apps. There are all sorts of reasons to complain about the EU, and maybe this ties back to the earlier story, where Apple sees this as an opportunity to just try and smash the EU regulatory regime right now, but this is not it. It seems silly. It seems very, very prudish: oh my gosh, someone might be able to get porn on an iPhone. Oh, come on. I don't understand why Apple's doing this. I think everybody should be able to see through it as just a nonsense complaint.

Ben Whitelaw:

Yeah, yeah. It's big companies pulling up the drawbridge again, isn't it? Which is something we've talked about. So those are kind of four stories for our listeners this week. We've tried to distill everything that we've heard and that you've written about this week, Mike. It's been a hell of a pod this week. Thanks for trying to unpack everything for us, and sorry you've had such a kind of shitty week, basically. Listeners, thank you for taking the time to listen this week and for staying with us all this time. We're going to be back next week, but we're going to be recording on Friday rather than our new Thursday slot, so tune in then. And if you enjoyed this week's episode and all of the episodes that we put out, do rate and review us on your favorite podcast platform; it really helps us get discovered. Thanks for joining us this week. Take care. Bye.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
