Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Age Against the Machine
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by Jess Miers, law professor at University of Akron School of Law. Together, they discuss:
- AI Chatbots: Last Week Tonight with John Oliver (HBO)
- OpenAI’s Sam Altman apologizes to Canadian community after failing to flag mass shooter’s conversations with its AI chatbot (CNN)
- OpenAI And Sam Altman Could Face Dozens More Lawsuits Over School Shooting In British Columbia, Lawyer Says (Forbes)
- Manitoba premier addresses province’s plan to ban youth from social media, AI chatbots (CTV News)
- Turkey Passes Legislation Barring Children Under 15 From Social Media (NY Times)
- How YouTube Took Over the American Classroom (WSJ)
Support the podcast by joining our Patreon, with special founder membership available until May 28th.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Mike Masnick: So, Jess, I know that you were a, a big internet user, and certainly, I'm guessing here, of the age where Vine was popular. That sounds like something you were into. Did you, did you use Vine?
Jess Miers: Yes, I, I was an observer of Vine. I didn't create Vines, but uh, yes, Vine was, I think, my college days. Yeah.
Mike Masnick: Cool. So as some people listening to this may know, this week a new company showed up on the scene called Divine, and it basically relaunched Vine and, uh, brought back all the old six-second looping videos that at one point Twitter had bought, only to shut it down soon after that. But now there is Divine, built on Nostr, which is an open social protocol, which is very cool. But the tagline, I have no idea if this was the tagline on the original Vine, 'cause I did not use the original Vine, but the tagline on this new Divine says, wander the network, find your people. So Jess, tell me, how do you wander the network and find your people?
Jess Miers: Well, I have to ask, why would I need to wander the network and find my people in today's day and age when I can just code my people? I can build my own agents with Claude.
Mike Masnick: Nice, digital people. Very, very, very nice. Um, for my part, I will mention I had a wonderful experience this week with Rabble, who is the creator of Divine, along with Mike McCue, who's the creator of Surf and Flipboard, which are also some very cool open social network protocols. Um, the three of us actually recorded a podcast that will be going out later, and then we got to go set sail on Mike McCue's very nice sailboat in San Francisco Bay on a lovely evening. We had a very nice evening sail the night before Rabble was going to launch Divine. It was kind of funny. We were on the sailboat and he's sitting there saying, like, as soon as we get back to land, we have to get ready to launch Divine. So.
Jess Miers: Wow. Did you make any Vines while you were on the boat?
Mike Masnick: Rabble did. If you look at Rabble's account, you'll see some short six-second looping videos, including of a seal that we saw on another boat, and us floating past Alcatraz. It was a lot.
Jess Miers: Nice, I love that. I, uh, I look forward to checking this out too.
Mike Masnick: Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It is April the 30th, 2026, and this week we are talking about kids and chatbots. And kids and chatbots, and some more on kids and chatbots, and maybe something about kids and YouTube. We have a bunch of things to talk about, some of which may be related to each other. I am Mike Masnick, the founder and editor of Techdirt, and as we mentioned last week, Ben is off on a well-earned paternity leave for the next month or so. And so I am finding wonderful guest hosts to sit in his chair in his place, and this week we have an absolutely wonderful guest host, Jess Miers, law professor at University of Akron School of Law, and an occasional contributor to Techdirt when she has something she's willing to give me, expressing her opinion and thoughts on whatever it is in the world. How are you doing, Jess?
Jess Miers: I'm doing great. I'm so excited to be on this podcast. I am a longtime stan of Techdirt. I listen to the Techdirt podcast. I love the Ctrl-Alt-Speech episodes as well. And thank you for, anytime I wanna rant about something, giving me the outlet of Techdirt, and waiting months upon months for me to get around to actually writing these topics too. So it's wonderful to be here.
Mike Masnick: Yes, yes. Yeah, I always enjoy your very thoughtful and sort of deeply nuanced writing and, uh, always appreciate it when you have something that, that you wanna say. Even if sometimes I do have to nudge you a few times to say, are you, are you really writing that piece?
Jess Miers: It's the magic. The magic is in the editing, Mike.
Mike Masnick: There we go. Uh, very nice. I would also like to remind folks, if you listened to last week's episode, we announced that we are starting a Patreon, and a bunch of you have already signed up. It's very exciting. We are thrilled to see that. But just as a reminder, if you sign up in the next month or so, I think until May 28th, which is when Ben returns, we have a discounted founder signup. And when Ben comes back on May 28th, we're going to move to a slightly different format. We will still have the free podcast that you can listen to, but one of our in-depth stories will go to only Patreon subscribers. And there are two different levels at which you can subscribe with different benefits, but the founder level is only available until we move to that new format. So you should jump on that, just like many people already have. You can find in the show notes the link to the Patreon page, but it's just patreon.com/ctrlaltspeech. C-T-R-L-A-L-T, speech, you know how to spell speech. Anyways, as always, we will suggest that you rate, review, subscribe, all the stuff that people say. I think that's the wrong order. I don't know. Just, you know, do things to help the podcast. We appreciate it. It's very nice. But with that, we want to get into our stories of the week, and the first one, I am very excited for this conversation. A while back, I actually forget how long ago it was, I meant to look before we started recording and I didn't, so I have failed in my job as host. But a while back, John Oliver, the wonderful host of Last Week Tonight, uh, a very funny man who does deep dives into all sorts of subjects, usually I really appreciate them and think they're very good, especially on complex and nuanced topics. And we had done a Ctrl-Alt-Speech episode where we talked about his episode on content moderation, and said it was one of the few times we'd seen a mainstream media representation of content moderation that actually got at the nuances and the impossible choices that were there. And I really appreciated that. And I know that his team does really deep dives and usually deep fact checking and research and trying to understand all the nuances of a subject. Usually they're very successful at that. Occasionally they are, maybe not so much. And that was my initial reaction to the latest episode of Last Week Tonight, which talked about chatbots, and a little bit on sort of the potential association of chatbots and suicide. I had concerns about it. Jess, I know you had some concerns about it. Do you wanna sort of run us through the episode, and then maybe some of your concerns?
Jess Miers: Yeah, I'll give some high levels. So, um, I echo everything you said. I am, I'm a big John Oliver fan. It, it would be an honor if anyone from his team ends up listening to this episode as well, so just starting out with that. And I assign, um, the content moderation episode, I've assigned that to my internet law students this past semester too. So, um, I agree, I thought that was really well done with a lot of nuance, which is why I was a little bit surprised about this past episode. So the episode is primarily about chatbots, the way that people are using chatbots. And it starts out at the very beginning, it's kind of humorous, with how people are using chatbots to explore religion. There's a, a joke about people wanting to talk to Satan using chatbots, for example. And actually chatbots and religion has become a huge use case in the chatbot world. People are finding their way back to faith and spirituality, whatever that means for them, interpreting religious texts, again, whatever that means for them, using chatbots. So it is, it is a, a big use case. It goes then to Meta AI's role-playing chatbots. We see, I think, a discussion between a user and AI Snoop Dogg, uh, that, uh, Mark Zuckerberg had displayed at one of their, uh, conferences. And so the through line through this discussion is that people are using chatbots for these different use cases, in ways that could be potentially harmful, and that the chatbot companies have designed them in this way for that use. Uh, John Oliver says the idea is that these chatbots are designed to, quote, prey on our desires to be validated and affirmed. And then we see these use cases of these chatbots being used in that way. So there's one use case where you had a user who had actually said, look, I know that this chatbot is a chatbot. I know it's not a human.
But the person was using the chatbot because his wife at the time was, uh, in a caretaker position for a parent, I believe, who had dementia, and the user was saying, at times it gets lonely, because his partner, his wife, sort of had this, what we call empathy fatigue. And so throughout the relationship, it's understandable, when one partner is sort of dedicating their entire emotional bandwidth to an ailing parent, for example, the other partner might start to feel not validated, lonely. And so the user was talking about how, look, I know this is a chatbot, I know this isn't a real person, but he says, every once in a while, it's nice to hear these words of validation, affirmation being reflected back to me. So that was sort of one really important case. And, and I'll flag, you know, look, validation, affirmation, hope, these are considered protective factors for things like suicide and self-harm. So it was one use case I thought was really interesting, but it kind of got played out as sort of like a joke, like, look at this person who has developed this relationship with the chatbot. And to me it sort of came off as, well, making fun of somebody who is experiencing something that is inherently very normal and very human, and is now turning to a tool to sort of find some sense of normalcy in this instance in his life where his wife's emotional bandwidth is not there, and understandably so. The conversation then moves from, okay, these use cases where people are using them for loneliness and affirmation, validation, into, well, how are people using it for more harmful use cases. The show talks about AI sycophancy, for example, where folks who might be experiencing some sort of delusion, the AI might be pushing them, or might further validate those delusions. And of course, that could be considered harmful.
But then there was another discussion about how users could potentially use chatbots to do things more nefarious, like how to build a bomb. I know we're gonna talk later about some of the mass shooting discussions and how that kind of content came into play. In the building-a-bomb discussion, John Oliver shows a, a demo on the screen of trying to ask a chatbot, how do you build a bomb? And what we see is, over and over and over again, the chatbot sort of flags its guardrails and says, look, we're not gonna help you do this, until it finally does. And that tees up John Oliver to talk about how people are able to jailbreak, they're able to get around the guardrails of these chatbots, in order to do worse and worse things. The segment ends with, I wanna say, about a five-minute discussion about the suicide cases, and John Oliver presents some of the tragic suicide cases. We hear about the Adam Raine case, which I'm happy to get into. Um, we hear about the recent Gemini Vallis case. Again, happy to get into more of that discussion. And it ends with, essentially, John Oliver praising laws that are coming out currently in California and New York and other states that would essentially require these chatbots to develop, quote, protocols to prevent suicide ideation, or cease communications entirely. One protocol suggested in the California law is to throw the 988 number, which John Oliver then puts on the screen and says, you know, look, it shouldn't be that hard for these chatbots to do this, right? And the segment sort of ends with John Oliver calling these companies and calling these CEOs, quote, suicide enablers. And, uh, they have these sort of suicide robots that are causing suicide. The conversation sort of ends there, which is where my concerns start to come in. So hopefully that was an okay overview. I'll throw it back to you, Mike, for your take as well.
Mike Masnick: I, I, I think that's a good overview. And I know that over the last few months, certainly, you've been doing specific research focused on this idea of AI chatbots and suicide, and where the connection is, and sort of how to deal with, you know, mental health challenges with different individuals, and the relationship to these tools. So I would like to dig into that side of the conversation. We can talk about some of the other stuff later, but what parts of that conversation were most concerning to you?
Jess Miers: There was one discussion with the CEO of Nomi. I mean, throughout the episode, there were sort of these very short segments of interviews with CEOs. And look, I worked for Silicon Valley tech CEOs, I did my tenure. Silicon Valley tech CEOs are not exactly the, the most brilliant PR people when it comes to the way they talk about their companies. But actually the Nomi CEO, Alex Cardinell, I'm not sure if I said his name right, he was actually, to me, I, I went and watched the entire interview instead of just the short segment.
Mike Masnick: It was, they took a clip from Hard Fork, the, the Hard Fork
Jess Miers: 2024 interview with Alex. And actually, I thought his interview was one of the more grounded and sort of human kind of interviews that we see from tech CEOs. And so the very short clip that the John Oliver show showed as a segment, or sort of what they were teasing, was that when the CEO was asked on the Hard Fork podcast, what do you do about self-harm content when that comes up on your service, the CEO had said, well, we trust the Nomi to sort of figure out what the right response is. And the clip ends around there on the John Oliver segment.
Mike Masnick: Just to clarify, the Nomi being the AI character that is conversing with the human user.
Jess Miers: Yes. Like, leaving the idea there was, the Nomi AI sort of knows best and should respond accordingly. And I believe there was a part of the segment on the John Oliver show where they had said, well, yeah, you wouldn't want it to break character. And that's where John Oliver sort of jumps in and says, well, maybe it's good that we break character every once in a while. The reality is, actually, there's a ton of nuance to this. And from watching the entire interview with the Nomi CEO, with Alex, and from reading the research on how psychologists, how clinical psychiatrists actually approach people who are experiencing active versus passive suicidality, Alex's approach is actually not that far off. For example, if you watch the entire interview, Alex talks about, look, if the Nomi breaks character, we know that our users, they're going to detect corporate speak, and if they detect corporate speak, they may be less likely to accept the help that Nomi might want to suggest later. If Nomi wants to suggest, for example, the Suicide and Crisis Lifeline, or Crisis Text Line, or some sort of local resource in that user's area, people are more willing to accept help when that help comes off as genuine, authentic, and empathetic. And so that was sort of the missing component on the John Oliver segment, was the CEO was saying, look, Nomi has been interacting with each of its users for quite some time, and the CEO says it has a very, very good memory. So it knows that, look, if this person told 'em, I had a bad day at work, it can tailor its individualized response to each user as it detects a user potentially stepping into a crisis zone. And what we're finding from the research is that an individualized and personalized response to those users is actually something that's more likely to get people, one, to disclose what they're feeling, but also, two, to accept help.
Mike Masnick: And I think that part is, is really interesting. I think there's this overlay, and this came across in the Oliver piece, which is that if, if you believe that the technology is just inherently bad and inherently harmful, and that it is only sort of a hallucination machine, then you think, well, it can't create a personalized response that is actually helpful. And I think that was part of my concern with the Oliver piece, was that it doesn't take into account that, you know, yes, like, there are bad CEOs out there and there are bad companies out there, but it appears that, you know, Nomi in particular seems to have actually thought this through and looked at and understood the research of, if someone is having a crisis, how do we actually get them to accept help? Which is a huge challenge, as I understand it, in dealing with people in crisis.
Jess Miers: Yes. There's a lot of different suicide prevention protocols out there, because, again, suicide is very individualized and very personal to each person. And so, you know, for example, I have said this before in pieces that I've written, I said this on Bluesky: 988 is kind of the default, where, you know, if you experience any sort of signs of passive or active suicidality, throw the 988 number. Well, there's a multitude of reasons why 988 might not be effective for a person, depending on what they're experiencing at that period of time. But it seemed to me that in the show, at least, John Oliver might've been advocating for that default response: at any sort of sign of crisis, the Nomi should throw 988. And that's why I found Nomi's CEO to be sort of unique on this approach of saying, look, take the user where they are, rely on Nomi's memory of all of the discussions the user's been having, and come up with a more personalized approach, so that if somebody is experiencing this, they will be more willing to accept help or the resources that the Nomi might suggest.
Mike Masnick: Yeah. And to be clear, like, there is evidence and research that 988 is actually very, very helpful. But you have to actually get people to want to, or be willing to, call and to have that discussion, and figuring out how to get people to that point is a very, very tricky problem, and one that it sounds like Nomi is, is trying to think through: how do we do this in a way that will actually get people to accept help? And I understand, like, personally, it, it is tough. Like, I understand the instinct, as soon as you see anyone discussing any element of suicidal ideation in any sort of way, to be like, ah, like, you know, get them help immediately. But if immediately switching to corporate speak and immediately just shoving 988 in front of them is going to make them not willing to do that and make them turn off, that could actually be more problematic. And I get that this is really, really difficult and really, really nuanced, and I'm glad that I'm not designing chatbots that, that are in that situation to figure out. But my impression of the John Oliver segment was that, whereas normally I expect him to be really nuanced and thoughtful on these things and, especially based on, like, the content moderation episode, to recognize that there aren't easy answers here, instead it felt like he immediately jumped to, like, how dare they not just immediately break character and throw 988 on, on the screen.
Jess Miers: Right. And the other thing that I noticed throughout the episode as well, I mean, it starts at the beginning and it's, again, a through line through the episode, is, maybe these chatbots shouldn't have been released in the first place. The idea being that they were rushed to market, that they should have solved the suicide and self-harm problem before these chatbots ever entered the market in the first place. And again, going back to his content moderation episode, I was surprised to hear those points, because the content moderation episode recognizes the content moderation impossibility problem of, we will never fully solve these problems. And in fact, as you put one of these services on the market, you then discover that there are new ways in which people are harming themselves or breaking guardrails. It's always going to be a cat and mouse game. So I, I really was surprised by sort of the disconnect between, content moderation is hard on social media, and then all of a sudden we're sort of expecting people to solve the suicide problem first, as if there is a perfect solution. Which, by the way, you know, the laws that he advocated for, those two laws, the law in California and the law in New York. The law in California, the way it is written, assumes that AI can perfectly mitigate suicidal ideation. And again, the way you read it, if you don't have a protocol in place to prevent suicide ideation, you should cease communications with the user entirely. And just as we've seen studies that, you know, flashing 988 at the wrong time, corporate speak, um, might turn people off from being able to accept help, ceasing communication with somebody who is in active crisis can also be very dangerous. It's why, again, a lot of these protocols, they actually say in training, if you're going to help somebody who is experiencing active crisis, you need to have a lot of time.
You need to be willing to sit down and have a full conversation with them because you can't stop that conversation once you start it.
Mike Masnick: And so I'm sure some people will respond to all of this and, and say, you know, there are real problems here. And the companies, you know, certainly some of the big companies, don't seem to care that much and have sort of a dismissive attitude. You know, and this is sort of the point that John Oliver makes, is that, well, we need laws to make the companies care. So how, how do you respond to that?
Jess Miers: Yeah, and I was thinking about this again. He says at the beginning, and honestly throughout the episode, that the bots are trained this way to maintain your attention. And I know this sounds kind of morbid, but if the end result is that the user then dies, right, I don't see how that matches sort of the goals of the chatbot developers. Obviously, as we've seen in the social media context, there is not an incentive in place for users to become harmed by their services. And so, you know, I understand sort of this push to, well, we need to regulate, we need to do something here. But the reality is that, as we saw in the content moderation space with social media, if you overregulate, if you press too hard, what you're actually going to do is you're going to make it harder for these services to actually make their products better. And we are seeing, from studies on some of the earlier models, the way that they used to respond to suicide prompts, to today's models, we actually have seen an improvement over time. There was a study that was done, uh, a few years back with ChatGPT-3.5, where it showed these early models, they were not great about flashing the resources. They were not great about detecting when somebody was entering into crisis or entering into sort of an emotional spiral. But today, nearly all of the mainstream chatbots, they follow sort of the same approach to recognizing when somebody is heading into crisis, responding in a way that provides resources, but also responding in a way that is empathetic, avoids the corporate speak, validates, affirms, and provides hope, following the kind of preventative strategies that psychologists and clinical psychiatrists have been saying for, for decades now on, on how we should approach suicide. So the reality is that, actually, these companies are making a lot of great progress on this front.
They have incentives to make progress on these fronts, because it's not great when they end up in the mainstream media saying that they're causing suicide, they're killing people. And so they are making these strides. But if we overcorrect, they're not gonna have any incentive to sort of continue to better these products and services. Instead, they'll turn off the conversation or throw 988 and be done with it.
Mike Masnick: Yeah. Well, I wanna move on to our next story, but I, I sort of feel it's a continuation of, of this story in some, in some form, as many of the stories today actually are. It's, it's amazing how much all the stories kind of lined up. One involves OpenAI specifically, and a particular story that happened in British Columbia, which you may have heard of, where there was a young person who ended up killing some family members and then going and shooting up a school. It's the largest school shooting in Canada in, I don't know how long. It's a very, very tragic story. But what came out earlier this year was that that individual, who was clearly troubled and facing a mental health crisis, had been using ChatGPT and having a conversation with ChatGPT, and a few things came out later about that, starting with some Wall Street Journal reporting and some leaks from someone within OpenAI that, in fact, the chats that that individual was having had set off some alarms within the company. They had actually shut down that user's account. And there were concerns within OpenAI that the individual was likely to engage in some sort of violent activity, and a suggestion that they should inform law enforcement, which OpenAI then chose not to do and decided not to do. Though they did cancel the user's account, the user then did open a second account and proceeded to continue the conversation, and then eventually there was this tragedy. So earlier this week, there, there's obviously been a lot of reporting on this. Earlier this week, Sam Altman released an apology letter to the families of that small town where all of this occurred, put it in the local sort of newspaper, I guess. And then the very next day, a bunch of lawsuits were filed.
Apparently a bunch more are coming as well, and they're basically trying to hold OpenAI liable for the shooting, and pointing to what certainly appear to be pretty bad facts, in terms of the fact that some alarms were raised within the company, including telling them to alert law enforcement, and they chose not to do so. The lawyer who's bringing the cases is Jay Edelson, who's somewhat well known as a plaintiff's lawyer. You know, a sort of, I was trying to think of a diplomatic way of saying this, but, you know, he files lawsuits where he accuses companies of being horrible, terrible things and demands lots and lots of money, and he's fairly successful at that. I find his characterizations of what has happened that leads to the lawsuits are often exaggerated and perhaps not entirely accurate. But that's, you know, they have a, a big-time lawyer filing these cases. And apparently the cases were filed by some of the families of the injured or, unfortunately, people who were killed, victims of, of the shooter. And apparently there are more lawsuits expected as well, though I imagine these may get consolidated. We'll see on that. So this is a specific example that sort of goes back and relates to the kinds of things that were talked about in the John Oliver episode. Here you have a chatbot, here you have somebody who's using the chatbot not for self-harm, but in a way that led to a mass shooting and, and many deaths. And it's an incredibly tragic situation. And you have a specific situation where the company, there were flags raised internally. So I'm sort of curious what your take is on this particular lawsuit and this situation.
Jess Miers: Yeah, there's quite a bit of overlap between this story and, uh, the John Oliver segment. And actually, this gives me an opportunity to flag, you know, one of the other big concerns that I have, both with this story and the John Oliver segment, which is that whenever these incidents happen, whether it's the self-harm or it's the mass shooting situations, because I know there was also, I think, one in Florida as well where OpenAI was named, you never really hear the sort of other side of that narrative. From reading the several suicide cases that we heard about, right, the John Oliver episode had flagged the Adam Raine case, for example, which is a very tragic case. When you read the plaintiffs' complaints, the entire point of the plaintiff's complaint is to really tell the story, and yes, even to exaggerate. What you're not actually hearing about is sort of the social environment, the mental issues or, or harms that this person was sort of struggling with before it got to the result that it got to. And so, for example, in the Adam Raine case, the other side of that story was that the teenager had actually reached out to multiple people in his community, his family, for help, had flagged several times throughout that he needed help. And to me, that indicates, okay, there's a sort of structural failing that we really need to be paying attention to. But when we sort of assign the cause to technology, to ChatGPT, it eliminates our ability to have that discussion. So my spidey sense started tingling when I was reading this case about the shooting in Canada, which, by the way, the last, uh, mass shooting in this area was apparently in the 1980s. I, I was thinking, okay, well, what led up to this? And so I did some digging and found that actually this was a very troubled individual. Another instance of a very troubled individual. But law enforcement, federal authorities, I think the RCMP, right?
They had apparently visited the home of this person on numerous calls. You'll have to fact-check me, but I believe it was 18 times they had visited the home of this person on multiple complaints, multiple concerns about violence. And in fact, weapons were removed from the house at one point over these concerns, only for the family, allegedly, to petition to get those weapons back. The weapons were returned to the household, and those were allegedly the same weapons that were used in this shooting. And so, yes, it's easy for us to turn to ChatGPT and say, ChatGPT, you caused this, you caused this mass shooting, and to sort of ignore all of the structural failings behind what potentially led to this incredibly tragic event.
Mike MasnickEspecially if the complaint is that OpenAI failed to alert law enforcement, when law enforcement was already involved with this individual and knew about them and the weapons. That certainly seems to diminish the claim that there's a responsibility on ChatGPT, or on OpenAI, that they failed to handle.
Jess MiersAnd it begs the question too, as we see in the content moderation space: what should OpenAI, what should chatbot providers do in these instances? OpenAI had already removed the account. This person had then created a new account with ChatGPT. From just reading the story very quickly, my guess is, if we had access to the chat logs, which, that's the other problem with a lot of these stories, both in the suicide and these other harm contexts, is we don't always get the full picture, the full context of the chat logs. For example, with the firm that, again, I think you mentioned, Jay's firm, if you read the actual complaints, what you end up seeing are a lot of summarized snippets of what the discussions were with the chatbot. Sometimes you see quotes, but you don't often see a lot of quoted material, and again, they use very highly sensational language as well. And so you really have to pause and think: yes, this sounds horrible, but we need to look at what actually happened in the context of the conversation. And my guess is, as we've seen in all these suicide cases, likely there were several guardrails that were triggered as well. There were probably several attempts, again, I don't know, but there were probably several attempts to steer the user in a different direction. But users have figured out how to jailbreak these systems.
Mike MasnickI mean, we had certainly seen that with some of the suicide cases, where it's easy to pick out the sensationalized part of the conversation, but it often leaves out, in the ones that I've seen, that early on, the first thing that the chatbots do is suggest resources and help, often 988 or other support systems. And in many of these cases, what happens is the individual keeps trying to figure out how to get around it. And the sort of classic example in one of the suicide cases, I don't remember if this is the Adam Raine case or a different one, was where they said, oh, I'm writing a book. This is not me. This is about a novel that I'm writing, so I just need you to help me discuss it as if I'm a character in a novel. And there are all sorts of ways to do that. And you can say that the chatbot should be able to figure out how to deal with that. But, just like the impossibility of content moderation, this is a somewhat impossible situation as well, because there's always some way around it. At the same time, as the guardrails get stricter, you're going to come across more and more crazy situations where people run afoul of the guardrails, where it's like, oh, I'm just trying to do something perfectly reasonable, but it's hitting me with guardrails. And I've even seen some cases of this myself, in a slightly different context, where I was looking for a quote from a book, and the AI would generate the quote and then make it disappear because it violated a guardrail. It's like, you have it right there, just show it to me. I'm not doing anything bad.
But it's tough for the system to understand the context of what is actually happening, and which people are really at risk of harm, whether it's self-harm or harm of others, and which are not. And if we don't have the full transcript, it's very difficult to understand how to deal with that. How do you think these lawsuits are gonna go? I mean, I know they were just filed, and even though this happened in British Columbia, they were filed in San Francisco, which is where OpenAI is based. I thought that was an interesting choice. I don't know if that's required by the user agreements, which it might be, or if there's some other reason to file it here. But what's your sense of where these lawsuits are gonna go?
Jess MiersYeah, it's hard to say, to be honest, 'cause now we have the... if you had asked me this question before the social media addiction litigation, I think I may have had a different answer. I mean, we saw, at least with the shooting in Buffalo, I think that involved, if I remember correctly, Twitch and, I think, another tech company. It's...
Mike MasnickI think Discord
Jess MiersDiscord, I think that's right. We saw the court turn around and say, no, of course Section 230 would apply here, and you cannot assign causation to the tech companies for this sort of offline tragedy. You know, I think that's a possibility in this type of suit as well, if they can successfully make some of the same causation arguments, which is that, look, this is a superseding event. This is an individualized event, right? The person took it upon themselves to commit the tragedy. At the same time, though, we have seen recently, especially in the social media addiction litigation, that many of these cases have made it through the motion-to-dismiss phase by simply saying this was dangerously designed. And we are seeing courts in these suicide cases become more sympathetic to the dangerous-design language, because when it comes to AI, these courts don't really see AI systems as providers of speech or providers of information, but truly more like products, like your traditional blenders and cars that are typically swept up in these products liability and wrongful death suits. So it's anyone's guess. If I had to make a prediction at this point, it will probably get past the early stages of litigation. From there, we might see what we see in the suicide cases, which is probably settlements.
Mike MasnickHmm, interesting. All right, well, I wanna move on to our next story. But as I warned, all these stories sort of flow into one another, and this time we're staying in Canada, but we're moving over to Manitoba, where they have announced that they are planning to put in place a youth ban on social media and AI chatbots. They're lumping both of them together. And we've certainly seen other places banning kids from social media and chatbots, but I thought there were some quotes provided by the premier of Manitoba, I'm not sure how to pronounce his name, but it's Wab, I believe, that I thought were really revealing, and perhaps not in the most flattering way for Manitoba.
Jess MiersYeah.
Mike MasnickBut, Jess, do you wanna talk a little bit about what is happening up in Manitoba?
Jess MiersYeah, it's a lot of what we've been seeing elsewhere throughout the world. So as we know, Australia was sort of the first to this, banning people under 16 from social media. I think we're gonna talk about Turkey in a bit as well, and last I read, I think France was considering a similar ban for under-15s. And so we're seeing the same thing here in Manitoba. What was interesting, you were referencing some of those quotes. I read the article about why they were approaching this in Manitoba, and usually you see more of a discussion around kids being addicted to the services, or kids experiencing some of these mental health harms that have not really been proven out in research and literature, but I'll leave that aside. Here there was more of a discussion about, well, Trump was elected, and the kids have measles, and they're harming themselves. And again, there's such an excellent thread in this podcast from the John Oliver segment to here. It's the same sort of issue: these are social and structural harms taking place in the background. And instead of having the discussion we need to have about, well, why do the kids have measles? Why are the kids not getting vaccinated? Why do we have a country where Donald Trump was able to be elected twice in the first place? Technology is sort of this easy backstop. You know, okay, well, if we kick the kids off of the internet, all of these problems will be solved in Manitoba, all of these problems will be solved in Australia. Right? So that's at least how I see it, as the same thing
Mike MasnickYeah.
Jess Mierswe've been seeing.
Mike MasnickYeah, the quote is, literally, he says: look what it's done to us. Donald Trump is president. You have kids who are harming themselves. You have measles outbreaks in our province. All of these things are because we've let social media dominate our lives without understanding how pernicious billionaires who design this have been targeting us and making us addicted to our phones. And that just conflates a whole bunch of different things. I don't think it is the kids on social media who made Donald Trump become president. I don't think it is the kids on social media who have created the measles outbreak. I think that is adults, and adults making choices, uninformed, often very unwise choices. But to then say, because of that, we need to ban kids from social media? It just feels like anything that has gone wrong in the world, blame it on social media and kids on social media. And, you know, people joke about this somewhat crazy concept of AI psychosis. This just feels like there's a different kind of psychosis going on here, where anything wrong in the world, we just blame on the technology, even when those are deeper societal problems.
Jess MiersYeah, I'm kind of wondering what happens when the entire world inevitably kicks kids off of the internet, and we still have people dying by suicide, Donald Trump is now president for the third time, knock on wood, and we still have a measles outbreak, right? What will be the next sort of object of our societal ills that we will turn to before we have a real conversation about why we're in these situations that we're in? Not to go back to the John Oliver segment, but it really blew my mind, by the way, on this topic, that there was a discussion that said half a million people who are using chatbots are experiencing some sort of mania or mental illness. Why? We need to have that conversation. There's obviously a clear problem in our society, not the chatbots. Why are so many people dealing with these things? Age verification, banning chatbots, banning technology is not going to get to the root of that problem.
Mike MasnickYeah. I mean, it's a refrain that I've certainly made many, many times over and over again, which is that we have real societal problems that are very, very difficult, if not impossible, to completely fix. But to do so, you have to have an actual understanding and an actual discussion about them. But it is much easier, especially for some in the media and certainly some politicians, to just say, well, there's this new technology, that must be the problem, even though these actual problems predate all of this and are tied to other, larger societal problems that then get a free pass, because we're just focused on, oh, it must be the AI chatbots. And so, yeah, I find this really, really problematic. But I do wanna move on to the next story, which, once again, is a straight through line. This is Turkey, which is now trying to pass legislation banning children under 15 from using social media. Turkey, somewhat famously, has often done things like banning various internet services. They've at times banned YouTube and Wikipedia and Twitter, and I think Twitch at some point. And, you know, Turkey under its leader, Erdogan, is not known as a particularly free society right now. And again, you can look at this and say, okay, well, Turkey is just doing the same thing that Australia and Greece and France and Malaysia and the UK and all these other countries are looking at doing. But what I remember is that Turkey somewhat famously likes to put in place new laws that increase the power of its authoritarian government by claiming that they are the same laws that more free countries are putting in place. And this, to me, is another example of that. The famous previous example that I remember was when Germany passed its social media content moderation law, called NetzDG, which was effectively overtaken by the Digital Services Act across the EU.
Soon after they passed that law, Turkey passed its own content moderation law, which they explicitly said was modeled on the German law. The fact that it was then used to force social media companies to suppress speech critical of the Erdogan government, well, it was used as an excuse to give the government more authoritarian, speech-suppressing power. And this is the same thing. The Turkish government is effectively justifying this law by pointing to Australia and their ban and saying, well, we have to protect the children. But when you read the details of this law, unlike other countries whose age verification laws sort of say to the platforms, you work it out, this one requires everyone in Turkey who wants to use the internet to abide by this law to use a government portal, where you have to give your ID and age and information in order to access the wider internet. Which, I think, most people listening to this podcast can recognize is a real potential for surveillance, and certainly suppression and oppression of anyone who is critical of the Erdogan government. And yet they get to justify it by pointing to Australia and everywhere else that is exploring this and saying, oh, we're just protecting the children. The fact that we also get to surveil everybody and everything they do on the internet is just a side bonus. I've made this point before, but a lot of these laws about protecting children on the internet, or protecting national security on the internet, in supposedly free Western countries are then used by authoritarian countries to justify greater and greater oppression. And I think it's a really important point that gets lost in a lot of these discussions. I've been ranting for a little bit here. Did you have any thoughts on the situation in Turkey?
Jess MiersPlus one, absolutely agree. And I'll just throw in, as I mentioned in the case with Canada, when I was reading the background on the story, again, we're seeing less of this discussion about, well, the internet is harming kids. Instead, I believe it was the Minister of Justice, if I have this correctly, there was a reference to anonymous accounts being sort of the plague of their society. Anonymous accounts lead to harm, freedom has its limits. And so again, we're not seeing a big discussion about any of the research, or I guess non-existent research, out there that the kids are being harmed by social media. This is really being used as a proxy for control in the country. And that's what we've seen, as you mentioned, time and time again in places like Turkey especially. I'll just note, before I jumped on here, I like going to Reddit just to see, you know, what are people saying? And in a subreddit for folks who are talking about Turkey, there was a lot of discussion about, so what if we oppose the government, do we now lose access to the internet? That's at least the way several people in Turkey, or outside of Turkey, are apparently viewing this. So I think your instincts are right here.
Mike MasnickYeah. The exact quote from the Justice Minister in Turkey was, we are bringing accountability to social media. And then he says, freedom is everywhere, but even freedom has limits. And those are the quotes of an authoritarian government justifying its oppression by using the language of other, supposedly free countries and what they're doing online. All right, for our final story of the week, we're again keeping this through line, but there was a really interesting story in the Wall Street Journal. This is a little bit different, talking about how YouTube took over the American classroom. And I should note that one of the aspects of the Manitoba discussion, which we didn't mention there but ties into this too, is that part of that ban on under-15s is, he said, we're going to get YouTube out of the classroom. And so here there's this Wall Street Journal report that talks about how many schools are using YouTube in the classroom, and it sort of suggests that it's gotten out of control. Did you get a chance to read this one?
Jess MiersYeah, I've been following the devices-in-schools problem for quite some time now. And, you know, this is gonna sound weird coming from me, but I tend to be on the side of, yeah, there is too much technology at the lower levels of education. In some of the stories that I've read, including the one that you're talking about now, kids, when they go to recess, now have options in some schools to either watch YouTube or play outside. Right? I think that is inherently a problem. We do also have research, and I've kind of said on this show throughout, look, the research about social media addiction is very sparse and in many ways not really conclusive. But we do actually have more conclusive research about how kids learn. And it has been shown that screens are not as good as picking up a textbook and reading from the book. Writing in cursive, which has kind of gone by the wayside, was allegedly one way that sort of formed the neurons to help kids learn later when it comes to writing and processing language. And so I do think this is one of those societal, structural issues that I've been ranting about. I think our education system does have an actual problem with devices in the classroom. And I think it is something we should be thinking about, because when we talk about kids becoming addicted to screens, where did it start? I read a lot of these stories where parents say, look, we don't allow our children to use devices in the home, but they're learning how to do infinite scroll from their infinite-scroll textbooks on their Chromebooks, right? Kids aren't even using physical books anymore. I thought that was shocking. They're learning these bad digital habits in the classrooms at these early ages, where, yes, it has been shown that it does affect the way in which they learn and process information.
So I do think, again, I'm not sure if blaming the tech companies is the right approach here. But yes, there are detrimental aspects of device use for kids at very young ages for long periods of time, and maybe we should be thinking about the way in which these devices are used in classrooms.
Mike MasnickYeah, I mean, you know, some of the things that struck me about the article was this suggestion of, you know, certain schools really relying on YouTube and, and you know, there are different sides to this. Like, there's an element of, YouTube has all of the world's information in all sorts of formats, and that can be really valuable in a educational setting. you know, if, if you can show an amazing experiment or something that you wouldn't be able to see or, you know, an amazing teacher that can explain a certain subject. There are all sorts of opportunities and, and we know, there are. Tools out there, like, you know, former, controlled speech guest host, uh, Hank Green, uh, you know, has amazing educational content that he has created. He and, and his company, have created some wonderful educational content. And you have my, my former roommate, Sal Khan, who created Khan Academy, uh, has out there and created all sorts of wonderful educational content that is on YouTube as well. There is all this usual stuff, but. kids have to learn to use it appropriately. And so my only thought on this was I'm not so much against the use of technology in school. My problem is this idea that. You know, when they let it run away. Right? I think that there's an appropriateness to, part of learning in school is not just the subjects that you're learning, but also how to learn and how to understand things. And that includes like how to use the technology appropriately. And, I feel at least somewhat lucky that I, I believe. From what I can see of my own kids' schools and, the local schools here, that they've been pretty good. They do use Chromebooks, but they don't over rely on them, and they really do sort of try to teach the kids how to use the technology appropriately. They don't go to like outright bans of phones. but they have systems to say like. We don't use phones in the classroom while we're teaching. 
They actually have these boards with pockets, and every student puts their phone in one as they enter. But as they're switching classes, occasionally they can check their phones. It's not like no phones are allowed on campus. And that has led to useful things. When a bunch of high schools had anti-ICE protests earlier this year, kids were filming it and sharing it, and then the high school news organization was able to use those videos, and things like that. And I think those are valuable examples. But the important point is learning how to use the devices and technology appropriately, because I also have this fear, on the flip side, that if we go to this world where we just have a complete ban, no Chromebooks, no YouTube, no phones, no technology used in school, then at some point kids are going to graduate and go out into the real world, and they're going to use these things. I stare at a laptop all day for my job. That is part of my job. I would like to believe that I can use it appropriately and thoughtfully as part of my productive life. But if we're having kids not learn how to use it appropriately, being told that it's bad and evil, and then sent off into the real world where they're going to have to use a computer for most of the day, I wonder if they're ill-prepared for that, if they're not taught, here's how to use it appropriately, here's how to use it in moderation. So I would like to see more of that built into the curriculum. And I know some people get upset, like, oh, don't bring AI into schools, don't bring the technology into schools. But recognizing that these are things people are going to use in the real world, it actually would be more appropriate to teach them how to use it appropriately. And that should be a part of the schooling experience.
Jess MiersTo the point where, right, like, let's take all the devices out of the schools, and also kids have been kicked off of the internet, so they're gonna graduate and now they have no idea how the internet works, or any of these devices. No, I wholeheartedly agree. It actually got me thinking before jumping on here, I don't know if your school had this, but I remember I had a computer lab when I was growing up
Mike MasnickYes,
Jess Miershad
Mike Masnickdid too.
Jess MiersWe had computer time, and that's where we learned typing, and we learned how to consume information appropriately and use computers to write book reports and whatnot. And so I was kind of thinking, not to go all the way back to the past, but that balance can be there. I'm with you: it's when they run away with it, I think that's when the harms start.
Mike MasnickAnd I think that gets to the sort of nuance of the situation. I will note, by the way, I am so old, we did have a computer lab in my high school, and that is where I learned how to use computers. But we didn't learn to type in the computer lab. We learned to type in the typing room, which had typewriters. That's how old I am.
Jess MiersI did not have that. No, we did all of it in one place. Um, but
Mike MasnickWe had two separate, we had
Jess Miersit was
Mike Masnickrooms.
Jess MiersYeah. And you still learned.
Mike MasnickYeah. But learning how to use these things appropriately, I think, is the really important point. And that is a more nuanced take than the one we're generally hearing. And I think the through line of this entire episode is that it is important to have that nuance, but it's very difficult at this time, when people are quick to rush to judgment and blame the technology for things where there may be a deeper discussion to be had. But, Jess, this was a really, really wonderful discussion. I'm really glad that you were able to join and co-host and be Ben for the week.
Jess MiersYeah, I'm honored. Thank you so much for having me. It's a joy to be here.
Mike MasnickWell, thanks again, and thanks to everyone for listening. And again, I will remind everyone: check out the Patreon. Hopefully you can subscribe to the Patreon. We have another month where we'll be doing regular episodes with me and guest hosts until Ben comes back from paternity leave, and then we will switch over to the new format. And by then, you should already be signed up for the Patreon. Check it out. But thanks for listening either way, and we'll be back next week.
AnnouncerThanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.