Ctrl-Alt-Speech

Is This The Real Life? Is This Just Fakery?

Ben Whitelaw & Cathryn Weems Season 1 Episode 30

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Cathryn Weems, who has held T&S roles at Yahoo, Google, Dropbox, Twitter and Epic Games. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat at the end of the episode, clinical psychologist Dr Serra Pitts, who leads the psychological health team for Trust & Safety at Concentrix, talks to Ben about how to keep moderators healthy and safe at work and the innovative use of heart rate variability technology to monitor their physical response to harmful content.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Cathryn, in honor of your decade at Yahoo, I thought we'd use a prompt from that famous old search engine, and from a product that only closed a few years ago, I was reading. So, as they used to say on Yahoo Answers, what would you like to ask?

Cathryn Weems:

Um, I think my first question is how this podcast is gonna cope with two British accents for the whole week, because I think it's the first time that's happened, so hopefully we don't cause too much havoc. But also, I wanna ask what odds I'd get for betting on when you are gonna run out of prompts from random products.

Ben Whitelaw:

Never, never. It's an unending stream, we're sure of it. It's just a case of digging. With that, hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and innovation in content moderation globally. Mike, as you will have heard by now, is absent from today's podcast. He's actually at the Trust & Safety Research Conference at Stanford, and I'm very, very pleased to be sat alongside, virtually speaking, the brilliant, the magnificent Cathryn Weems. How are you doing, Cathryn?

Cathryn Weems:

I'm doing well, thank you. I'm very grateful and honored that you reached out to ask me to stand in for Mike, but I'm a little hesitant about how good a Mike impression I'll do. So we'll see.

Ben Whitelaw:

No, no, we want you, we want your Britishness in all of its forms. Um, and I should probably introduce Cathryn before we start. Cathryn has vast experience of trust and safety from working at Yahoo, as I mentioned, at Google, at Dropbox, Twitter, Epic Games; the list continues. She's currently a trust and safety consultant and is about to move into a new role, which you can't talk much about, but Cathryn, yeah, your resume speaks for itself.

Cathryn Weems:

Then there's nothing more for me to say. No, I've been doing this work for a really long time. When I started, it was not referred to as trust and safety, but we did start to coalesce around that term, I think, in about 2010, 2011, 2012, around that time. As you mentioned, I've worked for a bunch of well-known companies and had a variety of different roles, and hopefully some of that experience will be relevant to the stories that we touch on today.

Ben Whitelaw:

And just talk us through the really early days of that kind of long and distinguished career. So back at Yahoo, what did trust and safety look like back then? What was your role? What were you doing?

Cathryn Weems:

So when I first started, I started in a role called Yahoo Surfer, and the job advert was: do you like surfing the web? Do you want to get paid for it? And I was like, yeah, I do like surfing the web, and sure, it sounds fun to get money for it. Um, and there used to actually be t-shirts that said Yahoo Surfer on the front, and on the back it said, "and yes, we get paid for it", because we were literally just getting paid to surf the web. We were categorizing websites into the Yahoo Directory, rest in peace, similar to how librarians would categorize books. So that was my first role there, and the trust and safety elements were making sure that the websites we categorized were actually legitimate, and also were added to the relevant categories. So that was kind of the first job. And then I moved into working on Yahooligans, which then changed its name to Yahoo Kids, because young kids of seven to 12 years old couldn't type Yahooligans very successfully. So it moved into Yahoo Kids, where there were more trust and safety elements, because we were thinking about what was appropriate for that age group and kind of putting content into the walled garden, almost, versus trying to get it out of there. So those were some of the early days, and then I started working on products such as Yahoo Answers and various others, Yahoo Groups, Yahoo Profiles, Yahoo 360, Yahoo Mobile, which had the various other challenges you would imagine, and that got us into more of the current-day trust and safety issues.

Ben Whitelaw:

And this is years before some of the big social platforms were even launched, right? This is the early 2000s. So when those platforms emerged and you started to see more user-generated content, and some of the issues that we talk about every week here on the podcast, how did you recognize that your skills were going to be so valuable and, going forward, so important?

Cathryn Weems:

I don't know that I ever had that sort of lightbulb moment. At that time it was more that I was in a role and I tried to be successful at the role I was doing, and then different opportunities would come across my plate. I was curious and wanting to learn new things and try different things, so I don't think I thought about it in the big-picture sense at the time. It's only really in hindsight, looking back, that I realized there was a level of trust and safety in all of the roles, in terms of the judgment skills, hopefully good judgment, that I've honed over the years, and just that spidey sense of, something feels potentially problematic here. That's kind of how those skills have developed over the years, and yeah, until we're where we are today, I guess.

Ben Whitelaw:

Great. And you've definitely shown good judgment in terms of the stories that you've picked for today's podcast, so I'm very much looking forward to talking them through with you. Cathryn has been a long-term reader of Everything in Moderation. I think I can say that; that's not outing you.

Cathryn Weems:

And a subscriber for three or four years, I think.

Ben Whitelaw:

Yeah, a bona fide paying subscriber, which is really appreciated, and a listener to the podcast as well. I think you've listened to every episode, you said, which...

Cathryn Weems:

Every episode. Mm

Ben Whitelaw:

Really appreciate that. And so, yeah, we've got a whole series of stories we're going to chat through today, which is really exciting. Before we get into that, I just want to say that, as well as dissecting today's news together, I've done a bonus chat with Concentrix's Dr. Serra Pitts on how moderators are best kept healthy and safe at work, and she shared some really interesting insights and research about the innovative use of heart rate variability tech to try and make that a reality. So once Cathryn and I have gone through all of today's stories, we've got a little segment at the end, so stick around for that. We will get started, Cathryn, with one of your former employers, um, and that doesn't necessarily narrow things down a lot. But you brought a really interesting story from Google about how they're handling fake images, and there are a couple of adjacent stories that show, I think, that maybe Google as a company is perhaps struggling a little bit to work out its approach to mis- and disinformation. Would you say?

Cathryn Weems:

Yeah, I think there are two sides of this coin. They've got some products that are allowing for new technology and new opportunities that are exciting, and we'll get into some of the details of that in a second. But also, on the flip side, they're trying to figure out ways to prevent any harm from those products or others that are available. And I think that's actually kind of the core of trust and safety: it's all about the trade-offs. There are going to be products that companies come out with that you don't want to stop them from launching, but you're trying to help make sure that they are launched in the safest way possible and that the harms are minimized. And also you're going to try and advocate for potential safety products, or products that help users understand what it is that they're looking at, seeing, reading, as well. So I think it's actually a very interesting situation.

Ben Whitelaw:

Yeah. So this story falls into that latter bucket, doesn't it, in that Google is planning to roll out some tech that allows users to see what content has been created by generative AI. So take us through that, what you found interesting. Why is it timely and relevant now?

Cathryn Weems:

Yeah, so they've decided that they're going to help label or identify images that are faked, and they're going to use a technical standard. I'm not a technical wizard with some of these things, but the standard is called the C2PA technical standard, which I believe stands for the Coalition for Content Provenance and Authenticity. That's a mouthful. So they're going to help label some of the images that are faked, and that will obviously be incredibly useful for users. I think we've seen over the years, with labeling of content, whether it's misinformation content that's labeled or fake content that's labeled, that it is helpful for people who are then viewing it, because it gives them additional information. So I think this is a really interesting development from Google. I do think it's going to be interesting to see how we'll be able to trust anything in the future. I'm a little concerned, because the fact that the content is going to be labeled will be helpful, but will they be able to label everything? Are they going to know everything that's faked? I don't know enough about the technical issues behind that, and whether people will be able to fake the technical side of things as well as faking the image. If people just do it in a very standard way, then yes, they'll be able to pick up on that, is my understanding. But I don't know how clever and creative people can, well, I know that people can get very clever and very creative, and so I'm not sure if they'll be able to pick up on all of the fake imagery. And I worry that if there are labels on a lot of the fake imagery, then people will assume anything without a label is real, and I don't know that that will be the case. So I think it's still a good thing, and I'm glad they're doing it, but I do worry what the unintended consequences may be.

Ben Whitelaw:

Yeah, definitely. I mean, this is, I think, one of a series of different attempts to make metadata about AI, and about how a piece of content is either produced or edited, surfaced to the user. So Google basically said that in search results, initially, you're going to be able to get an "About this image" feature and, you know, see whether the content has been edited or produced using AI. And there are some questions for me about how this would actually work in practice, which the blog post that you picked up on and the report from The Verge don't really touch on, which is: how likely is it that a user is going to see this? Because it's all well and good for platforms to make this information available, but like a lot of metadata on images, actually it's somewhat hidden from view. It's not necessarily in the search results, or, you know, the SERPs, and you have to kind of dig for it to really find it. So it only really becomes useful if you are looking for that metadata, rather than if you're more casually browsing or searching. Is that a concern for you, the extent to which this data is (a) useful for the user and (b) visible to them?

Cathryn Weems:

Absolutely. And I think if it's literally a label on the image, for example, then it will be visible; that's great. But yes, if it's just something you can click on, an information icon where you can then find out more, not everyone's gonna be that proactive. They're gonna just take things at face value. How many times have we said we've read something and we mean that we've just read the headline and haven't read the full article? We've all done it, we're all guilty of things like that. We're all moving so quickly, and also there's so much information, always, that you can't possibly go into detail on everything. And so if people have to seek it out, like you say, that is definitely concerning, because I think most people won't, or maybe they will for just one or two things. So I hope that there will be some visual label on the content itself when they've picked up on issues with the creation, even if you can then go to a sort of pop-up or a more detailed explanation somewhere else. So that's absolutely a concern, for sure.

Ben Whitelaw:

In a platform that all of us use all the time. But there's a broader interesting trend here around C2PA itself. So for folks who don't know, this is a kind of technical standard that was started by a few different companies. Adobe was really important to that, and some hardware camera companies were also involved, Leica and Sony among the initial ones. Basically this is an attempt to, across the industry, create a single unified idea of what metadata needs to be baked into images, videos, et cetera, for us to be able to have a clearer sense of what's true and not true. And a whole bunch of companies have come on board since, the likes of Amazon and Microsoft; OpenAI recently did so. But actually this is the first time that any of those companies have really started to do anything more than approve of C2PA and kind of get on board. So it's a really big deal from that perspective. Here we have a massive tech company not only adopting it as a standard, but then starting to put it into its products. Do you think, Cathryn, that this might cause more platforms to follow suit? Like, based upon your work in the past, how did it work when one of your rivals or one of the other platforms did something that you were thinking about doing, or, if you weren't thinking about it, were maybe forced to?
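For listeners curious what that "baked-in" provenance metadata might look like in practice, here is a minimal, hypothetical sketch in Python. It is not Google's implementation or the official C2PA SDK: extract_c2pa_manifest is a made-up stand-in for a real parser, and the field names only roughly mirror the general shape of a C2PA manifest.

from typing import Optional

def extract_c2pa_manifest(path: str) -> Optional[dict]:
    """Hypothetical helper: a real implementation would parse the C2PA
    manifest embedded in the file (e.g. via the official C2PA tooling).
    This stub simply reports that no manifest was found."""
    return None

def describe_provenance(path: str) -> str:
    manifest = extract_c2pa_manifest(path)
    if manifest is None:
        # Caveat from the discussion above: the absence of a manifest does
        # not prove the image is "real" -- it only means no provenance data
        # was attached, or that it was stripped out.
        return "no provenance data found"

    # Collect any declared actions (created, edited, etc.) from the manifest.
    actions = []
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                actions.append(action.get("action"))

    # "generated_by_ai" is an illustrative, made-up field, not a spec name.
    if "c2pa.created" in actions and manifest.get("generated_by_ai"):
        return "labelled as AI-generated"
    if "c2pa.edited" in actions:
        return "labelled as edited"
    return "provenance present, no edits declared"

if __name__ == "__main__":
    # With the stub above, this prints "no provenance data found".
    print(describe_provenance("example.jpg"))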

Cathryn Weems:

Yeah, hopefully it leads to more people using the standard and putting it in front of consumers in a way that is helpful, hopefully helpful. There is somewhat of an arms race; we see it with a lot of the companies, and I sometimes fear it's an arms race to the bottom versus an arms race to the top. But I've worked for companies that are all for-profit companies, and I live in a capitalist society, over in the US, and the companies are incentivized to maximize shareholder profits. So they have to try and make sure that they are competitive, and therefore, if Google's come out with this, then yes, other companies are going to want to have that, to either keep up with the Joneses or at least meet them where they're at and hopefully then do something above that. And in areas where it's focused on user education or user safety, that's great. Sadly, that same level of competitiveness shows up on some of the other sides of things too. But yeah, I think it is a positive trend when it's about something that is useful.

Ben Whitelaw:

Definitely. And this announcement also comes really interestingly timed with a piece by Alexios Mantzarlis, who writes the newsletter Faked Up, co-authored with Mor Naaman on Tech Policy Press, about Google's hardware and how its Pixel 9 phone is really enabling AI-generated imagery. So again, if you're not up on this, this is a new piece of Google hardware that allows you to remove buildings and remove aspects of a photo, and it allows you to very quickly add elements to it. And Alexios and Mor show, in a really interesting series of examples, how you can quickly take a picture of a polling station and then add elements to it to make it seem like some votes have been spread all over the floor. And there's another, slightly funnier one with some tar everywhere. So we have a situation, Cathryn, where you've got Google trying to address the issue of generative AI through search and by including metadata, but at the same time kind of enabling it through products like the Pixel 9. What do you think that says about Google itself, and other big companies like it, in this quest to address misinformation?

Cathryn Weems:

I think there are two sides of this. There's the side where they want to come out and use the technology that is available today, that wasn't available before, to do really cool, innovative, creative things. And if you think about the Pixel 9 feature in terms of how you could create a really fun image for your family or something, then that's great. That's imaginative and creative and positive. It's only the potentially bad uses of this that we're concerned about, as we should be. And obviously there are plenty of people that are incentivized, for various different reasons, to fake imagery that could be broadly problematic or dangerous, or affect the election results by, you know, limiting people, or making people think that they shouldn't go and vote because something has happened to their polling station, or that there has been a major attack on the day of the election, or any other number of issues that we may see, which I'm frankly quite terrified about, of what could be possible. So I think these companies want to use the technology and come out with products that allow for the creativity and the really positive side of a lot of this stuff. Like, generative AI is so useful and so wonderful in so many ways, but yes, we have to make sure that we're handling these products responsibly. And I think the responsibility part, the flip side of that creativity-and-innovation coin, isn't as incentivized in the way that the companies are structured and the way the companies incentivize themselves in terms of gaining profits. And so I think some of the regulations that we've seen recently, where there is a requirement to do certain things that are more on that responsibility side, are actually helping. And so whether Google is coming out with the C2PA standard and using it more in results, I don't think that's directly because of regulation, but there will potentially be regulation in the future that would require companies to label every image that they know is fake, if it's fake by this standard, for example, so they're ahead of the game there, which is great. But the regulation is helping on that responsibility side, and there's also that innovation side. You kind of need both, or you see both in these companies, from what I can tell.

Ben Whitelaw:

Yeah. I mean, you've obviously worked in some of them as well. How would it work? Would you ever be involved in a kind of product launch or a feature like this? And can you maybe give an example for the listeners of how this process works? Like: we want to do this cool thing, we want to use this cool technology, that makes total sense, but we understand that it may be used in a certain way. Cathryn, you're our expert. What do we do?

Cathryn Weems:

So it used to be that trust and safety wasn't consulted before things launched. And I'm sure there are people who listen to this podcast who will say, oh yeah, that happened to me last week or something. I'm sure it's still happening sometimes, where things go out the door that the trust and safety team broadly didn't have any idea about at all, sadly. It used to be way more common that there just wasn't any thinking about the potential safety issues ahead of time. We have heard many times, I'm sure you've mentioned it multiple times on this podcast, about safety by design, privacy by design. You know, just bringing some of the cross-functional stakeholders into the conversations early so that you can try and prevent some of the most likely issues. You're not going to prevent everything. You're not going to limit or stop everything. But you can at least tackle that low-hanging fruit of the obvious stuff: you know that with this Pixel 9 feature somebody is going to fake an image that is going to be problematic related to elections. That's a given, and so, like, how do you best ward against that? So if the team came to me, or somebody on my teams or whatever, asking about this feature, the first thing for me is trying to make sure that I understand what the purpose of it is and what the team who's come up with this innovation is trying to accomplish. What gap in the market are they trying to meet, or what is their goal? Because you want to try and enable that while also still making sure that there are all the safety features, and so, sorry, go ahead.

Ben Whitelaw:

No, as you say, you're not trying to say no all the time. Your job is literally to kind of say "yes, and" rather than "no, but", yeah.

Cathryn Weems:

We're now in improv. Yeah, exactly. It is, for sure. And it's the same with the legal teams; they used to be seen as, you know, the "no", the blockers, and that's generally not how they're seen, or hopefully they're not seen that way anymore. We're trying to help enable the business, because these companies I've worked for are all businesses. We need to be successful, we need to make a profit. And so you want to be able to enable something, but maybe it's as straightforward as: there's going to be a new type of feature, and so therefore you need a new type of reporting option, or a new help page that explains things to users who go to look at those pages. That isn't everybody, but for the people who are trying to educate themselves about what is and isn't okay, or what this feature can and cannot do, there's a new help page that explains some of that and explains how to report potential issues related to it. And so you're just trying to put those safety features into the product. Also, maybe sometimes, if you can see some obvious issues, you're trying to see if the product can be architected in a different way, so the core goals can still be met but you're actually avoiding some other features. Like, just going back to some of my early days at Yahoo that you brought up: do you really need comments on every single article and page? Maybe commenting really is part of the core goal of the feature or the product, but if it isn't, and it's going to lead to issues, do you want those issues? Because they're expensive, and there are potentially brand and reputation issues related to them. So it's really trying to figure out what the core aspects of a product or a feature are, which ones are likeliest to cause problems, and what the balance there can be.

Ben Whitelaw:

I really like that. And I think Alexios and Mor, in their piece on Tech Policy Press, advocate for having more checks and balances within this Pixel 9 feature: more filters for the prompts that a user might put into the phone, more labeling for any images that get outputted. They make a bunch of suggestions really, I guess playing the role of a Cathryn in that example, and trying to suggest ways that this could be mitigated, and that maybe this isn't really the best way forward. Talking of Google, we've touched on a couple of stories there, but you also found a really interesting piece related to another of its products, which speaks to this whole morass of information, and which both excited you and worried you.

Cathryn Weems:

I mean, it worried me more for you than it did for myself, but let me explain. So I read a story from Kyle Orland on Ars Technica. It was written a few days ago, and he writes about a podcast where his recent book, which he's just come out with, is being discussed. That's not so strange; that would hopefully happen, if you're coming out with a book you kind of want it to be discussed on a podcast. The book is about Minesweeper, which I'm sure you wasted countless hours on growing up, seeing as that was the only game around, kind of showing some of our age maybe, but I played hours of Minesweeper and got very good at it. So the book's about Minesweeper, and there's a podcast about it. Not so weird. The unusual aspect is that the podcast about this book isn't real, and the podcast hosts aren't real. The whole podcast, including the hosts, is all AI-generated. And so there's a Google product called NotebookLM. Notebook, LL, LM, that's apparently a hard thing to say. They should rebrand it. It was released, I guess, a few months ago, and I hadn't heard about it; I found out about it through this article. And I think they're still developing it and fine-tuning it, but they have this feature called audio summaries. You can input an entire book, in this case it was, I think, a 30,000-word book or something, or you can input an article or a report, and then ask it to provide you an audio summary. And you can have this fake podcast generated from that input. And it's grounded in just the content you've given it; they may pull in other things, but this podcast, I think you listened to it as well, was super conversational, back and forth, very quippy. It sounded like a scripted TV show about a podcast rather than something more natural, but it was very entertaining. And you can totally see the use case for people who might use something like CliffsNotes to learn about some book that they have to read for some class or whatever. They may have read the CliffsNotes version in the past; maybe if they hear the podcast version in the future, that will help them learn better, because they learn better with audio material. And so it's kind of interesting, but then also I worry whether you and Mike are going to be needed anymore. So that's my concern for you. Like, can you just put in all of these articles and say, create a fun, interesting, relevant, articulate podcast with one British accent, maybe sometimes two, about all these stories? I'd love your take on it. I just thought it was really fascinating.

Ben Whitelaw:

I've seen a few people talk about NotebookLM and be really, really excited by it. This is the first time I've actually heard it being used as well, and I was also really impressed with the naturalness of the speech between the two voices. There were a few moments where apparently they introduced factual errors, so there was information that wasn't in the book that they somewhat made up, which I think, as an author having your work summarized, is an issue. But in terms of what it sounded like, as a user you wouldn't know that it wasn't true. And yeah, the camaraderie and the personality was really interesting. It was a little bit too smooth, I think. You know, Mike and I tend to talk over each other; we like to hold the mic a little bit, and there wasn't any of that, naturally, which I think is a bit odd and a bit off-putting. But you're totally right. You know, what if we just fed NotebookLM Everything in Moderation, the written newsletter, and all of Mike's articles from Techdirt that week? What would it produce? It would be fascinating to know. Maybe we'll do that as an experiment and put it in the feed. Let's see.

Cathryn Weems:

Yeah, you could have people rate it. I don't want to put you out of a job, so please, I don't want to be blamed for this podcast going under, but it would be interesting to see what it would create. And you and Mike have enough of your voices out there; I don't think these voice AIs really need that much content, to be honest, but you have plenty of your voice out there that it would probably even be able to do it in your own voices. Which, I mean, are you going to need to put in a keyword each week just to make sure that everyone knows this is the real you versus fake AI Ben? I don't know. It's kind of interesting.

Ben Whitelaw:

No, exactly. You'd know, I think, from Mike's anger and frustration about Thierry Breton whether it was him or not. That's what I will say.

Cathryn Weems:

I think that the, uh, the LLM would have picked that up though. I mean, I was going to make a joke about that, because it's just one of the standards of this podcast, right? And I was just wondering who we have to pick on instead of him this week. So,

Ben Whitelaw:

Yeah, no, we'll find somebody, don't worry. But yeah, you know, the cloning aspect is also something that I'm thinking a lot about, obviously, now we've started to do the podcast, and it's a whole world of new questions and answers that are as yet undefined. But yeah, all three of those stories, I think, illustrate Google's varied approach to thinking about mis- and disinformation, both the creation of it and the mopping up of it, which I think is something that various different companies are also having to consider: what is their effect on the information ecosystem? Which leads us neatly, I think, onto our second big story of this week, which talks exactly about that. So you might have heard about the Intergovernmental Panel on Climate Change, the big scientific body that talks about the risk of climate change in the world; you would have heard about its reports. Well, you might not have heard about the International Panel on the Information Environment, the IPIE. This is a pseudo-intergovernmental panel that's been created by some academics out of the University of Oxford, I believe, and last year, for the first time, they did a report that tried to summarize the concerns of the academic community about the information ecosystem. This week they published the second version of that report, based on 412 responses from academics and researchers around the world, from 66 different countries. They were essentially asked about their concerns and the trends in the global information environment. And Cathryn, I've got to say, if you were not feeling that great about things already, you don't want to read this report. It is pretty bleak, what these experts are saying. Two thirds expect the information ecosystem to worsen in the future, up nine percentage points from last year. So in just one year, this group of experts really feel like it's declining pretty quickly. They're particularly worried about social media owners, as well as government leaders and politicians; really interesting that those are the groups these experts are most concerned about. And when it comes to AI, two thirds think that generative AI has negatively impacted the information environment already, and obviously it feels like we haven't even really got started. Are you as upset, or as kind of bummed out, by these findings as I am?

Cathryn Weems:

It's the time of year that we're meant to be scared, so I guess maybe it's okay that we feel terrified. I think terrified is possibly a better way to describe how we're feeling than bummed out. But yeah, we're in a really interesting stage of all of this generative AI and deepfakes and disinfo, and you're right, we are only just at the beginning of a lot of it, I think. So it's interesting; I'm definitely not bored. There are definitely lots of things that need to be figured out, and it'll be kind of good to see how things play out over time. But yeah, absolutely, there's a level of concern in this. I'm not surprised that these are the results, but yes, I wish that the results were different, I wish that the reality was different. But I'm not surprised that they are quite dire. The CEO, or at least the C-suite, depending on how decisions are made at each of these different big companies, sometimes it's really just one person, often it's a few people, or, you know, the C-suite, maybe the board, depending, and they have enormous, um, power.

Ben Whitelaw:

Mm hmm.

Cathryn Weems:

If we're talking about the biggest of the big, the global platforms, they have huge user bases, and decisions that get made affect millions of people, in ways that are expected and also ways that are unexpected. And that's a huge amount of pressure on these individual people or groups of people; it's not very many of them globally that we're really talking about, realistically. And so I think it is concerning to have that much power with so very few. I don't think it's that different from what history would show has happened in many ways, in different situations, so I don't think that necessarily is that different; it's just that the speed with which they can have that impact is definitely a lot quicker. I think there's just a general lack of trust in governments and educational institutions and science, just in general, and so I'm not surprised that there's also some level of concern about social media. Social media, or tech platforms more broadly, are maybe part of the reason we're in this general lack-of-trust era. But I do think that all of the stuff we just talked about, with the AI and the technology and the deepfakes, is only going to get worse until we really figure out how to handle all of this more responsibly. And until that group of people, the CEOs or the small group of people who are making the decisions... if they are trying to be responsible, and they do have good practices in their companies of reaching out to civil society and certain experts, and they partner with a bunch of different people to help (a) provide information that they could not possibly know all of, even if some of them may think that they do, and (b) provide inputs and perspectives from various different walks of life and people around the world. In general, I think that if there are good practices about including those other voices, those other experts and other opinions, it will help mitigate some of the concern. It's okay for a CEO to be the final decision-maker about the products and services, and the decisions being made for their company and their platforms, but if they know that what's brought to them, the options of, you know, option one, two, three, have been fleshed out with all of these partnerships and civil society and experts around the world, then at least they'll be making better and more informed decisions that hopefully will have fewer, you know, unintended consequences, or will be safer in general for the world. Those are very lofty things, but I do think that that's kind of where we're at.

Ben Whitelaw:

Yeah. That's really interesting. I mean, from your experience, how often does that happen? How often do the options that get taken to board level or C-suite level have inputs from a wide variety of stakeholders, would you say? Is that common?

Cathryn Weems:

I think it's common for the big platforms, partly because of what we talked about earlier: either other people are doing it, so they're kind of shamed into doing it, or they're just incentivized to do it because they see the benefit. Let's be positive, let's assume the best. So a lot of the big companies do have robust partnership teams who are working with a variety of people. There are companies that have safety councils that they work with to get that expertise in, and to really help make sure that their product features, and the information that they're providing to their users, are as good as they can be. But it costs money, right? You have to have at least one person, if not a whole team of people in different countries, to find these experts or these organizations, and also then work with them, be available to answer questions that they may have, be available to help with escalations that you're inevitably going to see come in from them, because they have a contact at the company; you know it's going to get used the other way as well. And it costs money. And again, these are not the areas where these companies are incentivized. This is not what helps maximize shareholder value at this stage; that isn't one of the criteria, right? Having a robust partnerships program to help you be better globally doesn't seem like it's valued as much as I hope it will be in the future. So maybe there'll be some new regulation that will require something, and whether that's good or bad, I don't know yet. But yeah, I do think that it does cost money, and for some of the companies that don't have these types of programs, or official programs, or large programs, it's not because they don't want to, it's not because they don't care. It's that they have limited resources and they have to figure out where best to use them. None of the companies I've worked at, when I was there, have ever been trying to do something bad. It's just that it's impossible to make the decisions that you're needing to make and have them be okay for the vast majority of people, or for everybody. It's just really, really difficult.

Ben Whitelaw:

It's the global nature of it, isn't it? I mean, I think the safety councils piece is a really interesting one. We saw X slash Twitter get rid of their, uh, moderation, safety council, whatever it was called, when Musk came in; they kind of come and go. And yeah, we don't know much about how those people are appointed, or, you know, we don't really know what decisions are being made there. I've always wanted to do a proper analysis of the safety councils of the major platforms and see what their makeup is, and what their background is, their experience, because I think there's something really interesting there. If they are the people helping to make contributions or changes to policies, and if they are the group being asked their opinion on some of these really knotty trade-offs, then I think that would be really interesting. There's something fascinating in that group and how it works. There is probably a reason why the survey responses are so bleak, I would guess. I was looking at it before we started recording: about 187 of the over 400 survey respondents for this report are from the US, and we are a couple of months out from the presidential election in November. You're based in the States, Cathryn; how does it feel at the moment when it comes to the election, and particularly some of the issues we've talked about?

Cathryn Weems:

It's a very, very important election, and the stakes are incredibly high. It feels like that. You can't go very long without hearing something about the election, or wanting to talk about it because it's so overwhelming, or wanting not to talk about it because it's so overwhelming. You know, hiding my head in the sand isn't going to make a better outcome. But yeah, it's a lot. It's huge, and we're very divided as a country, which I wish that we weren't. That's far, far too idealistic and naive, but I do think that we have way more in common than we have differences. But that's not what's focused on, that's not what sells, you know, I was about to say newspapers. I don't know if people are buying newspapers anymore, but that's not what sells the eyeballs, right? And yeah, it's very, very concerning.

Ben Whitelaw:

Yeah. Interestingly, on the media: the experts in this survey were asked about the main threats to the information environment, and media, journalists and news organizations did come last, but a good chunk, almost 35% if I read the chart right, felt that media had a moderately or largely negative effect on the information environment. So I think it's important to note that, yes, social media owners and governments and politicians are at the top of that list, and that will be what gets the headline, but actually there are responsibilities across private organizations, news media, civil society and activist organizations as well that contribute to this. So, you know, we've all got a part to play. Interesting. So, an awful lot there on Google's approach to misinformation, and on the experts' slightly negative, slightly downbeat attitude towards generative AI and the information environment. We need to get some positivity back in this, Cathryn. We might have lost our listeners by now. But sadly, we have to talk about Elon Musk.

Cathryn Weems:

It has to happen at least once on each podcast, right?

Ben Whitelaw:

It has to happen, the M word, it's here. This is a kind of interesting story because of your background in particular. Folks might have seen this story as they've been reading around this week: essentially Elon Musk has, er, X has produced their first ever transparency report under his watch. Talk us through what you read into this.

Cathryn Weems:

Yeah, so, I don't know how much he was directly responsible for the publishing of it. Maybe, who knows? They came out with the new transparency report a couple of days ago, from what I could tell, and it's the first since the takeover happened. So just for, I don't know if it's a disclaimer or just background, but I used to be very involved in previous transparency reports that Twitter put together during the time that I was there. I oversaw the teams that were responsible for a few different sections of that report. My team was also responsible for the reports that were required by global content regulation, such as under NetzDG in Germany, or now the DSA, and India's IT Act, et cetera. So I was very, very involved with a bunch of variations of transparency reporting. And some background that people may not realize is that the amount of coordination, and the length of time it takes to put together a transparency report, is huge. It really is a huge undertaking. I don't know how many people worked on this one; there's obviously a whole lot fewer people there now than there were, but in the past it was a very major cross-functional effort, and at other companies I've worked at as well it was a major, major cross-functional effort, with people in multiple teams really trying to figure out all the data that underlies all the requests, as well as having people in the teams validate that data, make sure it seems reasonable, and go back and check it to make sure it's as accurate as it can be. And then also to write some of the commentary that's been common for transparency reporting, to kind of help users understand what it is that they're seeing. You can see a bunch of numbers, and you can see the numbers have gone up or down or stayed the same. That's interesting, maybe, by itself, but having just a little bit of color, where possible, can be really, really helpful in understanding what you're seeing, especially if you're an average user versus, you know, a researcher who focuses on this or something. There's also a positive side to these transparency reports. I remember we used to release two a year, and those were the days where you could actually share the work that you'd been doing, because especially for some of the legal requests that my team was responsible for, there are non-disclosure agreements, there's privacy and sensitivity related to them. You can't talk about the work in any specific way, whereas the transparency report allows you to share broadly, in a generic way, the work that you've done, and you can actually have that moment of pride about what you're doing and the huge volumes that you've been handling, even though nobody knows the individual requests, because they're not allowed to, for good reasons. So I'm really excited for the team that worked on the requests represented here, because it will hopefully give them that moment of reflection of, yeah, look at all the work that we've done. So that's one side that may not be something other people would think about.

Ben Whitelaw:

Yeah. I didn't realize that, actually, that it represented a milestone for internal teams as to their work and their impact. Like, how did you end up celebrating or acknowledging when a report went out? Was there anything done internally?

Cathryn Weems:

It varied for different companies, but when I was at Twitter, there were a number of transparency reports where we would publish it relatively early on the West Coast in order to try and time it reasonably well for the rest of the world. And then we'd go out for breakfast at a cafe near the office and just sort of celebrate, and then do the various posts on social media to share and have that broad "Mom, look at what I've done" kind of impact, for whoever it was that cared about what we do.

Ben Whitelaw:

Mainly me. I think I linked to a lot of those in Everything in Moderation, so they were read and appreciated.

Cathryn Weems:

Perfect. Well then I'm glad that you were reading them. So, this new report is a lot shorter, a lot. I think it's just a PDF, about 15 pages. With the old versions of the transparency reports, I don't know if they were always bigger than the previous ones, but we always seemed to have more information we wanted to add and include and be transparent about. So the site became a transparency center versus just the individual transparency report, and...

Ben Whitelaw:

You could search through the data, couldn't you, and kind of interact with it?

Cathryn Weems:

Yes. So you can't interact in the same way. You can still get the old reports, but it seems like they've actually done screenshots of the old interactive page, so you lose some of that. But it doesn't matter. I think X has still been publishing some of the legally required reports for content regulation; I think they've still done the DSA-required ones. But this is, as we said, the first one they've done that's proactive and voluntary.

Ben Whitelaw:

What stood out for you from it?

Cathryn Weems:

Yeah. So it's really hard to compare the numbers and the data in this one to the previous ones, because some of the policies have changed. I've purposely not been keeping up to date with all of the changes, so I don't want to, you know, be any authority about, well, this has changed and therefore this number is different. But I think comparing this report to the next one they publish, presumably in about six months' time, that seems quite common, will be a more useful exercise than trying to compare this one to the ones from before the takeover, because so many things are different. A few things it said in this transparency report: they're aiming to cement X as a truly safe platform for all, which I found interesting. I do think that might be more easily achieved by reviewing some of the policy changes, or some of the other decisions about the product, rather than by coming out with a transparency report; maybe staffing the safety function more appropriately would help more with that goal, but it's a good goal. I do think there were, I think it was about 5 million account suspensions mentioned in this transparency report, and then about 10 million posts that were either removed or labeled. It doesn't show the appeal rate, or the accuracy rate or error rate, of those decisions. Those numbers are a little higher than previous reports, but again, it's really hard to compare, so I think it's somewhat of a fool's errand to compare some of those numbers. But I wonder if the numbers are higher in this report because there is more automation, because there are fewer people. And I'm okay with a level of automation; I think it's necessary, and it can also be really helpful in certain aspects, in terms of moderator wellness and other things, and consistency. But if there's automation making a lot of those decisions, and that's why the numbers are quite high, then how many of those were appealed, and how many were actually errors? We don't have that insight. And maybe those numbers aren't actually any higher; I'm not saying they are. We just don't have that information.

Ben Whitelaw:

Right. It's kind of hard to compare like for like, isn't it, when so much has changed. And there's a really interesting quote in the Wired article from an X spokesperson, who says: as an entirely new company, we took time to rethink how best to transparently share data related to the enforcement of policies. Now, that's a little bit rich; "entirely new company" is not very true, I would say. There are obviously still quite a lot of people there who were there before. It's a kind of classic Twitter PR response, I thought.

Cathryn Weems:

I actually agree with that more. No, yeah, thinking about it as an entirely new company, absolutely, some of the people are the same. And if you go to certain forms, you still see the Twitter bird, Larry, instead of it saying X. So yes, of course it is still built on what was there before. But so much has changed. Even though it is some of the same people, the processes that they're following have changed, the policies they're following have changed, the way that decisions are made is definitely different. So it isn't a brand new company, it's not an entirely different company, but I believe that that person believes that, because that's how I hear it from the people that I do know who are still there; I think they think about it very differently.

Ben Whitelaw:

Yeah.

Cathryn Weems:

So that actually makes sense to me in all honesty.

Ben Whitelaw:

Yeah. Okay. Well, that's interesting. And, you know, like you say, this is a benchmark for future transparency reports, which we will be able to come back to, and maybe we'll get you in to talk through that, because that was a really interesting explanation of how they normally work. Thanks for sharing. We should probably just squeeze in one more story before we wrap up today: the ongoing story that we've covered on the podcast about Telegram and the arrest of Pavel Durov and the charges that have been brought against him, which we talked about a few weeks ago. There's a bit of an update this week: Telegram has announced that it will hand over users' IP addresses and phone numbers to authorities who have search warrants for them. This was very much at the heart of the charges brought against him. There were reports in the media that French authorities had sent thousands of requests to Telegram and had received no response back, which had led to the quite dramatic arrest when he arrived in France on a private jet a few weekends ago. And so Durov has posted that the changes to the terms of service should hopefully discourage criminals and reduce the amount of illicit activity on the platform. It's a bit of a U-turn in terms of the approach to trust and safety that Telegram has adopted for a significant amount of time. I know, Cathryn, you've been following this story a little bit. What do you make of this?

Cathryn Weems:

Yeah, so, I'm actually glad that they are going to be responding to some law enforcement requests. I think that's, for me, the thing that stood out when the arrest happened. Obviously it was instantly quite concerning, and then the more we heard about it, where we realized, or where we were told at least, that Telegram hadn't been responding to any law enforcement requests, that actually made me feel less concerned about, you know, the likelihood of other CEOs of tech companies getting arrested suddenly, because all of the companies I've worked for, and many, many others, all of them that I know of, are responding to law enforcement requests and government requests. They may not always be complying with those requests, but they are responding. And again, maybe the timeframe they're responding in isn't always what law enforcement would like; it's not always as quick as I'm sure they would like. But they are responding, and the requests are not just being ignored or going into a black hole. I used to manage the team at Twitter, and when I was at YouTube slash Google, I handled the law enforcement and government requests that came into the company. And yeah, there was a very clear internal process, and there was also public-facing information about how these requests were handled, as much as possible; obviously the internal guidelines were more detailed, as you would expect. But first you verify whether something's actually valid, because sadly there are situations where you do get fake requests from non-law enforcement, people posing as law enforcement officers to try and get user data, for example. And once you've verified that the request is as valid as you can check that it is, you also make sure that they're actually reaching out to the correct company, and that they haven't just copied something from a different subpoena or search warrant for Google when you're actually at Twitter, or you're at Telegram and they've reached out to somebody at, like, Dropbox, whatever. Silly things like that. So that's some of the reason why companies don't comply with requests: because it came to them but it wasn't actually really for them, or it was maybe about a user that doesn't even exist, or something like that. Maybe it was a typo, maybe it was just a complete mistake. So non-compliance isn't always the company pushing back to say, actually, we refuse to give you this information. That's just something to know. But when there's a valid request from a country, especially if you're doing business in that country or you have people on the ground in that country, at some point non-compliance gets really quite dicey. And there has been some regulation where certain roles are required to be in country, so that the governments do have some leverage over the companies, in order to be able to, you know, twist their arm when needed. Metaphorically, hopefully. But yeah, I think what they seem to be doing now at Telegram is industry standard, from what I can tell. They will be responding to, hopefully, just valid requests, not invalid ones. Whether they're going to comply with all of them, I'm sure they're not, just because I don't think any company does; you do have to balance protecting the privacy of your users with compliance with, you know, valid legal process. So, yeah, I think it seems like they're just now up to the benchmark of where they should be for this kind of thing.

Ben Whitelaw:

That is good. And, you know, it does, like you say, dispel this idea that all CEOs of social media platforms are going to be arrested. I think that's a really nice clarification to have. It doesn't seem like that's the way it's going to go, which is a slightly upbeat way to finish off today's podcast. I would say maybe if you're a CEO of a major platform, at least, um,

Cathryn Weems:

news. Finally, we got you there eventually.

Ben Whitelaw:

Thank God. Um, so yeah, thanks so much, Cathryn. You've been an amazing co-host for today's Ctrl-Alt-Speech. Thank you.

Cathryn Weems:

Thanks for having me.

Ben Whitelaw:

We managed to mention about half of the companies you've worked for, and we touched on moderator wellbeing just a little bit. But that's actually the topic of the conversation that I had with Concentrix's Dr. Serra Pitts, who leads the psychological health team for trust and safety at Concentrix. In the bonus chat that you're about to hear, Serra shares why psychological safety is so important for content moderation teams and talks a little bit about the heart rate variability tech that Concentrix uses to monitor physical response to harmful content. So, thanks again, Cathryn, and, uh, enjoy the bonus chat. I'm absolutely delighted to welcome Dr. Serra Pitts for her Ctrl-Alt-Speech debut today. Serra is a clinical psychologist and leads the psychological health team for trust and safety at Concentrix. Uh, welcome to the show, Serra.

Dr. Serra Pitts:

Thank you so much, Ben.

Ben Whitelaw:

Great to have you here. So we were talking before we started about how you've worked in trust and safety for a couple of years, but your background is actually in high-stress, high-risk work environments. So can you briefly tell us what that entails and how it compares to working in trust and safety?

Dr. Serra Pitts:

Yeah, sure. So, I haven't been in this space for very long, as you said. I know some people are lifers, which I love and I'm hoping to become myself, but for the last 15 years or so I've been working in these high-risk environments. So think about emergency services, think about military special forces, think about financial securities: those environments where people are under a lot of demand. Think about an emergency department where it's constant, right? It's constant trauma, accidents, things coming through the door. You don't know what's coming. That's the thing about these environments: they're demanding and stressful and you don't know what's coming. And that is similar to content moderation, because you are sitting in front of a monitor with a queue and there are things coming at you, and that is your job, you deal with what comes at you that day. And that's partly why I feel comfortable in this environment, actually, 'cause I've been around this kind of space. Yeah.

Ben Whitelaw:

And what made you make the shift? Was it something you'd planned? Something you became interested in?

Dr. Serra Pitts:

No, I always love hearing people talk about how they got into the space, and for me, I'm on the "fell in by accident" side of the experience table. I didn't know that much about it. I think a lot of people don't, and even psychologists and clinical practitioners certainly don't. I have experience working in a corporate environment, and I thought this was really interesting, really different and unique, and it really excited me.

Ben Whitelaw:

Yeah, great. These skills are obviously vastly important in the trust and safety space. Wellbeing, and moderator wellbeing particularly, has been a massive topic in trust and safety for the past couple of years; really, since I started writing Everything in Moderation back in 2018. And there have been reports of poor labour conditions and poor mental health support for teams. You're very focused on, you know, psychological safety in particular. So talk us through what that is and why it's a necessity for companies involved in moderation.

Dr. Serra Pitts:

Okay, so I consider myself a leader in trust and safety, and as such, psychological health and wellbeing in these teams in particular, like you mentioned, is my primary interest, my team's primary interest. Our objective is to reduce the potential harms of viewing different types of content and help teams take control of their health and their wellbeing by learning to cope effectively and make sense of the content they're seeing, and then helping them to perform at their best and do this valuable work, because we know content moderators take great pride in being part of this kind of frontline operation. And when you talk about psychological safety, that is a concept that's really important when you think about health and wellbeing, or healthy environments, because we know already that moderating disturbing content can be stressful. But one of the ways to protect teams from harm and reduce that risk of exposure is by focusing on psychological safety in the trust and safety ecosystem, so the whole team, the whole environment. And psychological safety, if you don't know, is defined as an environment where individuals feel safe to express themselves without fear of negative consequences. So for content moderators, think about someone who's looking at sensitive or egregious content all day, and there's a point in the day when it gets too much: I can't look at this anymore right now, I need to walk away, I need to take a break. Or maybe something's happened at home and the content you're looking at is reminding you of it, and it's something you need to move away from because it's affecting you. If you're in a psychologically safe environment, you are not afraid to say that to your team leader or your ops manager or whoever is helping you in your role: I need to go take a walk, I need to get out of here for a few minutes. Because what we don't want is a punitive environment where people feel compelled to sit for longer, or compelled to not feel their emotions, because that really is what causes burnout, anxiety, depression. In order for us to be healthy, we have to express how we feel about what we're seeing and what we're doing. I'll just add one more thing to that, which is that, of course, psychological safety impacts mental health and wellbeing, but it also impacts performance. When moderators feel safe, they're more likely to perform better, because it encourages people to make decisions more confidently and contribute ideas, suggestions and feedback without the fear of any negative repercussions.

Ben Whitelaw:

Right. And, you know, there is this stereotype, this idea in some parts of the industry, of moderators not being able to step away and put their hand up and speak to managers. So I think this is a really crucial topic still, unfortunately. What does that look like in practice, though? What does a business that prioritizes psychological safety actually look like? How do we know if people feel safe at work?

Dr. Serra Pitts:

Okay, so there are a couple of answers to your questions. The first one is: how do we know if people are healthy, or if they feel safe at work? One of the things that really matters for working with these teams is assessment. We need to know: how are people feeling over time? How are they coping with this material? Are they taking concerns home with them? How do we get answers to those questions? Well, the answer is we have to ask people. It's pretty obvious: ask them. Now, self-report measures, the surveys that we're all familiar with, are really useful and they form an important part of any framework. But alone, being subjective, they're not as effective as when they're also combined with an objective measure. And here's where it gets interesting for me, because one of the objective ways to measure psychological safety is using wearable device technology. So think about your Fitbit, your Garmin, your Apple Watch. There are loads of people who wear those devices on a regular basis, but we've been doing some work with our moderation teams, especially people looking at really egregious or sensitive content like child safety material, violence and extremism material, adult content. What we're doing is we're looking at HRV. Most wearables measure HRV, and that is heart rate variability. HRV reflects the time difference between each successive heartbeat. HRV is not fixed, it is not consistent; it varies with every beat based on what's happening in your life or your body at that moment. And the reason this technology is interesting for content moderation is because there's a significant relationship between HRV and mental health. So, for example, higher HRV is associated with better emotion regulation, reduced anxiety, and lower levels of depression. Now, that's something that we can really pay attention to in a proactive, objective kind of way.
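[Editor's note: the episode doesn't say which HRV statistic Concentrix's wearables report, but to make the metric concrete, one common time-domain measure is RMSSD, computed from the inter-beat (RR) intervals Serra describes. The sketch below is purely illustrative; the function name and sample values are hypothetical.]

import math

# Illustrative only, not Concentrix's method: RMSSD is the root mean
# square of successive differences between consecutive inter-beat (RR)
# intervals, given here in milliseconds.
def rmssd(rr_intervals_ms):
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical sample: RR intervals around 800 ms (roughly 75 bpm).
# Higher RMSSD generally indicates greater beat-to-beat variability.
print(round(rmssd([812, 790, 805, 830, 798, 815]), 1))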

Ben Whitelaw:

Right. And that's really interesting. So are you aware of other companies that do this? Is it something that only Concentrix does? Is this a broader trend you're part of? Mm.

Dr. Serra Pitts:

Some companies do use this technology as interventions, and some companies provide wearables to employees to educate them about their physical fitness or their sleep cycle, or to help them learn to relax better, and the intention there is that they hope their people will then use the data from the devices to become more active and healthier as individuals, right? We are using this technology in a different way, and I don't know of anyone else doing this, which is why I'm so excited to talk about it. We're looking at HRV technology as a monitoring device, so that we can know in real time how people are functioning over the course of their day or their shift. And the reason that this matters is because the sooner you can make an intervention, the less likely it is that a long-term problem will develop. So even though wearables aren't new, in fact that industry is booming,

Ben Whitelaw:

Mm.

Dr. Serra Pitts:

this is a new way to use that technology in an objective way.

Ben Whitelaw:

That's really interesting. So can you give us a concrete example of where a manager at one of the Concentrix operations would use that data? Would they make decisions in real time to take someone off the queue or to ask them to take a break? What does that look like?

Dr. Serra Pitts:

So it's actually partnering with our psychological health teams. Our psychological health teams will be the ones who see this information; it's private, it's sensitive. And what we can do with the data is identify who's in need of a check-in, who needs to have a chat now or today, or who maybe needs to be moved off the queue because this is happening frequently for them. And so what we do is we have a conversation with the person affected, the individual involved, and then work with that person to identify any workplace adjustments that might be needed. Do I need to be put on a different queue? Do I need a day off? Am I totally fine and just need a break? Right? And it will differ based on whatever's happening in that person's life or whatever kind of work they're looking at. And then we work with the management team to identify a strategy for this team, or this group, or these individuals.

Ben Whitelaw:

Amazing. And where can it end up? Like, how far can this kind of wearable technology be taken to better understand psychological safety? Where do you see it finishing?

Dr. Serra Pitts:

This is a new type of data, a type of data that doesn't exist yet in the literature or in the knowledge base in the trust and safety environment. And ultimately what we want to answer are really basic questions that we try to answer all the time: how does moderating different types of content actually affect people? And what are the most effective strategies to help people remain healthy and safe in these environments? Now, I say those are basic questions because we're all addressing them every single day, all of us in this industry. But we're doing it based on limited research, limited data sets, different types of data; there's not a lot of data around on moderator health in psychology. And so this is a deeper, richer look at what the impact is and what's actually effective.

Ben Whitelaw:

Brilliant. And before we wrap up, where can listeners of Ctrl-Alt-Speech find out more about Concentrix's work and about this kind of interesting intervention that you've talked about today?

Dr. Serra Pitts:

Yeah, definitely. So this topic is so important to me, obviously, and I'm doing a workshop at the Trust and Safety Festival in Amsterdam on the 20th of November. I'm going to deep dive into the use of this technology and how measuring new variables in an objective way can give teams and leaders that deeper insight into the impact on teams. And that will, again, help people identify, practically, how to direct interventions and how to design workforce management strategies. We're going to conduct a live experiment, if you want to participate (it's super cool), to see in real time how this technology works. And the reason, again, it's so important is because the only way we can really be effective in our interventions or preventative health and safety programs is to really know what's going on and target the most appropriate issues. And right now we don't have that information.

Ben Whitelaw:

Yeah, no, it definitely makes sense. And more data is more understanding, in a lot of cases. So this has really been fascinating to hear about, Serra. Thank you so much for taking the time to speak to us. I'm looking forward to hearing how the experiment goes in Amsterdam, and to hearing who the guinea pig is. And again, thanks for taking the time to talk us through it today. It's been great to hear from you.

Dr. Serra Pitts:

Thank you so much. It's my pleasure.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
