Ctrl-Alt-Speech

Spotlight: Modulate CEO Mike Pappas on Voice Moderation

July 23, 2024 Mike Masnick & Ben Whitelaw Season 1 Episode 20

In this sponsored Spotlight episode of Ctrl-Alt-Speech, host Ben Whitelaw talks to Mike Pappas, the founder & CEO of our launch sponsor Modulate, which builds prosocial voice technology that combats online toxicity and elevates the health and safety of online communities. Their conversation takes an in-depth look at how voice is becoming an increasingly important medium for online speech while technology is making more advanced voice moderation possible, and what that means for trust and safety.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and our sponsor Modulate.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Transcript


Mike Masnick:

Hello, and welcome to Ctrl-Alt-Speech. If you are a regular listener, you'll know that we sometimes feature sponsored bonus chats at the end of each of our episodes. This is something slightly different. It is a sponsored Spotlight interview in which we get to go deeper with leading figures in the trust and safety space about important issues having to do with online speech. Last week, Ben got to speak to Mike Pappas, the founder and CEO of our launch sponsor, Modulate, which builds prosocial voice technology that combats online toxicity and elevates the health and safety of online communities. Their really fascinating conversation touches on how voice is becoming an increasingly important medium for online speech and what that means for trust and safety, including some perhaps surprising ways that voice can actually be easier to moderate than text. Take it away, Ben.

Ben Whitelaw:

Mike, great to have you back on Ctrl-Alt-Speech. Thanks very much for taking the time today to have a chat with us. It's been a few months now since you appeared on the podcast, with Modulate being the launch sponsor of Ctrl-Alt-Speech. How have things been in the time since we last spoke?

Mike Pappas:

Yeah, thanks very much for having me again, Ben. Excited to be here. Things have been going well. We recently launched a case study with Call of Duty on the impact of voice moderation in these massive AAA games, so that's been really exciting, getting to share some stats. I'm sure we'll get into that in a little bit. And otherwise we've just been continuing to roll out and chat with a lot of folks across a wide variety of industries about what safety looks like in this rapidly changing world we're in.

Ben Whitelaw:

Yeah, I'd love to hear more about that, 'cause I know you're going through a lot of product exploration as a company. You've made a lot of inroads into gaming and have a real deep client base there. What is it you're seeing when you're speaking to trust and safety leaders and execs in other companies where Modulate might fit?

Mike Pappas:

Voice has always been very impermeable to trust and safety teams, right? There's just been this general understanding that there's nothing you can do about the risks that are there. So I think what's caught people's eye about what we've been doing in gaming is just the fact that we can access that voice in a way that's still respectful of privacy rights, in a way that's actually cost effective, and really crucially in a way that understands emotion and context, so that we're not just false-flagging everything left and right. The way Modulate really thinks about our mission, we have this phrase, prosocial voice intelligence. And what that really boils down to is: whenever people are having these voice interactions, there are various ways for them to go wrong. It could be open hostility, it could just be a culture clash, it could be people misunderstanding each other, but all of these things destroy value we could be getting from each other in these conversations. And so our vision is, in every conversation where that value is being lost, can we be helping recover some of that value in some way?

Ben Whitelaw:

I'm really interested in the tone and the interest from the companies that you've spoken to, and particularly the level of understanding about the challenges that manifest themselves in voice. How familiar are the folks you're talking to with how harmful content emerges? And when you talk to them about what you can do, is it a surprise to them?

Mike Pappas:

I'd say yes. You know, we're talking with a lot of smart people who grok it very quickly, but so many intuitions in this space are built on text chat, and there really just are some fundamental differences there. Some are more obvious: in text, you can replace the letter 'i' with an exclamation mark to try to get around the filter. There's not really such a thing in voice. So it actually turns out that voice can be more resilient, and you can end up with higher precision rates once you solve the early problems of how you even access the voice in the first place, which is counterintuitive to a lot of folks. And there are a lot of things like that that we have to sort of walk people through.

Ben Whitelaw:

Yeah, I imagine. So in terms of those areas, those types of companies that you've been talking to in the exploration work you've been doing, where are you seeing interest? What's new? How far away from the gaming industry are these companies? Can you talk us through those?

Mike Pappas:

I think we're seeing a lot of interest from everything that is sort of a social platform. So that can range from the obvious gaming and other hangout spots to things like online dating platforms or social media. Within that space, obviously there's a lot of focus on things like mis- and disinformation and the distribution of that. There's certainly a lot of attention on things like financial fraud; pig butchering and stuff like that in the online dating world is very big. And of course there's the, forgive this terrible turn of phrase, good old-fashioned hate and harassment, which is still around and very much still a priority for everyone. We're also starting to have some conversations on the enterprise side: companies that do things like hosting call centers, or having other ways that you can reach out to an employee when you're having a problem. Unfortunately, there are sometimes consumers that are very rude and unfair and sometimes escalate even further with those workers they're contacting, and the platforms want to be able to protect their employees, take care of their mental health, and support them in these situations. They need tools to be able to recognize when something bad is happening there as well.

Ben Whitelaw:

So let's go into that a bit more, 'cause that's a really interesting case, I guess. Where do you fit in that particular piece? Is it a case of delaying customer service representatives from receiving messages? Is it that they're reported in a different way? How does that help the issue that they brought to you?

Mike Pappas:

So across everything that we do, we never do live bleeping or live interception of the audio; that just adds too much latency and breaks the ability to have real conversations with each other. But we are sitting on top of it and able to recognize very quickly, within a matter of seconds, when a conversation is going foul. So what that could result in, in say a call center ecosystem, is: hey, maybe one of your junior reps is getting their head torn off and you need to escalate and bring in a manager who can help solve this problem. We can help flag that and bring someone in. That can also help with companies that are moving to AI agents and are trying to think about when the customer is too frustrated with this AI agent and we need to get a human on the line. And on the flip side, of course, you want to make sure that your agents are not mistreating your customers as well. This is especially important on both sides in enterprise applications where you have physical interaction between the employee and the customer in some way. A lot of gig economy platforms in particular have the ability for you to call some gig worker who you might end up physically interacting with. And if someone on that phone, before the physical meeting, is shouting a bunch of slurs and threatening violence or whatever it is, you might want to cancel that order in some way. You might want to do something about that instead of letting what could be a really extreme harm come to pass.
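To make that escalation flow a little more concrete, here is a minimal, hypothetical sketch of how a platform might route this kind of near-real-time voice flag in a call center setting. The categories, severity thresholds, and actions are invented for illustration; this is not Modulate's API or ToxMod's actual output format.

```python
# Hypothetical sketch: routing near-real-time voice-safety flags in a call
# center. Categories, thresholds, and actions are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    NONE = "no action"
    NOTIFY_SUPERVISOR = "ping a supervisor to listen in"
    ESCALATE_TO_MANAGER = "bring a manager into the call"
    HANDOFF_TO_HUMAN = "swap the AI agent for a human rep"


@dataclass
class VoiceFlag:
    call_id: str
    speaker: str          # "customer" or "agent"
    category: str         # e.g. "abuse", "threat", "frustration"
    severity: float       # 0.0 (mild) to 1.0 (severe)
    agent_is_ai: bool = False


def route_flag(flag: VoiceFlag) -> Action:
    """Map one flag to an escalation action (thresholds are assumptions)."""
    if flag.category == "threat" and flag.severity >= 0.7:
        return Action.ESCALATE_TO_MANAGER
    if flag.category == "frustration" and flag.agent_is_ai and flag.severity >= 0.6:
        return Action.HANDOFF_TO_HUMAN
    if flag.category == "abuse":
        # Review abuse in either direction: customer toward rep, or agent toward customer.
        return Action.NOTIFY_SUPERVISOR if flag.severity >= 0.5 else Action.NONE
    return Action.NONE


if __name__ == "__main__":
    flag = VoiceFlag("call-42", "customer", "frustration", 0.8, agent_is_ai=True)
    print(route_flag(flag).value)  # -> swap the AI agent for a human rep
```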

Ben Whitelaw:

Yeah. I mean, those kinds of incidents are all too common in, obviously, food delivery and taxi services. Why do you think you're seeing interest among those companies now? You've existed as a company for a fair chunk of time, and you've obviously focused primarily on gaming. What is it about the last few years, or the current context, that has seen these companies emerge and start to want to talk about this themselves?

Mike Pappas:

I think there are three major factors to that. A simple one is just that we're making more noise now; it's easier to find out about what we're doing. And again, this is a very new space, being able to do this kind of voice analysis, so a lot of platforms just hadn't ever conceived of it as a possible thing to wish for, and it took us making some of this noise, being able to talk about our work with Call of Duty and others, to get enough attention on it. I think the second piece is, it's no secret to say the gig economy in particular has gotten a pretty bad rap over the last few years. There are a lot of real issues there that need to be resolved, but a lot of these platforms really do care about their workers. They're trying to figure out how to make something work in a very low-margin space where they're constrained from a lot of different directions, and saying, hey, is there something we can do here that actually fits into this very tight equilibrium and that genuinely can benefit these people? That is something there's appetite for. I think the last and most mundane reason is just that the industry is getting a little bit more mature. We've seen this in a lot of spaces: social media became mature before gaming, and gaming followed on a lot of that stuff. The gig economy is even newer and is starting to become more mature. So we're speaking to some of these platforms that are really just finishing up, or even just getting into, some of the most basic trust and safety work and starting to think about what a level-two iteration of this looks like. That's just the natural course as a new industry finds its feet.

Ben Whitelaw:

Yeah, worth noting that TrustCon is happening this week as this episode goes out, and I was reading that this year's conference, with over 1,300 trust and safety professionals, is the biggest so far. You're right, this is an industry that's evolving and growing, and even if platforms are maybe reducing the size of their trust and safety teams, there are other people thinking about these challenges and working with the platforms in different ways, such as yourselves. When you're talking to these companies, Mike, and trying to figure out their pain points and their problems, how do you prioritize which areas to explore as a company? How do you prioritize requests? How do you prioritize insights and start to build them into the products that you have? Primarily ToxMod, but I'm sure there are others.

Mike Pappas:

One piece of it is we have a pretty stern ethical stance on what kind of voice analysis we actually feel comfortable participating in. That's come up even in the voice moderation space, where we've had platforms ask things like, hey, can you shut down all political speech, and stuff like that that we don't feel great about. And one of the very specific design choices of ToxMod is that we create the harm categories. We don't give our customers the ability to create new detection types, because we want some oversight over how they're using that tool. Similarly, without going into specifics, any listener can probably imagine some organizations that hear, oh, you could do voice analysis on all kinds of calls, and see some applications for that. We've gotten some of that inbound. That's stuff we're very, very careful about; we want to make sure we're thinking hard before we get anywhere near those applications. So that's, of course, one piece. The other side of it is the simple truth that Modulate is a business. We need to make money. So within the bounds of what we feel is ethical, is prosocial, is benefiting the world in some way, the next question is: what is ultimately the scale and size of that market? What is the kind of value we can deliver for people? Because if we're pricing things in a non-exploitative way, the amount of money we can make is going to be proportional not just to the number of people in the market, but to the amount of value we're generating for those people. And so that's what we're really trying to unpack in each of these markets: how much good can we actually do for the average user here?

Ben Whitelaw:

Are you able to go into how you do that? Because I think that's a really interesting dynamic that you talk about there. You get these inbound requests, or you talk to prospective clients and they have a particular need. What is the mechanism by which you figure out whether this is something that Modulate needs to do? Because I think there will be people listening to this episode and other episodes of Ctrl-Alt-Speech who are building similar companies, who are creating startups, who are figuring out how to scale and grow whilst also having an ethical, principled approach to some of these questions that you raise. Can you talk a bit about how you do that, without lifting the lid too much, I guess?

Mike Pappas:

Yeah, I think the key is to start by asking: why are people coming together from both sides of the platform here? So from the perspective of a social platform, and I don't mean to paint everyone with greed, there are a lot of other incentives here too, but again, companies need to make money, part of their perspective is: we're trying to create a space that people will want to come and spend time in, and ultimately spend money in. That's part of what they're trying to facilitate. So from that perspective, you want to understand how they measure where that money is coming from. In games, that might be things like user churn, or how long they spend playing in any particular game session, which increases the odds that they're going to invest in things like skins and other in-game purchases. But you also look at it from the flip side and say, why do people play games? They don't play games because they want to give a studio their money. They play games because they're getting all of these social benefits from it, and there are tons and tons of studies on the positive impact that games can have; that'd probably be a whole other podcast episode to go into the weeds on. Everyone sees these enormous benefits because of the opportunity to socialize with people you might not otherwise have immediate access to, the ability to really find people that you can quickly build deep bonds with. And so, from the perspective of the players, we want to improve those financial KPIs for whatever platform we're looking at, but the way we want to do it is by making the player experience actually better. And so that's where a trust and safety application works really nicely, because it's not, oh, we're going to be so malicious and greedy that we're going to make sure no one gets harassed; we actually feel really good about making sure no one gets harassed. And the natural result of that is that if the ecosystem is a fun place for people to be, they'll naturally spend more time there. That'll translate into those KPIs. And then if we have a good understanding of what the impact of that is, we can translate that back into the actual dollar value of Modulate standing on top of that platform and how much we're generating for it. I'm using gaming as an example because it's the place where we've spent the most time, but obviously you can apply similar lessons to everything from call centers onward. Why do people call call centers? They have a problem. They care about how quickly they get their problem resolved. You can apply that same lesson. But also, again pointing back to that Call of Duty case study we recently published, one of the stats I was so excited about there is that in an A/B test, we found that a moderated space, after 21 days, had something like 30 percent more active and engaged users than an unmoderated space, and that's in a game with such dedicated players as Call of Duty. So being able to point to something like that and say we are not just improving your player experience, which is hugely important, but clearly improving the bottom line as well, for every studio that we talk to, that's a huge part of our sales pitch. But it's also a huge part of our internal logic of: is it worth trying to go and sell to these platforms? Will that be impactful enough for it to make sense?

Ben Whitelaw:

Yeah, okay. So you're figuring out what the similar metrics are for those different industries, in the same way that you've isolated those for gaming, right? It's finding those equivalents. And in terms of how you stack those: you could measure the success of the implementation of a tool like Modulate in a number of different ways if you're a platform, and I'm sure there are particular metrics you want to see a change in once you come on board. How do you rank them or stack them or think about them holistically? Because it's always the case that you can try and optimize for one, maybe the number of incidents of hate speech or the number of reports, and have unintended consequences on others. So how do you balance those, and how do you work with the partners you have to do that?

Mike Pappas:

Yeah, it's a really interesting question. My first reaction, which might be slightly skewed from what you're getting at, so I promise I'll give you a chance to push back on me, is to talk about leading versus lagging indicators, especially because this idea of voice analysis is so new. Most platforms don't have a really rich understanding of even what's going on in their voice ecosystem yet. When we turn this on, whether for a call center, a game, an online dating app, whatever, it's probably going to take three to six months before you really see that change propagate through all parts of your community ecosystem and you get a full picture of how much value we're generating for you. But for a platform that needs to make a big purchase decision on whether to bet on Modulate for this long period of time, dragging that out three to six months can be painful for everyone. So we think a lot about what our leading indicators are: things that are not necessarily the final answer, but give us a really strong prompt on what's actually happening here. As an example, again in the gaming space, the lagging indicators are things like player retention and engagement levels. That's the stuff we're really trying to improve. The leading indicator is how many instances of hate speech you come across per hour. That, we can measure immediately upon turning this on; we can see our immediate impact. We have a pretty strong hypothesis that bringing that number down is going to increase player engagement. But it's always possible that, hey, we bring that number down, but the game also ships an update that messes up the user experience and players leave anyway. So we can never directly tie that leading indicator to the lagging indicator, but it's a strong enough hypothesis that it makes people feel comfortable saying, okay, it's worth taking maybe a year to do a really robust analysis of this. And that's where you get stuff like this case study, from really investing that time and energy to get a really deep understanding of what's happening within your ecosystem.
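For readers who think in code, here is a minimal sketch of the leading-versus-lagging split Mike describes: the incident rate is observable as soon as moderation is switched on, while retention takes weeks to settle. The field names, the 21-day window, and the cohort logic are illustrative assumptions, not any platform's real schema or Modulate's methodology.

```python
# Hypothetical sketch of a leading indicator (incidents per hour of voice chat)
# versus a lagging indicator (21-day retention of a launch-day cohort).
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List


@dataclass
class VoiceSession:
    player_id: str
    day: date
    hours_in_voice: float
    flagged_incidents: int   # e.g. hate speech detections in that session


def incidents_per_hour(sessions: List[VoiceSession]) -> float:
    """Leading indicator: measurable immediately after turning moderation on."""
    hours = sum(s.hours_in_voice for s in sessions)
    incidents = sum(s.flagged_incidents for s in sessions)
    return incidents / hours if hours else 0.0


def day21_retention(sessions: List[VoiceSession], cohort_start: date) -> float:
    """Lagging indicator: share of the day-one cohort still active 21+ days later."""
    cohort = {s.player_id for s in sessions if s.day == cohort_start}
    retained = {s.player_id for s in sessions
                if s.day >= cohort_start + timedelta(days=21)}
    return len(cohort & retained) / len(cohort) if cohort else 0.0


if __name__ == "__main__":
    sessions = [
        VoiceSession("p1", date(2024, 6, 1), 2.0, 3),
        VoiceSession("p2", date(2024, 6, 1), 1.5, 0),
        VoiceSession("p1", date(2024, 6, 23), 1.0, 0),
    ]
    print(incidents_per_hour(sessions))                  # leading: 3 incidents / 4.5 hours
    print(day21_retention(sessions, date(2024, 6, 1)))   # lagging: 1 of 2 players retained
```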

Ben Whitelaw:

Right. And so, I guess my pushback would be: what would the ideal series of indicators be? If there are leading and lagging indicators, how would you frame those in an ideal world?

Mike Pappas:

I think the typical leading indicator is: what are the kinds of immediately bad experiences that we think lead to long-term behavior changes? And we can note those down. Maybe it's exposure to hate speech. Maybe it's actually that you submit a player report and no one responds to it; maybe that's one of the bad experiences that we think leads to negative behavior changes, and in that case you might set different KPIs about how responsive we can be to user reports. But you start by working with the platform and saying: what would you bet are the couple of, like, crystallizing moments that really harm your community experience? Those are our leading indicators. And then, again, the lagging indicators usually ultimately come from the finance team saying: what are the inputs into our model that tell us how we're actually doing economically? And once you have those two things on paper, you can start to draw up a specific plan for how to combine them over a more extended test.

Ben Whitelaw:

Okay, cool. And I want to ask one more question about your, I guess, investigations into the wider market and where you see Modulate fitting. Are there other applications beyond hate speech and toxicity where you can really make a mark, do you think? I'd love to hear you talk a bit more about the other types of harm that Modulate and ToxMod are able to identify. But are there areas where you're seeing opportunities that the current models don't necessarily focus on, or new products that you think you can build out from what you've got so far?

Mike Pappas:

Yeah. I mean, the basic motif of what we're trying to build is changing the proportion of different types of experiences people have. So that usually comes down to which experiences are really bad and which experiences are really good. On the bad side, hate speech and harassment certainly is one example. There are other kinds of bad outcomes too: I talked about things like misinformation, things like fraud and scams. There are also more extended bad outcomes, things like grooming a child or radicalizing a disaffected teen. Those are particularly interesting because the victim, in the early stages, perceives what's happening as "I'm being treated well, I have a new friend" or something like that, so you're never getting user reports of it. So that's a whole new class of things to be detecting. That's all on the harm side. On the flip side, there's also the positive stuff, and something that we're working on with a couple of our partners right now is: how can we identify the users who are really positive members of the community? In a gaming situation, that might be folks who coach new players and help them get more used to the game, or folks who are just really good sports even when they lose and make it more fun to play competitively with each other. In a call center situation, that might be the reps who are able to defuse a situation really effectively when a caller is really frustrated with something, or the folks who are able to actually get a deeper understanding of where the customer is coming from and build a bit of an emotional connection with them instead of just reading through a script. So there's the benefit of being able to not just detect and reduce those negative experiences, but also detect those positive experiences, figure out what leads to them, and then see: can we do a little bit more to promote more of that sort of positive experience for the rest of our community?

Ben Whitelaw:

Really interesting. In an old world, I used to moderate comment spaces on news sites, not the best job in the world, and we used to pull up the best comments into a kind of must-read section of the page. We used to bring people into the newsroom to talk to editors. So I really like the idea of positively encouraging behaviors that you want to see more often. How do you think that will bear fruit in terms of the clients you work with and the products that they have? What could that manifest itself as?

Mike Pappas:

I think there's a huge potential for this. Again, most bad actors on any of these social platforms, especially, are not actually bad actors. They're not coming in with the intent to do harm. Sometimes it's a culture clash, sometimes it's kids just being kids and saying some stuff, sometimes it's someone who learned the wrong norm and thinks they're in a sports bar when, in fact, they're in a library. But when people get that understanding of what kind of space they're supposed to be in, study after study shows that you get really profound changes. One common example is just giving people a notification that, instead of saying "you did a bad thing, you're banned," says "hey, this was a violation of the code of conduct, and if you don't understand that, here's a little bit more explaining why it's a violation and what good behavior could have been instead." I think it was Apex Legends that produced a study on that a couple of years ago, where they saw an 85 percent drop in recidivism. We've been doing some pretty light work on that with Call of Duty, and again, in the case study, we find, I think, an 8 to 10 percent month-over-month drop in repeat offender rate. So we're seeing really continuous growth and improvement of the community as they learn more and more what's desired of them and what good looks like. I think, for a lot of the reasons people aren't behaving better, again, it's not that they're trying to ruin people's experiences. It's just that, frankly, no one ever taught them how they're supposed to behave in a world as diverse and multicultural and complicated and often emotional and competitive as the online world ends up being.

Ben Whitelaw:

Yeah, and the constant shifts, I guess, in those standards and expectations and policies on the platforms and sites that you go to also make that a very confusing world to be in, don't they? So I'm really interested, Mike, to flip, I guess, to how you build these products and models, because once you've taken the insights from the platforms and the clients you're thinking about and talking to, the question is: how do we build those out ourselves? AI safety has become such a big topic in the last few months in particular, and there's been such a focus on how you very consciously and ethically build out models that don't cause unintended harms. How does Modulate think about that in the work that it does?

Mike Pappas:

I think, first off, we sort of break it down into the separation between safety from AI and using AI for safety, both of which have their own important considerations. Safety from AI, I mostly think of in the context of, like, making sure gen AI doesn't teach someone how to build a bomb or something like that. But there are applications to us, too. If we were building a tool that had a ton of inherent biases in it, that's a potential failure of the AI actually making some people more unsafe. So that's an area where our team invested quite a lot initially on manual curation of a really rich, expansive data set. And we have a benefit here: because we've been in so many of these games, we've been able to accumulate hundreds of millions of hours of real emotive conversations between people. A lot of the other data sets out there are things like someone talking to their Amazon Alexa, where they're talking in very flat tones, trying to be understood by a robot. The data we have is much more rich and human. And so we've been able to build out a really strong data set that spans all kinds of diverse speakers, accents, languages, and cultures, and allows us to routinely test our models and make sure that they are performing consistently across all those different demographics and different ways of using vocabulary. Flipping to how to use AI for safety: a big area we think about here is, people say "context" and it gets sort of overused, but the way we really think about it is harm versus behavior detection. So behavior detection is: did someone say the N-word? If so, let's do something about it. Harm is: did someone cause harm? And the importance here is that using the N-word isn't always causing harm, right? There are reclaimed uses of a lot of these slurs out there. There are other examples where the same sentence to one recipient is sexual harassment, but to another is flirting, depending on the receptiveness there. So instead of starting out and saying this sentence or this phrase or this word is bad, we start out by building an AI that can look for the myriad human ways that we exhibit "I feel harmed." That might be I'm shouting, it might be I'm debating, or I'm crying, or I might have just gone into a shocked silence. We can look for those early cues that say, I actually feel harmed. That's the bad outcome we care about. And then we can say, why do we think Mike is harmed? Oh, well, Ben was just shouting the N-word a whole lot three seconds before. Those things are probably correlated. So in this case, it was a bad use of the N-word. Being able to start with harm first is something that really requires this more contextual, more sophisticated AI that can understand those emotional cues from people. But now that we have AIs that can do that, we're able to do something much richer that still allows for friendly banter, allows for self-expression, but can also stamp down on harm when it does come up.
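As a rough illustration of that "harm-first" ordering, here is a hypothetical sketch: first look for cues that a listener appears harmed, then look back at what preceded those cues for a likely cause. The cue names, the lookback window, and the event format are invented for this example and are not ToxMod's actual model or pipeline.

```python
# Hypothetical sketch of harm-first detection: detect "I feel harmed" cues from
# a listener, then search the preceding few seconds of the conversation for a
# behavior by another speaker that plausibly caused it. All labels and the
# 10-second window are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

HARM_CUES = {"shouting_back", "crying", "sudden_silence"}      # listener reactions
CANDIDATE_BEHAVIORS = {"slur", "threat", "sexual_comment"}     # speaker behaviors


@dataclass
class Event:
    t: float        # seconds from the start of the conversation
    speaker: str
    label: str      # one of HARM_CUES or CANDIDATE_BEHAVIORS


def find_harm(events: List[Event], lookback: float = 10.0) -> Optional[Tuple[str, str]]:
    """Return (harm_cue, likely_cause) when a harm cue follows a candidate
    behavior from a different speaker within `lookback` seconds, else None."""
    for cue in events:
        if cue.label not in HARM_CUES:
            continue
        for prior in events:
            if (prior.label in CANDIDATE_BEHAVIORS
                    and prior.speaker != cue.speaker
                    and 0 <= cue.t - prior.t <= lookback):
                return cue.label, prior.label
    return None


if __name__ == "__main__":
    convo = [Event(12.0, "ben", "slur"), Event(15.0, "mike", "sudden_silence")]
    print(find_harm(convo))  # -> ('sudden_silence', 'slur')
```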

Ben Whitelaw:

Yeah, okay. That's a really elegant description, I think, of how it works. I mean, we have to ask, because there have been so many incidents of this in the news recently, and Mike and I have covered it a lot on the podcast: over-censorship, or over-policing of certain content. I imagine that in voice, with the complexities that you described, that risk is even higher. It'd be great to understand how you think about that, and whether you maybe have that as one of the indicators of the work that you do. Where does that come out in the work?

Mike Pappas:

Yeah, I mean, there are a couple of pieces to it. The first is that Modulate doesn't determine what is acceptable on a given platform. Again, we do have some lines we won't cross in terms of things we will not help you detect. But take two of our customers: Rec Room versus Call of Duty. Call of Duty is an M-rated game full of banter, full of competitiveness, full of violence. If you want to talk a little violently, if you want to throw around the F-word a little bit as part of that banter, no one should be stamping down on that, right? You should be pretty free to do that, as long as you're not clearly causing someone harm or intending harassment or something in a more directed way. On a platform like Rec Room, which has a much higher density of kids on it and is much more social, there's less of a fundamental violence element, so someone saying, you know, "I'm going to hunt you down and shoot you" is a very different kind of comment there. And so that's the first piece: we work really carefully with each platform to understand what is acceptable. And something we then push the platforms really hard on is: okay, now that you've figured this out, tell your users. There are a lot of platforms that will say, oh yeah, we have a code of conduct, but what they actually have is a piece of legalese that none of their players can hope to penetrate. And so their players don't know what space they're coming into. But if instead you say, all right, we have a clear understanding of what's acceptable here, again, like, are we a sports pub or are we a library or are we a playground for kids, being able to differentiate that really clearly tells people up front: is this the kind of space you were looking for? If you want to have a big political debate with someone and get a little rowdy, is this the place for it? And so that's how I think about censorship. For me, it's not about whether you can speak on any one platform, any more than it's about whether you should be able to shout whatever you want in every building in the world. There are some spaces that should allow you to do that, for sure, but each proprietor of their own space should have some control over what kinds of behaviors are acceptable there. A healthy society should have a diverse set of those spaces that allow for all these different ranges. But that shouldn't be imposed on any single platform, as long as that platform is being clear and upfront about what kind of space they're trying to be.

Ben Whitelaw:

Yeah, okay, great. Mike, that's really fascinating. We're coming to the end of our conversation today, so I wanted to just ask: what are you most excited about in the coming weeks and months, as far as voice moderation goes and within the trust and safety space? What are you excited to see? What are you predicting? We'll hold you to account and ask you this in a few months' time when we speak to you again.

Mike Pappas:

Yeah, I mean, I think one very, very selfish thing is that we're going to get to make a couple of pretty cool announcements here about this expanded set of applications where voice can really make a difference. So I'm very much looking forward to being able to tell those stories a little bit more openly. I also think there's a really interesting thing evolving in the world right now about voice as a user interface. We've had some of these moments before: oh my god, chatbots are going to revolutionize UI, and then ChatGPT came out and there was the same thing. And it's certainly not the case that audio is the right user interface for everything, but I do think, as some of these AI tools get better, being able to say, hey, you can consistently use your voice whether you're talking to a person or an AI adds a lot of simplicity and a lot of clarity to these kinds of applications. It makes it easier to swap between them in some cases: maybe, say, for childhood tutoring, you can start with an AI talking to the kid and bring in a human who can give even deeper guidance as the kid requires it. There are a lot of interesting ways that you can develop more interplay there. And what I think is really cool is that if you're mixing AI and human voice, it's all just audio. So from a safety standpoint, from an experience standpoint, we don't need to do something really fancy to be able to tolerate that. It's something that just happens very organically as new sources of audio come up. So I'm excited to see the new platforms that are emerging in this world, the new ways that people are using voice to create engagement with each other, especially because adding that synthetic voice is in a lot of ways easier to just continue processing under the same paradigm we've already built, as compared to other types of synthetic media that might need more sophisticated new kinds of tools built around them.

Ben Whitelaw:

Yeah, really interesting. So voice as a medium is finally here, and Modulate is here for it as well. Mike, thanks so much for your time today. It's been fascinating to chat to you. Really grateful for your time, and we'll speak to you soon.

Mike Pappas:

Same to you, Ben. Always enjoy the conversations. Thanks so much for having me.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.