Ctrl-Alt-Speech

Spotlight: Five Years of the Oversight Board, from Experiment to Essential Institution

Ben Whitelaw, Paolo Carozza & Julie Owono Season 1 Episode 86

In this sponsored Spotlight episode of Ctrl-Alt-Speech, host Ben Whitelaw talks to Oversight Board co-chair Paolo Carozza (Professor of Law and Concurrent Professor at the University of Notre Dame, Indiana) and Board member Julie Owono (Executive Director of Internet Without Borders and research affiliate at the Berkman Klein Center) about the Board's five-year journey and its plans for the future.

Together, Ben, Paolo and Julie discuss the Board's recently published report, From Bold Experiment to Essential Institution, and what it means to call the Board "essential" in today's ever-evolving internet landscape. They also talk about how the Board has changed, the criticisms it faces around cost and influence, and what comes next in 2026 and beyond.

This episode is brought to you in partnership with the Oversight Board.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Speaker:

Hello and welcome to a special sponsored Spotlight episode of Ctrl-Alt-Speech. In this episode, Ben gets to sit down and speak to two members of Meta's independent Oversight Board, which was set up with great fanfare back in 2020 and is sometimes likened to a Supreme Court-style body for content moderation. At the end of last year, the board produced its five-year report, charting how it had evolved from an innovative experiment in platform governance into what it likes to call an essential institution for internet users and their online rights. In this episode, Ben talks to two members of the board, Paolo Carozza and Julie Owono, about what that actually means in practice, for users, for Meta and for the wider ecosystem of online speech, and how the board plans to execute on its considerable ambitions. So please enjoy.

Ben Whitelaw:

Paolo, Julie, first of all, welcome to Ctrl-Alt-Speech. Really glad to have you on the podcast for today's sponsored Spotlight episode, and a big thanks to the Oversight Board for financially supporting Ctrl-Alt-Speech and the work that Mike and I do each week. Before we get into today's discussion, I want to do you the courtesy of introducing you both to our listeners. Paolo Carozza is a professor of law and concurrent professor at the University of Notre Dame in Indiana, an expert in constitutional and international human rights law, and a current co-chair of the Oversight Board. And Julie Owono is the Executive Director of Internet Without Borders, a nonprofit that defends citizens' digital rights and promotes the free flow of information online, as well as a research affiliate at the Berkman Klein Center and a member of the Oversight Board. Great to have you both here today.

Paolo Carozza:

Thanks, Ben. It's great to be here. I've been a frequent listener and learned a lot from your podcast, so it's a privilege to be on with you.

Ben Whitelaw:

Brilliant. So glad to have you here. Let's get started then with a bit about the Oversight Board, because it's something that has been described in various terms over the five years it has been around. Most people just talk about it as a Supreme Court for content moderation, but I wondered how you both actually describe it to your friends, family and colleagues. How do you explain it to others?

Paolo Carozza:

I guess I can start then. I describe it as an oversight body that is charged with making decisions and policy recommendations on difficult and important content moderation problems on Meta's platforms. I will say quickly that I don't like the Supreme Court analogy, because it tends to focus on our resolving individual decisions, which certainly we do. But the more important thing we do is try to make broad, systemic improvements across the whole ecosystem of Meta's platforms.

Julie Owono:

Yeah, I completely agree with Paolo's description here. What particularly stands out for me is the role of the board in ensuring a fair balance between the right to free expression and other very important fundamental rights for users, and how the board helps and supports a company like Meta to center its decisions on ensuring that these rights are respected. So that's how I try to describe it to friends and colleagues.

Ben Whitelaw:

And that important aspect of defending individuals' rights online feels like a really core part of what the board does, and potentially why you both agreed to join it. Can you give a little bit of background on how you came to be on the board in the first place? Because it's something we don't often hear about: how the board members ended up there, and what it's like to be on the board.

Paolo Carozza:

Yeah. Julie, why don't you start? You were one of the original members and I came on later, so it's probably best for you to give a sense of the origins.

Julie Owono:

Yes. Well, I'll get a little bit personal here, because I think it's an interesting story. I came on the board because of what I found on social media. As a young woman at the time (I'm still very young, but I was younger then), I was not living in my country. I'm not originally from France; I am French now, but at the time I was not. I was studying in the country, and I can say my fate was pretty much traced, as an immigrant and Black woman in a foreign, European country. But with social media platforms I found, really, a platform to speak, to express, to share unfiltered opinions. And those opinions gave me access and opportunities to so many other great things. In all of my work, I've always wanted to ensure that it would be the same for anyone else, so that people would have the ability to express themselves and to have opportunities come from that expression. So when discussions emerged about creating a sort of court of appeals or supreme court (though of course we've seen that the analogy is not necessarily perfect), an appeals mechanism to ensure that people would have the ability to defend their right to speak on the platform, I thought: I have to be part of that. And yes, that's how I decided to join.

Ben Whitelaw:

Amazing. And what about you, Paolo, coming on later?

Paolo Carozza:

Yeah, in my case I was recruited about two years into the board's activity, and I was one of the very first people identified, recruited and appointed by the board itself, rather than by Meta as part of the original group. I think my now colleagues on the board did so largely because I had had about three decades of experience in international human rights, including leading major human rights institutions, both in the Americas and in Europe. And that sort of institutional human rights experience was not just the reason for my recruitment, but also the reason why I was interested. This strikes me as really the forefront of the next generation of institutional creativity and innovation for the protection of human rights around the world.

Ben Whitelaw:

It certainly feels like that, Paolo. And Everything in Moderation, the newsletter that I write, which some listeners will be familiar with, has followed that over the last five years, and we'll come onto the report that you're both here to talk about. Before we do, I want to get a sense of what it's like to be assigned to one of the cases. We had Kenji, another board member, on the podcast a few weeks ago; he was co-hosting when Mike was away, and he talked a bit about that process. But what is it like when you receive a case, or when you're part of the process of deciding which cases the board takes on? Do you feel a sense of responsibility, a kind of weight to the selection process?

Paolo Carozza:

Oh, absolutely. I think there's a core background question for all of us. Case selection happens in a committee of five or six members at any given time, which rotates, but I think all of us on the committee are always trying to be attentive simply to what's going on in the world, right? Where is there a crisis? Where is there an election? Where is there consistent reporting of a particular problem regarding freedom of speech, or basic safety issues on the platform, across the different geographies of the world? So I guess that's a long way of saying that what drives everybody here is a constant attentiveness to users' needs: where are the needs acute at this moment around the world, and on what themes, and how can we dig through the tens of thousands of cases we're receiving all the time to find the ones that are most emblematic of those broader problems people are facing?

Ben Whitelaw:

And Julie, do you find that the representation on the board is really important in getting to, I guess, the heart of who the users in need are at this very point in time? What kinds of discussions do you end up having with the other board members in that process?

Julie Owono:

Oh, absolutely. The global experts who compose the board today are an essential part of how the case selection process works. We have colleagues all over the world: in Australia, in Taiwan, in Europe, the UK, in Ghana, in South Africa. So it's really a snapshot of the world at a given moment, just like Paolo was saying. And the fact that we are all colleagues with these diverse perspectives, points of view, experiences and geographic locations is a wealth of knowledge that we can put into the case selection process, in order to choose cases that will be the most impactful: cases that of course address the individual issue, but also offer an opportunity for the board, and for the company, to address blind spots that the company itself has. I have examples in which the company was actually grateful to the board for having chosen a specific case. I can give some examples. We've had cases coming from Ethiopia at a moment when there was an active conflict in the Amhara region, a few years ago, and the board surfaced the really difficult question of rumors: how do you balance expression with rumors during a conflict? Or there was another case coming from Iran about political speech, figurative political speech, in this case calling for the death of a leader. How do you balance the right to free expression, and political expression specifically, with, of course, the right not to harm somebody else, in this case a public figure? So yes, the global perspective that the board brings is really a way for a company like Meta to be one or two steps ahead, by surfacing sometimes very difficult questions and blind spots that the company may have.

Ben Whitelaw:

Yeah. And I think the report, which we'll come on to talk about, does a really good job of outlining some of those really tricky decisions that you've had to make and, I guess, the broad range of cases you've taken on. Before we get to that, on the point around communication with Meta, and receiving thanks from the company for looking at specific cases: what does that communication process look like in practice? Who is liaising with whom? Give us a sense of the dynamics under the hood, and what you've maybe been most surprised by.

Paolo Carozza:

You know, I'd say it's important to answer the question on two different levels. There are a lot of very formalized, structured points of contact in order to exchange information. Part of the reason the board is such a valuable institution, in my view, is because we have a particular kind of relationship of confidence in which we can request and receive a great deal of confidential information from Meta. That requires a certain institutional trust, certain structured guarantees to protect that information, negotiations around what we can reveal, and so on, all in a way that protects the board's independence. So that's highly structured along very formal lines between our permanent staff and their counterparts. But at a different level, there are also much more informal kinds of conversations that take place simply because we meet each other and talk. And it's in that latter group of exchanges that one of the things most interesting and surprising to me, coming from contexts where I've worked either in universities or in public institutions and not in the private sector, is simply how complex an institution Meta is, and how many different points of view there are. When we engage in those conversations, it's really clear that there are lots of different points of view pulling in different directions within the company, and often it's hard for them to resolve exactly those differences. They also evolve over the course of our conversations. There are people who, when the board first started, were very explicitly and avowedly against it: they didn't want the board to be established, thought it was a waste of money, a risk, and who now are very outspoken advocates of what the board has contributed. So I think that dynamism and complexity within the company is helpful to be aware of, and something we always need to be in some kind of dialogue with at a more informal level as well.

Ben Whitelaw:

Yeah, really interesting. So let's go onto the report then. This is a report that came out at the end of last year, just before the Christmas break, for those who celebrate it. And it's entitled From Bold Experiment to Essential Institution: How Five Years of Independent Oversight Made Meta More Accountable and Protected the Rights of Users. Can you explain why you think the board has moved from experiment to essential institution, and unpack a little more who you think it's essential for?

Julie Owono:

Yeah. It's become an essential institution because of the changes it has brought to Meta's platforms. One of the points of criticism of the board is: oh, you only work on 60-ish cases per year, and that's nothing compared to the millions or billions of pieces of content posted. But what we can say is that the case selection process, which we've discussed, really helps us surface those cases that will help the company solve a problem that affects not just one user, but really the experience of virtually all users on the platform. I'll share examples here; I guess it's the best moment to do that. First and foremost, we've protected speech. We've helped protect political speech, one of the most important forms of speech in a democracy, right? We've helped people be able to criticize their leaders, sometimes in countries where they would not otherwise be able to do so. We've also empowered users themselves to understand the decisions that are being made by the company. We recommended, and the company accepted our recommendation, that Meta ensure users are made aware of why their content was taken down in the first place. If you remember Facebook seven years back, as I do: your content was taken down, and that was the end of the story. Now it's different. You're given a reason why it was taken down, you know which policy it violated, and you're sometimes even offered ways to correct that behavior. We've learned about initiatives that Meta has put in place to allow users to correct course before a post is published. We've also encouraged more transparency from the company, and that transparency has now become kind of an industry-wide norm. Since the board started its operations, we've increasingly seen transparency reports from companies with more detail on government influence, for instance. That was an issue that was kept very secret before. I don't want to say it was all down to the board, but I really want to believe that we've helped take the important, constructive criticism that existed about the closed-door relationships between companies and governments to a point where, on our recommendation, the company has accepted to publish more information about those relationships. For instance, Meta now notifies users when their content was removed due to a government request, and unless it's restricted by law, the company identifies which government entity made the request. I mean, imagine, again, where we were on these issues.
One of the most important pieces of the board's work, I would say, is that we've shed light on the cross-check mechanism. For those who don't know or don't remember: we issued a policy advisory opinion, the mechanism through which we make policy recommendations to the company that are not specific to one case but affect the company's broader policies. What we exposed through that policy advisory opinion was that Meta had a content moderation pipeline through which it treated high-profile users differently from us, the common users. We revealed that this pipeline had backlogs, and that it involved special treatment that was not transparently adjudicated. So we do have an essential role in helping a company correct course: shedding light, as I was saying, on things that are not working, on blind spots, but also giving the company one or two years of advance knowledge, an opportunity to really correct course while working with the Oversight Board.

Ben Whitelaw:

Yeah, that's really interesting. And the cross-check program was one of the standout decisions for me. That was a story the Wall Street Journal broke, if I remember rightly, in which high-profile sports stars and politicians were given basically a direct route into the policy team at Meta and, in some cases, had violating content left up that probably should have been taken down. So thanks for outlining that, Julie. Paolo, the idea of Meta taking the recommendations from the board and doing something with them was something people were very skeptical about at the start, including, it sounds like, the internal employees you referred to earlier. Have you been surprised by just how many decisions the company has taken, changed or enacted as a result of your work?

Paolo Carozza:

To be honest, no, I'm not surprised, for a few reasons. If I could take half a step back, I'll answer that question by piggybacking on Julie's answer to the previous one. One of the ways in which I think the board is an essential institution is not simply for the company, and not simply for the users, but for the public space in general. It's clear that this space is so vast, so complex. You and Mike like to say all the time that content moderation at scale can't be done well; it's impossible. And so it's always relative: you're always working at the margins to improve it. In that context, it's absolutely clear that the companies by themselves can't do it. Their incentives are misaligned; frankly, they can't be trusted to simply take care of their own responsibilities on their own. It's also increasingly clear, five years into our experiment, that public regulation is inadequate too. It's either insufficient because it lags behind, or because it's much too heavy-handed and serves a variety of other interests, not the rights of users but the interests of people in power. An essential institution is one that fills that gap, right? One that helps fill the space in between those two and provides an independent but engaged voice, like ours. And that's exactly the context in which it doesn't surprise me that the recommendations get accepted. Because, as Julie pointed out earlier, there are necessarily tons of gaps in what's being done, both from a regulatory point of view and from the self-governance point of view of the companies. We're trying to point those gaps out and fill them in as we go, and we've gotten a lot better at doing that. Fewer of our recommendations were accepted and implemented early on in the board's life, and now they are accepted at a much higher rate, in part because we've learned a whole lot more about how the system technically works, what's possible and what's not, what's economically viable, what's effective, and so forth. So as we've improved, and as the needs necessarily continue, I think the company sees that it's good, both for the quality of its product and for the public accountability that is necessary, that our recommendations be followed in most cases. Probably about three quarters of them have been implemented in whole or are in the process of being implemented right now.

Ben Whitelaw:

Yeah, and there's a whole process, I've come to learn, around tracking that implementation, which I think is a fascinating part in and of itself. You both touched on the criticism that some people have made of the board, and I want to go into a bit more detail about that and some of the commentary around the board and the recent report. But one of the things that I think demonstrates the essentiality of the board is its expanded scope since it was founded five years ago. It now covers Threads as a platform, whereas it started off as Facebook- and Instagram-only, and it's now looking at content being restored as well as taken down on those platforms. There are various ways in which it has grown its influence. Were those changes driven by user demand, by what the company was looking to do, or by the board and what it saw in the broader online ecosystem?

Julie Owono:

If I could very briefly start on this, I would say it's a mix of all three. How it has been pushed by the board itself: we realized very soon that content decisions were not just a take-down-or-leave-up question. There was a gray area in between that needed to be addressed as well. That's why, in common agreement with the company, we decided to allow the board to make decisions to add interstitials: instead of taking content down or leaving it up, adding additional information about the content, or a sort of filter to hide it. So that came from the board. It also came from users. For instance, they wanted to better understand what happened to the recommendations we made, since those recommendations are not binding. How do you ensure that non-binding recommendations have any effect? So, again in conversation with the company, we rapidly realized it would be important to make sure there is tracking of the recommendations made. And it also came from the company itself: almost immediately, as soon as Threads became available, the company reached out and said, would you be interested in also taking a look at Threads? And it made sense, because it's also a platform where users express themselves under Meta's policies. So I think it really came from all three of these stakeholders, and also from a lot of the experiences we've had over the course of the past five years.

Ben Whitelaw:

Yeah. Paolo, anything to note on that?

Paolo Carozza:

I'd just say this, Ben: it would be greatly naive to suggest that the Oversight Board by itself is the answer to all the problems on Meta's platforms, let alone in the industry as a whole. It is only one small actor in a vast ocean of responsibility in this area, though one, I think, that is making great contributions. And so we acknowledge that there are a lot of things outside our scope that are really important to human rights, to freedom of speech and to users' experiences on the platforms. But exactly in that recognition, we're constantly pushing to expand what we can do so that we can contribute more. That's really what it comes down to.

Ben Whitelaw:

Yeah. And that's an interesting segue, I think, into talking about the value of the board. There have been critics who say that the board is expensive, as Julie mentioned earlier, for the number of cases it hears. You have heard a fair few cases, the scope of them has been broad, as you say, and you have this newly expanded remit. How valid is that criticism, sitting on the inside and receiving these cases? And in what other ways would you like the board's efficacy to be measured? What other kinds of KPIs do you think about as an institution?

Paolo Carozza:

Well, look, I welcome criticism of the board on everything, including its financial accountability. We're an institution that's trying to drive transparency and accountability; it would be foolish for us to resist that kind of criticism. So yes, bring it on. Make us better by criticizing us. I welcome it. But specifically on the financial question, I'd say a few things. One is that the board is an expensive institution, as human rights institutions go; there's no question about that. But it represents less than one hundredth of a percent of Meta's turnover annually. That's a pretty small price to pay for things like having a crisis protocol in place that was driven by the board's recommendations, or an election integrity system that was driven by the board's recommendations. And so the right way to measure the adequacy of the return on the investment, so to speak, is not by case numbers. The board has never been about industrial-scale content moderation, a resolution mechanism for massive numbers of people; it is precisely about trying to resolve a small number of very difficult and impactful issues. So the right way to evaluate whether it's worthwhile is to hold us accountable, independently, to what we try to do: tracking the impact empirically across the entire system of Meta's platforms on a global basis. What is the impact of our recommendations and decisions? Who has benefited, and how many people? When hundreds of millions of users now have access to content moderation policies in their own languages as a result of that work, that's a pretty high payoff, I think, and one that can't be measured simply by saying X dollars per case are being spent. And the last thing I'll say is this: investment in content moderation in general has been declining across the industry; we know that. So for the board to continue to represent a significant investment that keeps humans in the loop of responsibility over these questions is incredibly valuable and worth it. Because otherwise the decline in cost comes from everything being increasingly automated. That's great in the sense that, as the automation gets better, more people have the rules enforced on them more accurately, and so forth. But there needs to be a level of human responsibility as well, and that's more expensive than just applying another algorithm to the content moderation process.

Ben Whitelaw:

Yeah. And the report makes very clear that artificial intelligence and automation are increasingly a focus of the board's work. Julie, I wondered: you've just come out of board meetings that look ahead to next year and beyond. What are the biggest unresolved questions, and what do you think will be the focus for the board in the coming 12 to 18 months, let's say?

Julie Owono:

Well, first of all, we have a very important policy advisory opinion that was sent our way a few weeks ago, about Community Notes. We also have account-level policies that we're going to start working on. So these are very exciting. But beyond that, there is obviously the question of artificial intelligence, right? And how it's being used on Meta's platforms, both for content, including content generation, and for the moderation of generated content, for instance. I think the value of the board we've been talking about was quite evident two years ago, when ChatGPT really started emerging. The board published a decision at the time related to manipulated media. Meta's Manipulated Media policy did not touch on issues related to artificial intelligence, and the board was one of the first to recommend change, based on external stakeholder engagement we did with organizations that work in the artificial intelligence space and on responsible and ethical innovation in this space. We worked with them, we engaged with them, and we made a recommendation based on that engagement. That recommendation led Meta to label content that is generated artificially, and the company itself published how much this labeling brought to engagement with artificially generated content. The numbers were in the millions, probably even billions, on Instagram: people engaged when they saw the AI label, clicked on it, read about it, and learned more about this new innovation. And that was two years ago, right? Another very interesting avenue in this space is that, yes, we are also interested in being in conversation with a new set of stakeholders, particularly advisors to and investors in AI startups, who are certainly concerned about the risks. There is a lot of potential for innovation, and I know much of the conversation in AI is focused on innovation; at the same time, the risks are very real. Every single day we have news about suicides, especially among young people, and that's another area we're very interested in; I'll come to it in a few seconds. But we have news every day about the risks of AI. There's a role here for an independent entity, and that's important: independent. We have no stake in any of this; our only stake is to ensure that humans and users remain central, a central preoccupation, for the benefit of society. So that's a very important, interesting area in which we will be more vocal and present, still in collaboration with the stakeholders working in this space. And of course, youth rights. There's a lot of focus on the safety of children. As a mother myself, of a preteen who is getting more interested in social media, these are conversations that I watch and listen to very carefully. At the same time, there are lots of questions that are not being asked. One of the first is: do our kids have the right to free expression? Do they have the right to express themselves on those social media platforms? That's a question that could be put to Australian lawmakers, for instance, who recently passed, and have now begun enforcing, a law banning social media for under-16s. Do parents have a role to play? Absolutely. But what is the nature of that role? Do companies have a responsibility? Yes.
And what does that responsibility look like at a time when the industry is developing different sets of parental controls that don't necessarily align with each other? Is there an avenue there for an independent body like the board to come in and say: hey, we have a lens, we have a framework, it's called human rights, and it's still very relevant. It is the concentrate of what humanity thinks is important in terms of human dignity, in this time of technological evolution, progress and disruption. Here's a framework we can bring into your innovation process, one that can give you one, two, three years of advance, sometimes even five, compared to your competitors, in terms of putting the human back into the loop. So there are a lot of exciting things that I'm personally very much looking forward to as a board member.

Ben Whitelaw:

Yeah, it sounds like you've got your hands full there; you've got a lot to be working on as an institution and as a group. I'm particularly interested in the youth safety side, which is a topic Mike and I talk about very frequently on the podcast. Paolo, you've just published this five-year report, which details all of the impact and the cases you've taken on in that time. What will the ten-year board report look like? What would it say, and what would its title be, if you could have your way?

Paolo Carozza:

Well, if the board is successful over the next five years, I think the next report would, at a minimum, have two big themes to it. One is that the board has served as a template or a model for other institutions of independent oversight in the industry. Not that the board is going to do that for everybody, but five years later the board still stands out as unique. I'm all in favor of the advisory bodies that other companies are standing up on a variety of issues; I'm sure they provide a lot of value. But they're not independent. They don't select their own membership or leaders, they don't operate transparently, and they don't set their own agendas, all things that we do, and that are necessary to providing independent accountability. So if we're successful, five years from now there will be more bodies like the board in its independence and oversight roles. The second thing is that it will have expanded, not just at the margins to more issues within the traditional social media space, but to other kinds of technologies. I can't help but notice, and you and Mike have talked about this a little yourselves in some of your recent episodes, that the moment we're in, in terms of AI and generative AI and chatbots, in some ways seems remarkably similar to five years ago: the same surge of concern about safety issues, and how to balance that with the core expansion of expression and connectivity that these technologies give us. It's a moment so strongly parallel in my mind that there's a lot to learn from the experience of the board over these last five years. So if some of those lessons are pulled in and help to shape the landscape of independent accountability and oversight in the AI and generative AI space, then that's another way in which the next five years will have been very successful.

Ben Whitelaw:

And Julie, what about you? If you were to look back in 2030 at the work the board has done, what would you hope to have achieved?

Julie Owono:

First, that agility that Paolo has just referred to; it's really in the DNA of the board. We've adapted to different times in the past five years, but it's agility without renouncing the core mission, which is to ensure that users, no matter who they are, no matter where they are, are not an afterthought when innovating in technology. That's what I would love to see five years from now. I will not be on the board anymore by then, technically, because there's a maximum of three three-year terms. But that's what I would really love to see in the next five years.

Ben Whitelaw:

Yeah. Yeah. A very, very interesting five years ahead. I really hope the board is successful in its work; it's been fascinating to chart its progress over the last five years, and great to read the report, which documents some of the interesting work that you brought to life today. I just want to say thank you very much for your time. It's been great to have you on the podcast, and we hope listeners enjoyed today's Spotlight episode. Paolo, Julie, thanks very much for joining me today.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.