At a crossroads of uncertainty and opportunity, how do you navigate forward? This podcast focuses on making smart choices in a rapidly changing world. We investigate the challenges of being at a crossroads and finding the opportunities that arise out of disruption. Listen in on future-forward conversations with the brightest luminaries, movers, and shakers. Let's navigate forward together and create what's next.
Lisa Thee (00:24):
Hello everyone, and welcome to the Navigating Forward podcast. My name is Lisa Thee, and I'll be your host today. We love to bring you the most innovative movers, shakers, and luminaries in the field to help us understand where we're going in the future and how we can all be part of that journey. Today, I have the honor of interviewing Irakli Beridze, who leads the Center for AI and Robotics for the United Nations. He will be joining us to talk about interesting topics, including web3 and the metaverse and where we're all going. So I want to welcome you to our podcast today. Thank you so much for joining us.
Irakli Beridze (01:00):
Thank you, Lisa, for inviting me to the podcast.
Lisa Thee (01:04):
Yes. So, we had a chance to get connected via the World AI Festival in Cannes, France, earlier this year. Do you mind sharing a little bit about what you talked about on those stages?
Irakli Beridze (01:16):
Thank you, Lisa. It's been quite a fascinating journey for the last five or six years, even longer, as AI became a mainstream issue. There have been numerous events organized globally on this exponentially growing technology, which has an impact on both the positive and the negative side and which can potentially change the way we live altogether. From the United Nations to events like the one organized in Cannes, there are many, many different types of discussions about how artificial intelligence can help all of us solve large problems like the UN Sustainable Development Goals, or at least contribute to their solution, because we shouldn't be naive that AI is going to solve it all. It certainly is not. It's a tool which will help us solve problems, but at the same time, it has the potential to create numerous issues which we will need to solve. And unless we solve them, we may end up in a situation where human rights are infringed, or numerous other types of problems are created, which we would definitely like to avoid.
Lisa Thee (02:21):
Yes, agreed, for sure. Can you talk a little bit about your childhood and how you ended up leading in this field? What do you think inspired you to go in this direction?
Irakli Beridze (02:30):
Hmm, there are probably a number of things here. I was actually growing up in a country which doesn't exist any longer, the USSR, the Soviet Union, in a part of the Soviet Union which is called Georgia. It was quite an eclectic, changing childhood. While it was quite peaceful in the beginning, later on I lived through two or three civil wars. To reach my first year of university, I had to cross two firing lines, two firing sides, during the civil war. And my first job at the United Nations was during that time, when I joined the World Food Programme and assisted the program in delivering food to the war-torn regions of Georgia and its surroundings, where we had numerous issues related to food shortages and extreme poverty due to the civil war.
Irakli Beridze (03:36):
Later on, I left Georgia and went as a student to the United States, where I studied political science, continuing my law studies in Georgia in parallel. I later ended up in Finland, studying international relations and technology at university, which led me to numerous travels and to ending up in The Hague, working for the Organisation for the Prohibition of Chemical Weapons. That's a global body which ensures the proper implementation of the Chemical Weapons Convention. I worked there on numerous issues related to counterterrorism and chemical terrorism, and on issues related to bringing difficult countries into the convention, and so forth. Later on, I worked on specialized UN programs on biological, chemical, and nuclear disarmament and risk mitigation. Then at one point, in 2014, I initiated the first program on artificial intelligence and robotics within the UN agencies, ending up organizing a meeting on artificial intelligence at the UN General Assembly in 2015, where we probably for the first time discussed the near-term and long-term consequences of artificial intelligence, both from the negative and the positive side. From that point onwards, I went on to create the Center for AI and Robotics, which is located in The Hague, the Netherlands, the city of peace, justice, security, and innovation. And we continue implementing numerous programs with the support of, and in support of, the United Nations member states.
Lisa Thee (05:16):
Wonderful. And for folks who are trying to understand where AI has a role as a tool, not a solution, in counterterrorism and de-escalating conflict, can you share with us a little bit of the vision of the Center and why it's important for the UN to have a point of view and a place for innovation?
Irakli Beridze (05:35):
Well, number one is that artificial intelligence is a very powerful, exponentially growing technology which has the potential to, and I'm quoting the Secretary-General of the UN, change lives in a spectacular fashion; he calls AI a game-changer technology. It can actually do a lot of good, like what we are examining in the UN at the AI for Good summits, for example, or at other venues, and it can do enormous harm, like the complete infringement of human rights, or other types of harm like AI being used for criminal or terrorist purposes, or even long-term consequences like existential risks from superintelligence and other issues. The Center is focused on crime prevention, criminal justice, rule of law, and human rights. We look at how criminals can use artificial intelligence and how to mitigate that, and at how law enforcement can use artificial intelligence to solve problems, but how to do all of that in a human rights-compliant manner. So this is what the Center envisages, and the vision is very clear: we are promoting the use of technology, but the use of technology responsibly, in such a way that will not infringe and will not violate human rights.
Lisa Thee (07:03):
Yeah. As we all know, most technology innovations are neutral, and unfortunately, there can be some unintended consequences with them. So I think it's really interesting when you talk about how to use technology of this caliber with law enforcement in a human rights-compliant way. Can you talk a little bit about how you are managing those ethical lines? Because a lot of that isn't technology-driven any longer; it's more government and philosophy and all the humanities that help us decide what's okay and not okay. I would love to hear your point of view on that.
Irakli Beridze (07:38):
Yes, definitely. And I think it is extremely important to get this right from the onset, from design to deployment, the entire cycle. The area where we operate is extremely sensitive, in the sense that if you make mistakes here, you create such a big public backlash that it may take us years to repair it. Therefore it is extremely important to get it right from the beginning, and that's why we are implementing multi-stakeholder processes along the way. We are consulting all interested stakeholders, whether it's the private sector, academia, government, civil society, or others. Now, one of the big streams we're involved in right now is the development of a specialized toolkit, which we call the toolkit for the responsible use of AI for law enforcement. This is our joint work with INTERPOL, with the support of the European Commission. At the end of the year, we are going to launch an operational guidebook which is going to help law enforcement agencies implement and use technology like artificial intelligence in a human rights-compliant manner, in a responsible way. So we are developing numerous resources for that toolkit, which is going to be extremely practical and extremely usable for the 193 countries in the world. So this is like the entire UN.
Lisa Thee (09:11):
Wow, very exciting to hear that we're convening and starting with safety by design, and really thinking through the implications of things in a coordinated, international way. Because as we know, criminals don't respect state and country boundaries, right?
<laugh> It's something that affects us all, from geolocation to multiple platforms to multiple governments. We need to learn how to be a more united front on what is okay and not okay. So with that, can you talk a little bit about how you are using data and analytics to decide where to focus? AI and robotics is such a broad purview; you can apply it in so many places. How do you prioritize where you put your focus, and what are your goals for the next year or two?
Irakli Beridze (09:55):
Well, sure. I mean, we are a UN agency, so we work very closely with the United Nations member states, within our strategic framework and in consultation with them. Any action we take is in consultation with, and in support of, the actions stated by the United Nations member states. Now, in our area, we are looking at issues related to crime prevention, and we are looking at some of the topics where we think artificial intelligence can make a bigger contribution. For example, one of our flagship projects right now is called AI for Safer Children. This is a project where we are working to build a global hub; it is actually almost built and is going to be launched very soon. We have a collection of more than 40 artificial intelligence tools from tech providers, which is going to help and serve 93 member states.
Irakli Beridze (10:54):
The hub is going to include a learning center and training and mentoring possibilities for law enforcement across the globe. These are the types of tools which use different techniques, facial recognition technology, natural language processing, and a number of other techniques, which help an individual law enforcement agent conduct their investigations into the sexual exploitation and abuse of children online effectively. When we see the types of backlogs law enforcement agents have, which could be from six months to one year or more, then if we are not using these types of technologies, if we are not drawing on the power of artificial intelligence, we may end up in a world where problems such as the sexual exploitation and abuse of children online will never be solved, or will be only marginally solved. So through this type of larger project, we're bringing together law enforcement, tech providers, and other entities like the UN member states to help them leverage technology like artificial intelligence to make a very direct, practical contribution to their work.
Lisa Thee (12:15):
So it's a very focused, best-known-resources model, where you can go to one place and you don't have to reinvent the wheel agency by agency. You're provided solutions that take you on the journey from education through mentorship, and that show you how to use the tools available on the platform more effectively.
Irakli Beridze (12:35):
Yes, basically it is. And as a UN agency, obviously we are doing all of this free of charge; everything is in the service of UN member states. So there are no participation fees or training fees involved; on the contrary, we are actually there to support countries in receiving such trainings, which we will start doing from next year. This year, we focused on building the hub itself and building the community around the hub. And I can tell you that in two weeks, we're going to have a second stakeholder meeting for law enforcement, where we'll have around four law enforcement agencies participating virtually, to be introduced to the details of the hub and to understand how to use its services in the couple of months to come. During the General Assembly session in New York in September, we will officially launch the hub together with our partners from the United Arab Emirates; the Ministry of Interior of the UAE is supporting the launch and implementation of this project. And later on, we're going to start conducting trainings and other activities.
Lisa Thee (13:43):
Well, I must say, as somebody who has worked and focused in this field since 2015, this is like a dream come true. I feel like when we were talking about CSAM back then, we were fairly naive about where we were going and what we could do about it, and I think we've really accelerated the best-in-breed thinking. I'm really excited to see the impact it's going to have on marginalized women and children globally by providing better tools for law enforcement to strengthen the protections of our legacies. So with that in mind, do you mind sharing some of your favorite accomplishments that the Center has been able to contribute so far?
Irakli Beridze (14:18):
Number one is that the Center has put on the global discussion map the issue of how AI can be used responsibly by law enforcement, and has gathered law enforcement agencies from all over the world, together with INTERPOL, to participate in a dialogue where, on the one hand, we share practices on how to solve problems efficiently, and second, which is really important and underlying, how to do it in a responsible and human rights-compliant manner. Number two is that we've issued numerous reports and numerous research papers, all of them cutting edge. One of the latest was the report together with Europol, where we talked about the malicious uses and abuses of artificial intelligence, an extremely important guide for law enforcement agencies to understand how criminals can use AI, how to preempt it, and how to ensure that you're prepared for it. The Center is only five years old, and we've been at all major global forums, contributing to governance discussions. The governance of AI is going to be one of the dominant issues within the United Nations in the years to come. I do believe that we'll have sectoral governance, and one of the areas will be a sensitive one like law enforcement, to which we are directly contributing.
Lisa Thee (15:42):
A lot of our listeners are in the C-suite or leadership of large tech companies, including their boards. If there were one thing you wanted them to be more aware of when it comes to safety and collaboration with law enforcement on global platforms, what would you like them to know that they can do to be part of solving this problem?
Irakli Beridze (16:02):
Well, number one is that we are actually collaborating with a number of tech companies, and we'd be happy to expand our horizons and bring more into our thinking. I think it is extremely important that the UN and its specialized agencies are in direct, intensive, vigorous, and collaborative talks with tech companies. So we are very open and happy to collaborate in that sense, to have direct conversations about how they can provide help and sustain it. Obviously, all our projects require funding, so I'd be very happy to discuss the possibilities of how others can contribute to making good from AI-related tech. We are looking at a very dangerous phenomenon of a growing global divide, where we see that only a number of countries have adopted national plans on AI and technology and are accelerating their research and investments, which is wonderful. But then we have almost two thirds of the world which has not adopted such plans and may be falling behind, or there might be such a possibility. So we all need to worry about such problems and take steps to mitigate them.
Lisa Thee (17:18):
Fair enough, fair enough. You work in some pretty difficult areas. What do you do, and where do you turn, on the days when you wake up and you're just not feeling excited about where the future's going? How do you recharge your batteries? How do you focus on your own wellness?
Irakli Beridze (17:34):
That's a good question, Lisa. I have a wonderful team I'm working with, very motivated young people who are thinking along with me 24 hours a day, and we are extremely motivated to see how much can be accomplished within this space, how much there is to be done in the future, and what the new potentials and new initiatives are. Right now, we have new projects which we are looking to launch and preparing, for example in the fields which have dominated discussions in recent months, rather than years: issues related to blockchain, web3, cryptocurrencies, and others as well. So we have quite a lot to do, and quite a motivated team to try to accomplish all of that.
Lisa Thee (18:25):
So you're living your mission along with your career. So you recharge your batteries at the office, huh?
Irakli Beridze (18:32):
I definitely do.
Lisa Thee (18:34):
Very good. Well, thank you so much. For folks who want to follow along with the journey of the Center and the things that you're doing, where's the best place to connect with you and follow along?
Irakli Beridze (18:43):
They can actually follow us on social media. I post quite a lot on LinkedIn, and on Twitter as well, so please do that. If anyone has interesting collaborative ideas, they can reach out to us directly through our website, the UNICRI website. We will always respond and will always be open to dialogue, new partnerships, and ideas for how to build a safer world and how to ensure that technology is there to be used for good and that its risks are mitigated properly.
Lisa Thee (19:14):
Thank you so much for your time today. I really appreciate it.
Irakli Beridze (19:17):
Thank you very much, Lisa. I'm glad that I was invited.
Hey everyone, thanks for listening to the Navigating Forward podcast. We'd love to hear from you. At a crossroads of uncertainty and opportunity, how do you navigate forward? We'll see you next time.