Nada Mohamed
Hi, everyone. I hope you are having a great day and it’s going to be even better with this webinar. I will wait just a couple of minutes and then we’ll be starting. Alright, it’s time to start. Hello again, I hope that you’re doing great. This is a Common Futures Conversation webinar on young people’s ideas for using AI as a force for good. My name is Nada Mohamed. I’m a Common Futures Conversation member, but as you can see, I’m also a woman with many curls. I’m a Public Policy Researcher, I’m a Teaching Assistant, but today, I’ll be wearing the curl of the Chairman, or the Chairwoman, of this meeting.
So, for those who are joining us for the first time, the Common Futures Conversation, or CFC as we like to call it, is a Chatham House Centenary project centred on bringing together young people from Africa and Europe to develop their knowledge of the issues of our time. The aim is to give young people a literal seat at the decision-making table for these kinds of policy discussions, and to do that we have our CFC community. We conduct two-month policy cycles where we engage with specific experts to enhance our knowledge and understanding of these topics and then follow up with our solutions. This is what we’ll be doing today with AI.
So, with that said, I’d like to extend a heartfelt welcome to members of the Common Futures Conversation community, as well as Chatham House members who are joining us today. But not only that, on behalf of the CFC community, we want to extend a welcome to everyone who’s joining us from every walk of life. We really are excited to engage with you today and we’re looking forward to your interaction.
So, before we jump into our insightful discussion, I’d like to highlight a few housekeeping rules. First of all, this event is being recorded and will be posted on the Chatham House official YouTube channel, so if you have your camera on and are uncomfortable with that, feel free to turn it off. Second of all, as you can see, everyone is muted and will remain muted throughout the presentations, but feel free to use the Q&A button to submit your questions, either for our distinguished panellists or our amazing CFC members.
Now that that part is out of the way, I’ll introduce our distinguished panellists and policymakers briefly, before handing it over to them to give us some opening remarks. First, we will hear from Dr Rooba Moorghen, the Former Permanent Secretary to the Mauritius Ministry of Information Technology and current member of the Mauritius Working Group on AI. Dr Moorghen has 46 years of experience in the Mauritius Civil Service, where she had the opportunity to be associated with several IT and eGovernment projects, eServices, cybersecurity issues and robotics. She has acquired wide experience in IT-related issues and has also learned best practices while attending conferences in India, Estonia and other countries. During her tenure in office, the strategic plan Digital Mauritius 2030 was launched, which encapsulated the vision of the Ministry. Dr Moorghen, thanks a lot for joining us today. I’ll hand it over to you to start us off with some opening remarks.
Dr Rooba Yanembal Moorghen
Okay. So, good morning – or good afternoon – everyone, and greetings from Mauritius, also to those who are watching this live today. At the very outset, I wish to thank the Director and Chief Executive of Chatham House for having extended an invitation to me to participate in this policymaking webinar on artificial intelligence. In fact, like you said, I served as Permanent Secretary and Administrative Head of the Ministry of Information Technology, Communication and Innovation from September 2016 to April 2019.
Together with the team of the Ministry, we launched several documents, mainly the Digital Mauritius 2030 Strategic Plan, the Digital Transformation Strategy 2018-2022 and the Mauritius Artificial Intelligence Strategy of November 2018, followed by the promulgation of the Data Protection Act 2017. We introduced the Info Highway and, following the Estonian example, the X-Road, and many other initiatives, which have been and are still being implemented by the Ministry. All these documents are available online, and I think it would be helpful for all of you to have a look at them, because they contain all the strategies, and you can follow up on this.
Coming back to artificial intelligence, I am happy to note that, in line with the Strategic Plan Digital Mauritius, the Mauritius Emerging Technologies Council Act was enacted in 2021, and it speaks directly to AI. In line with its provisions, it provides a roadmap for the right ecosystem to enable Mauritius to adopt new technologies as enablers. The Act is also online. It also makes provision for capacity building, for incentives, for the development of strategic alliances, and for using AI in different domains, such as health, energy, manufacturing, healthcare, the biotech industry, agro-industry, the ocean economy and transportation. A review of regulations to allow AI to develop in fintech and mobile payments is also included.
So, coming to the theme of the discussion today, “How Can We Make AI a Force for Good?”, there are several factors which we need to take into account, because AI requires a multidisciplinary approach. A definition of AI is important, and we need to look at the benefits of AI.
In his virtual presentation on November 8th 2023, Sharad Goel, Professor of Public Policy at Harvard Kennedy School, set out the many benefits of AI, and of the latest generative AI, in different situations. He said that AI can produce novel content, such as computer code and artwork, and that it can be used to improve predictions by analysing more information, like written reports and images. So, these are the benefits of it. Generative AI can also be used to improve outcomes in education: an AI tutor helps a student to learn and think critically. It is useful for public interest agencies to improve the allocation of limited resources. It improves communication with clients and the efficiency of administrative tasks. So, these are the benefits of AI.
Not only this, I’m also an Evaluator and a member of the American Evaluation Association. In the field of evaluation, members of the evaluation community are using AI as a collaborative research assistant, streamlining various tasks and developing standards. But alongside all of these benefits, we have to admit there are certain risks.
We have the Bletchley Declaration, which highlights different types of risk – unforeseen risks, safety risks, substantial risks – and the declaration, which I think at least 19 countries were present to agree, makes recommendations on how to address these risks at the local and international level. There is also a Chatham House research paper by the Author Arthur Holland Michel, dated April 2023 and entitled “Recalibrating Assumptions on AI: Towards an Evidence-based and Inclusive AI Policy Discourse”, which makes insightful recommendations on how to address such risks.
But then, these risks need to be assessed before they come onto the government agenda, and the assumptions need to be verified, because AI can, at times, be a mystery as well. States cannot stifle innovation, but they need to develop policies and regulations which are coherent, evidence-based and inclusive, and states can’t do it alone. There is a whole gamut of stakeholders. When we talk about developing a global strategy to address AI risk, we think about states, about different countries, about NGOs, about the community, about international institutions – about all of them.
And when we talk about AI, it’s not local, it’s global. You can’t, for example, develop regulations within a country which will be limited to that country; we can’t do that. It has to be global, and if it is to be global, there need to be some standards, some core principles. For some risks we can come up with charters and best practices, we can come up with regulations, but in certain situations, I think there needs to be enforcement as well, so that there is compliance.
I also found, in one of the documents, that in the UK a new Artificial Intelligence (Regulation) Bill was tabled in Parliament on 22nd November 2023 for its first reading. I find this very interesting, and I think there are other countries which already have regulations as well. It talks about the definition of AI, it talks about the functions of an AI authority, and it is very structured. I don’t know whether it will go through, but it talks about privacy, about cybersecurity, about all of these.
Nada Mohamed
Thanks a lot, Dr Moorghen, for such an insightful…
Dr Rooba Yanembal Moorghen
Oh…
Nada Mohamed
…take on…
Dr Rooba Yanembal Moorghen
…it’s already five…
Nada Mohamed
…things.
Dr Rooba Yanembal Moorghen
…minutes? Okay, then, right…
Nada Mohamed
Yeah, I know…
Dr Rooba Yanembal Moorghen
…I won’t go on.
Nada Mohamed
…you can talk…
Dr Rooba Yanembal Moorghen
Yes, okay.
Nada Mohamed
…a lot about AI and never get bored, but unfortunately, we have to skip to the next…
Dr Rooba Yanembal Moorghen
No, that’s okay.
Nada Mohamed
…speakers. But then, we’ll definitely be getting back to you. Okay, so our next panellist is Mr Dragoş Tudorache, a member of the Renew Europe group in the European Parliament and a former Chair of the European Parliament’s Special Committee on Artificial Intelligence in a Digital Age. In the European Parliament, Mr Tudorache sits on the Committee on Foreign Affairs, the Committee on Civil Liberties, Justice and Home Affairs and the Subcommittee on Security and Defence, as well as the European Parliament’s Delegation for Relations with the United States. He began his career as a Judge and worked on anti-corruption to support Romania’s accession to the EU. Welcome, Mr Tudorache, over to you.
Dragoş Tudorache
Well, good afternoon to everyone and many thanks for inviting me. Perhaps the most important element that you did not mention is the fact that I am the co-Rapporteur for the AI Act, which I think, right now, is the most comprehensive legislative effort at the global level to regulate AI, and which I think is relevant to this conversation.
I want to also, at the start, congratulate you for the work that you’re doing, including the approach that you’re taking. I think the one thing that we need more than anything else – I would say almost more than the effort to regulate or govern AI – is to actually increase awareness and bring AI to the people. I think if there is one merit that the creators of ChatGPT have, it is the fact that they, at least, prompted a conversation that was only somewhat happening in smaller bubbles of experts, Scientists or aficionados of AI. And right now, finally, the discussion has gone democratic, and projects like the one you’re running, which try to bring the reality of AI as close to the ground as possible, are in fact one of the key building blocks for developing trust in AI. Ultimately, if people will not trust this technology, if they will not trust the tools that are based on AI, then all the beautiful benefits that we talk about are not going to happen.
So, anyway, back to the question: how do we actually develop AI for good? I would start by saying that we do it, first and foremost, by recognising its huge potential and its huge impact, and how transformative and how ubiquitous AI is going to be. It is already, to a large extent, whether we see it or not, around us in almost every product or service that we access, but it is going to be amplified a thousand times over in the years to come, so much so that it is going to be part of our everyday life. In thinking about how you develop it for good, you first need to recognise it’s there, and recognise this enormous impact, which comes with huge benefits but also with risks. And also recognise that these risks are no longer just theoretical; they are no longer just the realm of expert conversations and theories. They are real, they are around us, from discrimination and biases all the way to what is now the subject of global conversations: the more existential threats, the more existential risks posed by the big models.
And this moves me to the second point of what you need to do to have AI for good. The second question that jumps out of recognising the potential benefits and risks is: what do you do about it? What kind of governance do you put in place to deal not only with the benefits, but also to live with the risks? This is where there are several models now being pioneered around the world, starting from a soft governance approach, let’s say, which argues that the technology is still too new, it’s nascent, we don’t know enough, it’s too much of a moving target, it’s constantly evolving, and therefore, because we haven’t really measured, seen or developed standards to grasp its full impact, we are better off for now working with softer tools: with guidelines, with self-regulation, with principles, with voluntary commitments.
So, that is one school of thought, one model which is being discussed. Then there is another model, which recognises that we are already past the point of being satisfied with the impact of principles or voluntary commitments and moves to putting hard rules in place – i.e., legislation – which is where the European Union is right now.
The legislative act for which I am the Rapporteur is about that: it is about hard rules, hard [audio cuts out – 19:05] to protect us from the risks of AI. Again, I’m not going to pronounce myself on whether one model is better than the other. What I think is key, and this still links to the idea of developing AI for good, is that we now try to work towards creating an interoperable framework between the different models, because the technology is one. Those that are developing this technology are working across jurisdictions, and also, the human [audio cuts out – 19:45] and interests which are being affected by the rollout of AI are also the same, whether they are in Africa, in America, in Europe or in Asia.
So, I think that we, as policymakers, owe this effort for convergence, or at least interoperability, between the very different models and frameworks of governance that we are going to end up with. I think diversity in these solutions is a given; we cannot work around that. There will be divergence in the kind of rulemaking that we will see in Europe, in the US, in Latin America, in Africa, in Asia. So, the effort right now needs to be focused on how we make all of these frameworks interoperable. How do we work together on the technical standards that will sit behind these normative efforts of differing degrees of sophistication and detail, to make sure that, again, we have as much of a common approach as possible to a technology that will drive our lives forward and that works in the same way across countries?
So, I will stop here as an intro and happy to engage during the assessment. Thank you.
Nada Mohamed
Thanks a lot, perfect timing. I think we see a thread between what Dr Moorghen mentioned and what you’re saying about the great potential that AI has. There are still risks at stake, and hopefully our Ambassadors, our CFC members, will be providing us with some diverse solutions today.
So, last but not least, we are moving on to our third panellist, Miss Kristel Kriisa. Miss Kriisa is from Estonia and currently holds the position of AI Project Manager at the Estonian Information System Authority. Liaising with the Ministry of Economic Affairs and Communications, she leads a project to support public sector organisations seeking to provide AI-based services. Her responsibilities also include ensuring worldwide visibility of Estonia’s AI capabilities and contributing to Estonia’s third National AI Strategy. She’s also a passionate educator, having served as a full-time Teacher for 12 years. Welcome, Miss Kriisa, over to you.
Kristel Kriisa
Yeah, thank you, and thank you for having me here today. It’s a pleasure. So, a few words maybe about Estonia and AI. I hope you all know where it is – it’s a tiny little country and people often don’t know exactly where it is – but I guess right now would be the best time to visit, because it looks like a winter wonderland. Coming here as a tourist, it seems nice, but having to deal with it for six months can be a bit tricky.
But a few words about what I do and what our approach here in Estonia is like. As it was already mentioned, I work in the Estonian Information System Authority, which is all about interoperability, but also cyber defence. Together with my colleagues, we build and defend one of the best digital societies in the world, and, as was also mentioned, I work closely with our Government Chief Data Officer, who is based in the Ministry of Economic Affairs and Communications, and we work together on different projects. Over the last five years Estonia has gained quite a lot of experience: we’ve done around 130 AI projects in the public sector, trying to make public services better and more efficient.
Sometimes we are asked how we have been able to do such things, and we always say that, first of all, we have quite a good digital infrastructure and quite a lot of data to build on, because 99% of our public services are online. Every Estonian has a digital ID, which is compulsory, so you have to have it, but it lets you use all those public services, and it also lets you sign documents and give up on all those paper solutions that many countries are still using.
When it comes to Estonians, we are quite reserved, pragmatic and tech savvy, I guess, and Estonians usually like to get things done without much fuss. We like practical solutions more than just long documents saying…
[Break in recording]
Whitney Westbrook
Okay, we should be back up and running. I’m so sorry, Miss Kriisa, I totally interrupted your remarks while you were saying that Estonians are tech savvy, so obviously we can learn something from you. But please, do finish, and we will carry on. Apologies to everyone rejoining.
Kristel Kriisa
Okay, yeah. So, I can’t remember exactly where I stopped, but just a few words, because we are running behind now. I guess I was saying that we are more interested in having practical tools rather than long regulatory documents. Also, AI projects are so complex and involve so many different aspects that, as the owner of the project, or the data person, or the developer, there are so many things you need to think about that it can become quite overwhelming at times. So, we try to provide really practical support – again, not long documents saying how things should be done. We actually try to involve people who have similar experience and who have maybe already made the mistakes that you are about to make, so that you can avoid those mistakes and be successful.
So, our approach is safe and ethical by design. We put a lot of effort into planning a project and thinking about all the risks and problems that might hold us back. We also involve different people from the public sector, from the private sector, from universities and from our international partners as well. And, as was already mentioned, we are currently working on our third AI Strategy, because we have noticed that things are developing so quickly that it’s very hard to keep up with all the changes. Having a short-term strategy is probably a better solution than having a really long-term strategy, because then it’s easier to predict what is going to happen, where to put your resources and what to focus on.
But thank you, and I guess we can move on.
Nada Mohamed
Thanks a lot, Miss Kriisa, for this very insightful intervention about AI and the power of data, as well as e-Estonia.
So, now that we have heard from our experts, it’s time to hear from the protagonists of this event: our CFC members. I’d now like to invite the members of the Common Futures Conversation community to present their policy ideas about artificial intelligence. A quick reminder, though: we need to keep our timing to a little under five minutes, so that we can hear as much feedback as possible from the panellists, as well as from the audience. So, first up, we have a powerful collaboration between Heather Searle from the UK and Ekow Adu-Mensah from Ghana. They wrote a collaborative solution on decentralising and diversifying the development of AI, and we can’t wait to hear it. So, Heather and Ekow, the floor is yours.
Heather Searle
Okay, thank you so much, Nada, and thanks so much to the policymakers as well, who’ve raised so many good points. Ekow’s and my contribution centres on two processes: decentralisation and diversification. AI is obviously international by nature, and we’re starting to see that global collaboration on AI is not impossible, but that the tech disparity between countries needs to be addressed. This means that, currently, the majority of the benefits of AI will be reaped by global powers and private actors, when, as we’ve heard from our policymakers, there are so many opportunities to use AI for good.
Decentralising AI should be encouraged by the international community and NGOs as a great opportunity for national governments and charities to empower national and local actors to make AI work for them. The main barrier to this is simply lack of resources: it’s never a question of capability, but of capacity. Communities and indigenous groups could achieve great victories in revitalising languages and controlling community data if they were simply given the tools to make AI work for them.
A great example of this is Te Hiku Media, an organisation that has really used machine learning to empower Māori groups in New Zealand. AI needs to be harnessed by governments so they can stay relevant in digital spaces, and it holds a great amount of regulatory power if its biases are addressed, which Ekow will now talk about.
Ekow Adu-Mensah
Right, Heather, thank you very much. So, in the ever-evolving landscape of AI, Heather highlighted crucial points, including the need for decentralisation to ensure that technological progress benefits everyone and avoids reinforcing biases. Today, I extend this conversation by underscoring the imperative of considering and upholding the perspectives of the Global South, particularly Sub-Saharan Africa. Sub-Saharan Africa, with its potential to harness AI for sustainable economic transformation, must not be left behind.
We advocate for decentralised AI development and governance, urging governments and tech giants to prioritise this approach. To address the diversity crisis in AI, we propose the establishment of a global regulatory body compelling Big Tech companies to report on diversity, equity and inclusion considerations in developing generative AI models. Furthermore, we urge tech giants to partner with digital startups in Sub-Saharan Africa to spearhead an inclusive artificial intelligence future. Third, drawing on the Conference of the Parties approach, popularly known as COP, we propose a global platform to convene countries, observers and non-state actors, facilitating a comprehensive conversation on the regulation of artificial intelligence.
In conclusion, our commitment to a diverse and inclusive AI future demands collective action. Let us recognise the transformative potential of AI in Sub-Saharan Africa and beyond, ensuring that the benefits of the fourth Industrial Revolution are shared equitably. Thank you.
Nada Mohamed
Thanks a lot, Ekow and Heather, for this much-needed solution. I’ll leave it to the panellists to comment on our young CFC members’ proposal. The floor is yours [pause].
Kristel Kriisa
I guess there were so many important aspects mentioned, and I think it is a bit of a mission impossible trying to regulate everything and trying to make sure that you have a perfect AI system, because nothing is ever perfect. I think there are some things that may be worth considering, though. We should probably think about the lifecycle of an AI solution and which phase we are trying to regulate: should we start regulating already in the design phase, or should it be when it’s actually deployed? There are so many things to think about, but this is probably one of the things I wanted to mention [pause].
Nada Mohamed
Thanks a lot, Miss Kriisa, for this. Dr Moorghen, would you like to add anything, or…?
Dr Rooba Yanembal Moorghen
Yeah, I just want to make a comment. Mauritius, for example, is part of Sub-Saharan Africa, and we are trying to adopt AI – that’s why we have the legislation. Countries are at different levels of development, and when we talk about ICT and about AI, there needs to be an enabling environment.
So, what I propose is that, within regional blocs like SADC, each country does a readiness assessment of where they are in terms of ICT and in terms of AI. Once you have this assessment, you are able to look for donors and big corporations to see how they can help, because if you don’t have evidence, you don’t know what the status of each country is today. Then we can see how we can help. Of course, we need to bridge the digital divide, but there should be a structured approach; there needs to be a framework.
I just want to make a comparison with the SDGs, the Sustainable Development Goals, where, under UN guidance, each country reports through the VNR, the Voluntary National Review. Some of them are doing well, some are lagging behind, and I think recently there has been a decision that there would be some support, some funding, to help these countries. One recommendation there was that when we report, we report only on outputs, not on outcomes, and that we need to integrate evaluation into our reporting – to look at outcomes, to look at impact.
So, I’m just making an analogy with the SDGs. I’m thinking that, just as with the SDGs, the UN, as a global entity, could take the responsibility to address the risks of AI and to promote its benefits for the welfare of all, so that no-one is left behind, and we need to develop a framework for that. Just as with the SDGs, there could be a high-level committee with Technicians who can come up with guidelines, a framework, a roadmap. Then, at the level of each country, they can do the assessment, the regional groups – SADC, for example – can assist, and then the international community, the World Bank and the donor agencies can come in and assist, because at times we don’t have the expertise.
Even Estonia – I visited Estonia once – they have a lot of expertise in terms of eGovernment, because there needs to be a foundation. You need to have this enabling framework; you can’t just jump into AI like that. For example, we talk about data and data centres, we talk about cybersecurity, we talk about rules and regulations, about the Data Protection Act. So, there is a whole gamut of strategies that need to be put in place before AI comes in.
Of course, there are small, quick wins. For example, with AI you have robots and other one-off solutions you can introduce. In case studies of schools in remote areas, I’ve seen international agencies giving out tablets, introducing parents to the technology and running sensitisation campaigns. We can do that, but we need to have the assessment first, on a needs basis: what do they need? Because each country has its own particular needs, its own requirements. Let us do this.
Nada Mohamed
That is 100% true. There is no one size fits all, and that’s why…
Dr Rooba Yanembal Moorghen
Yeah.
Nada Mohamed
…an assessment’s quite crucial. Thanks a lot, Dr Rooba, for this insightful feedback and I’m sure that our CFC members will be in contact with you to further understand your insights and leverage them.
Now, moving onto our next policy solution with Nosipho Dube. Nosipho is a CFC member from South Africa, who’ll be discussing AI’s use in public diplomacy. Over to you, Nosipho.
Nosipho Dube
Okay, thank you, thank you so much, Nada. I hope I’m audible. Can you guys hear me?
Nada Mohamed
Yeah.
Nosipho Dube
Okay, okay, great. So, ladies and gentlemen, thank you so much. My proposed AI policy solution looks at enhancing public diplomacy in a South African context, in a way that collaboratively makes use of generative AI alongside creative storytellers and those who have served as Diplomats and Civil Servants, to advance South Africa’s global footprint with AI.
My AI solution was inspired by listening to and gaining insights from both CFC webinars and other webinars I attended in my own capacity. My key takeaways from these were the overall importance of AI and the application of the technology, and how crucial it is to think of it as a set of tools and applications – especially as we are already in a context where narrow artificial intelligence dictates so much of our lives, often unintentionally, from searching for something on social media, to watching content on YouTube, to every search engine that you use.
So, with that in mind, I came up with the idea that governments, especially as it pertains to international affairs and public diplomacy, can use and leverage AI tools while making up for the biases that currently exist, especially as they pertain to Black people and people of colour: creating a public diplomacy project by the South African Ministry that works – in an unclassified capacity, of course – with creatives, authors and storytellers to tell the story of international affairs from a South African point of view.
I think it has the potential to be a first-of-its-kind project, initiated by the South African Government to demonstrate its role in wanting to advance itself in the world, as it has done, but also to show that governments can play a role in working with young people on digital storytelling with AI. Most importantly, I envision this as a digital storytelling project that focuses on young people and is targeted towards young people, but that is also an opportunity to further educate them on the prominent figures in South Africa’s history who have played a role, not just during Apartheid and coming out of Apartheid, but in the South African Foreign Service itself. There may be many young people who are not aware of the important roles that prominent South Africans, especially late prominent South Africans, have played in advancing South Africa’s role in the world and on the international stage.
So, I propose that this collaborative project prioritises these goals, but also that it can be a nice enhancement to the school projects that the Foreign Ministry already engages in with Diplomats and with embassies across Southern Africa, as well as to activities pertaining to open days. Open days are days when high school seniors and university students get to see the Foreign Ministry and further understand what diplomacy is, what South Africa’s foreign policy is, and how an embassy or a consulate works and how its footprint works. The idea is to use generative AI tools to show and tell a story that gets a young person’s and a school child’s mind engaged, interactive and curious to further understand what role they could see themselves playing in the future in advancing diplomatic relations, but also to bring the history of certain figureheads, whom one may merely see on TV and assume one may not fully understand, down to a grassroots, closer level.
And I feel like using AI together and collaboratively with creatives, storytellers and those who have written books on prominent Diplomats could be a first and one-of-its-kind project. It could also change the narrative around AI-generated imagery, which is currently being used for ill in the context of wars, to show that one can use AI in a way that is positive and sheds a positive light. So, I hope that [audio cuts out – 17:46] a policy in its own right can show that it’s a small step in a unique way. Thank you.
Nada Mohamed
Thanks a lot, Nosipho, for sharing this.
Nosipho Dube
Over time?
Nada Mohamed
No, you’re good, you’re good.
Nosipho Dube
Okay.
Nada Mohamed
Thanks a lot for sharing this. I think trying to break down the technical language and present it in a storytelling kind of way is quite insightful, but that’s just me – I don’t know anything about AI, so let’s hear from the experts. Dr Moorghen or Miss Kriisa, would you like to provide some very, very brief feedback, because we are…
Dr Rooba Yanembal Moorghen
Yes.
Nada Mohamed
…really tight for time and we really want to take some questions.
Dr Rooba Yanembal Moorghen
Yeah, this is an excellent initiative. AI would be another domain where storytelling can be maximised. I’ll just make a comparison again: in October this year I attended the conference organised by the American Evaluation Association, and the theme of the conference was “The Power of Story.” Power of story, because previously evaluation was done using rigorous quantitative analysis, but it is now shifting to qualitative measures – storytelling, sharing of experience, building on past experience to project the future. This is being done right now in evaluation. There is even decolonisation of evaluation, and different types of evaluation, but storytelling is good.
But then, the storytelling should be well documented, in order not to be biased. It should be a truthful replication of historical events, not biased by assumptions about race or gender. Still, this is a very good initiative – the power of history, the power of sharing and of face-to-face communication – because at times you see that young people coming up don’t know about the history and its evolution, for example the Ubuntu principles; they won’t talk about those principles.
So, there is a need for sharing of experience and for the power of history – I come back to this because it is being used in evaluation as well – and using AI, I think, makes this better. But it should be a truthful replication of past, historical events. It should not be manipulated, and there is great hope that this can be done.
Nada Mohamed
Thanks a lot, Dr Moorghen, for this powerful feedback.
Nosipho Dube
Thank you.
Nada Mohamed
In the interest of time, we’re now jumping to our next CFC member, but Miss Kriisa, please feel free to reach out to Nosipho to discuss her amazing initiative further.
Kristel Kriisa
Yeah, I, basically, agree. It’s a very nice and creative approach to things and again, so as they say, there’s two sides to every story, so you have to be careful about the bias, but yes, I agree, it’s very creative and a very good idea.
Nada Mohamed
Thanks a lot.
Nosipho Dube
Thank you.
Nada Mohamed
Alright, so, last but not least, we have Deme Christofi from the UK. She’s a CFC member as well, and she’ll be presenting her idea on regulating AI in the context of conflict. Over to you, Deme.
Demetrou Christofi
Thank you, Nada, for the introduction. So, for the interests of time, I’ll keep this, like, as quick as I can, and it’s, obviously, quite tough to follow these amazing proposals, but I will do my best. So, like you say, I’m presenting on the regulation of AI and the challenges in regulating AI. So, the emergence of artificial intelligence has transformed how we understand conflict and its modern iteration. Recent technological advancements have set in motion dramatic change across combat zones, where wars can now be fought and won with the use of unmanned weaponry and cyberwarfare, amongst other applications.
Whilst AI is not an entirely new concern, its dynamic development inspires debate on the future of security and the ethics of its use in modern conflict, as well as the pressing question of its regulation. For example, the recent UK AI Safety Summit attests to the importance of this issue. AI has been vital in the development of autonomous weapons systems invested with lethal capacities, for example. This technology poses a significant risk to civilian populations and is vulnerable to exploitation. The specific algorithms informing this burgeoning technology are inevitably influenced by existing biases, such as who constitutes a civilian or a target within a specific conflict, and this depends on who is launching an attack.
In this sense, the racial and gender biases that already inform destruction dealt by human hands, risk being replicated by autonomous systems. There is currently little international consensus, however, guiding the production and use of AI. International efforts so far have been largely limited or isolated. For example, the EU is finalising its AI Act, the G7 nations have published an AI Code of Conduct and China has produced its own law regulating generative AI. Without clear internationally agreed limits, such isolated efforts are ineffective. The challenge here is resolving the democratic deficit that currently colours the international governance of AI.
Therefore, to address these concerns, a structured regulatory body should be created to oversee this regulation. This proposed body must have an emphasis on both collaboration and dynamism. Firstly, the need for a collaborative approach stems from the widespread production and use of AI. The technology transcends national borders and complicates the international space as we may traditionally understand it. In addition, the issue of achieving a consensus is compounded by the influence of multinational companies invested in AI. It is not simply states involved in this conversation, as industry drives the development of AI.
Therefore, to account for these conflicting interests, all stakeholders must actively play a role in the discussion of AI legislation. This could involve, for example, industry experts, Researchers and leaders from existing international organisations. Of course, with companies competing for market share, they may not easily be persuaded to discuss regulation. Here, by finding the right incentives and ensuring industry experts have a say in these discussions, an international agreement may be conceivable.
Second, and importantly, AI governance cannot be equated with nuclear disarmament. AI is more intangible and, given the material differences in their production, cannot be regulated in the same way. Its regulation, therefore, needs to be as dynamic as the technology itself. This could involve a tier system similar to that of the proposed EU AI Act, with unacceptable, high and limited risk categories, for example. Likewise, to keep up with the constant evolution of AI, this body must regularly gauge new or heightened risks through routine assessment and consistently review regulations in line with these findings. The challenge, therefore, remains to mitigate against the potential harms of AI without stifling innovation or inadvertently furthering the uneven advancement of the technology.
To conclude, AI certainly offers significant benefits in terms of increasing efficiency, advancing healthcare and analysing data, to name just a few examples. Yet, whilst recognising the need for such innovation, limits and regulations must be developed in-step with these changes and consistently monitored to protect against any potential harms. Thank you for listening.
Nada Mohamed
Thanks a lot, Deme, for the very thorough walkthrough of your policy solution. Miss Kristel, I’d love to hear your take on what Deme has shared with us.
Kristel Kriisa
Yes, so, again, I’m a strong supporter of collaboration, and I think we do need to share best practices and learn from each other’s mistakes. But when it comes to conflicts and warfare and things like that, which are a bit more complicated – and also, coming from a cyber defence agency – there are some areas where you can’t collaborate very easily, because the information you share trying to help someone might be used in the wrong way and might actually hurt you in the end. So, I guess in some situations it’s slightly trickier to collaborate, and finding the right balance can also be quite difficult sometimes.
Nada Mohamed
[Pause] That is very true. There’s always a trade-off somewhere. So, with this, we will be closing our policy solutions and feedback and opening it up to the audience to share their amazing questions – we already have some trickling in. We might stay over time for five minutes or so, but feel free to drop off if you have other engagements.
So, apologies in advance if I mispronounce your name. We have Thea, who is directing a question towards Kristel. She is wondering how we can keep up with the developments of AI since, as you mentioned before, it has become increasingly difficult to follow these advancements. How can someone outside of AI research keep up with the implementations, regulations and so on?
Kristel Kriisa
Yes, I guess it is quite hard to keep up, and here in Estonia, working on our third AI Strategy, we’ve also noticed that we need to educate people more. We have to make sure that people understand what AI is and what it can and cannot do yet. It can be quite tricky, so we are also coming up with different programmes for different target groups, trying to educate and inform. But I guess I also mentioned this before: one of the things I quite often think about is whether you choose a top-down approach or a bottom-up approach.
And I think, when it comes to AI, and with us being quite a practical country, we would actually like the developers and the people involved in building the new systems to realise, also, that there are risks involved. When they realise what the risks are, then they will start thinking about them and start mitigating them. When things are just put on you and you have to do them, then I guess it becomes kind of like a checklist – okay, I’m going to tick this box, tick this box – without really thinking about the things at all.
Nada Mohamed
[Pause] Yeah, that is very true, and I think it’s, kind of, similar to what Dr Moorghen mentioned earlier about how we’re focusing on outputs rather than outcomes. So, it’s such a dilemma, really.
Okay, so, we have a hand up. We have someone who’s courageous enough to speak in the meeting. So, Mahesan Lamrani, if you’re with us, please unmute yourself and ask your question.
Mahesan Lamrani
[Pause] Hi, everyone, can you hear me?
Dr Rooba Yanembal Moorghen
Yes.
Mahesan Lamrani
I’m really sorry, I basically just pressed on it by mistake. I just wanted to say that I’ve shared a link on the Moroccan experience, because we’re just launching our first national digital strategy, for 2030, and we’re really just starting the dialogue on AI. So, thank you for this webinar, it’s been really helpful to hear all your inputs. We also recently visited Estonia during the OGP Summit and learned a lot from their experience, and I’ve downloaded the documents on Mauritius, which I’m certainly going to read. My question was just whether it would be possible to share the Estonian National AI Strategy with us. Thank you very much.
Nada Mohamed
Thanks a lot, Mahesan. I know it was [inaudible – 31:09], but you provided us with such insightful evidence about Morocco. I’m from Egypt, so seeing a fellow North African country making these strides on AI is quite wonderful. And Miss Kriisa, if you want to mention anything – oh, sorry if I interrupted.
Kristel Kriisa
No, no, it’s okay. So, yeah, I’ve shared the link with you, where you can see our previous two strategies and a report written by our AI Taskforce. As I said, we are currently working on our third Strategy. We’ve also realised that you can’t have AI projects without thinking about data, obviously – data and AI are intertwined – so we are focusing more on data as well. We are currently working on a data and AI whitepaper, and then there will be the AI Strategy as well. So, we don’t have the newest version available yet, but I’ve shared the link there so you can have a look.
Nada Mohamed
Thanks a lot. Alright, so, one final question. We have two very interesting questions and it’s really hard to choose between them at this point, but this one asks who will be able to finance these developments: “[audio cuts out – 32:33] need to provide an assessment and then to assess the gaps, but what then? If we find out what the gap is, who will finance these very technical or infrastructural needs that most developing countries, unfortunately, do not possess or have?” [Pause] Doctor?
Dr Rooba Yanembal Moorghen
Finance is one way, but then there are other forms of collaboration, it [audio cuts out – 33:05] I don’t know, maybe the big corporations can do that.
Nada Mohamed
[Pause] I’m not sure I caught the answer, because I’m having technical difficulties [pause]. Alright, is it just me, or [pause] – okay. Alright, I think this marks the end of this very amazing, insightful discussion. Really, one hour is not enough to discuss these solutions and the very different perspectives on the benefits, as well as the risks, of AI. We had such amazing panellists with us. Thank you so much, Dr Moorghen, and thanks a lot, Miss Kriisa, for joining us today and providing us with very diverse perspectives, from Africa as well as from Europe. But special thanks to our CFC members for providing us with these very diverse solutions, and hopefully we will get to implement them real soon, because we have the power and we are creative and insightful, and hope is in sight. Let’s end it on a good note.
Thanks a lot, everyone, for tuning in today, and make sure to check out the recording if you joined late or if you want to revisit any part you found particularly interesting. Stay tuned for our next webinars and expert sessions, and I hope to see you very soon. Goodbye.
Kristel Kriisa
Thank you.
Heather Searle
Bye, thanks, Nada.
Nada Mohamed
Thank you. Thanks, everyone.
Nosipho Dube
Bye.