Introduction
On Monday I watched the morning sessions of the 6th Edition of the Athens Roundtable. It is a conference/discussion on the governance of AI, with representatives and leaders from a wide range of stakeholders.
I had no prior experience of such events, so this was an interesting experience for me. I only attended the first two hours as I unfortunately did not have more time. If you know of good summaries of the event, let me know!
Main learnings
High chance I am naive, but it is wild to me that a roundtable is considered a good format for discussion and collaboration. You only get little sound-bites and hints at what people believe. This does not feel like an efficient way of sharing such information and background. All the real action presumably happens over lunch and behind closed doors.
There is significant activity in the policy space, with many good ideas being discussed. It was useful to see the various perspectives, requirements, preferences and focuses being brought to the table.
The challenge will be to come up with framework(s) that strike a balance between many requirements: satisfying people who care about different risks (e.g. misinformation vs bias vs x-risk), actually reducing risks, not hampering the positive benefits of AI, achieving wide adoption, not being fragmentary (i.e. being interoperable with each other), being flexible enough to deal with a fast-moving field and potential future breakthroughs in AI, and being politically feasible.
From big tech / big AI, I only saw people from Anthropic, Mistral and AWS. I assume there is some information you can glean from this…
Chronological notes
These are summaries of my notes from the morning sessions. Standard disclaimer: high chance I mis-heard or mis-typed, so be appropriately critical.
The points in bold are items I find more interesting, based on my concerns and prior knowledge. Nevertheless, I recommend having a skim through all of it to get a sense of what people care about.
(And if it feels long, consider that it will take you far less than 2 hours to read through it all! Gives a sense of how inefficient - at least in my opinion - the roundtable format is.)
Diplomat from OECD [missed name].
Joked that as a diplomat, cannot avoid empty words despite being asked to avoid them.
Is aware of x-risks and said something like ‘from minor nuisances to collapse of world as we know it’.
OECD created AI principles, which have become G20 principles. It also has proposals for definitions and frameworks (e.g. a definition of AI) which have been used/adapted for the EU AI Act.
There is a risk of too many incompatible regulations and frameworks being set up. We should aim for regulations to be interoperable.
Chloe Goupille, AI Action Summit.
The Paris AI Action Summit is an upcoming big event. The main event is on the 10th and 11th of February, but there are four preceding days of events on AI and science and on AI and culture.
Nicholas Moes, Executive Director of The Future Society (hosting the event).
Gave an impassioned speech, explaining in simple terms the serious risks we are facing, the need for accountability and incentives, how positive intentions are not enough, comparisons to the tobacco and oil industries, the need to have uncomfortable conversations and push through disagreements, and the fact that this is a challenge we can overcome, as humanity has done previously.
Carme Artigas, Co-Chair of the AI Advisory Board, Office of the UN Secretary-General's Envoy on Technology
I was particularly impressed with the clarity of Artigas.
There was a UN project from Oct 2023 to Sep 2024 to answer three questions: do we need to regulate AI at a global level, what do we want to govern, and how? In Dec 2023 there was an interim report on the first two questions, and in the past 6 months the first blueprint for international governance.
It found three gaps: lack of inclusiveness (e.g. compute is concentrated in the global north, data represents the global north), lack of coordination (which will require multilateral agreements), and lack of accountability along all parts of the value chain.
Two suggestions. One is an international scientific panel for AI, similar to the IPCC. But it would require a high frequency of reports, e.g. once every 6 months, given how quickly the AI field moves. The aim is to provide common understanding and grounds for action. The second is a 'capacity development network' to help build capacity and AI skills globally, funded by a 'Global Fund for AI'.
Audrey Plonk, Deputy Director, OECD Directorate for Science, Technology and Innovation
Objectives are to get a better evidence base and interoperability to the extent possible.
Mentioned the G7 Hiroshima AI process, which included a code of conduct and 11 actions for developers of advanced AI systems.
Developing a survey ('we are good at surveys') to act as a transparency report. Have done a trial and improved it based on feedback.
There are existing regulations/principles for responsible business conduct going back to the 1970s. Want to make use of existing regulations and frameworks, and will have an exercise to map the Hiroshima process to business guidelines. 'Embed AI into existing instruments.'
Cedric Wachholz, Chief of Digital Innovation, UNESCO
UNESCO's 2021 Recommendation on the Ethics of AI was adopted by 190+ states. It includes transparency, accountability and human oversight.
Hosting a global forum on the ethics of AI in June in Bangkok.
Doing projects around upskilling and training for the judiciary and their use of GenAI.
Lucilla Sioli, European AI Office
A February milestone: prohibitions on high-risk use cases will be active, e.g. for social scoring or public manipulation.
Elizabeth Kelly, Director of US AISI
Mentioned some highlights from US AISI and the AISI network: a joint report with UK AISI, and they will soon release a report on o1. Already had a successful joint exercise with the international AISI network on methodologies for multilingual evals.
Sebastian Hallensleben, involved with EU AI Act, CEN CENELEC
Progress on the 'AI governance toolbox' is going well, with structures emerging both nationally and internationally.
We need a second toolbox around resilience to bad actors. Gave the example of security protocols and how the internet moved from HTTP to HTTPS. What are the analogues for AI, for dealing with the flood of AI content and AI agents coming our way?
Fotios Fitsilis, Hellenic Parliament
Stressed the question of who is accountable to whom. E.g. government institutions are accountable to parliaments. Would like to see more representation of parliaments - pointed out one or two other people in the room.
?, Mistral
Made a sales pitch for Mistral.
Made a comment about how we need understanding to have good regulation.
If we want to be held accountable by the public, then we need a common taxonomy and ways of talking about AI. There have been various efforts around social media literacy, and we will need similar for AI.
Made points in favour of open source. Says that open source has not created new misuses, and does not want concentration of power in a few players.
Nicholas Moes
Noted we do not need understanding to set good KPIs. E.g. policymakers do not need to understand how planes work to set KPIs based on disasters per decade.
Cameron Kerry, Brookings Institution
Should embrace complexity.
Example of how the internet is governed without a single central body. Maybe we can do similar with AI. The idea that networks of organisations can have redundancy and robustness. The idea of crowd-sourcing solutions.
[missed name], 5 Rights Foundation
Need specific regulations that take children's perspective into account.
Raja Chatila, Professor at Sorbonne University
I missed most of what was said…
Sebastiano, Digital Semi Alliance
A big proponent of open source; we need things to be opened up to get transparency.
Need an effort to define open source, because unnamed big players are trying to say they are open source despite not following standard open source practice.
Carme Artigas
We should distinguish three terms: ethics, governance, regulation.
We want people to be ethical; governance is the set of tools to incentivise ethical behaviour; and regulation is just one such tool (others include market incentives or scientific panels).
Jack Clark, Anthropic
We are obviously not impartial, so need a 3rd party to weigh up the risks and benefits of open source.
But pointed out that it is easier to remove safeguards from open source models and produce abusive images, for example.
Paul Nemitz, European Commission
We need to pay attention to power and ensure there are mechanisms for individuals and civil groups to challenge the people in power and the people who enforce the regulations.
Asma, Ethical Alliance
We need to put more focus on urgency. AI is already affecting people's lives negatively, e.g. AI decisions in healthcare.
Has various ideas, including 3rd party audits and inclusivity by design.
Gaia Marcus, Ada Lovelace Institute
Agree with Paul and Asma
Need to focus on power, move past voluntary commitments, and get international regulation that then informs the national level.
Nozha Boujemaa, Decathlon
We need metrics and benchmarks. We cannot reduce risks that we cannot measure.
Robert Trager, AIGI, University of Oxford
Need a set of institutions that together provide resilience, e.g. technical institutions, legal institutions, some that produce public knowledge. Compared this to the aviation industry.
David Satola, World Bank
We lack a common taxonomy.
Look at the Internet Governance Forum, which fixed a specific problem. It has academia, tech, the private sector, governments and international orgs. Hopefully we can have similar for AI.
Should ensure 'multi-stakeholder-ism'. Do not want an only state-centered approach, but to include builders and users of the tools.
[Missed name], Hong Kong University and Berkeley
Few colleagues in the room have deep expertise or long experience with AI.
Too much recency bias in the room. Need to stop thinking about how AI is today. AI is not just large models. AI in the future will be small data and small models. This will be possible. Please consult more of us who are familiar with the field and can say what AI will be like.
Benjamin Prud'homme, Mila
It is hard for individuals to exercise their rights, so prefers ex-ante approaches.
Dislikes the idea that 'we cannot govern things we do not understand'. Companies should be responsible for what they produce, even if there is a lack of understanding.
Sasha Rubel, AWS
Big questions. What are we regulating: we do not yet have consensus on the definition of AI. How are we regulating: there is a lack of 'metrology' for evals and testing. Why are we regulating: different startups have different flavors of risk, and we need a shared understanding of risks.
This ended Session 1; Session 2 focused on key challenges and gaps.
François Nkulikiyimfura, Ambassador of Rwanda to France
Rwanda's AI policy has six focus areas, including building skills and literacy, infrastructure and compute, robust data strategies, and ethical guidelines.
Prioritizing international partnerships, e.g. an AI Toolkit developed with the Commonwealth, an AI Playbook developed with Singapore, and an AI Summit for Africa.
To bridge the global divide, it is crucial to incorporate African context and include diverse voices.
Jack Clark, Anthropic
Was at this event four years ago, just before the COVID pandemic - an example of how governments can take big, urgent action. Analogies should be made!
There is a belief within Anthropic, e.g. by CEO Dario Amodei, that in 2026 we can get Nobel-level performance. Imagine a country of geniuses in a data center. This is not marketing. It is what we believe.
Would you want this to be in a single company? No, that is crazy.
Wants people to get clarity on transparency requirements. Need clarity on safety protocols and research, not only on development processes.
Need 3rd parties to get public trust. We cannot and should not trust the companies themselves.
Need an international network for coordination. Need the option for hard-power governance interventions. If transparency is done (by force?) you get the possibility for governance. It is just a matter of ambition and urgency. [High chance I mis-understood this. My notes were unclear]
'Despite English dour nature, I am more optimistic than I was four years ago.' There is lots of early work in the right direction, e.g. the network of AISIs and lots of early testing.
Brando Benifei, Member of European Parliament
A chief negotiator for the EU AI Act. Working on the Code of Practice.
[I didn’t take other notes…]
Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights
One of the USA's largest and oldest civil rights groups.
Current US laws have no mechanism to address the bias we are already experiencing with these technologies. Without legislation, we will see harms go unaddressed, unless there is unprecedented good will.
Barry C Lynn, Executive director, Open Markets Institute
The digital space has never been appropriately regulated and as a result is monopolized by big tech: Microsoft, Amazon and Google.
OpenAI and Anthropic are not the ones with the power, it is big tech.
The harms of monopolies are known, and people are slowly catching on. E.g. the EU is learning how to use competition policy, and the US DoJ recently had success against Google on splitting it up.
There is a vast suite of tools in competition policy. We should focus on the power of corporations, not on AI itself. Need power analysis and a strategy to get accountability.
Told a story of a father who worked hard on somebody else's land. Absent action, every person, business and nation will learn what it is like to live on somebody else's land.
The threat today is totalitarianism. Hold the line. Keep your focus on power. If you do, our democracies will be safe and everything is possible.
Juraj Corba, Chair of the OECD AIGO, Co-Chair of the Global Partnership on AI
Three points.
1. There is information asymmetry between those in frontier labs and everybody else. E.g. vague stories about hitting limits to scaling. What is really going on here?
2. Need to focus on connections with other existing governance frameworks. E.g. data, digital finance, IoT, weapons, armed/hybrid conflict.
3. As of now, human judges will be the ones making final decisions, e.g. if a complaint is raised to a court. Are they ready? How can legacy institutions handle such complex cases? Do we need new dispute and settlement solutions?
? [missed name]
A question about transparency in the various orgs in the room, and how we know if and how our comments are being taken into account.
Brando Benifei
Important to have developers in the loop, e.g. about what is technically feasible.
Lucilla
On transparency: there are processes, and minutes from workshops are published. We are doing our best to be transparent.
Duncan Cass-Beggs, Centre for International Governance Innovation, Canada
Need to think about challenges that can arise from future generations of AI systems. We should imbue our minds with a range of future scenarios. It is easy for us to have discussions in the good case where progress is incremental and good governance is put in place, but we need to consider more challenging scenarios.
Need to think about the goals, what building blocks could achieve those goals, and what institutions could achieve them.
CIGI is thinking about these ideas, e.g. what possible future international agreements could look like.
? [missed name]
A recent example where a constitutional court annulled an election due to impact from AI.
Under-represented groups are missing from discussions. Various ways to address this: use a democratic, rights-based, participatory approach. AI developed with meaningful input from diverse groups. Keep things open and inclusive. Do not duplicate or de-legitimize existing forums.
A call for making global impact predictions before deploying models. This is an example of an ex-ante approach, which somebody previously mentioned we need.
George Pagoulatos, Hellenic Ambassador to OECD
Should have global ambitions. Safety is a global public good.
If we do not regulate, we get disinformation and increased risks of 'strong leaders'.
Seek global frameworks. Take into account races to the bottom and free-riding. Want to avoid loosening rules to ensure wide-scale adoption, and somehow need to think of new ideas.
Linda, AI Africa Lab
Many African nations and other small nations do not have the resources to develop independent legislation. Example of GDPR being adapted by nations.
Need incentives for developers to stay home, to develop local talent and expertise. Right now, they usually leave for global hubs.
Deborah, Mozilla Foundation
Various points about Mozilla's contributions and activities in open source.
Should look at other industries where audits exist and follow good practice. E.g. they often have a database of incidents.
Mentioned examples of inappropriate images of Taylor Swift being produced using closed-source tools (directed towards Jack Clark).
Jack Clark
Anthropic works with Thorn, which is a 3rd party.
They maintain logs / a database of images that require investigation.
Would like to see 3rd party orgs like Thorn scaled up.
Would like to see the success of AISIs lead to increased remits and to funding new experiments with potential to scale.
[?, involved in Canada]
Imagines having a network of institutions and orgs that also leverage existing frameworks. Would like to break the problem down, have it tackled by separate orgs, and then somehow combined and fed back into the broader network.
Stephanie Ifayemi, Partnership on AI
Talked about the idea of 'presumption of conformity': how do the various ideas being discussed, or new orgs being created, maintain or move away from norms of 'presumption of conformity'? E.g. having standards and certifications provides the assumption that one has followed good practice.