We used game theory to determine which AI projects should be regulated

Ever since artificial intelligence (AI) moved from theory to reality, research and development centers around the world have been racing to come up with the next big breakthrough in AI.

This competition is sometimes called the “AI Race”. In practice, however, there are hundreds of “AI races” heading towards different goals. Some research centers are rushing to produce digital marketing AI, for example, while others are rushing to pair AI with military hardware. Some races are between private companies and some between countries.

Because AI researchers are competing to win the race of their choice, they may overlook safety issues in order to get ahead of their rivals. But regulation to enforce safety is still underdeveloped, and the reluctance to regulate AI may in fact be justified: regulation can stifle innovation, reducing the benefits that AI could bring to humanity.

Our recent research, conducted alongside our colleague Francisco C. Santos, sought to determine which AI races should be regulated for safety reasons, and which should be left unregulated to avoid stifling innovation. We did this using a game-theoretic simulation.

AI supremacy

AI regulation must consider the harms and benefits of the technology. Harms that regulation could seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the development of autonomous weapons. But the benefits of AI, like better cancer diagnosis and smart climate modeling, might not exist if AI regulation is too cumbersome. Sensible regulation of AI would maximize its benefits and mitigate its drawbacks.

But with the United States competing with China and Russia to achieve “AI supremacy” — a clear technological advantage over rivals — regulation has so far taken a back seat. This, according to the UN, has thrust us into “unacceptable moral territory”.

AI researchers and governing bodies, such as the EU, have called for urgent regulations to prevent the development of unethical AI. Yet the EU white paper on the issue acknowledged that it is difficult for governance bodies to know which AI races will end in unethical AI and which will end in beneficial AI.

Looking ahead

We wanted to know which AI races should be prioritized for regulation, so our team created a theoretical model to simulate hypothetical AI races. We then ran this simulation through hundreds of iterations, adjusting variables to predict how real-world AI races might play out.

Our model includes a number of virtual agents, representing competitors in an AI race – different tech companies, for example. Each agent is randomly assigned a behavior mimicking that of competitors in a real AI race. For example, some agents carefully consider all the data and potential AI pitfalls, while others take undue risks by skipping these checks.

The model itself was based on evolutionary game theory, which has been used in the past to understand how behaviors evolve across societies, between people, or even in our genes. The model assumes the winners of a given game – in our case an AI race – take all the benefits, as biologists argue happens in evolution.

By introducing regulations into our simulation – penalizing unsafe behavior and rewarding safe behavior – we were then able to observe which regulations were successful in maximizing benefits and which ended up stifling innovation.
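A minimal sketch of this kind of winner-takes-all race simulation – not our exact model – might look like the following Python. The strategy labels, the safe developers’ speed penalty, the “disaster” probability for skipping checks and the size of the regulation penalty are all illustrative assumptions, and the evolutionary dynamics (strategies spreading by imitation of successful agents) are omitted for brevity.

```python
import random

# Illustrative sketch of a winner-takes-all AI race between "safe" and
# "unsafe" developers. Parameter names and values are assumptions, not
# the published model.

SAFE, UNSAFE = "safe", "unsafe"

def run_race(race_length, n_agents=10, p_unsafe=0.5,
             safe_speed=0.5, disaster_prob=0.1, regulation_penalty=0.0):
    """Simulate one race to `race_length` development steps.

    SAFE agents advance more slowly (safe_speed < 1) because they pause to
    review data and check for pitfalls. UNSAFE agents advance at full speed
    but risk a setback each round that wipes out their progress. The first
    agents to finish share the whole benefit; a regulator may deduct a
    penalty from unsafe winners.
    """
    agents = [{"strategy": UNSAFE if random.random() < p_unsafe else SAFE,
               "progress": 0.0} for _ in range(n_agents)]
    while True:
        for a in agents:
            if a["strategy"] == SAFE:
                a["progress"] += safe_speed
            elif random.random() < disaster_prob:
                a["progress"] = 0.0          # setback from skipping checks
            else:
                a["progress"] += 1.0
        winners = [a for a in agents if a["progress"] >= race_length]
        if winners:
            payoff = 1.0 / len(winners)      # winner-takes-all, split on ties
            return [(w["strategy"],
                     payoff - (regulation_penalty if w["strategy"] == UNSAFE else 0.0))
                    for w in winners]

def win_shares(race_length, trials=5000, **kwargs):
    """Fraction of the winning payoffs earned by each strategy."""
    totals = {SAFE: 0.0, UNSAFE: 0.0}
    for _ in range(trials):
        for strategy, payoff in run_race(race_length, **kwargs):
            totals[strategy] += payoff
    total = sum(totals.values()) or 1.0
    return {k: v / total for k, v in totals.items()}

if __name__ == "__main__":
    # Short races ("AI sprints") versus long races ("AI marathons").
    print("sprint  :", win_shares(race_length=5))
    print("marathon:", win_shares(race_length=100))
```

Varying race_length in a sketch like this is what separates the “AI sprints” from the “AI marathons” discussed below, and regulation_penalty is where sanctions on unsafe behavior would enter.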

Governance lessons

The variable we found to be particularly important was race “duration” – the time our simulated races took to reach their goal (a working AI product). When AI races reached their goal quickly, we found that the competitors we had coded to always overlook safety precautions invariably won.

In these fast AI races, or “AI sprints”, competitive advantage is gained by being fast, and those who pause to consider safety and ethics always lose out. It would make sense to regulate these AI sprints, so that the AI products they conclude with are safe and ethical.

On the other hand, our simulation revealed that long-term AI projects, or “AI marathons”, require less urgent regulation. That’s because the winners of AI marathons were not always the ones who neglected safety. What’s more, we found that regulating AI marathons prevented them from reaching their potential. This looked like stifling over-regulation – the kind that could actually work against society’s interests.

Given these findings, it will be important for regulators to establish how long different AI races are likely to last, applying different regulations based on their expected timelines. Our results suggest that one rule for all AI races – from sprints to marathons – will lead to results that are less than ideal.

It’s not too late to put smart, flexible regulation in place to discourage unethical and dangerous AI while supporting AI that could benefit humanity. But such regulation may need to come urgently: our simulation suggests that the AI races due to end soonest will be the most important to regulate.

This article by The Anh Han, Associate Professor, Computing, Teesside University; Luís Moniz Pereira, Professor Emeritus, Computer Science, Universidade Nova de Lisboa; and Tom Lenaerts, Professor, Faculty of Science, Université Libre de Bruxelles (ULB) is republished from The Conversation under a Creative Commons license. Read the original article.
