The recent AI safety summit convened by British Prime Minister Rishi Sunak revived a bad idea: creating an “IPCC for AI” to assess the risks of artificial intelligence and guide its governance. At the end of the summit, Sunak announced that an agreement had been reached among like-minded governments to create an international advisory group on AI, modeled on the Intergovernmental Panel on Climate Change (IPCC).
The IPCC is an international body that periodically synthesizes existing scientific literature on climate change into supposedly authoritative assessment reports. These reports aim to summarize the current state of knowledge to inform climate policy. An IPCC for AI would likely serve a similar function, distilling complex technical research on AI into understandable summaries of capabilities, timelines, risks, and policy options for global policymakers.
At a minimum, an international panel on AI safety (IPAIS) would provide regular assessments of the state of AI systems and offer forecasts of expected technological advances and their potential impacts. However, it could also play a much larger role in approving cutting-edge AI models before they come to market. Indeed, Sunak negotiated an agreement with eight major technology companies, along with representatives from countries participating in the AI safety talks, that lays the groundwork for pre-market government approval of AI products. The agreement commits large technology companies to test their most advanced models under government supervision before release.
If the IPCC is to serve as a model for international AI regulation, it is important not to repeat the many mistakes seen in climate policy. The IPCC has been widely criticized for assessment reports that present an overly pessimistic view of climate change, emphasizing risks while downplaying uncertainties and positive trends. Others argue that the IPCC suffers from groupthink, as pressure is placed on scientists to conform to consensus views, marginalizing skeptical perspectives. Additionally, the IPCC process has been criticized for allowing governments to assemble authorship teams of ideologically aligned scientists.
Like the IPCC before it, an IPCC for AI would likely suffer from similar problems: the politicization of research findings and a lack of transparency in assessment processes. Confirming these concerns, the UK AI safety summit was itself criticized for its lack of viewpoint diversity and its narrow focus on existential risks, suggesting that biases would be baked into an IPAIS even before its official creation.
This desire to create committees of elite experts to guide policies on complex issues is nothing new. Throughout history, intellectuals have warned that only they can interpret obscure information and save us from catastrophe. In the Middle Ages, the Bible and the Latin mass were inaccessible to ordinary mortals, placing power in the hands of the clergy. Today, highly technical AI and climate research play an analogous role, intimidating the layman with complex statistics and models. The message from intellectuals is the same: heed our wisdom, or face disaster.
Of course, history shows that the intellectual elite are often wrong. The Catholic Church notoriously hindered scientific progress and persecuted “heretics” like Galileo. Nations that embraced economic and technological dynamism prospered, while those that closed themselves behind backward religious dogmas stagnated. Climate activists today hold equally dogmatic views, resisting innovations such as genetically modified crops and nuclear power that would reduce poverty and protect the planet.
Allowing a small intellectual elite to guide AI governance would repeat these historical mistakes for several reasons.
First, the IPCC has blurred the line between policy advocacy and science, to the detriment of science as a whole. As my colleague at the Competitive Enterprise Institute, Marlo Lewis, once put it: “Official statements by scientific societies celebrate groupthink and conformity, promote partisanship by demanding allegiance to a party line, and legitimize appeal to authority as a form of argument.”
One of the most pernicious effects of the IPCC has been to popularize the idea of an international “consensus” in public policy discourse, shutting down rigorous scientific debate that might otherwise have taken place. Scientific findings will always be open to various interpretations. We should not leave it to a small group of AI researchers to judge what is safe and permissible. An IPAIS would homogenize and politicize AI research, jeopardizing the credibility of the entire AI research agenda.
Second, a global AI governance body would discourage jurisdictional competition. The IPCC sets arbitrary targets and deadlines by which nations are supposedly required to act. But different nations have different risk tolerances and philosophical values. Some will accept more uncertainty, risk, and disruption in exchange for faster progress and economic growth. Instead of demanding uniform commitments from nations, we should encourage countries to implement diverse policies reflecting diverse viewpoints, and then see what works.
Third, regulations developed by international precautionary bodies, based on manufactured consensus, will inevitably be too pessimistic and too restrictive. No one should be surprised that the IPCC has incorporated the most alarmist emissions scenarios, given the historical tendency of intellectuals to see themselves as the saviors of humanity.
AI has immense potential to benefit civilization, from driving healthcare innovation to promoting environmental sustainability. But overly strict regulations, based on alarmist predictions, will block beneficial applications of AI. This is particularly true if AI systems are subject to centralized control procedures.
The dangers of AI, like those of other technologies, are real. As AI advances, thoughtful governance is necessary. But the solution does not lie in a globalist technocracy directing its evolution; that would concentrate too much power in too few hands. Decentralized policies targeted at real-world harms, combined with research and education drawing on a broad range of perspectives, provide a path forward. Elites with dystopian visions have already led us astray; let’s not let them do it again with AI.