AI should be a global public good
Efforts to develop artificial intelligence (AI) are increasingly being seen as a global race, even a new Great Game. Apart from the race between countries to become more competent and establish a competitive advantage in AI, enterprises are also in a contest to acquire AI talent, leverage data advantages, and offer unique services. In both cases, success would depend on whether AI solutions can be democratized and distributed across sectors.
The global AI race is unlike any other global competition, because the extent to which innovation is driven by governments, the corporate sector or academia differs substantially from country to country. Generally, though, most innovations so far have emerged from academia, with governments contributing through procurement rather than through internal research and development.
While the share of commodities in global trade has fallen, the share of digital services has risen, such that digitalization now underwrites more than 60 percent of all trade. By 2025, half of all economic value is expected to be created in the digital sector. And as governments have searched for ways to claim a position in the value chain of the future, they have homed in on AI.
Accordingly, countries ranging from the United States, France, Finland and New Zealand to China and the United Arab Emirates all now have national AI strategies to boost domestic talent and prepare for the future effects of automation on labor markets and social programs.
Still, the true nature of the AI race remains to be seen. It most likely will not be restricted to any single area, and the most important factor determining outcomes will be how governments choose to regulate and monitor AI applications, both domestically and in an international context. China, the US and other participants not only have competing ideas about data, privacy and national sovereignty, but also divergent visions of what the 21st century world order should look like.
Thus, nationalized AI programs are a hedged bet. Until now, governments have assumed that the country that reaches the finish line first will be the one that captures the bulk of AI's potential value. Perhaps it will. But the real issue is not whether that assumption is true; it is whether a nationalized approach is necessary, or even wise.
After all, to frame the matter in strictly national terms is to ignore how AI is developed. Whether data sets are shared internationally could determine whether machine-learning algorithms develop country-specific biases. And whether certain kinds of chips are treated as proprietary technology could determine the extent to which innovation can proceed at the global level. In light of these realities, there is reason to worry that fragmented national strategies could hamper growth in the digital economy.
Moreover, in the current environment, national AI programs are competing for a limited talent pool. And though that pool will expand over time, the competencies needed for increasingly AI-driven economies will change. For example, there will be a greater demand for expertise in cybersecurity.
So far, AI developers working out of key research centers and universities have found a reliable exit strategy and a large market of eager buyers. With corporations driving up the price for researchers, there is now a widening global talent gap between the top companies and everyone else. And because the major technology companies have access to massive, rich data stores that are unavailable to newcomers and smaller players, the market is already heavily concentrated.
Against this backdrop, it should be obvious that isolationist measures, not least trade and immigration restrictions, will be economically disadvantageous in the long run. As the changing composition of global trade suggests, most of the economic value in the future will come not from goods and services, but from the data attached to them. Thus, the companies and countries with access to global data flows will reap the largest gains.
At a fundamental level, the new global competition is for applications that can weigh alternative choices and make optimal decisions. Eventually, the burden of adjusting to such technologies will fall on citizens. But before that moment arrives, it is crucial that key AI developers and governments coordinate to ensure that this technology is used safely and responsibly.
Back in the days when the countries with the best sailing and navigation technologies ruled the world, the mechanical clock was a technology available only to the few. This time is different. If we are to have superintelligence, then it should be a global public good.
Mark Esposito, co-founder of Nexus FrontierTech, is a professor of Business and Economics with appointments at Harvard University and Hult International Business School. Terence Tse, co-founder of Nexus FrontierTech, is a professor at ESCP Europe Business School in London and serves as an adviser to the European Commission. And Joshua Entsminger is a researcher at Nexus FrontierTech and a Senior Fellow at the École des Ponts Center for Policy and Competitiveness.