The EU offers supercomputer access to companies that meet its guidelines for responsible AI
We live in a world refracted through an American lens. Thanks to an enthralling combination of investment and regulatory freedoms, high-quality education and a strong pipeline between universities and industry, as well as the fruits of its global cultural, economic, and military supremacy, the United States has set the pace of technical development for the last six decades.
But in the race to develop AI, Europe—or at least European Commission president Ursula von der Leyen—thinks it stands a fighting chance. In a forthright, confident speech this September, von der Leyen argued that Europe, alongside other national and supranational partners, should “lead the way on a new global framework for AI, built on three pillars: guardrails, governance, and guiding innovation.”
Von der Leyen took the stage in Strasbourg, France—seat of the European Union’s parliament—to deliver her State of the Union address on September 13. She told the assembled audience that the 450-million-strong trading bloc could help shape innovation responsibly through a combination of sticks and carrots. The stick was the proposed EU AI Act, which, if passed, would be the world’s first large-scale AI law; having already cleared the first stage of legislative discussion months earlier, it could come into force in as little as a year. The carrot was Europe’s might in supercomputing.
Supercomputers are to AI what particle accelerators are to physics: a necessity for cutting-edge research. While AI models can be queried from personal devices, they must be trained on supercomputers, which can process huge amounts of data in parallel. Those with the computational muscle stand not only to advance the capabilities of AI but also to shape its ethics and governance. In essence, the availability of supercomputers doesn’t just facilitate the development of AI; it can steer its direction.
“Thanks to our investment in the last [few] years, Europe has now become a leader in supercomputing,” she told the audience. Europe as a whole has eight supercomputers at research centers, including the third and fourth most powerful in the world according to the June 2023 TOP500 list: Finland’s LUMI and Italy’s Leonardo systems, which offer 550 petaflops and 323 petaflops of peak performance, respectively. At a time when GPUs are widely reported to be in short supply, von der Leyen said, “We need to capitalize on this.”
As Elon Musk buys thousands of GPUs for his company xAI’s projects, and Saudi Arabia does the same at an estimated cost of $120 million, the disparity in hardware capabilities has had a real impact on European startups. It is widely said that many promising European startups have had to leave the continent for the United States in order to keep growing their companies—though hard data is scarce. And in a recent survey published by The Applied AI Institute for Europe, half of European AI startups said the bloc’s proposed AI Act would throttle their growth.
Access to the level of computing power offered by von der Leyen could change that. Enabling startups to train more advanced AI models faster would help European businesses keep pace with U.S. competitors that, until now, have had vastly greater resources. And because the European Union is the one doling out this access, it can set standards with potentially wider impact—much as 2018’s General Data Protection Regulation (GDPR) set data-rights standards globally.
At the core of the EU’s concerns lies the unbridled use, or potential misuse, of AI, which could, if left unchecked, infringe upon human rights, privacy, and data-protection norms. The Act acknowledges the transformative potential of AI but warns against its risks. It does that by dividing uses of AI into degrees of potential danger. For instance, using AI to power a “social credit” system like China’s would be banned outright. The use of AI in surveillance tech would be deemed high risk. Chatbots are classed as “limited risk” systems, while using AI to power email spam filters is considered acceptable. By encouraging responsible development at the earliest stage of a business’s pipeline, European regulators’ thinking goes, the Act will codify best practices and avoid some of the overreach that has blighted untrammeled tech development in the past.
The EU’s announcement has been welcomed by those focused on AI ethics. “Up until now, startups have been able to access grants and other opportunities without having to think about ethics,” says Carissa Véliz, associate professor at the Institute for Ethics in AI at Oxford University. “All they had to do was focus on their product from a technical and commercial point of view.” If ethics did ever come up, it was “often too late to meaningfully design an ethical product.”
“To have ethics as a requirement for access to funding is a vital way of ensuring best practices,” says Véliz. “It’s something that should be required not only by public institutions, but by venture capital as well. It’s in everyone’s long-term interests to protect democracy.”
However, questions remain about what that process would look like. “It’s encouraging that President von der Leyen wants to position the EU as a bastion of ‘responsible AI’ [by] making access [to] the bloc’s supercomputing capabilities contingent on voluntarily meeting the terms of the forthcoming AI Act,” says Mike Katell, ethics fellow at the Alan Turing Institute, the UK’s national institute for data science and AI.
But Katell points out that “responsible AI” is itself a broad-brush term that can often shroud a lot of misdeeds simply by its framing. “When the same term is used by experts who are truly worried about AI harms and by corporate mouthpieces attempting to allay those concerns, we have a problem with meaning,” he says. The key question to ask of responsible AI is “Responsible to whom?”
Katell worries that larger companies can co-opt the language of responsible AI, using publicity-friendly initiatives like internal ethics boards to cover their less-than-responsible intentions. Under the EU’s new policy, that lip service could also buy these companies access to Europe’s supercomputing power.
Meanwhile, Europe’s startups are watching closely to see how von der Leyen’s proposal will shake out. “Our take on the regulation is that it is important to understand how implementation will be carried out,” says Dalia Lasaite, CEO of the Lithuania-based AI startup CGTrader. Access to Europe’s supercomputer capabilities would be a boon for the work her company does: high-quality 3D modeling supported by AI. “We welcome the possibility to use [these] supercomputers, as the compute is a major part of the cost for many AI projects,” Lasaite says.
Lasaite is convinced that CGTrader would be grouped in the lowest possible risk category as defined by the proposed law, meaning that it would not face any regulatory hurdles to gain access to the supercomputers. Still, Lasaite is somewhat cautious about Europe’s regulatory approach. “AI is a very fast-moving field, and the whole market can change in months,” she says. “It’s important that regulation [does] not slow down the progress of EU startups versus U.S. ones.”
Over half of the European startup founders and venture capital investors surveyed by The Applied AI Institute for Europe in December 2022 believed the EU AI Act would take too restrictive an approach. “Regulation is needed, but it must not hamper the innovation of the European economy, especially for small and medium-sized enterprises [that] cannot afford lawyers,” said Chloe Pledel, Hub France IA’s head of European and regulatory affairs, in the report. Startups “need simple and clear regulations.”
Still others argue the EU should go further and directly fund European startups building new AI technology to compete with large language models like OpenAI’s ChatGPT and Google’s Bard. Even before the launch of ChatGPT, industry advocates such as the European AI & Society Fund were publishing entire reports outlining why AI companies should be better supported by their governments. While supercomputer access might bring European startups closer to technical parity with those in the U.S., without more financial support they could remain at an overall disadvantage.
While industry advocates contend that the EU AI Act may cause Europe to miss out on innovations that could bring tremendous societal benefit to the continent, the law’s proponents argue that caution is warranted. Andrea Miotti, head of AI policy and governance at Conjecture, a London-based AI company, believes, in part, in the existential risk of AI. He argues that the AI industry writ large should set upper compute thresholds, beyond which systems cannot be developed. “The priority is to slow down the death race at the cutting edge, and to limit the proliferation of various systems—we don’t have too much time,” he says.
Europe’s decision to open up access to its supercomputers is designed both to redress the imbalance between European and American AI companies and to course-correct a global technology race that until now has gone largely unregulated.
“Having this dual approach of helping people build to safe thresholds, and regulating and monitoring above those, is good,” says Miotti.
Whether it will succeed in either of these goals remains to be seen.