The FDI angle:

  • Supercomputers are powerful, specialised machines used to conduct cutting-edge research and train artificial intelligence (AI) models.
  • They are part of leading innovation ecosystems and can help these locations attract foreign direct investment (FDI).
  • Why does this matter? Access to supercomputers has become even more important for companies and countries in the battle for technological supremacy.

In the 1980s, Lothar Späth broke new ground in regional economic policy. The former minister-president of the south-west German state of Baden-Württemberg sought to develop high-performance computing (HPC) to boost the competitiveness of local companies, such as automakers Porsche and Daimler.


After a 1985 trip to meet computing pioneer Seymour Cray in the US city of Minneapolis, Mr Späth decided to purchase a Cray-2 — then the world’s fastest computer. This laid the foundation for the University of Stuttgart to become one of Europe’s leading hubs for supercomputers, which are specialised machines offering far higher performance than general-purpose computers.

“They invented the first model of economic use of supercomputing power,” says Ricky Wichum, a technology historian who wrote an account of Stuttgart’s rise as a supercomputing centre. 

Supercomputers have become critical in the modern economy. They are used for everything from climate modelling to drug discovery, finance, product development and special effects in movies. The need for faster and more efficient HPC infrastructure has also grown with the race to develop artificial intelligence (AI).

“Historically, [supercomputers] have been the concern of a group of elite researchers,” says James Wang, product marketing specialist at Cerebras, a company that builds computer systems for AI. “The big difference post-2023 is that AI is now a ubiquitous consumer application. It is a gross domestic product and productivity multiplier for the entire workforce. And you need supercomputers to run it.” 

Just as with the space industry, HPC has transitioned from purely state-funded projects to private actors investing billions of dollars. The pace of development has been remarkable. 

When the Top500 list — a ranking of the 500 most powerful non-distributed computer systems in the world — was first published in 1993, a Thinking Machines system at the Los Alamos National Laboratory in New Mexico held the world’s greatest processing power. With its 1024 processors, it could perform 59.7 billion floating-point operations per second (a measure known as gigaflops). Today, the most powerful supercomputer can perform 1194 quadrillion of these same operations per second (or 1194 petaflops) — roughly 20 million times faster. 
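The magnitude of that leap can be checked with a few lines of arithmetic. The sketch below is illustrative only; the figures are the Top500 performance numbers quoted above:

```python
# Back-of-the-envelope comparison of the 1993 and 2023 Top500 leaders.
# "Flops" counts floating-point operations per second; the figures are
# the Top500 performance numbers quoted in the text above.

GIGA = 10**9    # 1 gigaflop = a billion operations per second
PETA = 10**15   # 1 petaflop = a quadrillion operations per second

los_alamos_1993 = 59.7 * GIGA   # Thinking Machines system, 1024 processors
frontier_2023 = 1194 * PETA     # Frontier at Oak Ridge National Laboratory

speedup = frontier_2023 / los_alamos_1993
print(f"Speed-up over three decades: {speedup:,.0f}x")  # 20,000,000x
```

The same calculation explains why the Top500 refreshes twice a year: a factor of 20 million over 30 years implies the leading machine is overtaken within just a few years of deployment.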


Governments and companies around the world, including in Europe, the US, China, Japan and the UAE, are increasingly investing in new HPC infrastructure to keep up with this rapid development and stay competitive against rivals.

However, the growing HPC industry has become increasingly fraught with risks. Supercomputing, and the semiconductors that underpin it, are intimately entwined with geopolitics. As locations across the globe use HPC to drive competitiveness through cutting-edge research and commercialisation, experts worry that a shift towards more nationalised and secretive supercomputing could stifle innovation and the economic development benefits it can bring.

From atom bombs to exascale

The rolling hills and narrow valleys of eastern Tennessee are the home of more than just country music. Around 40 kilometres west of the city of Knoxville sits the US Department of Energy’s (DOE) largest multi-programme science and technology centre. 

The Oak Ridge National Laboratory (ORNL) has played a key role in assuring US hegemony since its founding in 1943. Originally known as the Clinton Laboratories, Oak Ridge was one of the major sites that enriched uranium for the Manhattan Project — the top-secret programme that led to the world’s first atomic bomb.

Some 80 years after its establishment, ORNL continues to play a role in US national security and competitiveness. The 4,000-acre leafy campus is home to a facility with two of the world’s fastest supercomputers — Frontier and Summit — which, respectively, rank first and seventh on the latest Top500 list, released in November 2023.

“HPC is a critical ingredient in the recipe for competitive success in today’s highly interconnected global economy,” says Gina Tourassi, the associate laboratory director for computing at ORNL. 

In 2022, ORNL announced that Frontier had become the first supercomputer in the world to reach exascale, meaning it can perform at least one quintillion (10^18) calculations per second. Its peak performance is 1194 petaflops — or 1.194 exaflops — according to Top500.

This is almost double the speed of Aurora, the second-fastest supercomputer on the Top500 list, which runs at Argonne National Laboratory, another DOE-funded facility in Lemont, Illinois. 

Frontier is currently being used by 1580 researchers across 240 active projects, including at start-ups and Fortune 500 companies. Teams are working on computational problems in a wide array of areas from nuclear physics and climate science to mechanical engineering and drug discovery.

By using HPC for modelling, simulation, visualisation, large-scale data analysis and AI, Ms Tourassi says industry researchers are able to “accelerate innovation while lowering its risk, resulting in lower costs, faster time to market and increased revenue”.

It is also used as a means to drive local economic growth. For instance, the East Tennessee Economic Development Agency pitches ORNL as evidence for how businesses can gain access to technology and competitive advantage by investing in the region.

Ecosystem essentials

Supercomputers, however powerful, do not guarantee that an ecosystem is able to compete globally and attract businesses.

“It is not only the hardware. We are in a virtual world and speed matters. It’s the complete package that makes supercomputing successful,” says Christoph Gümbel, a former executive at German automaker Porsche who now advises automotive companies at Future Matters, a consultancy. 

This is highlighted by a bet made by the New Mexico government in 2007 that investing in a supercomputer would spur high-tech economic growth and job creation. 

Almost $14m of public money was spent on the Encanto supercomputer, but it was later scrapped after failing to find enough customers. The Rio Grande Foundation, a local think tank, estimates the state only managed to generate $300,000 from the investment, marking a dramatic loss.

This reflects the fact that supercomputers can be used remotely from any location, making it unnecessary for companies to physically set up next to them. 

“You won’t have new businesses coming in just because they have supercomputing access. What makes the difference is if you can build an intellectual infrastructure around the supercomputing centre in a university or research lab setting,” says Horst Simon, a German computer scientist and co-founder of the Top500 list, who is now based in Abu Dhabi, UAE.

Turning paper to data

In north-central Finland, supercomputers and the data centres they occupy are being used to recover from post-industrial decline. When United Paper Mills shut down its factory in the town of Kajaani back in December 2008, more than 500 people lost their jobs. The closure marked the end of an industry key to Kajaani’s development for almost 100 years. 

The historic site in Kajaani was in desperate need of a new purpose and new employers. Today, the former paper mill has been transformed into the Renforsin Ranta business park, which is home to 40 companies, a data centre and Europe’s fastest supercomputer. 

The Large Unified Modern Infrastructure (Lumi), which has a processing power of 379.7 petaflops, epitomises efforts to bolster HPC infrastructure across Europe. More than €200m was invested in the Lumi system, which is housed in the data centre and powered entirely by hydroelectricity from a nearby dam. While the construction of Lumi employed roughly 100 people across multiple countries, its current operations require just 10 full-time staff in Kajaani — significantly fewer than the hundreds of workers at the old paper mill. 

Lumi’s location in a relatively unknown town was due to its abundance of cheap renewable energy — a crucial factor in where to put these energy-intensive machines. Waste energy from the system also accounts for about 20% of Kajaani’s heat. But its significance is far greater than just giving a new direction to the local economy.

“Lumi is the first time in HPC history in Europe when so many countries invested real money outside their own country borders,” says Kimmo Koski, the managing director of CSC, the Finnish state-owned centre for IT research and operator of Lumi. Several countries are part of the Lumi consortium, including Finland, Belgium, the Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden and Switzerland.

Two other world-leading supercomputers have been launched elsewhere in the EU. MareNostrum5 in Barcelona, Spain, a €202m machine able to achieve 138.2 petaflops, was the latest to be inaugurated in December 2023. It follows in the footsteps of Leonardo in Bologna, Italy — a machine with a peak performance of 238.7 petaflops.

All these projects are supported by the EuroHPC Joint Undertaking, a joint initiative set up in 2018 between the EU, a selection of ‘associated countries’ and private partners to develop a world-class supercomputing ecosystem in Europe. The initiative set a €7bn budget for use between 2021 and 2027.

The reason for the push to fund supercomputers, a senior European Commission official explains, was two-fold. Firstly, European companies were increasingly going outside the bloc to get time on supercomputers. Secondly, the demand from the scientific community for access to the existing HPC systems was about seven times greater than the available capacity.

The UK is also investing heavily in supercomputers, setting aside £900m for three projects in Edinburgh, Bristol and Cambridge. But not all industry leaders are convinced this will be successful, given how quickly supercomputer systems become outdated.

Keeping up with state-of-the-art AI 

While there is an aim to utilise 20% of Lumi’s capacity for industry research, Mr Koski says the dilemma of publicly funded supercomputers is how to get the private sector to use them. 

“The main target is not necessarily to support European industry, but to support European research, which needs resources,” says Mr Koski. One prominent example of a company that has used Lumi is Silo AI, which is training its large language model (LLM) using the system.

“There are differences not only between supercomputers, but pertaining to how different geographic regions have access to supercomputers and what the implication of that is,” says Peter Sarlin, co-founder and CEO of Silo AI.

The key challenge moving forward, Mr Sarlin states, is how to divide access to supercomputer capacity among researchers and eligible companies all vying for precious uptime. This issue, he adds, is not exclusive to Lumi, but something all supercomputer operators face.

A research initiative might not suffer much if it takes six or nine months to complete. However, “if you’re running a commercial activity, and you are concerned about having a state-of-the-art [AI] model, you wouldn’t want to spend nine months training a specific model, because it is highly likely that after nine months the state of the art has changed.”

Indeed, the EU recently embarked on a strategy to establish access to its supercomputers for “ethical and responsible” AI start-ups, as announced by European Commission president Ursula von der Leyen in her 2023 State of the Union address. 

Silo AI’s relationship with Lumi and the CSC, Mr Sarlin explains, has been very much give-and-take, as it also required active participation from the start-up to ensure that Lumi was built in such a way that its software and infrastructure would support the training of LLMs. 

This, he adds, was not entirely unproblematic, because Lumi is based on AMD rather than Nvidia hardware, and the latter leads the market in graphics processing units (GPUs) built for AI training. 

GPUs were originally designed to render images for video games, but are also uniquely suited to applications like machine learning because they can perform many calculations in parallel, processing far more data than their central processing unit (CPU) counterparts.
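The data-parallel style that suits GPUs can be illustrated in miniature with NumPy, where one operation is applied across a whole array at once rather than element by element in a loop. This is an illustrative sketch of the programming model, not actual GPU code, and the function names are ours:

```python
import numpy as np

# CPU-style: process one element at a time in a sequential loop.
def scale_sequential(data, weight, bias):
    return [weight * x + bias for x in data]

# GPU-style (data-parallel): express the same multiply-add as a single
# operation over the entire array, letting the hardware work in bulk.
def scale_parallel(data, weight, bias):
    return weight * np.asarray(data) + bias

samples = [1.0, 2.0, 3.0, 4.0]
assert np.allclose(scale_sequential(samples, 2.0, 0.5),
                   scale_parallel(samples, 2.0, 0.5))
```

Training a neural network is, at heart, millions of such identical multiply-adds, which is why hardware built for bulk array operations dominates the field.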

The AI (super)chip leader

About 95% of Lumi’s performance comes from GPUs designed by American chip designer AMD. While these, as Silo AI’s experience would suggest, can be used to train large AI models, they do not have the advantages AMD’s main GPU competitor Nvidia has spent years cultivating.

About a decade ago, Nvidia turned all of its focus to chips that could be used to train AI — a strategy that has paid off handsomely. The generative AI boom helped Nvidia shares rise by more than 200% in 2023, with the company hitting a $1tn valuation in June — making it the seventh US company to reach this benchmark.

Nvidia also operates its own supercomputers. Its most powerful system currently is Eos, installed in 2023 and ranking ninth on the most recent Top500 list. 

Europe’s first exascale supercomputer, Jupiter, which is intended to come online in late 2024 at the Jülich Supercomputing Centre in North Rhine-Westphalia, Germany, will feature 24,000 of Nvidia’s latest AI superchips — the Grace Hopper GH200. It will also mark another European first: a homegrown HPC microprocessor from French start-up SiPearl, founded in 2019. In April 2023, SiPearl raised a €90m round of funding from investors, including British chip designer Arm, the French state and the European Innovation Council. A second exascale system is planned for 2025 in Bruyères-le-Châtel, France.

“We cannot depend forever on chips that are non-European,” the senior European Commission official says. “This was a dream, five or six years ago, and it’s a dream that is becoming a reality — from a start-up that nobody was expecting to be there some time ago.” 

Access to high-performance chips has opened another front in the geopolitical rivalry between the West and China. In September 2022, the White House imposed export restrictions on Nvidia AI chips to China. 

Competitiveness has become more dependent on access to supercomputers. As with any competitive endeavour, this brings secrecy. Companies, intelligence agencies and governments often do not reveal anything about the HPC at their disposal. For instance, the Top500 project has found an increasing unwillingness from China to submit any information.

“We have no information about the Chinese supercomputers now,” says Jack Dongarra, another co-founder of the Top500 list. 

Disaster recovery and preparedness

One country that has long bet on local technology for the development of its HPC systems is Japan. Fugaku, which Riken and tech conglomerate Fujitsu took seven years to develop jointly, is entirely Japanese-made. This is unlike machines in Europe, which rely on components and technology from the US and elsewhere.

Fugaku, with a peak performance of 442 petaflops, was the successor to the K supercomputer, which was decommissioned in August 2019 after years of featuring among the fastest machines on the Top500 list. The constant investment in HPC reflects the importance placed on it by Japanese policy-makers.

“The utilisation of supercomputers, including Fugaku, is of utmost importance in enhancing international competitiveness. The need for high-performance supercomputers to simulate challenging issues with greater accuracy is essential for meeting the demands of various companies,” says Yutaka Imamoto, the director of the Foundation for Computational Science (Focus), an organisation established by the city of Kobe and the Hyogo prefectural government to promote the use of supercomputers in Japan’s industrial sector. 

The machine is used extensively by major Japanese industrial companies, including automaker Mazda, which uses AI and simulations to automate its design processes.

Beyond being a symbol of national pride, Fugaku’s positioning in Kobe is no coincidence. The port city suffered a 1995 earthquake, which killed more than 6,000 people, devastated the local economy and diminished Kobe’s position as Japan’s main trading hub.

Fugaku is not only a tool for post-disaster economic recovery. It is used to conduct research and mitigate future disasters like tsunamis, earthquakes and public health emergencies.

This was borne out during the Covid-19 pandemic, when Fugaku was used to model whether it would be safe for fans to attend the Tokyo Olympics. The supercomputer demonstrated its value in reducing the time needed to assess threats and come up with solutions. A political decision was ultimately taken not to allow any crowds.

New challengers

It is not only major advanced economies joining the supercomputer race. The oil-rich emirate of Abu Dhabi has long invested in other industries to diversify and future-proof its economy. HPC, and the research it supports, is one of its latest focus areas. 

Mr Simon, the co-founder of the Top500 project, moved to the UAE in 2023 to help set up an independent laboratory funded by the Abu Dhabi Investment Authority (ADIA). ADIA Lab is focused on basic research using computational science in areas including climate science, health and finance. 

“ADIA sees itself as a steward of Abu Dhabi’s future,” says Mr Simon, who serves as director of the lab. “This relatively small investment may contribute a lot to the wealth of Abu Dhabi”.

Supercomputers have been developed in a number of emerging markets, including Thailand, Brazil and Saudi Arabia. However, the key for supercomputers to truly drive economic growth, when it comes to AI specifically, is not to continuously build faster and faster supercomputers, according to Mr Wang of Cerebras. Rather, it is to set up many smaller clusters, because “every Fortune 500 company is going to require its own LLM at some point”.

Tesla has chosen to build its own supercomputer called Dojo, which will be used to improve software in its fleet of autonomous vehicles. In July 2023, Tesla CEO Elon Musk said the company would commit more than $1bn to the Dojo project up to the end of 2024. 

“If [Nvidia] could deliver us enough GPUs, we might not need Dojo. But they can’t, because they have so many customers,” said Mr Musk. 

German machine translation start-up DeepL, which recently raised $100m at a $1bn valuation, also grew tired of waiting for access to HPC systems and built its own. Mercury, as the company’s supercomputer is called, came online last year and runs from a data centre in the former copper mining town of Falun, Sweden.

While not willing to disclose the exact investment required for Mercury, which ranks 34th fastest on the latest Top500 list, DeepL’s founder and CEO Jaroslaw Kutylowski says, all things considered, it was the most “cost-efficient” route available. He adds that it significantly reduces the time to market for new product improvements, “achieving more with less effort”. 

A new age

Supercomputers have taken on almost mythical proportions in the minds of scientists, AI evangelists and national strategists pushing for ‘digital sovereignty’. Immense computing power and the data centres it requires have become part of a narrative of economic renaissance following deindustrialisation, even as access to the technology itself is becoming increasingly cloud-based and location-independent.

The battle for control of the world’s fastest supercomputers is one that experts believe will only intensify. Over the past 60 years, the performance of HPC has improved massively, enabling ever more complex simulations and ground-breaking innovations. 

What historians will write about how the age of exascale plays out is still unknown. But one thing is for certain: supercomputers will remain engines of innovation and of the economic development that innovation can bring.


Linnea Ahlgren is a senior editor at TNW; Alex Irwin-Hunt is fDi's global markets editor.

This article first appeared in the February/March 2024 print edition of fDi Intelligence.