Turing Award Won by Programmer Who Paved Way for Supercomputers

Source: The New York Times

In the late 1970s, as a young researcher at Argonne National Laboratory outside Chicago, Jack Dongarra helped write computer code called Linpack.

Linpack offered a way to run complex mathematics on what we now call supercomputers. It became a vital tool for scientific labs as they stretched the boundaries of what a computer could do. That included predicting weather patterns, modeling economies and simulating nuclear explosions.

On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, said Dr. Dongarra, 71, would receive this year’s Turing Award for his work on fundamental concepts and code that allowed computer software to keep pace with the hardware inside the world’s most powerful machines. Given since 1966 and often called the Nobel Prize of computing, the Turing Award comes with a $1 million prize.

In the early 1990s, using the Linpack (short for linear algebra package) code, Dr. Dongarra and his collaborators also created a new kind of test that could measure the power of a supercomputer. They focused on how many calculations it could run with each passing second. This became the primary means of comparing the fastest machines on earth, of understanding what they could do and how they needed to change.
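The idea behind the benchmark can be sketched in a few lines: solve a dense system of n linear equations, then divide the number of floating-point operations that solve is known to require, roughly 2n^3/3 + 2n^2, by the elapsed time. The short Python program below illustrates that arithmetic; it is not the official benchmark code, and it leans on NumPy, whose solver calls the LAPACK library behind the scenes.

    # Toy Linpack-style measurement (illustration only, not the official benchmark).
    import time
    import numpy as np

    n = 2000                                 # problem size; the real benchmark tunes n to the machine
    A = np.random.rand(n, n)                 # dense random matrix
    b = np.random.rand(n)                    # right-hand side

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                # LU factorization plus solve, via LAPACK
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # standard Linpack operation count
    print(f"{flops / elapsed / 1e9:.2f} billion operations per second")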

“People in science often say: ‘If you can’t measure it, you don’t know what it is,’” said Paul Messina, who oversaw the Energy Department’s Exascale Computing Project, an effort to build software for the country’s top supercomputers. “That’s why Jack’s work is important.”

Dr. Dongarra, now a professor at the University of Tennessee and a researcher at nearby Oak Ridge National Laboratory, was a young researcher in Chicago when he specialized in linear algebra, a form of mathematics that underpins many of the most ambitious tasks in computer science. That includes everything from computer simulations of climates and economies to artificial intelligence technology meant to mimic the human brain. Developed with researchers at several American labs, Linpack, a collection of reusable routines known as a software library, helped scientists run this math on a wide range of machines.

“Basically, these are the algorithms you need when you’re tackling problems in engineering, physics, natural science or economics,” said Ewa Deelman, a professor of computer science at the University of Southern California who specializes in software used by supercomputers. “They let scientists do their work.”
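What a software library means in practice is that no lab has to write its own Gaussian elimination; a program simply calls prewritten, well-tested routines. The sketch below, in Python with SciPy (whose routines wrap LAPACK, Linpack's modern descendant), shows the pattern such libraries popularized: pay once to factor a matrix, then reuse that factorization to solve cheaply for many right-hand sides. The routine names here are SciPy's, not Linpack's.

    # Factor once, solve many times: the pattern dense linear algebra libraries popularized.
    # (Illustrative sketch; the routine names are SciPy's, not Linpack's.)
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[4.0, 1.0], [1.0, 3.0]])   # coefficient matrix
    lu, piv = lu_factor(A)                    # the expensive step: LU factorization

    for b in (np.array([1.0, 2.0]), np.array([5.0, 6.0])):
        x = lu_solve((lu, piv), b)            # the cheap step: reuse the factorization
        print(x)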

Over the years, as he continued to improve and expand Linpack and tailor the library for new kinds of machines, Dr. Dongarra also developed algorithms that could increase the power and efficiency of supercomputers. As the hardware inside the machines continued to improve, so did the software.

By the early 1990s, scientists could not agree on the best ways of measuring the progress of supercomputers. So Dr. Dongarra and his colleagues created the Linpack benchmark and began publishing a list of the world’s 500 most powerful machines.

Updated and released twice each year, the Top500 list (whose name is written without a space) led to a competition among scientific labs to see which could build the fastest machine. What began as a battle for bragging rights developed an added edge as labs in Japan and China challenged the traditional strongholds in the United States.

“There is a direct parallel between how much computing power you have inside a country and the types of problems you can solve,” Dr. Deelman said.

The list is also a way of understanding how the technology is evolving. In the 2000s, it showed that the most powerful supercomputers were those that connected thousands of tiny computers into one gigantic whole, each equipped with the same sort of computer chips used in desktop PCs and laptops.

In the years that followed, it tracked the rise of “cloud computing” services from Amazon, Google and Microsoft, which connected small machines in even larger numbers.

These cloud services are the future of scientific computing, as Amazon, Google and other internet giants build new kinds of computer chips that can train A.I. systems with a speed and efficiency that were never possible before, Dr. Dongarra said in an interview.

“These companies are building chips tailored for their own needs, and that will have a big impact,” he said. “We will rely more on cloud computing and eventually give up the ‘big iron’ machines inside the national laboratories today.”

Scientists are also developing a new kind of machine called a quantum computer, which could make today’s machines look like toys by comparison. As the world’s computers continue to evolve, they will need new benchmarks.

“Manufacturers are going to brag about these things,” Dr. Dongarra said. “The question is: What is the reality?”
