Bitcoin Network is Exascale Realized

The bitcoin network is now more powerful than the top 500 supercomputers, combined. Yes, combined. Add up the computing power of the 500 fastest supercomputers in the world – billions upon billions of dollars' worth of hardware – and stack it up against the raw processing power of every computer currently mining the alternative currency bitcoin, and you'll find that the bitcoin network is in fact around eight times more powerful.
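As a rough illustration of how such a comparison is made, here is a back-of-envelope sketch. Mining hardware computes SHA-256 hashes rather than floating point operations, so any FLOPS figure for the bitcoin network rests on a loose hash-to-FLOPS conversion; the hash rate, conversion factor and Top500 total below are illustrative assumptions chosen to reproduce a roughly eightfold gap, not measured figures from the article.

```python
# Back-of-envelope version of the comparison above. Mining ASICs compute
# SHA-256 hashes, not floating point operations, so the conversion factor
# below is a loose convention; every input here is an assumed, illustrative
# figure (not from the article), chosen to land near the claimed ~8x gap.

ASSUMED_NETWORK_HASHRATE = 1.6e14   # hashes per second (assumed)
ASSUMED_FLOPS_PER_HASH   = 12_700   # rough hash-to-FLOPS convention (assumed)
ASSUMED_TOP500_COMBINED  = 0.25e18  # combined Top500 FLOPS (assumed)

bitcoin_flops_equivalent = ASSUMED_NETWORK_HASHRATE * ASSUMED_FLOPS_PER_HASH
ratio = bitcoin_flops_equivalent / ASSUMED_TOP500_COMBINED

print(f"Bitcoin network: ~{bitcoin_flops_equivalent:.2e} FLOPS-equivalent")
print(f"Roughly {ratio:.1f}x the combined Top500 under these assumptions")
```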

Exascale computing refers to supercomputers capable of at least one exaFLOPS, that is, a billion billion calculations per second. Such capacity represents a thousandfold increase over the first petascale computer, which came into operation in 2008. (One exaFLOPS is a thousand petaFLOPS, or a quintillion – 10^18 – floating point operations per second.)
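Spelled out, the prefix relationships in that definition are just powers of ten:

```latex
1\ \text{exaFLOPS}
  = 10^{18}\ \text{FLOP/s}
  = 10^{3}\ \text{petaFLOPS}
  = 10^{6}\ \text{teraFLOPS},
\qquad
\frac{10^{18}}{10^{15}} = 10^{3}\ \text{(the thousandfold jump from petascale)}.
```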

Exascale computing is considered potentially the most significant achievement in computer engineering, because it is believed to match the order of processing power of the human brain at the neural level (the functional level might be lower). It is, for instance, the target capacity of the Human Brain Project.

Titan, Oak Ridge National Laboratory: 20+ petaflops, 299,008 Opteron cores and 18,600 NVIDIA GPUs – more than 20,000,000,000,000,000 floating point operations per second.
Currently the fastest systems in the world perform between 10 and 33 petaflops, or 10 to 33 million billion calculations per second – roughly one to three percent of the speed of exascale. Put into context, if exascale computing is the equivalent of an automobile reaching 1,000 miles per hour, today's fastest systems are running at between 10 and 33 miles per hour.
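A quick check of those percentages, using only the 10 and 33 petaflops figures quoted above:

```python
# Quick check of the figures quoted above: how far current petascale
# systems are from the exascale target of 1e18 FLOPS.

EXASCALE = 1e18                      # 1 exaFLOPS, the target
fastest_today = [10e15, 33e15]       # 10 and 33 petaFLOPS, as quoted above

for flops in fastest_today:
    share = flops / EXASCALE * 100
    print(f"{flops / 1e15:.0f} petaFLOPS is {share:.0f}% of exascale")
# -> 10 petaFLOPS is 1% of exascale
# -> 33 petaFLOPS is 3% of exascale
```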
The History Of The Exascale Computer

In January 2012 Intel purchased the InfiniBand product line from QLogic for 125 million US dollars in order to fulfil its promise of developing exascale technology by 2018.

Then, in February 2013, the Intelligence Advanced Research Projects Activity started the Cryogenic Computer Complexity (C3) program, which envisions a new generation of supercomputers based on superconducting logic that operate at exascale speeds. In December 2014 it announced a multi-year contract with International Business Machines, Raytheon BBN Technologies and Northrop Grumman to develop the technologies for the C3 program.

At the end of July 2015, US President Obama, in a new executive order, called for a new initiative dedicated exclusively to supercomputing research. Titled “Creating a National Strategic Computing Initiative,” the president's order outlined plans to create the world's first exascale computer system in order to cement the country's position in high-performance computing (HPC) research and development.

The National Strategic Computing Initiative (NSCI) is a whole-of-government effort designed to create a cohesive, multi-agency Federal investment strategy, executed in collaboration with industry and academia.
By “whole-of-government effort” it is understood that the initiative will primarily be a partnership between the US Department of Energy (DOE), the Department of Defense (DOD) and the National Science Foundation (NSF), although the private sector will also be consulted.

More recently (today actually – 28th of August 2015), IBM together with GENCI – the high performance computing agency in France – announced a collaboration aimed at speeding up the path to exascale computing. The collaboration, planned to run for at least 18 months, will focus on preparing complex scientific applications for systems under development that are expected to exceed 100 petaflops – a solid step forward on the road to the exascale computer.

Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem. Supported by more than 140 OpenPOWER Foundation members and thousands of developers worldwide, the OpenPOWER ecosystem includes a wide variety of computing solutions that use IBM’s licensable and open POWER processor technology.
Do Current Architectures Matter?

We are currently following three different paths. The multicore path is built around high-end CPUs, such as Intel's x86, SPARC and IBM's POWER7. Then there is the manycore/embedded approach, which uses many simpler, low-power cores drawn from embedded systems. Finally, there is the GPU/accelerator path, which uses highly specialised processors from the gaming/graphics market, such as NVIDIA's Fermi, the Cell processor and Intel's Xeon Phi (MIC).
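As a loose illustration of the trade-off behind those three paths (a few fast cores versus many slower ones), the sketch below uses invented round numbers rather than the specs of any real system; the only point is that aggregate throughput is core count times per-core speed.

```python
# Loose illustration of the three paths above. Core counts and per-core
# speeds are invented round numbers, not specs of any real machine; the
# only point is that aggregate throughput = cores x speed per core.

paths = {
    # name:                (cores,     GFLOPS per core)   -- assumed values
    "multicore CPU":       (100_000,   20.0),
    "manycore/embedded":   (1_000_000, 2.0),
    "GPU/accelerator":     (2_000_000, 1.5),
}

for name, (cores, gflops_per_core) in paths.items():
    total_pflops = cores * gflops_per_core / 1e6   # GFLOPS -> PFLOPS
    print(f"{name:20s}: {cores:>9,} cores x {gflops_per_core:4.1f} GFLOPS/core "
          f"= {total_pflops:.1f} PFLOPS")
```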

Faster than 50 million laptops – the race to achieve exascale computing by 2020 is on.
“One way to look at the race to exascale is as a swim meet. There are three swim lanes, each heading toward the same goal. But who do you bet on to win the race? If you choose too soon, your users cannot follow you. If you choose too late, you fall behind on the performance curve. And if you choose incorrectly, your users face multiple disruptive changes in the technology they rely on,” said Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory.
The Leader Is China (for the time being)

The supercomputer that holds the current (mid-2014) speed record is Tianhe-2 at the National Supercomputer Center in Guangzhou, China. The Tianhe – ‘Milky Way’ in English – has a top speed of 33,860,000,000,000,000 computations per second, in computer speak 33.86 petaflops. It is a supercomputer of the petascale generation, a successor to IBM's Roadrunner, the first petascale computer, built in 2008.

If the growth curve of supercomputers doesn’t flatten out, we expect to see the first exascale computer around 2020. This “dinosaur” will be 1,000 times faster than the IBM Roadrunner, and 30 times faster than Tianhe-2.
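Both multipliers follow directly from figures already in the article: Roadrunner was the machine that first crossed the petaflops barrier (roughly 1 petaflops), and Tianhe-2 runs at 33.86 petaflops.

```python
# Sanity check of the multipliers above. Roadrunner is taken as ~1 PFLOPS
# (the first petascale machine, per the article); Tianhe-2 as 33.86 PFLOPS.

EXASCALE_PFLOPS = 1000.0      # 1 exaFLOPS expressed in petaFLOPS
ROADRUNNER_PFLOPS = 1.0       # approximate; it was the first to pass 1 PFLOPS
TIANHE2_PFLOPS = 33.86

print(f"vs Roadrunner: {EXASCALE_PFLOPS / ROADRUNNER_PFLOPS:.0f}x")   # ~1000x
print(f"vs Tianhe-2:   {EXASCALE_PFLOPS / TIANHE2_PFLOPS:.0f}x")      # ~30x
```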

Tianhe-2 has a peak processing speed of 33.86 quadrillion floating-point operations per second (petaflops), derived from 16,000 compute nodes, while its theoretical peak is 54.9 petaflops.
Tianhe-2 has 3,120,000 cores – the power of a million desktop computers – and at full power it consumes as much energy as 50,000 households (24 megawatts, including cooling).
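Dividing the quoted figures against one another gives a feel for what those numbers mean per node, per core and per watt; the inputs below are only the values stated above, and the outputs are simple ratios, not measurements.

```python
# Derived ratios from the Tianhe-2 figures quoted above (peak speed,
# node count, core count and power draw). Nothing here is measured;
# these are just the quoted numbers divided by one another.

PEAK_FLOPS  = 33.86e15    # 33.86 petaFLOPS peak, as quoted
NODES       = 16_000
CORES       = 3_120_000
POWER_WATTS = 24e6        # 24 MW including cooling, as quoted

print(f"per node:   {PEAK_FLOPS / NODES / 1e12:.2f} TFLOPS")
print(f"per core:   {PEAK_FLOPS / CORES / 1e9:.1f} GFLOPS")
print(f"efficiency: {PEAK_FLOPS / POWER_WATTS / 1e9:.2f} GFLOPS per watt")
```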

“To have this computer work at full power is a real challenge,” says Wilfried Verachtert, Lab Manager of the ExaScience Lab at Intel Labs Europe and Project Manager for High Performance Computing at IMEC. “Already with petascale computers, the fundamental limits of the current technology begin to come to light. We can just about work around them. But for exascale computers, we’ll need real breakthroughs, both in hardware and software.”
