Many scientific applications, such as computational fluid dynamics, seismic imaging and quantum computing, require high-performance computers. To make these applications run faster, a great deal of work has gone into parallelising computation across CPU cores. However, because of a problem known as the power wall, this approach is becoming less effective. In this project, we want to implement the computation on a reconfigurable hardware chip called a field-programmable gate array (FPGA), instead of running it in software on a CPU. The result is expected to be both faster and more power-efficient.
Problem Summary
There are many applications in industry, bioinformatics, social networks and data mining that become more compute-intensive every day. IoT (Internet of Things) data is projected to grow more than tenfold during the next two years, and new advanced AI algorithms are becoming more time-consuming by a factor of 3 to 4 every year. At the same time, there is growing demand in bioinformatics and in numerical simulations such as Finite Element Analysis (FEA), Finite-Difference Time-Domain (FDTD) and Computational Fluid Dynamics (CFD). Together with digital twins and seismic image processing, these allow companies to explore a huge design space for their products, enabling them to lower costs or find new solutions for existing products.
For about 40 years, computer performance grew in line with Moore's law (a doubling every two years), mostly through scaling transistors down, increasing processor clock frequencies, and optimising processor architectures.
The channel length of a typical CPU transistor was 22 nm in 2012 and about 10 nm in 2018 (it is now at 8 nm). In short, the rate at which we are making chips denser has slowed to the point where it no longer follows Moore's law. But that is not the only problem. Because of the power wall, designers have not managed to raise clock frequencies as much as before: between 2012 and 2018, they rose only from 4 GHz to 4.5 GHz. In total, single-core processor performance has improved by a factor of 35,000 to 50,000 compared with the VAX-11/780 (a processor launched in 1977). That averages out to an improvement of 9% per year, but if we only look at the data after 2016, the average improvement is much lower, at 3.5% per year.
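For reference on how such annual rates compound: an average improvement of r per year sustained over n years multiplies performance by a total factor of

    F = (1 + r)^n.

For example, 3.5% per year over a decade gives 1.035^10 ≈ 1.41, i.e. only about a 41% total gain.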
To address this performance wall for researchers, SINTEF decided to run a proof-of-concept project to evaluate how much speed-up can be gained by using reconfigurable computing for these kinds of applications.
The project started in November of 2020 and will continue in 2021, as a starting point for commercialisation of this technology.
What is an FPGA?
Field-programmable gate arrays are programmable devices that make it possible to implement a digital circuit in silicon using ready-made gates and programmable switches between them. Having a pool of logic gates, and flexibility in how they are connected, makes it possible to implement a compute-intensive application in a dataflow manner instead of with the traditional control-flow computing method.
An FPGA can be reprogrammed many times, and for each application we can tailor its internal logic to process the raw data faster and with lower power consumption.
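To make this concrete, here is a minimal sketch of how a compute kernel can be described for an FPGA using a C++ high-level synthesis (HLS) flow such as Vitis HLS. The function, the vector-add example and the pragma choices are illustrative assumptions, not code from this project:

    // Minimal HLS-style vector-add kernel (illustrative sketch, not project code).
    // The #pragma HLS lines are hints to an HLS compiler such as Vitis HLS;
    // a regular C++ compiler simply ignores unknown pragmas.
    extern "C" void vadd(const float *a, const float *b, float *out, int n) {
    #pragma HLS INTERFACE m_axi port=a bundle=gmem
    #pragma HLS INTERFACE m_axi port=b bundle=gmem
    #pragma HLS INTERFACE m_axi port=out bundle=gmem
        for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1  // aim for one result per clock cycle once the pipeline is full
            out[i] = a[i] + b[i];
        }
    }

Instead of executing instructions one by one, the HLS compiler turns the loop into a fixed pipeline of adders and memory ports laid out on the FPGA fabric.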
Dataflow Computing
In a dataflow architecture, instead of the program consisting of instructions for the hardware to execute, the program is realised directly in hardware elements. For many years, this kind of computing has been more complex and expensive than the von Neumann, or control-flow, architecture that traditional processors are based on. However, with emerging high-end FPGAs, it is now possible to make dataflow computers affordable for many applications.
With a dataflow approach, we can use FPGAs not only in parallel but also in series. This gives us the opportunity to implement longer dataflow engines spanning several FPGAs. Developers get higher performance, and the system consumes less power. The latter is particularly interesting for cloud computing, because lower power consumption means lower maintenance costs and cheaper cooling systems.
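As a conceptual illustration of the difference (a plain-C++ sketch, not hardware code): in control-flow computing, a single instruction stream visits each data element, whereas a dataflow engine lays the operations out as physical stages that the data streams through. The three stages below stand in for hardware blocks; chaining FPGAs in series corresponds to extending this chain of stages:

    #include <cstdio>

    // Conceptual sketch: three pipeline "stages", each standing in for a
    // hardware block on a dataflow engine. Data streams through all stages;
    // there is no instruction fetch/decode, only the operations themselves.
    static float stage_scale(float x)  { return 2.0f * x; }              // stage 1
    static float stage_offset(float x) { return x + 1.0f; }              // stage 2
    static float stage_clamp(float x)  { return x > 10.f ? 10.f : x; }   // stage 3

    int main() {
        const float input[5] = {1, 2, 3, 4, 5};
        for (float x : input) {
            // In hardware, all three stages run concurrently on different
            // samples; here we simply compose them per sample.
            float y = stage_clamp(stage_offset(stage_scale(x)));
            std::printf("%.1f -> %.1f\n", x, y);
        }
        return 0;
    }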
Scalable HPRC Project at SINTEF
The main goal of this project is to show that, by changing the paradigm from control-flow to dataflow processing on a reconfigurable platform, we can gain a significant speed-up. To fit this goal within the current budget and timeframe, we are taking seismic simulation code and trying to accelerate it by hand-tuning it for an FPGA card on Amazon F1 instances.
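The project's actual kernels are not shown here, but seismic simulations are typically dominated by finite-difference stencil updates, which map naturally onto deep FPGA pipelines. As a purely hypothetical sketch, a 1-D second-order acoustic wave-equation time step might look like this in HLS-style C++ (all names and parameters are illustrative):

    // Hypothetical 1-D acoustic wave-equation stencil (illustrative only).
    // u_prev/u_curr hold the wavefield at the two previous time steps;
    // u_next receives the new time step. c2dt2 = (c*dt/dx)^2 folded together.
    extern "C" void wave_step(const float *u_prev, const float *u_curr,
                              float *u_next, float c2dt2, int n) {
        for (int i = 1; i < n - 1; ++i) {
    #pragma HLS PIPELINE II=1  // stream one grid point per clock cycle
            float lap = u_curr[i - 1] - 2.0f * u_curr[i] + u_curr[i + 1];
            u_next[i] = 2.0f * u_curr[i] - u_prev[i] + c2dt2 * lap;
        }
    }

On an FPGA, the loop body becomes a pipeline that streams one grid point per clock cycle, and several such time-step stages can be chained in series across the fabric.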