In this blog post, we’ll be focusing on why FPGAs hit the sweet spot for processing power. Before we dive too deeply into this conversation, let’s address the “elephant in the room” – everything exists for a reason, even ASICs (Application Specific Integrated Circuits), CPUs (Central Processing Units), and GPUs (Graphics Processing Units). ASICs provide incredible performance per watt, and are thus very efficient, but they are also much more challenging to design and far more expensive to produce than an FPGA (Field Programmable Gate Array). On the other end of the spectrum, CPUs and GPUs have the edge in ease of programming and design, but lack the efficiency of the other two.
Without any further ado, let’s get to the list:
1) FPGAs are Reconfigurable – The configurability of your average FPGA leaves ASICs in the dust. Beyond the hard/soft IP cores that can be configured for a specific application (for example, the Arm core on our Arty Z7), the real value lies in being able to reconfigure the device (and reconfigure it again) after it has been installed – something an ASIC simply can’t do.
2) FPGAs Work in Parallel – One of the benefits that makes FPGAs such a good fit for measurement systems and other data-heavy edge computing applications, like embedded vision, is their ability to process in parallel. A CPU steps through its work largely sequentially, and even a GPU processes data in scheduled batches, but with a well-configured FPGA every part of the design runs at once: you can take in and process the next batch of information before the first batch is done, keeping latency low (a minimal sketch of this idea follows the list).
3) FPGAs Perform Time-Critical Processing – With the aforementioned low latency, engineers and developers are able to use FPGAs for applications that require very time-critical calculations, like software-defined radio, medical devices, and mil-aero systems. When you don’t have to wait as long for the processor to complete a calculation, the system can act on fresher data, and the output can be much more accurate. ASICs can achieve even lower latency, but again, they are locked to a single specific application. For prototyping and design, an FPGA is the more forgiving choice.
4) FPGAs Have Optimal Performance per Watt – Compared with a CPU or GPU, an FPGA delivers higher performance per watt (though the gap narrows when using floating-point arithmetic). An FPGA’s power consumption can be roughly 3 to 4 times lower than that of a GPU doing comparable work. An ASIC’s operating cost is far and away the best, but its high initial cost (sometimes in the millions) does a lot to offset that advantage.
5) No OS Overhead – Even if the latency and computational power of a CPU/GPU were comparable to an FPGA’s, that edge would be lost to the necessity of running an operating system. The OS drags down processing efficiency: resources have to be dedicated to it, which increases the power used and leaves less compute available for the actual workload.
6) FPGAs are Essentially Blank Canvases – While an ASIC’s functionality must be defined before manufacturing and CPUs/GPUs are optimized for a narrow set of applications, an FPGA’s blueprint is almost completely user defined. With the right knowledge of HDLs (Hardware Description Languages), an engineer can configure the FPGA fabric to tackle nearly any function, and in a lot of cases, multiple functions at once (see the second sketch after this list). In addition, FPGAs offer huge interface flexibility, which has recently been advanced even further by the rise in popularity of SoCs (System on Chip; Xilinx’s Zynq-7000 SoC can be found on our ZedBoard), which include a CPU alongside the FPGA fabric.
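To make point 2 a bit more concrete, here is a minimal Verilog sketch (the module and signal names are made up for illustration, not taken from any specific reference design). Because each channel gets its own dedicated logic in the fabric, all four samples are scaled on the same clock edge rather than one after another:

```verilog
// Hypothetical example: scale several input channels in parallel.
module parallel_scale #(
    parameter WIDTH    = 12,   // sample width, e.g. a 12-bit ADC reading
    parameter CHANNELS = 4     // number of independent input channels
)(
    input  wire                          clk,
    input  wire [CHANNELS*WIDTH-1:0]     samples_in,  // packed input samples
    output reg  [CHANNELS*(WIDTH+2)-1:0] samples_out  // packed scaled outputs
);
    integer i;
    always @(posedge clk) begin
        // At synthesis this loop unrolls into CHANNELS copies of the same
        // hardware, so every channel is scaled on the same clock edge.
        for (i = 0; i < CHANNELS; i = i + 1) begin
            samples_out[i*(WIDTH+2) +: WIDTH+2]
                <= samples_in[i*WIDTH +: WIDTH] * 2'd3;  // scale each sample by 3
        end
    end
endmodule
```

On a CPU the equivalent loop would run its iterations one after another; in the fabric it simply becomes four parallel copies of the same hardware, which is where the low latency comes from.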
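And to illustrate point 6, the sketch below (again with hypothetical module and signal names) drops two unrelated functions – a slow LED heartbeat and a small running-sum filter – into the same fabric. Each one occupies its own logic and runs independently of the other:

```verilog
// Hypothetical example: two independent functions in one FPGA design.
module multi_function_top (
    input  wire       clk,
    input  wire [7:0] sensor_in,     // hypothetical 8-bit sensor sample
    output wire       led,           // slow "heartbeat" output
    output reg  [9:0] filtered_out   // running sum of the last 4 samples
);
    // Function 1: free-running divider that blinks an LED.
    reg [23:0] div_cnt = 24'd0;
    always @(posedge clk)
        div_cnt <= div_cnt + 24'd1;
    assign led = div_cnt[23];        // the counter's MSB toggles slowly

    // Function 2: 4-tap running sum of the sensor input,
    // living in the same fabric but completely independent of Function 1.
    reg [7:0] taps [0:3];
    integer k;
    always @(posedge clk) begin
        taps[0] <= sensor_in;
        for (k = 1; k < 4; k = k + 1)
            taps[k] <= taps[k-1];
        filtered_out <= taps[0] + taps[1] + taps[2] + taps[3];
    end
endmodule
```

Need a third function later? Add another module and rebuild the bitstream – that’s the reconfigurability from point 1 at work.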
So, yes, there are applications that might be better suited to an ASIC, a CPU, or a GPU, but for engineers who are fluent in HDLs, the FPGA hits the mark for price, processing power, and configurability. For those who are more versed in languages like C, Java, and Python? Keep an eye on our new Eclypse Z7, which has a high-level API that allows for easier interaction between the FPGA and software languages (currently C and C++ are supported, with more coming later).