
FPGA Design Techniques

I feel a little bit guilty about throwing something this technical out there in the middle of the summer, but Adam Taylor’s recent article in EE Times, “10 FPGA Design Techniques You Should Know,” was just too good to pass up. So here is a quick summary of the techniques (laid out in ascending order from simplest to most complex) that he believes engineers working with FPGAs should have under their belt.

State Machine Design: To get FPGAs to handle sequence- and control-based actions, you need to know about Finite State Machines (FSMs), which handle the transitions between states. FSMs come in two main classes: Moore (“state machine outputs are a function only of the current state”) and Mealy (“outputs…are a function of both the current state and one or more inputs”).
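To make the Moore idea concrete, here is a minimal sketch of a “101” sequence detector, a classic FSM exercise; the state names and transition table are my own illustration, not taken from Taylor’s article. The output depends only on the current state, which is what makes it a Moore machine.

```python
# Hypothetical Moore FSM sketch: detects the bit sequence "101".
# Next-state logic is a lookup table keyed by (state, input).

TRANSITIONS = {
    ("IDLE", 0): "IDLE",  ("IDLE", 1): "GOT1",
    ("GOT1", 0): "GOT10", ("GOT1", 1): "GOT1",
    ("GOT10", 0): "IDLE", ("GOT10", 1): "FOUND",
    ("FOUND", 0): "GOT10", ("FOUND", 1): "GOT1",
}

# Moore output: a function of the current state only.
OUTPUT = {"IDLE": 0, "GOT1": 0, "GOT10": 0, "FOUND": 1}

def run_fsm(bits):
    state, outputs = "IDLE", []
    for b in bits:
        state = TRANSITIONS[(state, b)]   # next-state logic
        outputs.append(OUTPUT[state])     # output logic (state only)
    return outputs

print(run_fsm([1, 0, 1, 0, 1]))  # -> [0, 0, 1, 0, 1]
```

A Mealy version would compute the output from `(state, b)` instead of `state` alone, typically reacting one cycle earlier.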

Basics of FPGA Math: FPGAs are often used to handle arithmetic operations. Much of the computational work may be off-loaded to DSP slices, but using them effectively requires knowledge of fixed-point math.
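The core fixed-point idea can be sketched in a few lines: store each value as an integer scaled by a power of two, and renormalize after multiplication with a shift. The Q8 format and helper names below are my own illustration, not from the article.

```python
# Fixed-point sketch: values stored as integers scaled by 2**FRAC_BITS
# (a "Q-format"). All arithmetic is integer adds, multiplies, shifts,
# which is exactly what FPGA fabric and DSP slices do cheaply.

FRAC_BITS = 8            # Q8: 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # The product of two Q8 numbers is Q16; shift right to get back to Q8.
    return (a * b) >> FRAC_BITS

def to_float(x):
    return x / SCALE

a, b = to_fixed(1.5), to_fixed(0.25)
print(to_float(fixed_mul(a, b)))  # -> 0.375
```

The precision trade-off is visible here: anything smaller than 1/256 rounds away in Q8, which is why choosing word and fraction widths is a real design decision.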

First In, First Out (FIFO) Buffers: These are useful tools, with a variety of applications, including buffering data “as it passes from one clock domain to another”; for both signal and image processing apps; and to “remove the need to use ping-pong RAMS to transfer data between two elements if one is being read while the other is being written.” Taylor also includes some formulas to help you guard against buffer overflow.
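As a behavioral illustration of the FIFO structure itself (Taylor’s sizing formulas are not reproduced here), this sketch models the usual circular-buffer design with read and write pointers plus full/empty detection; the class and method names are my own.

```python
# Minimal FIFO sketch: a circular buffer with full/empty checks, a
# software analogue of a dual-port-RAM FIFO with read/write pointers.

class Fifo:
    def __init__(self, depth):
        self.buf = [0] * depth
        self.depth = depth
        self.count = 0            # occupancy, for full/empty flags
        self.wr = 0               # write pointer
        self.rd = 0               # read pointer

    def push(self, value):
        if self.count == self.depth:
            raise OverflowError("FIFO overflow")
        self.buf[self.wr] = value
        self.wr = (self.wr + 1) % self.depth
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("FIFO underflow")
        value = self.buf[self.rd]
        self.rd = (self.rd + 1) % self.depth
        self.count -= 1
        return value

f = Fifo(4)
for v in (10, 20, 30):
    f.push(v)
print(f.pop(), f.pop())  # -> 10 20 (first in, first out)
```

In hardware the overflow check becomes a "full" flag the writer must respect; Taylor’s formulas tell you how deep the buffer must be so that flag is never hit.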

The CORDIC Algorithm: Remember in high school when the kids who didn’t “get” (or like) math would always ask what trig was ever used for? As engineers, we know there are instances where trigonometry comes in handy in real life. The CORDIC (COordinate Rotation DIgital Computer) algorithm is used when you need to implement trig functions within an FPGA.
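What makes CORDIC FPGA-friendly is that each iteration needs only shifts and adds. Here is a rotation-mode sketch (in floating point for readability; a real FPGA version would be fixed-point) that computes cosine and sine; the iteration count and structure are standard CORDIC, but the code itself is my own illustration.

```python
import math

# Rotation-mode CORDIC sketch: rotates the vector (1, 0) through a
# sequence of shrinking fixed angles atan(2^-i), steering each rotation
# toward the target angle. Only shifts (the 2**-i factors) and adds are
# needed per step; the gain K is a precomputed constant.

ITERS = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
K = 1.0
for i in range(ITERS):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative CORDIC gain

def cordic(theta):
    x, y, z = 1.0, 0.0, theta
    for i in range(ITERS):
        d = 1.0 if z >= 0 else -1.0          # steer toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x * K, y * K                      # (cos(theta), sin(theta))

c, s = cordic(math.pi / 6)
print(c, s)  # approximately (0.866, 0.5)
```

With 16 iterations the result is good to roughly 16 bits, which is why iteration count maps directly onto output precision in a hardware implementation.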

Metastability Protection: Sometimes asynchronous signals have to be accommodated. If they’re not, a signal that changes during a flip-flop’s setup or hold window can destabilize things. “In order to prevent that, we need to employ a two-stage flip-flop synchronizer to ensure the flip-flop has time to recover.”
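A behavioral model makes the structure clear: the asynchronous input passes through two registers clocked in the destination domain, and only the second register ever drives downstream logic. This sketch (my own illustration; it models the wiring, not the metastable analog behavior itself) shows the resulting two-cycle latency.

```python
# Behavioral sketch of a two-stage flip-flop synchronizer. The async
# input is sampled into ff1, which may go metastable; ff2 re-samples
# ff1 a full clock later, so downstream logic only ever sees ff2.

def synchronize(async_samples):
    ff1 = ff2 = 0
    synced = []
    for sample in async_samples:
        ff1, ff2 = sample, ff1   # both registers update on the same clock edge
        synced.append(ff2)       # only the second stage is used downstream
    return synced

print(synchronize([1, 1, 0, 0, 1]))  # -> [0, 1, 1, 0, 0], two cycles behind
```

The price of the extra register is latency; the payoff is that ff1 gets a whole clock period to resolve before anything depends on it.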

Discrete & Fast Fourier Transforms: “There are parameters of a signal that require analysis within the frequency domain to access the information contained within.” Of the methods used to convert between the time and frequency domains, the most useful for FPGA apps are the Discrete and Fast Fourier Transforms.
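To show what the transform actually computes, here is a naive DFT sketch (my own illustration, O(N²)); an FFT produces the identical result in O(N log N) and is what FPGA IP cores implement in practice.

```python
import cmath

# Naive DFT sketch: X[k] = sum over t of x[t] * e^(-2*pi*j*k*t/N).
# Quadratic cost; the FFT computes the same bins in N*log(N) time.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure cosine at one cycle per 8 samples lands in bins 1 and 7.
signal = [cmath.cos(2 * cmath.pi * t / 8).real for t in range(8)]
mags = [round(abs(c), 3) for c in dft(signal)]
print(mags)  # energy concentrated in bins 1 and 7, near zero elsewhere
```

Seeing the single-tone input collapse into two spectral bins is the whole point of moving to the frequency domain: information that is smeared across time becomes localized and easy to extract.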

Polynomial Approximation: As an alternative to implementing complex functions with fixed-point math in an FPGA (a time-consuming approach), you can plot the function and “implement the polynomial trend line in the FPGA,” with the trend line generated in a math program (Taylor mentions MATLAB) or in Excel.
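Once the coefficients exist, evaluating the polynomial is just a chain of multiply-accumulates (Horner’s method), which maps directly onto DSP slices. In this sketch the coefficients are Taylor-series values for sin(x), standing in for a MATLAB/Excel trend line; the structure, not the specific fit, is the point.

```python
import math

# Horner-evaluation sketch. COEFFS approximates sin(x) near zero as
# x * (c0 + c1*x^2 + c2*x^4); each loop step is one multiply-accumulate,
# the operation an FPGA DSP slice performs in a single cycle.

COEFFS = [1.0, -1.0 / 6.0, 1.0 / 120.0]   # stand-in for a fitted trend line

def poly_sin(x):
    x2 = x * x
    acc = 0.0
    for c in reversed(COEFFS):
        acc = acc * x2 + c    # one multiply-accumulate per coefficient
    return acc * x

x = 0.5
print(poly_sin(x), math.sin(x))  # the two agree to several decimal places
```

The trade-off versus CORDIC is resources against latency: a short polynomial costs a few DSP slices but delivers the result in a handful of cycles, over a limited input range.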

Infinite Impulse Response Filters: Infinite Impulse Response (IIR) filters are used for signal-processing tasks within an FPGA. They require careful design: to correctly implement an IIR, you need to be grounded in both fixed-point FPGA math (see item two on Taylor’s list) and filter theory.
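The defining feature of an IIR filter is feedback, which a first-order low-pass makes easy to see. This floating-point sketch is my own illustration; in fixed point, the quantization of that feedback term is exactly where the careful design comes in.

```python
# First-order IIR low-pass sketch: y[n] = a*x[n] + (1 - a)*y[n-1].
# The feedback through y[n-1] is why the impulse response never quite
# reaches zero ("infinite"), and why fixed-point rounding of the
# feedback path can push a carelessly designed filter unstable.

def iir_lowpass(samples, alpha=0.25):
    y, out = 0.0, []
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y   # feedback term
        out.append(y)
    return out

step = [1.0] * 8
print([round(v, 3) for v in iir_lowpass(step)])
# rises smoothly toward 1.0: 0.25, 0.438, 0.578, ...
```

One feedback multiply-accumulate per sample buys a response that would take many FIR taps to match; the cost is the stability analysis an FIR never needs.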

Finite Impulse Response Filters: “Finite Impulse Response (FIR) filters are used when we want to guarantee a stable filter in which the phase of the filter will remain constant.” A common use for a FIR filter is to correct for DAC sinc roll-off.
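An FIR filter is a feedback-free weighted sum over a finite window of samples, which is what makes it unconditionally stable; symmetric taps give it the constant-phase property the quote mentions. The sketch below (my own illustration, not from the article) convolves a signal with a small symmetric tap set.

```python
# FIR sketch: output is the convolution of the input with a finite set
# of taps. No feedback, so the filter is unconditionally stable, and
# symmetric taps give linear (constant) phase.

def fir(samples, taps):
    n = len(taps)
    padded = [0.0] * (n - 1) + list(samples)   # zero history before t=0
    return [sum(t * padded[i + j] for j, t in enumerate(reversed(taps)))
            for i in range(len(samples))]

taps = [0.25, 0.5, 0.25]                        # symmetric -> linear phase
print(fir([0, 0, 1, 0, 0], taps))
# the impulse response is just the taps: [0.0, 0.0, 0.25, 0.5, 0.25]
```

In hardware each tap becomes a multiply-accumulate, so FIR length trades directly against DSP-slice count (or clock cycles, if the taps are time-multiplexed).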

Image Processing Filters: Adam Taylor is a chief engineer at E2V, so we knew he was going to get around to imaging. And he does, as he completes his list with a discussion of image processing filters.
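Most image-processing filters share one structure: slide a small kernel across the pixels and take a weighted sum at each position. This sketch (my own illustration, a 3×3 box blur; border pixels are skipped for brevity, where a real design would pad or replicate edges) shows that sliding-window pattern.

```python
# 3x3 convolution sketch: each output pixel is the weighted sum of its
# 3x3 neighborhood. In an FPGA this window is typically fed by line
# buffers so a new result can be produced every pixel clock.

def convolve3x3(img, kernel):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):            # borders skipped for brevity
        for c in range(1, w - 1):
            out[r][c] = sum(kernel[i][j] * img[r - 1 + i][c - 1 + j]
                            for i in range(3) for j in range(3))
    return out

blur = [[1 / 9] * 3 for _ in range(3)]   # box blur: uniform weights
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
out = convolve3x3(img, blur)
print(round(out[1][1], 2))  # sharp 9s smooth out to 4.0
```

Swapping the kernel turns the same datapath into an edge detector, sharpener, or noise filter, which is why this one structure covers so much of FPGA imaging work.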

Even if you wait until September to take a look, you might want to read the very useful full article, which includes a number of illustrations as well as links to further information on all of the techniques. Make sure to go through the comments – lots of additional ideas in there!