Cost Effective FPGA Design Flow

Mar 05, 2013 · admin · Blog

Fig1: A typical FPGA Design Flow

Different stages of the FPGA design flow require differing degrees of knowledge of the design in progress and differing degrees of human involvement, as shown in the figure above. Since the stages vary in how amenable they are to automation, the most efficient use of project resources comes from automating and optimizing the stages that are both most amenable to automation and gate development, i.e., the stages whose lack of output leaves the resources engaged in subsequent stages idle. In short: pick the low-hanging fruit for automation that gives the most bang for the buck.

Given the rapid increase in FPGA design size and complexity, where routing alone can constitute up to 60% of the delay on a given timing path, the synthesis, place-and-route, and timing-closure stages of the design flow consume a large share of project time.

This puts a damper on the productivity of design teams by lengthening the cycle time of the inevitable iterations in a complex design. Place and route is a largely heuristic process, using methods akin to simulated annealing and wave-front expansion, in which the initial start value (seed) can greatly affect both the speed of convergence and the quality of the result. This makes it a prime candidate for automation, and faster convergence here can unblock entire design teams. Owing to the heuristic nature of this stage, one does not know in advance which dart will land closest to the bulls-eye (the design objectives), so one should throw many darts if the cost per dart is not an impediment.

Several approaches to automating this stage can be adopted: investigating the incremental-compile features offered by the tool vendors (currently, very few to none) to reduce cycle time; increasing determinism by finding the best-timing place-and-route candidate so far and automatically using it to guide subsequent runs; or using a random-seeding strategy. The last approach (random seeding) is attractive because it can be applied in conjunction with the others, and with the rapidly decreasing cost of compute infrastructure, in-house or in the cloud, it is cost-effective.
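As a minimal sketch of the random-seeding idea: launch many seeded place-and-route runs in parallel and keep whichever run reports the best worst-negative-slack (WNS). The `run_par()` function below is a hypothetical stand-in for the vendor's batch place-and-route invocation; here it merely simulates, deterministically per seed, the seed sensitivity of the heuristic.

```python
import concurrent.futures
import random

def run_par(seed):
    """Hypothetical stand-in for one vendor place-and-route run with the
    given seed. Returns the worst negative slack (ns) that run achieved;
    a value >= 0 means timing is met. The real version would shell out to
    the vendor tool and parse its timing report."""
    rng = random.Random(seed)                  # the seed fully determines this "run"
    return round(rng.uniform(-0.5, 0.3), 3)    # simulated WNS in ns

def sweep_seeds(seeds, workers=4):
    """Map: launch one P&R run per seed in parallel.
    Reduce: keep the seed with the best (largest) WNS."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(zip(seeds, pool.map(run_par, seeds)))
    best_seed = max(results, key=results.get)
    return best_seed, results[best_seed]

if __name__ == "__main__":
    best, wns = sweep_seeds(range(16))
    print(f"best seed: {best}, WNS: {wns} ns")
```

Because each run is independent, the sweep scales out trivially across in-house machines or cloud instances; only the winning run's database needs to be kept.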

Fig2: Randomization Infrastructure.

One such randomization approach is illustrated in the figure above. Once the scripting and the map-reduce/collate/filter machine infrastructure are in place, the flow can be iterated through automatically using this methodology. After any design change, a different seed value may turn out to be the "winner", i.e., the best seed for meeting the design objectives, and the same framework can be reused for faster convergence. The infrastructure can also make several passes through the map-reduce cycle, each time using the previous winner as guidance for the next run. This methodology can achieve convergence more cost-effectively than manually tweaking constraints on every run, as is traditionally done. It also avoids the lost time of repeated compile iterations when design or timing objectives are not met on a particular run, which would otherwise gate the design team.
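The multi-pass variant above can be sketched as follows. Both `run_par()` and its `guide` argument are hypothetical stand-ins for a guided (incremental) compile; the simulated numbers only model the assumption that guided runs vary less than unguided ones.

```python
import random

def run_par(seed, guide=None):
    """Hypothetical stand-in for one place-and-route run. `guide` would
    reference the previous winner's placement for incremental guidance;
    here the simulation just narrows the spread of guided results."""
    rng = random.Random(f"{seed}:{guide}")     # deterministic per (seed, guide)
    spread = 0.6 if guide is None else 0.2     # assumption: guidance reduces variance
    return round(rng.uniform(-spread, 0.3), 3) # worst negative slack, ns

def converge(passes=3, fanout=8):
    """Map: launch `fanout` seeded runs. Reduce: keep the winner.
    Each subsequent pass feeds the previous winner back in as guidance."""
    guide, best_wns = None, float("-inf")
    for p in range(passes):
        results = {s: run_par(s + p * fanout, guide) for s in range(fanout)}
        winner = max(results, key=results.get)
        if results[winner] > best_wns:          # only adopt a strictly better run
            best_wns, guide = results[winner], winner
    return best_wns
```

The filter step (discarding runs that fail to improve on the incumbent) keeps storage needs bounded: only the current winner's results need to survive between passes.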
