An ASIC Guy Visits An FPGA World – Part II

Altera FPGA

I mentioned a few weeks ago that I am wrapping up a project with one of my clients and beating the bushes for another project to take its place. As part of my search, I visited a former colleague who works at a small company in Southern California. This company designs a variety of products that use FPGAs exclusively (no ASICs), so I got a chance to understand a little more about the differences between ASIC and FPGA design. Here, then, is the follow-on to my previous post, An ASIC Guy Visits An FPGA World.

Recall that the first 4 observations from my previous visit to FPGA World were:

Observation #1 – FPGA people put their pants on one leg at a time, just like me.

Observation #2 – I thought that behavioral synthesis had died, but apparently it was just hibernating.

Observation #3 – Physical design of FPGAs is getting like ASICs.

Observation #4 – Verification of FPGAs is getting like ASICs.

Now for the new observations:

Observation #5 – Parts are damn cheap – According to the CTO of this company, Altera Cyclone parts can cost as little as $10-$20 each in sufficient quantities. A product run requiring thousands or even tens of thousands of parts can still cost less than a single 90nm mask set. For many non-consumer products with quantities in this range, FPGAs are compelling from a cost standpoint.

True, the high-end parts can cost thousands or even tens of thousands of dollars each (e.g. the latest Xilinx Virtex-6). But considering that a Virtex-6 part is built on a 40nm process and has the gate-count equivalent of almost 10M logic gates, what would an equivalent ASIC cost?
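The break-even arithmetic behind this observation can be sketched in a few lines. The per-unit prices come from the post itself; the ~$1M figure for a 90nm mask set and the $2 ASIC die cost are assumed ballpark numbers for illustration, not quotes from the company.

```python
# Break-even sketch for Observation #5. The $15 Cyclone unit price is from
# the post; the $1M mask-set NRE and $2 ASIC die cost are assumed figures.

def fpga_total_cost(unit_price: float, quantity: int) -> float:
    """Total silicon cost for an FPGA-based product run: no NRE, just parts."""
    return unit_price * quantity

def asic_total_cost(mask_set_cost: float, unit_price: float, quantity: int) -> float:
    """Total silicon cost for an ASIC run: mask-set NRE plus per-unit cost."""
    return mask_set_cost + unit_price * quantity

for qty in (1_000, 10_000, 100_000):
    fpga = fpga_total_cost(15.0, qty)
    asic = asic_total_cost(1_000_000.0, 2.0, qty)
    print(f"{qty:>7} units: FPGA ${fpga:,.0f} vs ASIC ${asic:,.0f}")
```

Under these assumed numbers, the FPGA wins by a wide margin at every quantity in the "thousands to tens of thousands" range the CTO describes; the ASIC only pays off at volumes large enough to amortize the mask set.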

Observation #6 – FPGA verification is different (at least for small to medium-sized FPGAs) – Since it is so fast and inexpensive (compared to an ASIC) to synthesize, place, and route an FPGA, much more of the functional verification is done in the lab on real hardware. Simulation is typically used to get a “warm and fuzzy” feeling that the design is mostly functional, and the rest is done in the lab with the actual FPGA. Tools like Xilinx ChipScope allow logic-analyzer-like access into the device, providing some, but not all, of the visibility that exists in simulation. And once bugs are found, they can be fixed with an RTL change and a reprogramming of the FPGA.
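A toy sketch of the kind of self-checking smoke test that provides the “warm and fuzzy” in simulation before going to the lab. The 8-bit-adder DUT and golden reference model here are invented purely for illustration; a real flow would drive an HDL simulator rather than a Python function.

```python
# Illustrative self-checking smoke test. "DUT" and reference model are
# hypothetical stand-ins for an RTL design and its behavioral model.
import random

def dut_add(a: int, b: int) -> int:
    """Stand-in for the design under test: an 8-bit adder with wraparound."""
    return (a + b) & 0xFF

def reference_add(a: int, b: int) -> int:
    """Golden reference model used to check the DUT's outputs."""
    return (a + b) % 256

def smoke_test(num_vectors: int, seed: int = 0) -> int:
    """Drive random stimulus within legal constraints; return failure count."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(num_vectors):
        a = rng.randrange(256)  # constraint: legal 8-bit operands
        b = rng.randrange(256)
        if dut_add(a, b) != reference_add(a, b):
            failures += 1
    return failures
```

The point of such a test is breadth, not proof: a few thousand random vectors against a reference model catch gross functional errors cheaply, and the remaining corner cases are chased down in the lab on the reprogrammable part.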

One unique aspect of FPGA verification is that it can be done in phases or “spirals”. Perhaps only some of the requirements for the FPGA are complete, or only part of the RTL is available. No problem. One can implement just the part of the design that is complete (for instance, just the dataplane processing) and program the part. Since the same part can be reprogrammed over and over, the incremental cost to do this is essentially $0. Once the rest of the RTL is available, the part can be reprogrammed again.

Observation #7 – FPGA design tools are all free or dirt cheap – I think everybody knows this already, but it really hit home talking to this company. Almost all the tools they use for design are free or very inexpensive, yet they are more than capable of getting the job done. In fact, the company probably could not operate in the black if it had to make the kind of investment that ASIC design tools require.

Observation #8 – Many tools and methods common in the ASIC world are still uncommon in this FPGA world – For this company, there is no such thing as logical equivalence checking. Formal property verification, SystemVerilog simulation, OVM, VMM…not used at all. Perhaps they’ll be needed for larger designs, but right now the company is getting along fine without them.


FPGA verification is clearly the most controversial area. In one camp are the “old skool” FPGA designers who want to get the part into the lab as soon as possible and eschew simulation. In the other camp are the high-level verification proponents who espouse the merits of coverage-driven and metric-driven verification and recommend achieving complete coverage in simulation. It would really be fun to host a panel discussion with representatives from both camps and have them debate these points. I think we’d learn a lot.


harry the ASIC guy


7 Responses to “An ASIC Guy Visits An FPGA World – Part II”

  1. Shyamprakash Aandoor says:

    Hi Harry,

    There is an important aspect of FPGA design that you missed in your investigation. Unlike an ASIC tapeout, you don’t have to be paranoid about potential bugs that may exist in your design. If you happen to ship a product with a bug, you can always send a new bitfile with the fix, just as Microsoft sends you a Windows patch.

    In fact, this psychological relief helps speed turnaround in product development. Yes, a bug fix will still cost your organization, but far, far less than the same situation with an ASIC product.


  2. Sean Murphy says:

    Harry, you should approach the FPGA Summit folks with your panel idea; I think they are still planning their program for this year. The Program Chairperson is Dr. Lance Leventhal.
    Phone: 858.756.3327 / Email:

  3. Vinodh C. says:


    Your observation #8 is a little disturbing. I have had several instances where the FPGA vendor’s tools mapped logic to LUTs incorrectly, replaced buffers with inverters, etc. None of those would have been caught without equivalence-checking tools.

  4. I have done complex verification of large FPGAs and SoC ASICs, and large FPGAs do need an ASIC-like verification flow with OVM, VMM, etc., because debugging in the lab on a board is quite expensive. The bug is buried deep under board issues, RTL coding, and synthesis, and ChipScope-type tools do not have the visibility of a self-checking RTL testbench. Flushing out bugs with an RTL testbench is always productive, especially with functional and code coverage. Board respins are costly in terms of market windows. Lab FPGA verification is best utilized for long transactions (e.g. 1 million packets, video streams). Get all your RTL bugs out first with directed tests and constrained-random tests in a simulation environment.

  5. Hi Harry,

    I have enjoyed both FPGA posts and suggest another one that looks at the distribution of design starts by chip size and family. When you do that, you will see that the low-end (small and simple) designs follow the traditional “old school” FPGA design process: the “no simulation, quick to the lab” approach.

    Anyone who is doing a large design (say 40,000 LUTs or larger) is going to simulate; otherwise they will find themselves thrashing in the lab. I recently spoke to one company that fired its development team because they were thrashing in the lab and could not get the product to work; in the process I discovered that they had attempted to design and debug the large FPGA without simulation! The replacement team started to simulate the design, as should have been done from the beginning.

    The recent Gartner design-start report showed the movement from ASICs to FPGAs, with FPGA projects outnumbering ASIC projects by a 30-to-1 margin. Many of those moving from ASICs to FPGAs are using traditional ASIC-style verification techniques for their FPGA design and verification.

    In the end, projects where the design is small can get away without simulating, but for anything of reasonable size or complexity simulation is unavoidable, and it is foolish to go into the lab with a design without simulating first.

  6. Rajesh C says:

    I have noticed FPGA designers cutting corners on verification. But here is the reality: running simulation on an FPGA at the same level as an ASIC would be overkill, but there has to be adequate simulation, else, as someone pointed out, the design team gets thrashed in the lab.

    I think fresh FPGA designers get swayed by FPGA vendor marketing into believing that everything is cozy and the tools will take care of everything. ASIC experience can provide a lot of education here.

  7. Dave Simmons says:

    As an ASIC snob, I have always had much the same attitude toward FPGA designers that Bob Widlar (the designer of the 709 op amp) had toward digital designers generally. Anybody who uses wiring delays to solve timing problems probably isn’t really housebroken, either. The fact that you only get one chance to get it right with an ASIC really enforces some engineering discipline and makes for better engineers, IMO.

    However, maybe if I hadn’t been so arrogant about it, I’d actually have been able to stay employed, instead of ending up involuntarily retired. For years on end, ASICs and FPGAs were leapfrogging each other in design starts, but in the past few years, FPGAs would appear to have taken the high ground permanently. As a result, while the demand for ASIC designers has faded away, the demand for FPGA designers is still there – even if clock tree synthesis is completely beyond them.

    Oh well…
