Is 2D Scaling Dead? - Other Considerations

(Part 4 in the series Which Direction for EDA? 2D, 3D, or 360?)

In the last 2 posts in this series, I examined the lithography and transistor design issues that will need to be solved in order to save 2D scaling as we know it. In this post I will look at several other considerations.

For the moment, let's assume that we are able to address the lithography and transistor design issues that I've identified in the previous posts. TSMC recently announced it will take delivery of an EUV lithography machine, so let's assume they are successful in making the move to the 13.5 nm wavelength. IBM, TSMC, and Intel are already using multi-gate FETs in their most advanced process development, and the ITRS predicts they will be standard at the 32nm node, so let's assume that will work out as well. If so, are we home free?


Not so fast!


There are still numerous technical challenges and one big economic one. First the technical:


Process variability refers to the fact that circuit performance can vary based upon the variability in the wafer processing. For instance, let's say we are printing 2 overlapping rectangles on a die. Due to normal process variability, those rectangles can vary from the ideal in size (smaller or larger), can be shifted (north, south, east, west), or can be offset from each other. Thicknesses of processing layers have variability as well. The amount of doping can vary. Physical steps such as CMP (Chemical Mechanical Polishing) can introduce variability. These variabilities tend to be fixed amounts, so at large process nodes they don't make much difference. But as geometries shrink, they become significant. If we just take the old approach of choosing a 3-sigma range to define best-case and worst-case processing corners, the performance at smaller, more variable nodes may not be much greater than at the larger, less variable nodes.
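
To put rough numbers on why a fixed-size variation hurts more as we shrink, here's a quick back-of-the-envelope sketch in Python; the 3 nm sigma is purely an illustrative assumption, not real process data:

```python
# Toy sketch, not real process data: a fixed-magnitude variation that is
# negligible at a large node becomes a big fraction of the feature at a
# small one. The 3 nm sigma below is purely illustrative.
sigma_cd_nm = 3.0  # assumed 1-sigma critical-dimension variation, fixed in nm

for node_nm in (180, 90, 45, 22):
    three_sigma = 3 * sigma_cd_nm
    print(f"{node_nm:>4} nm node: 3-sigma CD spread = {three_sigma:.0f} nm "
          f"({100 * three_sigma / node_nm:.0f}% of the drawn feature)")
```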


This process variability introduces performance variability, and not always in predictable ways. For instance, if two related parameters vary equally based on oxide thickness, and all we care about is the ratio of those parameters, then the variation may cancel out. But if they vary in opposite directions, the effect may be worsened. Careful design and layout of circuits can make it so that process variations cancel out with little net effect, but this takes enormous effort and planning, and still you cannot account for all variation. Rather, we just have to live with the fact that process variation could cause ±20%, 30%, or even 50% performance variation.
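
Here's a toy illustration of that correlation argument, with made-up numbers: when the two parameters track each other, their ratio barely moves; when they move in opposite directions, the error in the ratio is roughly doubled.

```python
# Toy illustration of the correlation argument above. Two parameters each
# shift by 10%; their ratio is insensitive when they track together and
# roughly doubly sensitive when they move in opposite directions.
nominal_a, nominal_b = 1.0, 2.0
delta = 0.10  # assumed 10% process-induced shift

# Correlated: both parameters shift the same way (e.g. with oxide thickness)
a_corr, b_corr = nominal_a * (1 + delta), nominal_b * (1 + delta)
# Anti-correlated: the two parameters shift in opposite directions
a_anti, b_anti = nominal_a * (1 + delta), nominal_b * (1 - delta)

print("nominal ratio        :", nominal_a / nominal_b)        # 0.5
print("correlated ratio     :", round(a_corr / b_corr, 3))    # 0.5   -- cancels
print("anti-correlated ratio:", round(a_anti / b_anti, 3))    # 0.611 -- ~22% off
```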


There are some methods to account for this variation for digital designs, the most mainstream being statistical static timing analysis (SSTA). SSTA recognizes that process variation results in a distribution curve of circuit performance. Instead of drawing hard 3-sigma limits on the curve to define processing "corners", as is done with traditional STA, SSTA lets designers understand how yield trades off against timing. For instance, if the designer wants to stick with 3-sigma bounds to achieve 90% yield, then he may need to accept 500 MHz performance. However, if he wants to be more aggressive on timing, he may be able to achieve 600 MHz by accepting a lower 75% yield for parts that fall within a smaller 2-sigma range. SSTA helps designers make these choices.
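
To make that sigma/yield tradeoff concrete, here's a rough sketch that models the worst path delay as a normal distribution. The mean and sigma are made-up values chosen only so the output roughly mirrors the 500 MHz / 600 MHz example above; they don't come from any real library.

```python
# Rough sketch: model the worst path delay as a normal distribution and ask
# what fraction of parts meet a given clock period. The mean and sigma are
# made-up values chosen so the output mirrors the 500/600 MHz example above.
from math import erf, sqrt

mean_delay_ps = 1300.0   # assumed mean critical-path delay
sigma_ps = 550.0         # assumed 1-sigma delay variation

def parametric_yield(freq_mhz):
    """Fraction of parts whose worst path fits within the clock period."""
    period_ps = 1e6 / freq_mhz
    z = (period_ps - mean_delay_ps) / sigma_ps
    return 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF

for f_mhz in (500, 600):
    print(f"{f_mhz} MHz target -> estimated yield {parametric_yield(f_mhz):.0%}")
```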


But SSTA is not a silver bullet. Process variability can affect hold times to the extent that the violations become very difficult to fix. Analog and mixed-signal circuits are much more susceptible to process variability, since there are many more performance parameters designers care about. Companies like Solido are trying to attack this specific process variability issue, but the cost in time and analysis (e.g. Monte Carlo simulation) is large. And process variability can just plain break a chip altogether. This will only get worse as the dimensions shrink.
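
A back-of-the-envelope sketch of why brute-force Monte Carlo gets expensive: the number of simulations needed to pin down a small failure rate grows roughly as one over that failure rate, and each sample on an analog block is a full SPICE run. The per-run time below is an assumed, illustrative figure.

```python
# Back-of-the-envelope: to estimate a small failure probability p to within a
# relative error r, brute-force Monte Carlo needs roughly (1 - p) / (r^2 * p)
# samples, and each sample on an analog block is a full SPICE run. The
# 5-minute-per-run figure is an assumed, illustrative number.
spice_minutes_per_run = 5.0   # assumed cost of one Monte Carlo SPICE run
relative_error = 0.10         # want the failure-rate estimate within +/-10%

for fail_rate in (0.1, 0.01, 0.001):
    runs = (1 - fail_rate) / (relative_error**2 * fail_rate)
    cpu_hours = runs * spice_minutes_per_run / 60
    print(f"failure rate {fail_rate:<6}: ~{runs:,.0f} runs, ~{cpu_hours:,.0f} CPU-hours")
```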


Yield is the first cousin to process variability. As discussed in the preceding section, there is a direct tradeoff between performance and yield due to process variability. And as process complexity increases and design margins shrink, yield surely will suffer. There’s a real question whether we’ll be able to yield the larger chips that we’ll be able to design.


Crosstalk and signal integrity issues are exaggerated at smaller nodes and are more difficult to address. According to a physical design manager I spoke with recently, the problem is that edge rates are faster and wires are closer together, so crosstalk-induced delay is greater. Fixing these issues involves spreading wires or using a lower routing utilization, which defeats some of the benefit of the smaller node. And that is if you can even identify the aggressor nets, of which there may be several. It's not uncommon for days to weeks to be spent fixing these issues at 45nm, so how long will it take at 22nm or lower?
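
For a feel for the mechanism, here's a first-order sketch using the common switching-factor (Miller) approximation for coupling capacitance; all of the values are made-up illustrative numbers, not extracted parasitics.

```python
# First-order sketch of crosstalk-induced delay using the common switching-
# factor (Miller) approximation: coupling capacitance counts once when the
# neighbor is quiet, about twice when it switches the opposite way, and near
# zero when it switches the same way. All values are made-up, not extracted.
r_driver_ohms = 2000.0   # assumed driver + wire resistance
c_ground_ff = 20.0       # assumed capacitance to ground
c_couple_ff = 15.0       # assumed coupling capacitance to one aggressor

scenarios = {
    "aggressor quiet":             1.0,  # coupling cap seen once
    "aggressor switches opposite": 2.0,  # worst case: Miller factor ~2
    "aggressor switches same way": 0.0,  # best case: coupling mostly cancels
}

for name, k in scenarios.items():
    c_eff_ff = c_ground_ff + k * c_couple_ff
    delay_ps = 0.69 * r_driver_ohms * c_eff_ff * 1e-3  # 0.69*R*C; ohm*fF = 1e-3 ps
    print(f"{name:<29} -> C_eff = {c_eff_ff:4.0f} fF, delay ~{delay_ps:5.1f} ps")
```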


Process variability and signal integrity are just 2 of the more prominent technical issues we're hitting. In fact, pretty much everything gets more difficult. Consider clock tree synthesis for a large chip needing low skew and complex power gating. Or verifying such a large design (which merits its own series of posts). What about EDA tool capacity? And how are we going to manage the hundreds of people and hundreds of thousands of files associated with an effort like this? And let's not forget the embedded software that runs on the embedded processors on these chips. A chip at these lower nodes will be a full system and will require a new approach. Are we ready?


And believe it or not, we’re even limited by the speed of light! A 10 Gbps SerDes lane runs at 100ps per bit, or the time it takes light to travel 3cm, a little over an inch. Even if we can process at faster and faster speeds on chip, can we communicate this data between chips at this rate, or does Einstein say “slow down”?
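
The arithmetic behind that claim, for a few line rates (and remember that signals in copper or fiber travel at only a fraction of c, so reality is worse):

```python
# The arithmetic behind the "Einstein" point: distance light travels in one
# bit time at a few line rates (signals on copper or fiber are slower still).
C_M_PER_S = 299_792_458  # speed of light in vacuum

for gbps in (10, 25, 100):
    bit_time_s = 1.0 / (gbps * 1e9)
    distance_cm = C_M_PER_S * bit_time_s * 100
    print(f"{gbps:>3} Gbps: {bit_time_s * 1e12:5.1f} ps per bit -> "
          f"light travels ~{distance_cm:.1f} cm")
```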


Enough of the technical issues, let’s talk economics.


Cost is, and always has been, the biggest non-technical threat to 2D scaling. Gordon Moore considered his observation to be primarily economic, not technological. In the end, it’s not about how much we can build, but how much we can afford to build. There are several aspects of cost, so let’s look at each.


Cost of fabrication is the most often quoted and well understood. Although precise predictions will vary, it's clear that all the breakthroughs required in lithography, transistor design, and other areas will not come cheaply. Nor will the facilities and manufacturing equipment necessary to implement these breakthroughs. $5B is not an unreasonable estimate to construct and equip a 22nm fab. When it costs $5B to ante up just to get into the game, we're going to see some semiconductor companies fold their hands. We're already seeing consolidation and collaboration in semiconductor fabrication (e.g. Common Platform, Global Foundries) and this will increase. Bernard Meyerson even spoke of a concept he called radical collaboration, in which competitors collaborate on and share the cost of the expensive basic science and R&D required to develop these new foundries and processes. We're going to need to be creative.


Cost of design is also becoming a challenge. Larger chips mean larger chip design projects. Although I've not seen any hard data to back this up, I've seen $100M mentioned as the cost to develop a current state-of-the-art SoC. Assuming most of the cost is labor, that's equivalent to over 200 engineer-years of effort! What will this be in 5 years? Obviously, a small startup cannot raise this much money to place a single bet on the roulette wheel, and larger companies will only be willing to place the safest bets with this type of investment. They will have to be high-margin, high-volume applications, and how many of those applications will exist?
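
For what it's worth, here's where a figure like "over 200 engineer-years" can come from; both the labor fraction and the fully loaded cost per engineer-year below are my assumptions for illustration, not published data.

```python
# Where a figure like "over 200 engineer-years" can come from. The labor
# fraction and fully loaded cost per engineer-year are assumptions for
# illustration only, not published data.
project_cost = 100e6                    # the quoted $100M SoC development cost
labor_fraction = 0.9                    # assume most of the cost is labor
loaded_cost_per_engineer_year = 400e3   # assumed fully loaded $ per engineer-year

engineer_years = project_cost * labor_fraction / loaded_cost_per_engineer_year
print(f"~{engineer_years:.0f} engineer-years of effort")   # ~225
```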


In the end, this all boils down to financial risk. Will semiconductor manufacturers be willing to bet that they can generate enough revenue to cover the cost of a $5B+ fab? Will semiconductor companies be willing to bet that they can generate enough revenue to cover the cost of a $100M+ SoC? For that matter, will there be many applications that draw $100M in revenue altogether? For more and more players, the answer will be "no".


Despite all these increasing chip costs, it is important to take a step up and consider the costs at the system level. Although it may be true that a 32nm 100M gate chip is more expensive than a 90nm 10M gate chip, the total system cost is certainly reduced due to the higher level of integration. Maybe 5 chips become 1 chip with higher performance and lower power. That reduces the packaging and product design cost. Perhaps other peripherals can now be incorporated that were previously separate. This will of course depend on each individual application; the point, however, is that we should not stay myopically focused on the chip when we are ultimately designing systems. System performance is the new metric, not chip performance.

In the next blog post in this series, I’ll finish up the discussion on 2D scaling by looking at the alternatives and by making some predictions.

harry the ASIC guy



One Response to “Is 2D Scaling Dead? - Other Considerations”

  1. Harry Gries Says:

    More good info on this topic from Richard Goering over on the Cadence site.
