Posts Tagged ‘TSMC’

Is 2D Scaling Dead? - Other Considerations

Sunday, July 11th, 2010

(Part 4 in the series Which Direction for EDA? 2D, 3D, or 360?)

In the last 2 posts in this series, I examined the lithography and transistor design issues that will need to be solved in order to save 2D scaling as we know it. In this post I will look at several other considerations.

For the moment, let’s assume that we are able to address the lithography and transistor design issues that I’ve identified in the previous posts. TSMC recently announced it will take delivery of an EUV lithography machine, so let’s assume they are successful in making the move to the 13.5 nm wavelength. IBM, TSMC, and Intel are already using multi-gate FETs in their most advanced process development, and the ITRS predicts they will be standard at the 32nm node, so let’s assume that will work out as well. If so, are we home free?

 

Not so fast!

 

There are still numerous technical challenges and one big economic one. First the technical:

 

Process variability refers to the fact that circuit performance can vary based upon variability in the wafer processing. For instance, let’s say we are printing 2 overlapping rectangles on a die. Due to normal process variability, those rectangles can vary from the ideal in size (smaller or larger), can be shifted (north, south, east, west), or can be offset from each other. Thicknesses of processing layers have variability as well. The amount of doping can vary. Physical steps such as CMP (Chemical Mechanical Polishing) can introduce variability. These variabilities tend to be fixed amounts, so at large process nodes they don’t make much difference. But as we get smaller, these variabilities become significant. If we just take the old approach of choosing a 3-sigma range to define best-case and worst-case processing corners, the performance at the smaller, more variable nodes may not be much greater than at the larger, less variable nodes.
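
To put rough numbers on this, here is a back-of-the-envelope sketch in Python. The 3 nm variation figure is an assumption chosen purely for illustration, not a number from any real process:

    # Illustration only: assume a fixed ~3 nm (3-sigma) variation, independent of node,
    # and see how large a fraction of the minimum feature it becomes as we scale down.
    fixed_variation_nm = 3.0  # assumed constant variation, hypothetical

    for node_nm in [180, 90, 45, 22]:
        fraction = fixed_variation_nm / node_nm
        print(f"{node_nm:>4} nm node: {fixed_variation_nm:.0f} nm variation is "
              f"{fraction:.1%} of the minimum feature")

A variation that consumes under 2% of the feature at 180nm consumes more than 13% of it at 22nm, which is why the old fixed-corner approach starts to eat up much of the scaling benefit.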

 

This process variability introduces performance variability, and not always in predictable ways. For instance, if two related parameters vary equally with oxide thickness, and all we care about is the ratio of these parameters, then the variation may cancel out. But if they vary in opposite directions, the effect may be worsened. Careful circuit design and layout can make process variations cancel out with little net effect, but this takes enormous effort and planning, and even then you cannot account for all of the variation. Rather, we just have to live with the fact that process variation could cause ±20%, 30%, or even 50% performance variation.
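
As a toy illustration of that cancellation argument, here is a sketch in which two invented parameters both scale as 1/t_ox, so their ratio is insensitive to oxide thickness, while a third scales the other way, so the ratio swings roughly twice as hard as t_ox itself. The dependencies and numbers are made up; real device parameters are nowhere near this simple:

    def param_a(t_ox):    # hypothetical: proportional to 1/t_ox
        return 10.0 / t_ox

    def param_b(t_ox):    # hypothetical: also proportional to 1/t_ox
        return 4.0 / t_ox

    def param_c(t_ox):    # hypothetical: proportional to t_ox (opposite direction)
        return 0.4 * t_ox

    for t_ox in [0.9, 1.0, 1.1]:    # +/-10% oxide thickness variation
        print(f"t_ox={t_ox:.1f}: a/b={param_a(t_ox)/param_b(t_ox):.2f} (cancels), "
              f"a/c={param_a(t_ox)/param_c(t_ox):.1f} (compounds)")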

 

There are some methods to account for this variation for digital designs, the most mainstream being statistical static timing analysis (SSTA). SSTA recognizes that process variation results in a circuit performance distribution curve. Instead of drawing hard 3-sigma limits on the curve to define processing “corners”, as is done with traditional STA, SSTA lets designers see how yield trades off against performance. For instance, if the designer wants to stick with 3-sigma bounds to achieve 90% yield, then he may need to accept 500 MHz performance. However, if he wants to be more aggressive on timing, he may be able to achieve 600 MHz by accepting a lower 75% yield for parts that fall within a smaller 2-sigma range. SSTA helps designers make these choices.
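
Here is a minimal sketch of the kind of tradeoff SSTA exposes, assuming (purely for illustration) that the achievable clock frequency of a design is normally distributed across process variation. The mean and sigma below are invented and are not meant to reproduce the 500/600 MHz example above:

    import math

    mean_fmax_mhz  = 575.0   # assumed mean Fmax across process variation
    sigma_fmax_mhz = 50.0    # assumed standard deviation

    def parametric_yield(target_mhz):
        """Fraction of parts whose Fmax meets or exceeds the sign-off target."""
        z = (target_mhz - mean_fmax_mhz) / sigma_fmax_mhz
        return 0.5 * math.erfc(z / math.sqrt(2))

    for target in [500, 550, 600]:
        print(f"Sign off at {target} MHz -> expected parametric yield "
              f"{parametric_yield(target):.0%}")

Pushing the sign-off frequency up moves you further into the tail of the distribution and the yield drops accordingly; SSTA is what lets you see that curve instead of a single worst-case corner.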

 

But SSTA is not a silver bullet. Process variability can affect hold times to the point where violations are very difficult to fix. Analog and mixed-signal circuits are much more susceptible to process variability since there are many more performance parameters designers care about. Companies like Solido are trying to attack this specific process variability issue, but the cost in time and analysis (e.g. Monte Carlo simulation) is large. And process variability can just plain break a chip altogether. This will only get worse as the dimensions shrink.
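
To give a flavor of why the analysis cost is large for analog, here is a bare-bones Monte Carlo sketch: sample a couple of mismatch parameters, evaluate a performance model, and count how often the circuit meets spec. The gain model, the sigmas, and the spec are all invented stand-ins; in a real flow each sample would be a full SPICE simulation, which is exactly where the time goes:

    import random

    random.seed(0)

    def gain_db(vth_offset_mv, beta_mismatch_pct):
        # Stand-in for a circuit simulation: a fake analytic gain model, illustration only.
        return 40.0 - 0.05 * vth_offset_mv - 0.3 * beta_mismatch_pct

    samples, passes = 10000, 0
    for _ in range(samples):
        vth_offset = random.gauss(0.0, 5.0)   # mV, assumed sigma
        beta_mism  = random.gauss(0.0, 2.0)   # %, assumed sigma
        if gain_db(vth_offset, beta_mism) >= 39.0:   # hypothetical spec
            passes += 1

    print(f"Estimated yield against the gain spec: {passes / samples:.1%}")

Multiply thousands of samples by dozens of specs and corners and you can see why companies like Solido are trying to make this kind of analysis cheaper.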

 

Yield is the first cousin to process variability. As discussed in the preceding section, there is a direct tradeoff between performance and yield due to process variability. And as process complexity increases and design margins shrink, yield surely will suffer. There’s a real question whether we’ll be able to yield the larger chips that we’ll be able to design.
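
The classic way to see the size effect is the simple Poisson defect-limited yield model, Y ≈ exp(-A·D0), where A is die area and D0 is defect density. This covers defect-limited yield only (the parametric yield issues above come on top of it), and the defect density below is an assumption for illustration:

    import math

    defect_density_per_cm2 = 0.25   # assumed D0, defects per cm^2 (illustrative)

    for die_area_cm2 in [0.5, 1.0, 2.0, 4.0]:
        y = math.exp(-die_area_cm2 * defect_density_per_cm2)
        print(f"Die area {die_area_cm2:.1f} cm^2 -> defect-limited yield ~{y:.0%}")

Even with a constant defect density, doubling the die area repeatedly takes the yield from the high 80s down into the 30s, and defect densities do not stay constant when a process is new.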

 

Crosstalk and signal integrity issues are exacerbated at smaller nodes and are more difficult to address. According to a physical design manager I spoke with recently, the problem is that edge rates are faster and wires are closer together, so crosstalk-induced delay is greater. Fixing these issues involves spreading wires or using a lower routing utilization, which defeats some of the benefit of the smaller node. And that is if you can even identify the aggressor nets, of which there may be several. It’s not uncommon for days to weeks to be spent fixing these issues at 45nm, so how long will it take at 22nm or lower?
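
A crude way to see the effect he is describing: the delay of a victim net scales roughly with its effective capacitance, and the coupling capacitance to a neighboring aggressor counts anywhere from 0x to 2x depending on whether the aggressor switches with the victim, stays quiet, or switches against it. The capacitance and resistance values below are invented for illustration:

    # Victim net: ground capacitance plus coupling capacitance to one neighbor.
    c_ground_ff   = 20.0    # fF, assumed
    c_coupling_ff = 15.0    # fF, assumed; this term grows as wires get closer together
    drive_res_ohm = 2000.0  # ohms, assumed driver resistance

    for label, miller_k in [("aggressor switching with victim   ", 0.0),
                            ("aggressor quiet                    ", 1.0),
                            ("aggressor switching against victim ", 2.0)]:
        c_eff_ff = c_ground_ff + miller_k * c_coupling_ff
        delay_ps = 0.69 * drive_res_ohm * c_eff_ff * 1e-3   # 0.69*R*C, ohm*fF -> ps
        print(f"{label}: Ceff = {c_eff_ff:.0f} fF, delay ~ {delay_ps:.0f} ps")

The same wire can be more than twice as slow depending on what its neighbor is doing, which is why spreading wires or lowering routing utilization is often the only practical fix, and why that fix gives back some of the density the smaller node bought you.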

 

Process variability and signal integrity are just 2 of the more prominent technical issues we’re hitting. In fact, pretty much everything gets more difficult. Consider clock tree synthesis for a large chip needing low skew and complex power gating. Or verifying such a large design (which merits its own series of posts). What about EDA tool capacity? And how are we going to manage the hundreds of people and hundreds of thousands of files associated with an effort like this? And let’s not forget the embedded software that runs on the processors on these chips. A chip at these lower nodes will be a full system and will require a new approach. Are we ready?

 

And believe it or not, we’re even limited by the speed of light! A 10 Gbps SerDes lane runs at 100ps per bit, or the time it takes light to travel 3cm, a little over an inch. Even if we can process at faster and faster speeds on chip, can we communicate this data between chips at this rate, or does Einstein say “slow down”?
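
The arithmetic behind that claim, using the speed of light in vacuum (a signal on a real board or cable travels meaningfully slower, which only makes the problem worse):

    c_m_per_s  = 3.0e8           # speed of light in vacuum, approximate
    bit_rate   = 10e9            # 10 Gbps SerDes lane
    bit_time_s = 1.0 / bit_rate  # seconds per bit

    distance_cm = c_m_per_s * bit_time_s * 100.0
    print(f"Bit time: {bit_time_s * 1e12:.0f} ps")
    print(f"Distance light travels in one bit time: {distance_cm:.0f} cm")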

 

Enough of the technical issues, let’s talk economics.

 

Cost is, and always has been, the biggest non-technical threat to 2D scaling. Gordon Moore considered his observation to be primarily economic, not technological. In the end, it’s not about how much we can build, but how much we can afford to build. There are several aspects of cost, so let’s look at each.

 

Cost of fabrication is the most often quoted and best understood of these costs. Although precise predictions will vary, it’s clear that all the breakthroughs required in lithography, transistor design, and other areas will not come cheaply. Nor will the facilities and manufacturing equipment necessary to implement these breakthroughs. $5B is not an unreasonable estimate to construct and equip a 22nm fab. When it costs $5B to ante up just to get into the game, we’re going to see some semiconductor companies fold their hands. We’re already seeing consolidation and collaboration in semiconductor fabrication (e.g. Common Platform, Global Foundries) and this will increase. Bernard Meyerson even spoke of a concept he called radical collaboration, in which competitors collaborate on and share the cost of the expensive basic science and R&D required to develop these new foundries and processes. We’re going to need to be creative.

 

Cost of design is also becoming a challenge. Larger chips mean larger chip design projects. Although I’ve not seen any hard data to back this up, I’ve seen $100M mentioned as the cost to develop a current state-of-the-art SoC. Assuming most of the cost is labor, that’s equivalent to over 200 engineer-years of labor! What will this be in 5 years? Obviously, a small startup cannot raise this much money to place a single bet on the roulette wheel, and larger companies will only be willing to place the safest bets with this type of investment. They will have to be high-margin high-volume applications, and how many of those applications will exist?
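
For what it’s worth, the engineer-year figure falls straight out of a division; the fully loaded cost per engineer-year below is my own assumption (salary plus overhead, tools, and infrastructure), not a number from any published source:

    project_cost_usd       = 100e6   # quoted cost of a state-of-the-art SoC design
    loaded_cost_per_eng_yr = 450e3   # assumed fully loaded annual cost per engineer

    engineer_years = project_cost_usd / loaded_cost_per_eng_yr
    print(f"~{engineer_years:.0f} engineer-years of effort")   # roughly 220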

 

In the end, this all boils down to financial risk. Will semiconductor manufacturers be willing to bet that they can generate enough revenue to cover the cost of a $5B+ fab? Will semiconductor companies be willing to bet that they can generate enough revenue to cover the cost of a $100M+ SoC? For that matter, will there be many applications that generate $100M in revenue at all? For more and more players, the answer will be “no”.

 

Despite all these increasing chip costs, it is important to take a step up and consider the costs at the system level. Although it may be true that a 32nm, 100M-gate chip is more expensive than a 90nm, 10M-gate chip, the total system cost is certainly reduced due to the higher level of integration. Maybe 5 chips become 1 chip with higher performance and lower power. That reduces the packaging and product design cost. Perhaps other peripherals can now be incorporated that were previously separate. This will, of course, depend on the individual application; the point, however, is that we should not stay myopically focused on the chip when we are ultimately designing systems. System performance is the new metric, not chip performance.

In the next blog post in this series, I’ll finish up the discussion on 2D scaling by looking at the alternatives and by making some predictions.

harry the ASIC guy

My Obligatory TOP 10 for 2009

Thursday, December 31st, 2009

(Image: 2009 To 2010, http://www.flickr.com/photos/optical_illusion/ / CC BY 2.0)

What’s a blog without some sort of obligatory year end TOP 10 list?

So, without further ado, here is my list of the TOP 10 events, happenings, occurrences, observations that I will remember from 2009. This is my list, from my perspective, of what I will remember. Here goes:

  1. Verification Survey - Last February, as DVCon was approaching, I thought it would be interesting to post a quickie survey to see what verification languages and methodologies were being used. Naively, I did not realize the lengths to which the fans of the various camps would go to rig the results in their favor. Nonetheless, the results ended up being very interesting, and I learned a valuable lesson on how NOT to do a survey.
  2. DVCon SaaS and Cloud Computing EDA Roundtable - One of the highlights of the year was definitely the impromptu panel that I assembled during DVCon to discuss Software-as-a-Service and Cloud Computing for EDA tools. My thanks to the panel guests, James Colgan (CEO @ Xuropa), Jean Brouwers (Consultant to Xuropa),  Susan Peterson (Verification IP Marketing Manager @ Cadence), Jeremy Ralph (CEO @ PDTi), Bill Alexander (VP Marketing @ Blue Pearl Software), Bill Guthrie (VP Marketing @ Numetrics). Unfortunately, the audio recording of the event was not of high enough quality to post, but you can read about it from others at the following locations:

    > 3 separate blog posts from Joe Hupcey (1, 2, 3)

    > A nice mention from Peggy Aycinena

    > Numerous other articles and blog posts throughout the year that were set in motion, to some extent, by this roundtable

  3. Predictions to the contrary, Magma is NOT dead. Cadence was NOT sold. Oh, and EDA is NOT dead either.
  4. John Cooley IS Dead - OK, he’s NOT really dead. But this year was certainly a turning point for his influence in the EDA space. It started off with John’s desperate attempt at a Conversation Central session at DAC to tell bloggers that their blogs suck and to convince them to just send him their thoughts. Those who took John up on his offer by sending their thoughts waited 4 months to see them finally posted in his December DAC trip report. I had a good discussion on this topic with John earlier this year, which he asked me to keep “off the record”. Let’s just say, he just doesn’t get it and doesn’t want to get it.
  5. The Rise of the EDA Bloggers.
  6. FPGA Taking Center Stage - It started back in March when Gartner issued a report stating that there were 30 FPGA design starts for every ASIC start. That number seemed very high to me and to others, but that did not stop this 30:1 ratio from being quoted as fact in all sorts of FPGA marketing materials throughout the year. On the technical side, it was a year in which the issues of verifying large FPGAs came front and center and a lot of ASIC people started transitioning to FPGA.
  7. Engineers Looking For Work - This was one of the more unfortunate trends that I will remember from 2009 and hopefully 2010 will be better. Personally, I had difficulty finding work between projects. DAC this year seemed to be as much about finding work as finding tools. A good friend of mine spent about 4 months looking for work until he finally accepted a job at 30% less pay and with a 1.5 hour commute because he “has to pay the bills”. A lot of my former EDA sales and AE colleagues have been laid off. Some have been looking for the right position for over a year. Let’s hope 2010 is a better year.
  8. SaaS and Cloud Computing for EDA - A former colleague of mine, now a VP of Sales at one of the small but growing EDA companies, came up to me in the bar during DAC one evening and stammered some thoughts regarding my predictions of SaaS and Cloud Computing for EDA. “It will never happen”. He may be right and I may be a bit biased, but this year I think we started to see some of the beginnings of these technologies moving into EDA. On a personal note, I’m involved in one of those efforts at Xuropa. Look for more developments in 2010.
  9. Talk of New EDA Business Models - For years, EDA has bemoaned the fact that the EDA industry captures so little of the value ($5B) of the much larger semiconductor industry ($250B) that it enables. At the DAC Keynote, Fu-Chieh Hsu of TSMC tried to convince everyone that the solution for EDA is to become part of some large TSMC ecosystem in which TSMC would reward the EDA industry like some sort of charitable tax deduction. Others talked about EDA companies having more skin in the game with their customers and being compensated based on their ultimate product success. And of course there is the SaaS business model I’ve been talking about. We’ll see if 2010 brings any of these to fruition.
  10. The People I Got to Meet and the People Who Wanted to Meet Me - One of the great things about having a blog is that I got to meet so many interesting people whom I would otherwise never have had an opportunity to even talk to. I’ve had the opportunity to talk with executives at Synopsys, Cadence, Mentor, Springsoft, GateRocket, Oasys, Numetrics, and a dozen other EDA companies. I’ve even had the chance to interview some of them. And I’ve met so many fellow bloggers and come to realize how much they know. On the flip side, I’ve been approached by PR people, both independent and in-house. I was interviewed 3 separate times, once by email by Rick Jamison, once by Skype by Liz Massingill, and once live by Dee McCrorey. EETimes added my blog as a Trusted Source. For those who say that social media brings people together, I can certainly vouch for that.

harry the ASIC guy

What Makes DAC 2009 Different from Other DACs?

Sunday, July 12th, 2009

By Narendra (Nari) Shenoy, Technical Program Co-Chair, 46th DAC

Each year, around this time, the electronic design industry and academia meticulously prepare to showcase the latest research and technologies at the Design Automation Conference. For the casual attendee, after a few years the differences between the conferences of years past begin to blur. If you are one of those attendees, allow me to dispel this notion and invite you to look at what is different this year.

For starters, we will be in the beautiful city of San Francisco from July 26-31. The DAC 2009 program, as in previous years, has been thoughtfully composed using two approaches. The bottom-up approach selects technical papers from a pool of submissions using a rigorous review process. This ensures that only the best technical submissions are accepted. For 2009, we see an increasing focus on research in system-level design, low-power design and analysis, and physical design and manufacturability. This year, a special emphasis for the design community has been added to the program, with a User Track that runs throughout the conference. The new track, which focuses on the use of EDA tools, attracted 117 submissions reviewed by a committee made up of experienced tool users from industry. The User Track features front-end and back-end sessions and a poster session that provides a perfect opportunity to interact with presenters and other DAC attendees. In addition to the traditional EDA professionals, we invite all practitioners in the design community – design tool users, hardware and software designers, application engineers, consultants, and flow/methodology developers – to come join us.

This first approach is complemented by a careful top-down selection of themes and topics in the form of panels, special sessions, keynote sessions, and management day events. The popular CEO panel returns to DAC this year as a keynote panel. The captains of the EDA industry, Aart deGeus (Synopsys), Lip-Bu Tan (Cadence) and Walden Rhines (Mentor) will explore what the future holds for EDA. The keynote on Tuesday by Fu-Chieh Hsu (TSMC), will discuss alignment of business and technology models to overcome design complexity. William Dally (Nvidia and Stanford) will present the challenges and opportunities that throughput computing provides to the EDA world in his keynote on Wednesday. Eight panels on relevant areas are spread across the conference. One panel explores whether the emphasis on Design for Manufacturing is a differentiator or a distraction. Other panels focus on a variety of themes such as confronting hardware-dependent software design, analog and mixed signal verification challenges, and various system prototyping approaches. The financial viability of Moore’s law is explored in a panel, while another panel explores the role of statistical analysis in several fields, including EDA. Lastly, we have a panel exploring the implications of recent changes in the EDA industry from an engineer’s perspective.

Special technical sessions will deal with a wide variety of themes such as preparing for design at 22nm, designing circuits in the face of uncertainty, verification of large systems on chip, bug-tracking in complex designs, novel computation models and multi-core computing. Leading researchers and industry experts will present their views on each of these topics.

Management day includes topics that tackle challenges and decision making in a complex technology and business environment. The current “green” trend is reflected in a slate of events during the afternoon of Thursday July 30th. We start with a special plenary that explores green technology and its impact on system design, public policy and our industry. A special panel investigates the system level power design challenge and finally a special session considers technologies for data centers.

Rather than treating this year’s prolonged economic malaise as a hindrance to attendance, consider it a fundamental reason to participate in DAC. As a participant in the technical program, you have an opportunity to share your research and win peer acclaim. As an exhibitor, you have an ideal environment in which to demonstrate your technology and advance your business agenda. As an attendee, you cannot afford to miss the event where “electronic design meets”. DAC provides an unparalleled chance for everyone to network and learn about advances in electronic design. Won’t you join us at the Moscone Center at the end of the month?

__________

This year’s DAC will be held July 26-31 at the Moscone Center in San Francisco. Register today at www.dac.com. Note also that there are 600 free DAC passes being offered courtesy of the DAC Fan Club (Atrenta, Denali, Springsoft) for those who have no other means to attend.

TSMC Challenges Lynx With Flow Of Their Own

Wednesday, May 6th, 2009

About a month and a half ago, I wrote a 5-part series of blog posts on the newly introduced Lynx Design System from Synopsys.

One key feature, the inclusion of pre-qualified, technology- and node-specific libraries in the flow, was something I had pushed for when I was previously involved with Lynx (then called Pilot). These libraries would have made Lynx into a complete out-of-the-box, foundry- and node-specific design kit … no technology-specific worries. Indeed, everyone thought it was a good idea, and it would have happened had it not been for resistance from the foundries that were approached. Alas!

In the months before the announcement of Lynx, I heard that Synopsys had finally cracked that nut and that foundry libraries would be part of Lynx after all. Whilst speaking to Synopsys about Lynx in preparation for my posts, I asked whether this was the case. Given my expectations, I was rather surprised when I was told that no foundry libraries would be included as part of Lynx or as an option.

The explanation was that it proved too difficult to handle the many options that customers used. High Vt and low Vt. Regular and low power process. IO and RAM libraries from multiple vendors like ARM and Virage. Indeed, this was a very reasonable explanation to me since my experience was that all chips used some special libraries along the way. How could one QA a set of libraries for all the combinations? So, I left it at that. Besides, Synopsys offered a script that would build the Lynx node from the DesignWare TSMC Foundry Libraries.

Two weeks ago, at the TSMC Technology Symposium in San Jose, TSMC announced their own Integrated Sign-off Flow that competes with the Lynx flow, this one including their libraries. Now it seems to make sense. TSMC may  have backed out of providing libraries to Synopsys to use with Lynx since they were cooking up a flow offering of their own. I don’t know this to be a fact, but I think it’s a reasonable explanation.

So, besides the libraries, how does the TSMC flow compare to the Synopsys Lynx flow? I’m glad you asked. Here are the salient details of the TSMC offering:

  • Complete RTL to GDSII flow much like Lynx
  • Node and process specific optimizations
  • Uses multiple EDA vendors’ tools  (Synopsys mostly, but also Cadence, Mentor, and Azuro)
  • Available only for TSMC 65nm process node (at this time)
  • No cost (at least to early adopters … the press release is unclear whether TSMC will charge in the future)
  • And of course, libraries are included.

In comparison to Synopsys’ Lynx Design System, there were some notable features missing from the announcement:

  • No mention of anything like a Management Cockpit or Runtime Manager
  • No mention of how this was going to be supported
  • No mention of any chips or customers that have been through the flow

To be fair, just because these were not mentioned does not mean that they are really missing. I have not seen a demo of the flow or spoken to TSMC (you know how to reach me), and that would help a lot in evaluating how this compares to Lynx. Still, from what I know, I’d like to give you my initial assessment of the strengths of these offerings.

TSMC Integrated Signoff Flow

  • The flow includes EDA tools from multiple vendors. There is an assumption that TSMC has created a best-of-breed flow by picking the tool that performed each step in the flow the best and making all the tools work together. Synopsys will claim that their tools are all best-of-breed and that other tools can be easily integrated. But, TSMC’s flow comes that way with no additional work required. (Of course, you still need to go buy those other tools).
  • Integrated libraries, as I’ve described above. Unfortunately, if you are using any 3rd-party libraries, it seems you’ll need to integrate them yourself.
  • Node and process specific optimizations should provide an extra boost in quality of results.
  • Free (at least for now)

Synopsys Lynx Design System

  • You can use the flow with any foundry or technology node. A big advantage unless you are set on TSMC 65nm (which a lot of people are).
  • Other libraries and tools are, I would think, easier to integrate into the flow. It’s not clear whether TSMC even supports hacking the flow for other nodes.
  • Support from the Synopsys field and support center. Recall, this is now a full-fledged product. Presumably, the price customers pay for Lynx will fund the support costs. If there is no cost for the TSMC flow, how will they fund supporting it? Perhaps they will take on the cost to get the silicon business, but that’s a business decision I am not privy to. And don’t underestimate the support effort. This is much like a flow that ASIC vendors (TI, Motorola/Freescale, LSI Logic), not foundries, would have offered. They had whole teams developing and QA’ing their flows, and those flows would then be tied to a specific set of tool releases and frozen.
  • Runtime Manager and Management Cockpit. Nice-to-have features.
  • Been used to create real chips before. As I’d said, the core flow in Lynx dates back almost 10 years and has been updated continuously. It’s not clear what the genesis of the new TSMC flow is. Is it a derivative of the TSMC reference flows? Is it something that has been used to create chips? Again, I don’t know, but I’ve got to give Synopsys the nod in terms of “production proven”.

So, what do I recommend? Well, if you are not going to TSMC 65 nm with TSMC standard cell libraries, then there is not much reason to look at the TSMC flow. However, if you are using the technology that TSMC currently supports, the appeal of a turnkey, optimized, and FREE flow is pretty strong. I’d at least do my due diligence and look at the TSMC flow. It might help you get better pricing from TSMC.

If anyone out there has actually seen or touched the TSMC flow, please add a comment below. Everyone would love to know what you think firsthand.

harry the ASIC guy