Posts Tagged ‘EDA’

Dunbar’s Number and #48DAC

Tuesday, June 14th, 2011

DAC Badges

My apologies for the recent hiatus in my blog posting. It’s been a difficult time personally for me the past few months, dealing with family illnesses. Hopefully, I can get it going again.

With all that I had going on, it was a relief to escape last week for a few days to DAC in San Diego. After several years attending as a blogger (what DAC calls “independent media”), it was exciting to be on the floor representing Xuropa at the Synopsys Cloud Partners Booth. I still got to see several friends like JL Gray, who wrote up what he heard from us, and Peggy Aycinena, who accused me of being a sellout since I was in the Synopsys Cloud booth and had a Synopsys badge lanyard. And of course, no DAC would be complete without Eric Thune of AtopTech telling me that cloud will never work for EDA.

One of the downsides of being in the booth was not being able to attend a lot of the other sessions. I missed The Woz, and the Logan & McLellan show, and Gary Smith, and a lot of the panel discussions. I was, however, able to sneak away for the EDA Cloud Computing Panel discussion, featuring the usual suspects and a few new ones. A highlight was when John Bruggeman of Cadence offered to buy John Chilton of Synopsys a beer at the Denali Party and work out a joint Synopsys/Cadence solution on the cloud. No word yet on how that turned out. Another highlight was the audience poll at the end, where 1/3 of the audience felt that most of EDA would be on the cloud within 3 years. I don't know if this is correct or not, but this is the 3rd year we've had a cloud panel at DAC, and each year the expectations increase. Richard Goering has a good writeup on the panel.

One booth I did visit, and where I got an interesting demo, was Duolog's. Duolog is a Xuropa customer (you can try out their tool here), which is why I knew a little about them going in. They have a tool called Socrates Bitwise that does register management for processor-based designs. In this tool, you specify all the processor-accessible registers, their types (RO, RW, etc.), and their locations (base and offset), and the tool automatically generates the RTL, verification code (OVM, UVM, etc.), register package, C APIs, and documentation. If something needs to change, you change it in one place in the tool and all the downstream files are regenerated, correct by construction. With many designs having hundreds or thousands of registers to manage, this is a growing problem worth solving. Duolog has a few competitors as well, but their biggest competition is in-house home-grown scripts.
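
To make the single-source-of-truth idea concrete, here is a toy sketch of the concept in Python (my own illustration; the register names and formats are hypothetical, and this is not Duolog's actual input or output format). One spec drives every generated view, so a change in one place propagates everywhere:

    # Hypothetical register spec: one source of truth, everything else generated.
    REGS = [
        # (name, offset, access, reset value)
        ("CTRL",   0x00, "RW", 0x0000),
        ("STATUS", 0x04, "RO", 0x0001),
        ("IRQ_EN", 0x08, "RW", 0x0000),
    ]
    BASE = 0x40000000

    def gen_c_header():
        """Emit C #defines for the processor-side API."""
        return "\n".join(
            f"#define {name}_REG (0x{BASE + off:08X}) /* {acc} */"
            for name, off, acc, _ in REGS)

    def gen_rtl_resets():
        """Emit reset assignments for the RTL register block."""
        return "\n".join(
            f"{name.lower()}_q <= 16'h{rst:04X};"
            for name, _, _, rst in REGS)

    # Change an offset or reset value once in REGS, rerun, and both views
    # stay consistent -- the "correct by construction" property.
    print(gen_c_header())
    print(gen_rtl_resets())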

Of course, there were my 150 closest friends I know from years gone by, too numerous to mention, lest I leave someone out. I’m reminded of Sean Murphy’s perfect description of DAC:

“The emotional ambience at DAC is what you get when you pour the excitement of a high school science fair, the sense of the recurring wheel of life from the movie Groundhog Day, and the auld lang syne of a high school reunion, and hit frappe.”

An overall impression I, and many others, had was that the show floor was smaller and there were fewer attendees than in the past. The official preliminary numbers, however, indicate that DAC was larger than last year, so I'm not sure whether to believe my eyes or the numbers.

For me personally, it was my annual chance to connect with the entire industry, so I got a lot out of it. At a minimum, it provided me with a lot of good ideas that I can work on for the next year.

harry the ASIC guy

761 Days

Tuesday, March 29th, 2011

Clouds over San Francisco

761 days.

That’s 2 years, 1 month, and 3 days.

761 days ago, I hosted a group of interested EDA folks, journalists, and bloggers in a small room at the Doubletree hotel one evening after DVCon.

Most of the discussion that year was around OVM and VMM and which methodology was going to win out and which was really open and which simulator supported more of the SystemVerilog language. Well, all that has been put to bed. This year at DVCon, 733 days later, we all sang Kumbaya as we sat around and warmed our hearts by the UVM campfire.

But, back to that small group that I hosted 761 days ago. Those that attended this conclave had shrugged off all the OVM and VMM hoopla and decided to come hear this strange discussion about Cloud Computing and SaaS for EDA tools. Some, no doubt, thought there was going to be free booze served, and they were certainly disappointed. Those that stayed, however, heard a fiery discussion between individuals who were either visionaries or lunatics. For many, this was the first time they had heard the term cloud computing explained, and their heads spun as they tried to imagine what, if anything, would come of it for the EDA industry.

Over the 761 days since, the voices speaking of cloud computing for EDA, once very soft, grew slowly in volume. All the reasons that it would not work were thrown about like arrows, and those objections continue. But slowly, over time, the voices in support of this model have grown to the point where the question was no longer “if” but “when”.

761 days, that’s when.

Yesterday, to the shock of many at SNUG San Jose, including many in attendance from Synopsys, Aart de Geus personally answered the question asked 761 days earlier. Indeed, those individuals gathered in that small room at the Doubletree were visionaries, not lunatics.

There are many reasons why Synopsys should not be offering its tools on the cloud via SaaS:

  • Customers will never let their precious proprietary data off-site
  • It will cannibalize longer term license sales
  • The internet connection is too slow and unreliable
  • There’s too much data to transfer
  • The cloud is not secure
  • It’s more expensive
  • It just won’t work

But, as it turns out, there are better reasons to do it:

  • Customers want it

Sure, there are some other reasons. The opportunity to increase revenue by selling higher-priced short-term pay-as-you-go licenses. Taking advantage of the parallelism inherent in the cloud. Serving a new customer base that has very peaky needs.

But in the end, Aart did what he does best. He put on his future-vision goggles, gazed ahead, saw that the cloud was inevitable, and decided that Synopsys should lead, not follow.

761 days. Now the race is on.

Winners and Losers

Sunday, March 6th, 2011

Washington General and Harlem Globetrotter at LAX

Engineers tend to view the world in binary. There's the good guys and the bad guys. There's the right way and the wrong way. There are rich folks and poor folks. Democrats and Republicans. You're with us or against us.

And there are winners and losers.

This week, working the Agnisys booth at DVCon, I got to see all these types and all the shades in between. I got to see the good guys (me, of course, and anyone who was with me) and the bad guys (the competition). I saw people doing things the right way (telling the truth, or close to the truth) and the wrong way (pure fabrications). I saw rich folks (CEOs in expensive suits and shoes) and poor folks (the guys at the hotel tearing down after the show). Most of the people from Silicon Valley were Democrats, I suppose, and many of the others were Republicans. And, of course, for the Big 3 EDA vendors, it was all about who was with them (on the EDA360 passport) or against them (everyone else).

But, when you look a little closer, you see a lot of shades in between. Personally, I knew people at almost every booth with whom I had worked before. They’re not good or bad, right or wrong, rich or poor, democrats or republicans, or with me or against me. They’re just old friends working in an industry they love on technology they are psyched about.

I actually had some foreshadowing of this as I was flying up to the conference. As I was passing through the metal detectors at LAX, I noticed some tall gentlemen dressed in green warmup suits. Realizing it was a basketball team, I curiously glanced at their logo and saw the name “Generals”. Later, I was able to get a full view of the name “Washington Generals”.

If you are not familiar, the Washington Generals are the basketball team that travels with the Harlem Globetrotters. They are perennial losers, the foil and object of countless Globetrotter jokes. According to Wikipedia, the Generals lost over 13,000 games to the Globetrotters between 1953 and 1995, and won only 6 times. That's a winning percentage of 0.0005! If anyone deserves the title of “Losers”, it's the Washington Generals.

As I sat waiting for my flight, I noticed some other apparent basketball players dressed in red with white and blue trim. Could it be? Yes, they were the Globetrotters, winners of those same 13,000 games that the Generals had lost. If anyone deserves the title of “Winners”, it's the Harlem Globetrotters.

What surprised me at the time was that these eternal rivals, Winners and Losers, were traveling together, joking and laughing like best friends. Although I know that they obviously travel together and know each other, for some reason I had expected them to be separated. The good guys and the bad guys. The Winners and the Losers.

Just as the Generals and Globetrotters are rivals on the court but friends off the court, these EDA veterans were rivals at the booths at DVCon but friends in the bar afterwards. The EDA industry is kind of like a professional sports league. Sure, the teams compete with each other. But players move between teams all the time, and most of the players are friends off the field. In the end, what's most important is that the league grows and is successful.

Hopefully, going forward, EDA will be more like the NBA than one of the many leagues that have folded.

harry the ASIC guy

Altium Looking to Gain Altitude in the Cloud

Sunday, January 30th, 2011

Altium Enterprise Vault System

Over the holiday break, I came across an interview with Altium CIO Alan Perkins that caught my eye. Sramana Mitra has been focusing on interesting cloud-based businesses, and this interview focused on how this EDA company was planning to move into the cloud. I wasn't able to talk to Alan Perkins directly, but I was able to find out more through their folks in the US (the company is based in Australia). It was interesting enough to warrant a post.

I knew very little about Altium before seeing this interview, and maybe you don't know much about them either, so here is a little background. Based in Australia, Altium is a small (~$50M) EDA company focused primarily on the design of printed circuit boards with FPGAs and embedded software. They were formed out of a company called Protel about 10 years ago and most recently gained attention when they acquired Morfik, a company that offers an IDE for developing web apps (more on that later). According to some data I saw and from what they told me, they added 1700 new customers (companies, not seats) in 2010 just in the US! So, they may be the best-kept secret in a long while. (Ironically, the day after I spoke to Altium, I spoke to someone at another company who was using Altium to design a PC board for us.)

According to Altium, their big differentiator is that they have a database-centric offering, as compared to tool-flow-centric offerings like Cadence's OrCAD and Allegro and Mentor's Board Station and Expedition and related tools. I'm not an EDA developer, so I won't pretend to understand the nuances of one versus the other. However, when I hear “database-centric”, I think of “frameworks”. I know it's been almost 20 years since those days, and things have changed, so maybe database-centric makes a lot of sense now. OpenAccess is certainly a good thing for the industry, but that is because it's an “open standard” while Altium's database is not. Anyway, enough on this matter because, as I said, I'm not an EDA developer and don't want to get in too deep here.

A few years ago, I wrote a blog post entitled “Is IP a 4-Letter Word?”. The main thrust of that post was that IP quality is rather poor in general and there needs to be some sort of centralized authority to grade IP quality and to certify its use. So, when Altium told me they plan to enable a marketplace for design IP by creating “design vaults” in the cloud, my first question was “who is going to make sure this IP is any good?” Is this going to be the iPhone app model, where Apple vets and approves every app? Or is it going to be the Android model: caveat emptor?

To Altium's credit, they have similar concerns, which is why they are planning to move slowly. With the introduction of Altium Designer 10, Altium will first provide its own vetted IP in the cloud. In the past, this IP was distributed to tool users for installation on their own sites; having it in the cloud will make it easier to distribute (pull, instead of push) and also allow for asynchronous releases and updates. The tools will automatically detect if you are using an IP that has been revved, and ask you if you want to download the new version.
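
Conceptually, that pull-style update check is simple. Here is a minimal sketch in Python (entirely my own invention for illustration; Altium's actual vault protocol and formats are not public as far as I know). In practice the manifest would be fetched from the vault; here it's a local dict so the sketch runs standalone:

    # Hypothetical manifest of the latest IP versions published in the vault.
    cloud_manifest = {"fifo_ctrl": "1.3", "spi_master": "2.0"}

    # Versions of the IP actually instantiated in the local design.
    installed = {"fifo_ctrl": "1.2", "spi_master": "2.0"}

    for name, have in installed.items():
        latest = cloud_manifest.get(name, have)
        if latest != have:
            print(f"{name}: {have} -> {latest} available. Download the new version? [y/n]")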

Once they understand this model, Altium then plans to open it up to 3rd party IP, which can be offered for free, or licensed, or maybe even traded for credits (like Linden dollars in Second Life). It's an interesting idea which requires some pretty significant shifts in personal and corporate cultures. I think that sharing of small “jelly bean” type IP is achievable because none of it is very differentiated. But once you get to IP that required some significant time to design, why share it unless IP is your primary business? The semiconductor industry is still fiercely competitive, and I think that will be a significant barrier. Not to mention that it takes something like 4x-5x as much effort to create an IP that is easily reusable as compared to creating it just to be used once.

Being a tool for the design of FPGAs is an advantage for Altium, since the cost of repairing an FPGA bug is so much less than for an SoC or ASIC. For FPGAs, the rewards may be greater than the risks, especially for companies that are designing chips for the first time. And this is the market that Altium is aiming for … the thousands of companies that will have to design their products to work on the internet-of-things. Companies that design toasters that have never had any digital electronics in them and now have to throw something together. They will be the ones that will want to reuse these designs, because they don't have the ability to design them in-house.

Which brings us to Morfik, the company Altium acquired that makes an IDE for web apps. It's those same companies designing internet-enabled toasters that will also need to design a web app for their customers to access the toaster. So if Altium sells the web app and the IP that lets the toaster talk to the web app, then Altium provides a significant value to the toaster company. That's the plan.

Still, the cloud aspect is what interests me the most. Even if designers are reluctant to enter this market, the idea of having this type of central repository is best enabled by the cloud. The cloud can enable collaboration and sharing much better than any hosted environment. And it can scale as large and as quickly as needed. It allows a safe sort of DMZ where IP can be evaluated by a customer while still protecting the IP from theft.

This is not by any means a new idea either. OpenCores has been around for more than a decade offering a repository for designers to share and access free IP. I spoke with them a few years ago and at the time the site was used mainly by universities and smaller companies, but their OpenRISC processor has seen some good usage, so it’s a model that can work.

I'm anxious to see what happens over time with this concept. Eventually, I think this sort of sharing will have to happen, and it will be interesting to watch how it evolves.

harry the ASIC guy

Scott Clark on EDA Clouds

Sunday, August 8th, 2010

Although I had heard his name mentioned quite often, it wasn't until this year at DAC that I finally met Scott Clark for the first time. Scott was describing how, as Director of Engineering Infrastructure at Broadcom, he led a project to virtualize Broadcom's internal data center in order to transform it into a private cloud. It was a great discussion. We had lunch a few weeks later to talk about his new business, Deopli, a company that he has founded to help other semiconductor and EDA companies improve their compute infrastructure operations in similar fashion.

So, when I saw Dan Nenni’s blog post on cloud computing and some of the responses, I thought I’d contact Scott. You see, as opposed to most of those commenting on Dan’s post, Scott has actually taken EDA tools and moved them to the cloud, so he knows what he’s talking about. Scott was kind enough to contribute a blog post on the subject, so please enjoy.

__________

Harry the ASIC Guy pointed me to Dan Nenni’s Silicon Valley Blog to take a look at this post regarding Daniel Suarez’s books Daemon and Freedom. His post intrigued me enough to download the first book to my iPad to get a feel for the style and atmosphere. That was good enough that I plan to read both. You can read Dan’s post to see his overview of the books, but at the end of his post, he poses a question that seemed to spark lots of conversation and varying opinions. His question was “Who can be trusted to secure Darknet (Cloud Computing)?”

I think Dan was making reference to concepts in the book where all the data in the world becomes controlled by a finite set of service providers, which therefore creates an exposure based on the singularity of the solution. His references hit pretty close to home with Apple, Microsoft, and Google, but that did not seem to be the focus of the responses. Because Dan's background (and blog) is primarily in the EDA / Semiconductor space, the responses seemed to fall into the category of “Should semiconductor companies use cloud computing?” and the array of opinions seemed to cluster at the two ends of the spectrum. There were a few respondents who felt that EDA would never ever move into the cloud, or who gave somewhat skewed definitions of “cloud” to say “it's impossible”, but for the most part it was refreshing to see some open-minded views of what was possible and how things could work. I was particularly intrigued by Dan's comment that he felt foundries would venture into the cloud hosting space. Given the history of the fabless semiconductor space, how could that not make perfect sense? The leadup to the creation of foundries was that internal manufacturing was growing in capacity and complexity to the point that it made more sense to have it done externally. The same dynamics are happening in the datacenter space for chip design today.

Some of the comments rang very true to my experience, so let me highlight a few (please read the blog for specifics so I don't misquote anyone). Daniel Payne made the observation that semiconductor companies will start by creating their own private clouds, and that is exactly where we are today (compute clusters really are private clouds). James Colgan injected sanity throughout and made some very astute observations about the functional dynamics and the applicability of cloud to certain parts of a design flow. I can't overstate how much I agree with Kevin Cameron's comments on security; cloud has the potential to be a huge boost in security for the industry. Tom Anderson indicated that he is already doing chip design using Amazon EC2 resources, and I think there are many more like Tom out there. One of the last postings to date is by Lou Covey, and his opinion is that cloud for the industry is inevitable; I happen to agree with that. It's not that we “have to” but more that “this is the right answer for the business, and we should do the right thing”.

One concept I notice missing is that this discussion is looking at generic cloud solutions, not industry-specific ones. You will see the development of EDA-specific cloud solutions very focused on EDA customers, and in the beginning these will be private clouds with technology added for elastic expansion. That said, looking at cloud for the EDA industry, there are still going to be several roadblocks to adoption that will need to be addressed:

  • Ego – getting around the perception that IT is a core competency of chip design companies. The core competency of a chip design company should be … chip design.
  • Cost – getting around the expectation that cloud should cost ½ as much as what I am currently paying. There are many economies of scale and efficiencies that cloud brings. Cloud is an opportunity for cost avoidance as time goes forward, not a refund policy.
  • Trust – letting go of what is a critical function / resource and having confidence that you can still get the results necessary. This industry has a very powerful model to refer to here: the way manufacturing was released to the fabs and successful partnerships were formed.
  • Control – how to let go of a critical resource, and still maintain control over the resources, costs, schedules, and dynamics of capacity / priority decisions.
  • Security – probably the most wielded blade in the “you can’t do it” arsenal, but also probably the most misunderstood.
  • Performance – the final roadblock, and the one with the most technical merit. There are many different facets to performance, but they primarily fall into “internal cluster performance” and “display performance”.

From my perspective, the ego part we can get around. Current conversations with many EDA companies indicate they are already leaning in this direction, which is a good sign.

The cost issue is far more ambiguous. There are as many expectations of cloud as there are definitions, but invariably the expectations are rooted in economics. Given that, the only answer seems to be to create a realistic model for cost, present the data, and let nature take its course. There really is a cost benefit, so companies will want to capture it.

Trust seems like it should be the easy part for this industry, but it is proving to be more stubborn than that. I think that is mostly because of the implied threat to job security for the people who are currently performing the tasks (who are usually the people receiving the presentation about outsourcing their job). EDA companies should examine their own history to see what to do and how to do it.

The control front falls into the same category as trust. The same way that fabless semiconductor companies created internal organizations and positions for managing the outsourcing to foundries, that model should be applied to the outsourcing of computational infrastructure. That is not to say there will not be contention issues for capacity and priority. The cloud suppliers will need to make sure they have enough resources to provide sufficient capacity to their customers, or they will not be the supplier for long. Again, the foundries are a great model to look at for this.

On the security front, cloud will at a minimum give data points to show how weak internal security has been historically. Applying best security practices in a consistent manner should actually help evolve an industry-specific cloud security solution to better address security issues. And for the time being, we can just avoid the multi-tenant aspects of security by maintaining isolation – private clouds with shared dynamic resources.

And finally, given that we are talking about EDA-specific clouds, they will be specifically designed to have “internal cluster performance” appropriate for EDA. It will be designed exactly as we would design that cluster for a company's private datacenter. The tricky part will be in addressing display performance issues for functions like custom layout and board design, where network latency impacts the engineer's working style.

So really this boils down to proper execution by the EDA cloud providers, and one technical hurdle, display latency, which can be addressed in many ways. There is a lot of money and attention being aimed at these issues and this industry, and no real reason why it will not succeed. There might be some companies that choose to adopt at a slower rate than others, but I believe this will become the direction everyone goes eventually. Thanks Dan for a great read, and thanks Harry for pointing me at it.

__________

Scott Clark has been an infrastructure solution provider in the EDA/Semiconductor industry for the last 20 years, working for companies like Western Digital, Conexant, and Broadcom. He holds a Bachelor of Science in applied mathematics from San Diego State University and is currently President and CEO of Deopli Corporation. You can follow Scott on his blog at HPC in the Clouds.

Is 2D Scaling Dead? - Other Considerations

Sunday, July 11th, 2010

(Part 4 in the series Which Direction for EDA? 2D, 3D, or 360?)

In the last 2 posts in this series, I examined the lithography and transistor design issues that will need to be solved in order to save 2D scaling as we know it. In this post I will look at several other considerations.

For the moment, let’s assume that we are able to address the lithography and transistor design issues that I’ve identified in the previous posts. TSMC recently announced it will take delivery of an EUV lithography machine, so let’s assume they are successful in making the move to the 13.5 nm wavelength. IBM, TSMC, and Intel are already using multi-gate FETs in their most advanced process development and ITRS predicts it will be standard for the 32nm node, so let’s assume that will work out as well. If so, are we home free?

 

Not so fast!

 

There are still numerous technical challenges and one big economic one. First the technical:

 

Process variability refers to the fact that circuit performance can vary based upon variability in the wafer processing. For instance, let's say we are printing 2 overlapping rectangles on a die. Due to normal process variability, those rectangles can vary from the ideal in size (smaller or larger), can be shifted (north, south, east, west), or can be offset from each other. Thicknesses of processing layers vary as well. The amount of doping can vary. Physical steps such as CMP (Chemical Mechanical Polishing) can introduce variability. These variabilities tend to be fixed amounts, so at large process nodes they don't make much difference. But as geometries shrink, they become significant. If we just take the old approach of choosing a 3-sigma range to define best-case and worst-case processing corners, the performance at smaller, more variable nodes may not be much greater than at larger, less variable nodes.

 

This process variability introduces performance variability, and not always in predictable ways. For instance, if two related parameters vary equally based on oxide thickness, and all we care about is the ratio of these parameters, then the variation may cancel out. But if they vary in opposite directions, the effect may be compounded. Careful design and layout can make process variations cancel out with little net effect, but this takes enormous effort and planning, and still you cannot account for all variation. Rather, we just have to live with the fact that process variation could cause ±20%, ±30%, or even ±50% performance variation.
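
A toy Monte Carlo makes the point (all numbers here are invented for illustration): when two parameters track each other and we only care about their ratio, the variation cancels; when they move in opposite directions, it compounds.

    import random

    random.seed(0)
    N = 100_000
    ratios_common, ratios_opposed = [], []
    for _ in range(N):
        v = random.gauss(0.0, 0.05)          # 5% (1-sigma) process variation
        a, b = 2.0 * (1 + v), 1.0 * (1 + v)  # both parameters track together
        ratios_common.append(a / b)
        c, d = 2.0 * (1 + v), 1.0 * (1 - v)  # parameters move in opposite directions
        ratios_opposed.append(c / d)

    def sigma(xs):
        mu = sum(xs) / len(xs)
        return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

    print(sigma(ratios_common))   # ~0.0: common-mode variation cancels in the ratio
    print(sigma(ratios_opposed))  # ~0.2 on a nominal 2.0: the two 5% variations compound to ~10%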

 

There are some methods to account for this variation for digital designs, the most mainstream being statistical static timing analysis (SSTA). SSTA recognizes that process variation results in a circuit performance distribution curve. Instead of drawing hard 3-sigma limits on the curve to define processing “corners”, as is done with traditional STA, SSTA allows designers to understand how yield varies with the performance target. For instance, if the designer wants to stick with 3-sigma bounds to achieve 90% yield, then he may need to accept 500 MHz performance. However, if he wants to be more aggressive on timing, he may be able to achieve 600 MHz by accepting a lower 75% yield for parts that fall within a smaller 2-sigma range. SSTA helps designers make these choices.
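
Here is a rough sketch of that tradeoff in Python, using a plain Gaussian in place of a real SSTA engine (the mean and sigma are invented to echo the numbers above, not taken from any real process):

    from statistics import NormalDist

    # Treat the critical-path delay as a distribution and read off the
    # clock period that a given fraction of manufactured parts will meet.
    delay = NormalDist(mu=1.3e-9, sigma=0.55e-9)  # delay in seconds (made-up values)

    for yield_target in (0.99, 0.90, 0.75):
        period = delay.inv_cdf(yield_target)  # period met by this fraction of parts
        print(f"{yield_target:.0%} yield -> fmax ~ {1.0 / period / 1e6:.0f} MHz")
    # Prints roughly: 99% -> 388 MHz, 90% -> 499 MHz, 75% -> 598 MHz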

 

But SSTA is not a silver bullet. Process variability can affect hold times to the extent where they are very difficult to fix. Analog and mixed-signal circuits are much more susceptible to process variability since there are many more performance parameters designers care about. Companies like Solido are trying to attack this specific process variability issue, but the cost in time and analysis (e.g. Monte Carlo simulation) is large. And process variability can just plain break a chip altogether. This will only get worse as the dimensions shrink.

 

Yield is the first cousin to process variability. As discussed in the preceding section, there is a direct tradeoff between performance and yield due to process variability. And as process complexity increases and design margins shrink, yield surely will suffer. There’s a real question whether we’ll be able to yield the larger chips that we’ll be able to design.

 

Crosstalk and signal integrity issues are exacerbated at smaller nodes and are more difficult to address. According to a physical design manager I spoke with recently, the problem is that edge rates are faster and wires are closer together, so crosstalk-induced delay is greater. Fixing these issues involves spreading wires or using a lower routing utilization, which defeats some of the benefit of the smaller node. And that is if you can even identify the aggressor nets, of which there may be several. It's not uncommon for days to weeks to be spent fixing these issues at 45nm, so how long will it take at 22nm or lower?
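
A crude RC model shows why (the component values below are made up; real signoff tools use full parasitic extraction): the coupling capacitance to a neighboring wire effectively counts double when the aggressor switches the opposite way (the Miller effect), and spreading wires attacks that coupling term directly.

    R  = 100.0    # driver resistance, ohms (made-up value)
    Cg = 20e-15   # victim capacitance to ground, farads (made-up value)
    Cc = 15e-15   # coupling capacitance to the neighboring aggressor wire

    for miller, case in ((0.0, "aggressor switches with us"),
                         (1.0, "aggressor quiet"),
                         (2.0, "aggressor switches against us")):
        delay = 0.69 * R * (Cg + miller * Cc)  # Elmore-style RC delay estimate
        print(f"{case}: ~{delay * 1e12:.1f} ps")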

 

Process variability and signal integrity are just 2 of the more prominent technical issues we're hitting. In fact, pretty much everything gets more difficult. Consider clock tree synthesis for a large chip needing low skew and complex power gating. Or verifying such a large design (which merits its own series of posts). What about EDA tool capacity? And how are we going to manage the hundreds of people and hundreds of thousands of files associated with an effort like this? And let's not forget the software that runs on the embedded processors on these chips. A chip at these lower nodes will be a full system and will require a new approach. Are we ready?

 

And believe it or not, we’re even limited by the speed of light! A 10 Gbps SerDes lane runs at 100ps per bit, or the time it takes light to travel 3cm, a little over an inch. Even if we can process at faster and faster speeds on chip, can we communicate this data between chips at this rate, or does Einstein say “slow down”?
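
The back-of-the-envelope math, for the skeptical:

    c = 3.0e8                  # speed of light in a vacuum, m/s
    bit_time = 1 / 10e9        # 10 Gbps -> 100 ps per bit
    print(bit_time)            # 1e-10 seconds
    print(c * bit_time * 100)  # ~3.0 cm traveled per bit time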

 

Enough of the technical issues, let’s talk economics.

 

Cost is, and always has been, the biggest non-technical threat to 2D scaling. Gordon Moore considered his observation to be primarily economic, not technological. In the end, it’s not about how much we can build, but how much we can afford to build. There are several aspects of cost, so let’s look at each.

 

Cost of fabrication is the most often quoted and best understood. Although precise predictions vary, it's clear that all the breakthroughs required in lithography, transistor design, and other areas will not come cheaply. Nor will the facilities and manufacturing equipment necessary to implement these breakthroughs. $5B is not an unreasonable estimate to construct and equip a 22nm fab. When it costs $5B to ante up just to get into the game, we're going to see some semiconductor companies fold their hands. We're already seeing consolidation and collaboration in semiconductor fabrication (e.g. Common Platform, Global Foundries) and this will increase. Bernard Meyerson even spoke of a concept he called radical collaboration, in which competitors collaborate on and share the cost of the expensive basic science and R&D required to develop these new foundries and processes. We're going to need to be creative.

 

Cost of design is also becoming a challenge. Larger chips mean larger chip design projects. Although I’ve not seen any hard data to back this up, I’ve seen $100M mentioned as the cost to develop a current state-of-the-art SoC. Assuming most of the cost is labor, that’s equivalent to over 200 engineer-years of labor! What will this be in 5 years? Obviously, a small startup cannot raise this much money to place a single bet on the roulette wheel, and larger companies will only be willing to place the safest bets with this type of investment. They will have to be high-margin high-volume applications, and how many of those applications will exist?

 

In the end, this all boils down to financial risk. Will semiconductor manufacturers be willing to take the risk that they can generate enough revenue to cover the cost of a $5B+ fab? Will semiconductor companies be willing to take the risk that they can generate enough revenue to cover the cost of a $100M+ SoC? For that matter, will there be many applications that generate $100M in revenue altogether? For more and more players, the answer will be “no”.

 

Despite all these increasing chip costs, it is important to take a step up and consider the costs at the system level. Although it may be true that a 32nm 100M-gate chip is more expensive than a 90nm 10M-gate chip, the total system costs are certainly reduced due to the higher level of integration. Maybe 5 chips become 1 chip with higher performance and lower power. That reduces the packaging and product design cost. Perhaps other peripherals can now be incorporated that were previously separate. This will of course depend on each individual application; the point, however, is that we should not stay myopically focused on the chip when we are ultimately designing systems. System performance is the new metric, not chip performance.

In the next blog post in this series, I’ll finish up the discussion on 2D scaling by looking at the alternatives and by making some predictions.

harry the ASIC guy

Is 2D Scaling Dead? Looking at Transistor Design

Wednesday, June 23rd, 2010

(Part 3 in the series Which Direction For EDA: 2D, 3D, or 360?)

Replica of the First Transistor

In the last blog post, I started to examine the question “is 2D scaling really dead or just mostly dead?” I looked at the most challenging issue for 2D scaling, lithography. But even if we can somehow draw the device patterns on the wafer at smaller and smaller geometries, that does not necessarily mean that the circuits will deliver the performance (speed, area, power) improvements that Moore's Law has delivered in the past. Indeed, as transistors get smaller (gate length and width), their gate oxides also get thinner. There are limits to the improvements we can gain in power and speed. We'll talk about those next.

Transistor Design

First, consider what has made 2D scaling effective to date. The move to smaller geometries has allowed us to produce transistors that have shorter channels, operate at lower supply voltages, and switch less current. The shorter channel results in lower gate capacitance and higher drive, which means faster devices. And the lower supply voltage and lower current result in lower dynamic power. All good.

At the same time, these shorter channels have higher sub-threshold and source-drain leakage currents, and the thinner gate oxide results in greater gate leakage. At the start of Moore's Law, leakage was small, so exponential increases were not a big deal. But at current and future geometries, leakage power is on par with, and soon will exceed, dynamic power. And we care more today about static power, due to the proliferation of portable devices that spend most of their time in standby mode.


The reduction in dynamic power is also reaching a limit. Most of the dynamic power reduction of the last decade was due to voltage scaling. For instance, scaling from 3.3V to 1.0V reduces power by 10x alone. But reductions below 0.8V are problematic due to the inherent drop across a transistor and device threshold voltages. Noise margins are fast eroding, and that will cause new problems.
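
The voltage scaling math is simple, since dynamic power goes as the square of the supply voltage (P = a*C*V^2*f). Here's the arithmetic behind the 10x claim, and how little is left going to 0.8V:

    def dynamic_power_ratio(v_old, v_new):
        """Dynamic power reduction from voltage scaling alone (P ~ V^2)."""
        return (v_old / v_new) ** 2

    print(dynamic_power_ratio(3.3, 1.0))  # ~10.9x -- the easy win of the last decade
    print(dynamic_power_ratio(1.0, 0.8))  # ~1.6x  -- all that's left below 1.0V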

Still, as with lithography, we haven’t thrown in the towel yet.

Strained silicon is a technique that has been in use since the 90nm and 65nm nodes. It involves stretching apart the silicon atoms to improve electron mobility, yielding devices up to 35% faster at lower power consumption.

Hi-k dielectrics (k being the dielectric constant of the gate oxide) can reduce leakage current. The silicon dioxide is replaced with a material such as hafnium dioxide with a larger dielectric constant, thereby reducing leakage for an equivalent capacitance. This technique is often implemented together with another modification: replacing the polysilicon gate with a lower-resistance metal gate, hence increasing speed. Together, the use of hi-k dielectrics with metal gates is often referred to by the acronym HKMG and is common at 45nm and beyond.

A set of techniques commonly referred to as FinFET or Multi-gate FET (MuGFET) break the gate of a single transistor into several gates in a single device. How? Basically, by flipping the transistor on its side. The net effect is a reduction in effective channel width and device threshold with the same leakage current; i.e. faster devices with lower dynamic power and the same leakage power. But this technique is not a simple “tweak”; it's a fundamental change in the way we build devices. To quote Bernard Meyerson of IBM, “to go away from a planar device and deal with a non-planar one introduces unimaginable complexities.” Don't expect this to be easy or cheap.

Multigate FET - Trigate

A more mainstream technology that has been around a while, Silicon-on-Insulator (SOI), is also an attractive option for very high performance ICs such as those found in game consoles. In SOI ICs, a thick layer of an insulator (usually silicon dioxide) lies below the devices instead of silicon as in normal bulk CMOS. This reduces device capacitance and results in a speed-power improvement of 2x-4x, although with more expensive processing and a slightly more complex design process. You can find a ton of good information at the SOI Consortium website.

In summary, we are running into a brick wall for transistor design. Although there are new design techniques that can get us over the wall, none of them are easy and all of them are expensive. And the new materials used in these processes create new kinds of defects, reducing yield. With some work, the techniques above may get us to 16nm or maybe a little further. Beyond that, they're talking about graphene transistors and carbon nanotubes, pretty far-out stuff.

In my next post, I’ll look at some of the other considerations regarding 2D scaling, not the least of which is the extraordinary cost.

harry the ASIC guy

Brian Bailey on Unconventional Blogging

Tuesday, June 15th, 2010


(Photo courtesy of Ron Ploof)

I had the pleasure yesterday of interviewing Brian Bailey on the Synopsys Conversation Central stage at DAC. We discussed his roots in verification, working with the initial developers of digital simulation tools, and his blogging experiences these past few years. There are, of course, even a few comments on the difference between journalists and bloggers ;)

You can listen to this half-hour interview at the Synopsys Blog Talk Radio site. I'd be interested in your comments on the show and on the format as well. It was pretty fun, especially in front of a live audience.

At 12:30 PDT today, I’ll be doing another interview on Security Standards for the Cloud. You can tune in live on your computer or mobile device by going to the main Synopsys Blog Talk Radio Page. So, even if you’re not here at DAC, you can still partake.

harry the ASIC guy

Where in the DAC is harry the ASIC guy?

Friday, June 11th, 2010

Last year's Design Automation Conference was kind of quiet and dull, muted by the impact of the global recession, with low attendance and just not a lot of really interesting new developments. This year looks very different; I'm actually having to make some tough choices about which sessions to attend. And with all the recent acquisitions by Cadence and Synopsys, the landscape is changing all around, which will make for some interesting discussion.

I’ll be at the conference Monday through Wednesday. As a rule, I try to keep half of my schedule open for meeting up with friends and colleagues and for the unexpected. So if you want to chat, hopefully we can find some time. Here are the public events that I have lined up:

Monday

10:30 - 11:00 My good friend Ron Ploof will be interviewing Peggy Aycinena on the Synopsys Conversation Central stage, so I can't miss that. They both ask tough questions, so that one may get chippy. (Or you can participate remotely live here)

11:30 - 12:00 I’ll be on that same Synopsys Conversation Central stage interviewing Verification Consultant and Blogger Extraordinaire Brian Bailey. Audience questions are encouraged, so please come and participate. (Or you can participate remotely live here)

3:00 - 4:00 I’ll be at the Atrenta 3D Blogfest at their booth. It should be an interesting interactive discussion and a good chance to learn about one of the 3 directions EDA is moving in.

6:00 - Cadence is having a Beer for Bloggers event, but I'm not sure where. For the record, beer does not necessarily mean I'll write good things. (This event was canceled since the Denali party is that night.)

Tuesday

8:30 - 10:15 For the 2nd straight year, a large fab, Global Foundries (last year it was TSMC), will be presenting its ideas on how the semiconductor design ecosystem should change in From Contract to Collaboration: Delivering a New Approach to Foundry

10:30 - 12:00 I’ll be at a panel discussion on EDA Challenges and Options: Investing for the Future. Wally Rhines is the lead panelist so it should be interesting as well.

12:30 - 1:00 I’ll be back at the Synopsys Conversation Central stage interviewing James Wendorf (IEEE) and Jeff Green (McAfee) about standards for cloud computing security, one of the hot topics.

Wednesday

10:30 - 11:30 I’ll be at the Starbucks outside the convention floor with Xuropa and Sigasi. We’ll be giving out Belgian Chocolate and invitations to use the Sigasi-Xilinx lab on Xuropa.

2:00 - 4:00 James Colgan, CEO of Xuropa, and representatives from Amazon, Synopsys, Cadence, Berkeley, and Altera will be on a panel discussion, Does IC Design Have a Future in the Cloud? You know what I think!

This is my plan. Things might change. I hope I run into some of you there.

harry the ASIC guy

DAC Yesterday, Today, and Tomorrow

Friday, May 28th, 2010

About a week ago, I got an email from someone I know doing a story on how the Design Automation Conference has changed with respect to bloggers since the first EDA Bloggers Birds-of-a-Feather Session 2 years ago. I gave a thoughtful response and some of it ended up in the story, but I thought it would be nice to share my original full response with you.

Has your perception of the differences between bloggers and press changed since the first BOF?

Forget my perception; many of the press are now bloggers! I don’t mean that in a mean way and I understand that people losing their jobs is never a good thing. But I think the lines have blurred because we all find ourselves in similar positions now. It’s not just in EDA … many, if not most, journalists also have a blog that they write on the side.

Ultimately, I think either the traditional “press” or a blog is just a channel between someone with knowledge and people who want information they can trust. What determines trust is the reliability of the source. In the past, the trust was endowed by the reputation of the publication. Now, we all have to earn that trust.

As for traditional investigative journalism (à la All the President's Men) and reporting the facts (the 5 Ws), I think there is still a role for that, but most readers are looking for insight, not just the facts.

What do you think of DAC’s latest attempts to address these differences, e.g. Blog-sphere on the show floor, press room in the usual location?

Frankly, I'm not sure exactly what DAC is doing along these lines this year. Last year, bloggers had much the same access as journalists to the press room and other facilities. It was nice to be able to find a quiet place to sit, but since most bloggers are not under deadline to file stories, it is not as critical. Wireless technology is making a lot of this obsolete, since we can pretty much work from anywhere. Still, having the snacks is nice :)

What does the future hold for blogging at DAC?

Two years ago, blogging was the “new thing” at DAC. Last year, blogging was mainstream and Twitter was the new thing. This year blogging will probably be old skool and there will be another “new thing”. For instance, I think we're all aware of, and even involved in, Synopsys' radio show. This stuff moves so fast. So, I think the future at DAC is not so much for blogging as it is for multiple channels of all kinds, controlled not only by “the media”, but also by the vendors, independents, etc. Someone attending DAC will be able to use his wireless device to tap into many channels, some in real-time.

Next year, I predict that personalized and location-aware services will be a bigger deal. When you come near a booth, you may get an invitation for a free demo or a latte if your profile indicates you are a prospective customer. You'll be able to hold up your device and see a “Google Goggles”-like view of the show floor. You may even be able to tell who among your contacts is at the show and where they are. Who knows? It will be interesting.

harry the ASIC guy