Archive for June, 2010

Is 2D Scaling Dead? Looking at Transistor Design

Wednesday, June 23rd, 2010

(Part 3 in the series Which Direction For EDA: 2D, 3D, or 360?)

Replica of the First Transistor

In the last blog post, I started to examine the question “is 2D scaling really dead or just mostly dead?” I looked at the most challenging issue for 2D scaling: lithography. But even if we can somehow draw the device patterns on the wafer at smaller and smaller geometries, that does not necessarily mean the circuits will deliver the performance (speed, area, power) improvements that Moore’s Law has delivered in the past. Indeed, as transistors get smaller (gate length and width), their gate oxides also get thinner. There are limits to the improvements we can gain in power and speed. We’ll talk about those next.

Transistor Design

First, consider what has made 2D scaling effective to date. The move to smaller geometries has allowed us to produce transistors that have shorter channels, operate at lower supply voltages, and switch less current. The shorter channel results in lower gate capacitance and higher drive which means faster devices. And the lower supply voltage and lower current result in lower dynamic power. All good.

At the same time, these shorter channels have higher sub-threshold and source-drain leakage currents, and the thinner gate oxide results in greater gate leakage. At the start of Moore’s Law, leakage was small, so exponential increases were not a big deal. But at current and future geometries, leakage power is on par with dynamic power and will soon exceed it. And we care more today about static power, due to the proliferation of portable devices that spend most of their time in standby mode.
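A quick back-of-envelope shows why leakage has grown exponentially as thresholds scale down. The subthreshold-swing values below are illustrative textbook figures, not numbers from any particular process:

```python
# Sub-threshold leakage rises by one decade for every "subthreshold
# swing" (S) of threshold-voltage reduction. S ~ 60 mV/decade is the
# room-temperature ideal; real devices are closer to 80-100 mV/decade.

def leakage_multiplier(delta_vth_mv, swing_mv_per_decade=100.0):
    """Leakage current increase when Vth drops by delta_vth_mv millivolts."""
    return 10 ** (delta_vth_mv / swing_mv_per_decade)

print(leakage_multiplier(100))  # ~10x more leakage per 100mV of Vth scaling
print(leakage_multiplier(300))  # ~1000x across several nodes of scaling
```

That compounding is why leakage went from an afterthought to a first-order concern.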


The reduction in dynamic power is also reaching a limit. Most of the dynamic power reduction of the last decade was due to voltage scaling. For instance, scaling from 3.3V to 1.0V by itself reduces power by about 10x. But reductions below 0.8V are problematic due to the inherent drop across a transistor and device threshold voltages. Noise margins are fast eroding, and that will cause new problems.
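That 10x figure falls right out of the classic dynamic power relation, P = αCV²f. A quick sketch (the voltages are the ones from the text; the quadratic relation itself is standard):

```python
# Dynamic (switching) power: P_dyn = alpha * C * V^2 * f.
# Holding activity, capacitance, and frequency fixed, the squared ratio
# of two supply voltages gives the power reduction from voltage scaling.

def dynamic_power_ratio(v_old, v_new):
    """Power reduction factor from scaling the supply voltage alone."""
    return (v_old / v_new) ** 2

print(dynamic_power_ratio(3.3, 1.0))  # ~10.9x, the "10x" above
print(dynamic_power_ratio(1.0, 0.8))  # only ~1.6x left going down to 0.8V
```

Note how little headroom remains: the next voltage step buys a fraction of what the last decade's steps did.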

Still, as with lithography, we haven’t thrown in the towel yet.

Strained Silicon is a technique that has been in use since the 90nm and 65nm nodes. It involves stretching apart the silicon atoms to improve electron mobility, yielding devices up to 35% faster at lower power consumption.

High-k dielectrics (k being the dielectric constant of the gate oxide) can reduce leakage current. The silicon dioxide is replaced with a material with a larger dielectric constant, such as hafnium dioxide, allowing a physically thicker gate oxide (and hence lower gate leakage) for an equivalent capacitance. This technique is often implemented with another modification: replacing the polysilicon gate with a lower-resistance metal gate, increasing speed. Together, the use of high-k dielectrics with metal gates is often referred to by the acronym HKMG and is common at 45nm and beyond.

A set of techniques commonly referred to as FinFET or Multi-gate FET (MuGFET) breaks the gate of a single transistor into several gates in a single device. How? Basically, by flipping the transistor on its side. The net effect is better gate control of the channel, allowing a lower device threshold with the same leakage current; i.e. faster devices with lower dynamic power and the same leakage power. But this technique is not a simple “tweak”; it’s a fundamental change in the way we build devices. To quote Bernard Meyerson of IBM, “to go away from a planar device and deal with a non-planar one introduces unimaginable complexities.” Don’t expect this to be easy or cheap.

Multigate FET - Trigate

A more mainstream technology that has been around a while, Silicon-on-Insulator (SOI), is also an attractive option for very high performance ICs such as those found in game consoles. In SOI ICs, a layer of insulator (usually silicon dioxide) lies below the devices, instead of the bulk silicon of normal CMOS. This reduces device capacitance and results in a speed-power improvement of 2x-4x, although with more expensive processing and a slightly more complex design process. You can find a ton of good information at the SOI Consortium website.

In summary, we are running into a brick wall for transistor design. Although there are new design techniques that can get us over the wall, none of them is easy and all of them are expensive. And the new materials used in these processes create new kinds of defects, reducing yield. With some work, the techniques above may get us to 16nm or maybe a little further. Beyond that, people are talking about graphene transistors and carbon nanotubes: pretty far out stuff.

In my next post, I’ll look at some of the other considerations regarding 2D scaling, not the least of which is the extraordinary cost.

harry the ASIC guy

Is 2D Scaling Really Dead or Just Mostly Dead?

Sunday, June 20th, 2010

(Part 2 in the series Which Direction For EDA: 2D, 3D, or 360?)

“Well, it just so happens that your friend here is only mostly dead. There’s a big difference between mostly dead and all dead.” - Miracle Max to Inigo Montoya, The Princess Bride


In the film The Princess Bride, Westley lies motionless and apparently dead on a table in the cottage of Miracle Max. After some complaining from Max and nagging from Max’s wife, Max devises a chocolate covered Miracle Pill that revives the mostly dead Westley so he can save his true love Buttercup from the evil Prince Humperdinck.

In our story, 2D scaling is the mostly dead Westley, the semiconductor manufacturers are Miracle Max trying to create a Miracle Pill, and Gordon Moore is Buttercup waiting to be rescued. (I’m not sure who plays Prince Humperdinck and Max’s wife, but if you have an idea, please let me know.)

Moore’s law (the colloquial term for 2D scaling) states that certain metrics of semiconductor technology performance improve at a rate of approximately 2x every 2 years. It can refer to transistor size (channel length), density (gates / sq. mm), cost ($/gate), power (nW/gate), capacity (gates), or a combination of these. Indeed, over the last 2 decades we can draw a pretty straight line (on a log scale) to track the progress of these various metrics. Here is one such curve below that Michael Keating presented at SNUG 2010:

2-D Scaling
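The 2x-every-2-years rule compounds quickly, which is why those curves look straight on a log scale. A quick illustration (not tied to any one metric on the chart):

```python
# Moore's law as stated here: a 2x improvement every 2 years.
# Over n years the compounded improvement factor is 2 ** (n / 2).

def moores_law_factor(years, doubling_period=2.0):
    """Compounded improvement after `years` at 2x per doubling_period."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))  # ~32x over a decade
print(moores_law_factor(20))  # ~1024x over the two decades in the chart
```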

To be sure, achieving Moore’s law has not just been a matter of driving down the scaling road with the top down and the tunes on. There have been numerous roadblocks in the past that could have ended the trip, but we’ve always been able to find a detour or a new road to keep the trip on schedule. Some are emboldened by our previous innovations and say “yeah, they’ve predicted the end of Moore’s law before and we always find a new way.” Others say that this time is different; that we are hitting the barriers of physics; that we are running out of road to drive and will need to find a new means of transportation altogether. So, let’s look at some of these barriers and what may be able to get us beyond them.


Lithography

Most silicon-based semiconductors are produced with light that has a wavelength of 193nm, roughly 4x the minimum feature size of the current 45nm production technology. That does not seem like it should work, but tricks such as optical proximity correction (OPC) actually make it possible. However, according to most experts in this field, those techniques will stop working well very soon and new techniques will be needed.

Immersion Lithography is already in use (e.g. TSMC 45nm) and will likely be needed starting at 32nm. Rather than air between the lens and the wafer, a liquid (currently very pure water) with an index of refraction > 1 is used to focus the beam and achieve smaller feature sizes. As you can imagine, this is not a simple process extension, since the water needs to be contained, kept free of air bubbles, and then cleaned up without damaging the wafer or interfering with other aspects of the process. It can be done, but it adds to the cost.

Double patterning is another technique currently in use, and one which will likely be required starting at 22nm. Instead of shrinking the patterns on the reticle to smaller feature sizes, the wafer is exposed with two (or more) different reticles, each offset from the other, to achieve a net smaller feature size. After each exposure the wafer is etched, so this increases the number of process steps and hence the cost, as does the need for multiple reticles for each layer.

Going beyond 22nm will likely require a smaller wavelength of light. Extreme Ultraviolet (EUV) has a wavelength of 13.5nm and will likely take us to the 4nm node, but this technology is just now under development and production solutions may be very expensive when commercialized. Bernard Meyerson of IBM recently cited the cost of one such machine as $100M.

One technique that may actually reduce cost is called Electron-Beam Direct Write (EBDW), which is based upon traditional E-beam lithography that has been around for decades. Instead of using an optical reticle to define the patterns to be exposed on a wafer, E-beam lithography uses an electron beam to draw features on the wafer directly. This technology can achieve much more precise feature sizes than optical methods, but it is slower since the beam needs to traverse the entire wafer. Methods are being developed that use a massive number of beams to speed up processing. Previous E-beam systems were very expensive and these will be no exception; on the other hand, there will be up-front cost savings since no masks are needed for EBDW. Another benefit: shorter fab runs will be more economically feasible since there won’t be as large an upfront mask cost to amortize.
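A rough sketch of the mask amortization math. All dollar figures below are made-up, purely to show the shape of the tradeoff between mask-based and maskless flows:

```python
# With optical lithography, the mask-set cost is paid up front and
# amortized over every wafer in the run; maskless EBDW trades that
# away for a higher per-wafer processing cost.
# All numbers are hypothetical, for illustration only.

def cost_per_wafer(mask_set_cost, wafer_process_cost, wafers):
    """Effective per-wafer cost once mask cost is amortized over the run."""
    return mask_set_cost / wafers + wafer_process_cost

# Optical: assume a $2M mask set and $3k of per-wafer processing.
short_run = cost_per_wafer(2_000_000, 3_000, 100)      # $23,000 per wafer
long_run = cost_per_wafer(2_000_000, 3_000, 100_000)   # $3,020 per wafer

# Maskless EBDW: no mask set, but assume pricier per-wafer processing.
ebdw_run = cost_per_wafer(0, 6_000, 100)               # $6,000 per wafer

print(short_run, long_run, ebdw_run)
```

Under these assumed numbers, maskless wins easily for the 100-wafer run and loses for the 100,000-wafer run, which is the point about shorter fab runs.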

There are some other next-generation lithography approaches being considered that you might wish to look at. All of these approaches have been proven to some extent, but some will require much more refinement to become technically viable and cost-effective. Given our resourcefulness in the past, it’s likely that one or more methods will emerge to allow us to reach the 4nm node in about 12 years. But that may be the end of the line. As Michael Keating noted at SNUG this year, at 4nm you’re switching about 3 electrons, so it does not seem you can get much further without some advancements in transistor design.
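For the curious, here’s the back-of-envelope behind that “switching 3 electrons” remark. The gate capacitance and supply voltage below are my own rough assumptions for a hypothetical 4nm device, not published process numbers:

```python
# The charge moved when a gate switches is Q = C * V, so the number of
# electrons involved is Q / q_e. Capacitance and voltage here are
# illustrative guesses, not data for any real 4nm process.

Q_E = 1.602e-19  # elementary charge, in coulombs

def electrons_switched(gate_cap_farads, supply_volts):
    """Approximate electron count moved per gate switching event."""
    return gate_cap_farads * supply_volts / Q_E

# Assume ~1 attofarad of gate capacitance and a ~0.5V supply:
print(round(electrons_switched(1e-18, 0.5)))  # on the order of 3 electrons
```

With single-digit electron counts, a single stray charge is a meaningful fraction of the signal, which is why this looks like the end of the line.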

Next Blog Post: Is 2D Scaling Dead? - Looking at Transistor Design

harry the ASIC guy

Brian Bailey on Unconventional Blogging

Tuesday, June 15th, 2010


(Photo courtesy Ron Ploof)

I had the pleasure yesterday of interviewing Brian Bailey in the Synopsys Conversation Central Stage at DAC. We discussed his roots in verification, working with the initial developers of digital simulation tools, and his blogging experiences these past few years. There are, of course, even a few comments on the difference between journalists and bloggers ;)

You can listen to this half hour interview at the Synopsys Blog Talk Radio site. I’d be interested in your comments on the show and the format as well. It was pretty fun, especially in front of a live audience.

At 12:30 PDT today, I’ll be doing another interview on Security Standards for the Cloud. You can tune in live on your computer or mobile device by going to the main Synopsys Blog Talk Radio Page. So, even if you’re not here at DAC, you can still partake.

harry the ASIC guy

Where in the DAC is harry the ASIC guy?

Friday, June 11th, 2010

Last year’s Design Automation Conference was kind of quiet and dull, muted by the impact of the global recession, with low attendance and not a lot of really interesting new developments. This year looks very different; I’m actually having to make some tough choices about which sessions to attend. And with all the recent acquisitions by Cadence and Synopsys, the landscape is changing all around, which will make for some interesting discussion.

I’ll be at the conference Monday through Wednesday. As a rule, I try to keep half of my schedule open for meeting up with friends and colleagues and for the unexpected. So if you want to chat, hopefully we can find some time. Here are the public events that I have lined up:


10:30 - 11:00 My good friend Ron Ploof will be interviewing Peggy Aycinena on the Synopsys Conversation Central stage, so I can’t miss that. They both ask tough questions, so that one may get chippy. (Or you can participate remotely live here)

11:30 - 12:00 I’ll be on that same Synopsys Conversation Central stage interviewing Verification Consultant and Blogger Extraordinaire Brian Bailey. Audience questions are encouraged, so please come and participate. (Or you can participate remotely live here)

3:00 - 4:00 I’ll be at the Atrenta 3D Blogfest at their booth. It should be an interesting interactive discussion and a good chance to learn about one of the 3 directions EDA is moving in.

6:00 - Cadence is having a Beer for Bloggers event, but I’m not sure where. For the record, beer does not necessarily mean I’ll write good things. (This event was canceled since the Denali party is that night.)


8:30 - 10:15 For the 2nd straight year, a large fab, Global Foundries (last year it was TSMC), will be presenting their ideas on how the semiconductor design ecosystem should change in From Contract to Collaboration: Delivering a New Approach to Foundry.

10:30 - 12:00 I’ll be at a panel discussion on EDA Challenges and Options: Investing for the Future. Wally Rhines is the lead panelist so it should be interesting as well.

12:30 - 1:00 I’ll be back at the Synopsys Conversation Central stage interviewing James Wendorf (IEEE) and Jeff Green (McAfee) about standards for cloud computing security, one of the hot topics.


10:30 - 11:30 I’ll be at the Starbucks outside the convention floor with Xuropa and Sigasi. We’ll be giving out Belgian Chocolate and invitations to use the Sigasi-Xilinx lab on Xuropa.

2:00 - 4:00 James Colgan, CEO of Xuropa, and representatives from Amazon, Synopsys, Cadence, Berkeley and Altera will be on a panel discussion on Does IC Design have a Future In the Cloud? You know what I think!

This is my plan. Things might change. I hope I run into some of you there.

harry the ASIC guy

Oasys for FPGA Synthesis? Hmmmm….

Wednesday, June 9th, 2010

A friend asked me what I thought about Oasys’ announcement last week that Juniper Networks was now a customer of theirs. I’ll admit that I was lukewarm. On the one hand, a large high-end networking chip is exactly the sweet spot for a fast synthesis tool. On the other hand, it did not change the fact that the number of these large designs is dwindling and that the industry is looking more towards the front-end of the design cycle than the back.

So, today he asked me what I thought about Oasys’ announcement of its partnership with Xilinx. Now this was interesting. Here is what I wrote back:


I’m not surprised. I had heard from some people that they had funding from Xilinx all along. Of course, they don’t say that in the press release :)

Truthfully, I think the FPGA market may be a better play than ASIC for a few reasons:

  1. FPGA design starts are growing while ASIC starts are shrinking
  2. They do not have to compete with Synopsys and Cadence for market share. Those would be bloody battles requiring a lot of resources that Oasys does not have. Synopsys would win by attrition.
  3. FPGA synthesis is truly a bottleneck for FPGA designs. The debug loop for most people is design => synthesize/P&R => debug => fix error => synthesize/P&R => … It’s not uncommon for there to be dozens of these loops to get an FPGA working. And synthesis on a large FPGA can be an overnight run. If they can turn that into a half hour, then that changes the whole method of debug and can save weeks of schedule.

On the down side, ASPs for FPGA synthesis tools are effectively $0, since Xilinx and Altera give theirs away for free, although Synopsys (Synplicity) and Mentor do sell FPGA synthesis tools. This was discussed very recently on Olivier Coudert’s blog.

Will be interesting to watch.


What do you think?

harry the ASIC guy

P.S. Oasys, can you get some real blogging software on your blog so people can leave their comments and thoughts there on your site and not on mine? I don’t mind the traffic, but you are missing out on building a sizable following. Just some friendly advice.