Theodoric and Me

December 18th, 2010

Several of you have inquired what’s been going on and why it’s been so long between blog posts. So, here’s the deal.

It’s been a rough few months.

I don’t feel like going into too much detail, but our family has been hit by a pretty difficult streak of illnesses. Thankfully, our immediate family is fine, but we lost my mother-in-law to an illness and both my parents have spent the better part of the last few months in the hospital or nursing care. And, not the least, we lost our beloved family dog Mookie as well.

Family priorities being what they are, my time and strength have been allocated elsewhere. Hopefully, this post will be the transition back to a more regular schedule going forward.

These several months dealing with the medical system have been eye-opening, and not in a good way. Not that I thought everything was great beforehand. You see, my father was hit by a car several years ago, and I got a good look then at the "sausage factory" that is the US medical system. Mistakes, inefficiencies, and just plain neglect are the status quo for most who need hospital or nursing care and are not able to strongly advocate for themselves or have someone do so on their behalf.

I could go on and on with stories, and maybe I will someday, but here are just a few of the moments that I recall the most:

  • My father’s medical records were faxed from the hospital to a nursing facility when he was transferred. Sounds good, except the original was on 2-sided paper and the fax was sent 1-sided, so they only had every other page.
  • While at a nursing home, my mother-in-law developed a wound so neglected that the staff did not even notice it until she needed a blood transfusion.
  • A nurse insisted that my father had the correct care for 2 wounds even though I could plainly tell that she had them reversed. It took a whole day to get her to admit that she “might” have been wrong and check with the doctor.
  • My mother, who was unable to feed herself due to her condition, had her breakfast tray delivered and left sitting there. When I showed up just before lunch and pointed it out, they were going to feed her the breakfast that had been sitting there for 3 hours. Yummy, 3-hour-old milk.
  • Numerous mistakes made while hand-copying medication lists when transferring between facilities. Turns out nurses don’t write any more neatly than doctors.
  • My father acquired a wound on his heel while he was left in bed with a broken leg. The wound then acquired an MRSA infection that took 6 months to heal.
  • My father did not receive any antibiotics for an infection that gave him a 104 degree fever because he could not recall if he had any allergies to medications.

I’ve found that, unless I am being a pain-in-the-ass to the staff, I’m not really doing enough to make sure my parents get the quality of care they deserve. That should not be the case.

According to some studies I’ve seen referenced, there are 225,000 deaths annually in the US due to medical errors, which is almost 10% of all deaths in the US.

To us engineers who design the most complex SoCs and systems, it seems unfathomable that our medical system is still mostly using pen and paper. How hard could it be to have a central location to store all medical information on each individual? So a doctor, or even a paramedic in the field, can access your entire medical history in seconds and know exactly what your situation is. So medications follow the individual and are correctly identified. So any doctor can access any report, to see when the last flu shot was given and whether there was an adverse reaction.

What is most frustrating is that this is a very solvable problem. We have the technology. But, as usual, politics gets in the way. Even though last year’s stimulus package put billions aside to create such a database, privacy “advocates” try to block progress.

I don’t want to use this post to get on my pulpit and preach. And I’m not trying to advocate for one political party or the other. So, I’m sorry if it comes off that way. But, come on people, can’t we just figure something out to bring us into the 21st century?

harry the ASIC guy


EDA: The Next Big Things

October 10th, 2010

As most of you know, I’ve been a big advocate for using technology to do more and more online. As an example, back in April, when the volcano in Iceland was causing havoc with air travel in Europe, I wrote a post on the Xuropa blog entitled “What’s in Your Volcano Kit?” In that post, I urged EDA companies to develop a kit of online tools to communicate and collaborate with current and prospective customers and the industry in general.

Well, it’s good to know that people are reading my blog and following my advice! ;)

One such tool that has become very popular in the last year is the virtual conference: an event sponsored either by media companies or the EDA companies themselves, with several sessions throughout the day on a variety of topics. For us designers, these events allow us to "drop in" without leaving our desks or investing additional time or cost in traveling to and from the event. Certainly, it is not as rich an experience as being there live, but it's more complete than the standard single-topic webinar that is a thinly disguised product pitch.

Since my advocacy was so fundamental in bringing these events about, I am very excited to be taking part in one of these upcoming virtual conferences. I will be moderating a session entitled "System-on-Chip: Designing Faster and Faster" at the upcoming "EDA Virtual Conference - EDA: The Next Big Things" on October 14. Here is a brief overview of my session, which will include presentations by Synopsys, Sonics, and Magma.

High-speed digital design presents three important challenges: creating functional IP that performs well, combining IP blocks quickly to form a system, and being sure the system performs as expected with no surprises. EDA is allowing designers to create, simulate, connect, and deliver SoCs in new and exciting ways by combining and verifying IP blocks faster than ever. Very fast digital IP, with clock speeds as high as 2 GHz, is uncovering new issues that EDA and IP teams are working together to solve.

This session looks at the trends in digital IP, interconnect technology, issues in maintaining signal integrity, on-chip instrumentation, and more ideas to create sophisticated SoC designs and get chips to market quickly. Experts will discuss what they are seeing as clock speeds increase, tools capable of identifying issues, and ways to make sure a high-speed SoC functions right the first time.

There are also 4 other 1-hour sessions during the day.

You can register for the event here. I hope you can make it.

harry the ASIC guy


Scott Clark on EDA Clouds

August 8th, 2010

Although I had heard his name mentioned quite often, it wasn't until this year at DAC that I finally met Scott Clark for the first time. Scott was describing how, as Director of Engineering Infrastructure at Broadcom, he led a project to virtualize Broadcom's internal data center in order to transform it into a private cloud. It was a great discussion. We had lunch a few weeks later to talk about his new business, Deopli, a company he founded to help other semiconductor and EDA companies improve their compute infrastructure operations in similar fashion.

So, when I saw Dan Nenni’s blog post on cloud computing and some of the responses, I thought I’d contact Scott. You see, as opposed to most of those commenting on Dan’s post, Scott has actually taken EDA tools and moved them to the cloud, so he knows what he’s talking about. Scott was kind enough to contribute a blog post on the subject, so please enjoy.

__________

Harry the ASIC Guy pointed me to Dan Nenni’s Silicon Valley Blog to take a look at this post regarding Daniel Suarez’s books Daemon and Freedom. His post intrigued me enough to download the first book to my iPad to get a feel for the style and atmosphere. That was good enough that I plan to read both. You can read Dan’s post to see his overview of the books, but at the end of his post, he poses a question that seemed to spark lots of conversation and varying opinions. His question was “Who can be trusted to secure Darknet (Cloud Computing)?”

I think Dan was making reference to concepts in the book where all data in the world becomes controlled by a finite set of service providers, and therefore creates an exposure based on the singularity of the solution. His references hit pretty close to home in Apple, Microsoft and Google, but that did not seem to be the focus of the responses. Because Dan's background (and blog) is primarily in the EDA / Semiconductor space, the responses seemed to fall into the category of "Should semiconductor companies use cloud computing?" and the opinions seemed to align on the two ends of the spectrum. There were a few respondents who felt that EDA would never ever move into the cloud, or gave somewhat skewed definitions of "cloud" to say "it's impossible," but for the most part it was refreshing to see some open-minded views of what was possible and how things could work.

I was particularly intrigued by Dan's comment that he felt foundries would venture into the cloud hosting space. Given the history of the fabless semiconductor space, how can that not make perfect sense! The lead-up to the creation of foundries was that internal manufacturing was growing in capacity and complexity to the point that it made more sense to have it done externally. The same dynamics are happening in the datacenter space for chip design today.

Some of the comments were very accurate in my experience, so just to highlight a few (please read the blog for specifics so I don't misquote). Daniel Payne made the observation that semiconductor companies will start by creating their own private clouds, and that is exactly where we are today (compute clusters really are private clouds). James Colgan injected sanity throughout and made some very astute observations about the functional dynamics and applicability of cloud to certain parts of a design flow. I can't say how much I agree with Kevin Cameron's comments on security; cloud has the potential to be a huge boost in security for the industry. Tom Anderson indicated that he is already doing chip design using Amazon EC2 resources, and I think there are many more like Tom out there. One of the last postings to date is by Lou Covey, and his opinion is that cloud for the industry is inevitable; I happen to agree with that. It's not that we "have to" but more that "this is the right answer for the business, and we should do the right thing".

One of the missing concepts I noticed is that the discussion focused on generic cloud solutions, not industry-specific ones. You will see the development of EDA-specific cloud solutions that are very focused on EDA customers, and in the beginning these will be private clouds with technology added for elastic expansion. That said, looking at cloud for the EDA industry, there are still going to be several roadblocks to adoption that will need to be addressed:

  • Ego – getting around the perception that IT is a core competency of chip design companies. The core competency of a chip design company should be … chip design.
  • Cost – getting around the expectation that cloud should cost ½ as much as what I am currently paying. There are many economies of scale and efficiencies that cloud brings. Cloud is an opportunity for cost avoidance as time goes forward, not a refund policy.
  • Trust – letting go of what is a critical function / resource and having confidence that you can still get the results necessary. This industry has a very powerful model to refer to: how manufacturing was released to the foundries and successful partnerships were formed.
  • Control – how to let go of a critical resource, and still maintain control over the resources, costs, schedules, and dynamics of capacity / priority decisions.
  • Security – probably the most wielded blade in the “you can’t do it” arsenal, but also probably the most misunderstood.
  • Performance – the final roadblock, which is the one with the most technical merit, is performance. There are many different facets to performance, but it will primarily fall into “internal cluster performance” and “display performance”.

From my perspective, the ego part we can get around. Current conversations with many EDA companies indicate they are already leaning in this direction, which is a good sign.

The cost issue is far more ambiguous. There are as many expectations of cloud as there are definitions, but invariably the expectations are rooted in economics. Given that, the only answer seems to be to create a realistic model for cost, present the data, and let nature take its course. There really is a cost benefit, so companies will want to realize it.

Trust seems like it should be the easy part for this industry, but it is proving to be more stubborn than that. I think that is mostly because of the implied threat to job security for the people who are currently performing the tasks (who are usually the people receiving the presentation about outsourcing their job). EDA companies should examine their own history to see what to do and how to do it.

The control front falls into the same category as trust. The same way that fabless semiconductor companies created internal organizations and positions for managing the outsourcing to foundries, that model should be applied to the outsourcing of computational infrastructure. That is not to say there will not be contention issues for capacity and priority. The cloud suppliers will need to make sure they have enough resources to provide sufficient capacity to their customers, or they will not be the supplier for long. Again, foundries will be a great model to look at for this.

On the security front, cloud will at a minimum give data points to show how weak internal security has been historically. Applying best security practices in a consistent manner should actually help evolve an industry-specific cloud security solution to better address security issues. And for the time being, we can just avoid the multi-tenant aspects of security by maintaining isolation: private clouds with shared dynamic resources.

And finally, given that we are talking about EDA-specific clouds, they will be specifically designed to have "internal cluster performance" appropriate for EDA. They will be designed exactly as we would design that cluster for a company's private datacenter. The tricky part will be in addressing display performance issues for functions like custom layout and board design, where network latency impacts the engineer's working style.

So really this boils down to proper execution by the EDA cloud providers, plus one technical hurdle, display latency, which has many ways to be addressed. There is a lot of money and attention being aimed at these issues and this industry, and no real reason why it will not succeed. Some companies may adopt at a slower rate than others, but I believe this is the direction everyone goes eventually. Thanks Dan for a great read and thanks Harry for pointing me at it.

__________

Scott Clark has been an infrastructure solution provider in the EDA/Semiconductor industry for the last 20 years, working for companies like Western Digital, Conexant, and Broadcom. He holds a Bachelor of Science in applied mathematics from San Diego State University and is currently President and CEO of Deopli Corporation. You can follow Scott on his blog at HPC in the Clouds.


Is 2D Scaling Dead? - Other Considerations

July 11th, 2010

(Part 4 in the series Which Direction for EDA? 2D, 3D, or 360?)

In the last 2 posts in this series, I examined the lithography and transistor design issues that will need to be solved in order to save 2D scaling as we know it. In this post I will look at several other considerations.

For the moment, let's assume that we are able to address the lithography and transistor design issues I identified in the previous posts. TSMC recently announced it will take delivery of an EUV lithography machine, so let's assume they are successful in making the move to the 13.5 nm wavelength. IBM, TSMC, and Intel are already using multi-gate FETs in their most advanced process development and the ITRS predicts they will be standard at the 32nm node, so let's assume that works out as well. If so, are we home free?


Not so fast!


There are still numerous technical challenges and one big economic one. First the technical:


Process variability refers to the fact that circuit performance can vary based upon variability in the wafer processing. For instance, let's say we are printing 2 overlapping rectangles on a die. Due to normal process variability, those rectangles can vary from the ideal in size (smaller or larger), can be shifted (north, south, east, west), or can be offset from each other. Thicknesses of processing layers vary as well. The amount of doping can vary. Physical steps such as CMP (Chemical Mechanical Polishing) can introduce variability. These variabilities tend to be fixed amounts, so at large process nodes they don't make much difference. But as we get smaller, they become significant. If we just take the old approach of choosing a 3-sigma range to define best-case and worst-case processing corners, the performance at smaller, more variable nodes may not be much greater than at the larger, less variable nodes.


This process variability introduces performance variability, and not always in predictable ways. For instance, if two related parameters vary equally based on oxide thickness, and all we care about is the ratio of these parameters, then the variation may cancel out. But if they vary in opposite directions, the effect may be worsened. Careful design and layout of circuits can make process variations cancel out with little net effect, but this takes enormous effort and planning, and still you cannot account for all variation. Rather, we just have to live with the fact that process variation could cause ±20, 30, or even 50% performance variation.
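To see why correlation matters so much, here is a toy Monte Carlo sketch in Python. Everything in it (the Gaussian model, the 3.3% sigma) is an illustrative assumption, not data from any real process:

```python
import random

# Toy Monte Carlo for the cancellation argument above. Two parameters share
# a single Gaussian process factor p (think oxide thickness). If both move
# together, their ratio is immune to the variation; if they move in opposite
# directions, the ratio varies about twice as much as either parameter alone.
N = 100_000
random.seed(42)

correlated = []
anticorrelated = []
for _ in range(N):
    p = random.gauss(0.0, 0.033)                # ~3.3% one-sigma process variation
    correlated.append((1 + p) / (1 + p))        # both parameters track p: ratio = 1 exactly
    anticorrelated.append((1 + p) / (1 - p))    # parameters move oppositely: ratio ~ 1 + 2p

def sigma(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

print("ratio sigma, correlated    :", sigma(correlated))      # ~0.000
print("ratio sigma, anticorrelated:", sigma(anticorrelated))  # ~0.066, i.e. 2x worse
```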


There are some methods to account for this variation for digital designs, the most mainstream being statistical static timing analysis (SSTA). SSTA recognizes that process variation results in a circuit performance distribution curve. Instead of drawing hard 3-sigma limits on the curve to define processing "corners", as is done with traditional STA, SSTA allows designers to understand how yield varies with variability. For instance, if the designer wants to stick with 3-sigma bounds to achieve a 90% yield, then he may need to accept 500 MHz performance. However, if he wants to be more aggressive on timing, he may be able to achieve 600 MHz by accepting a lower 75% yield for parts that fall within a smaller 2-sigma range. SSTA helps designers make these choices.
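Here is a minimal sketch of the mechanics, assuming (purely for illustration) that the critical-path delay is normally distributed; the mean and sigma are invented, so the exact 500/600 MHz figures above won't fall out:

```python
from statistics import NormalDist

# Hypothetical critical-path delay distribution: mean 1.67 ns, sigma 0.13 ns.
# The clock period must cover the slowest part we intend to ship, so a target
# yield Y maps to the delay at the Y-th percentile of the distribution.
delay = NormalDist(mu=1.67e-9, sigma=0.13e-9)   # seconds

for target_yield in (0.75, 0.90, 0.997):
    worst_delay = delay.inv_cdf(target_yield)   # delay that covers this fraction of parts
    fmax_mhz = 1.0 / worst_delay / 1e6
    print(f"yield {target_yield:6.1%} -> fmax ~ {fmax_mhz:.0f} MHz")
# Higher yield demands covering more of the tail, so fmax drops: exactly the
# yield-versus-performance trade-off SSTA lets a designer quantify.
```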


But SSTA is not a silver bullet. Process variability can affect hold times to the extent where they are very difficult to fix. Analog and mixed-signal circuits are much more susceptible to process variability since there are many more performance parameters designers care about. Companies like Solido are trying to attack this specific process variability issue, but the cost in time and analysis (e.g. Monte Carlo simulation) is large. And process variability can just plain break a chip altogether. This will only get worse as the dimensions shrink.


Yield is the first cousin to process variability. As discussed in the preceding section, there is a direct tradeoff between performance and yield due to process variability. And as process complexity increases and design margins shrink, yield surely will suffer. There’s a real question whether we’ll be able to yield the larger chips that we’ll be able to design.


Crosstalk and signal integrity issues are exaggerated at smaller nodes and are more difficult to address. According to a physical design manager I spoke with recently, the problem is that edge rates are faster and wires are closer together, so crosstalk-induced delay is greater. Fixing these issues involves spreading wires or using a lower routing utilization, which defeats some of the benefit of the smaller node. And that assumes you can even identify the aggressor nets, of which there may be several. It's not uncommon for days to weeks to be spent fixing these issues at 45nm, so how long will it take at 22nm or lower?


Process variability and signal integrity are just 2 of the more prominent technical issues we're hitting. In fact, pretty much everything gets more difficult. Consider clock tree synthesis for a large chip needing low skew and complex power gating. Or verifying such a large design (which merits its own series of posts). What about EDA tool capacity? And how are we going to manage the hundreds of people and hundreds of thousands of files associated with an effort like this? And let's not forget the embedded software that runs on the embedded processors on these chips. A chip at these lower nodes will be a full system and will require a new approach. Are we ready?


And believe it or not, we’re even limited by the speed of light! A 10 Gbps SerDes lane runs at 100ps per bit, or the time it takes light to travel 3cm, a little over an inch. Even if we can process at faster and faster speeds on chip, can we communicate this data between chips at this rate, or does Einstein say “slow down”?
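The arithmetic is easy to check; the back-of-the-envelope sketch below assumes a typical "half the speed of light" propagation factor for FR-4 board traces, which is a ballpark assumption rather than a measured number:

```python
# How far does a signal get in one bit time of a 10 Gbps SerDes lane?
C = 299_792_458           # speed of light in vacuum, m/s
bitrate = 10e9            # 10 Gbps
bit_time = 1.0 / bitrate  # 100 ps

print(f"bit time: {bit_time * 1e12:.0f} ps")
print(f"vacuum  : {C * bit_time * 100:.1f} cm per bit")        # ~3.0 cm
print(f"FR-4    : {0.5 * C * bit_time * 100:.1f} cm per bit")  # ~1.5 cm, assuming ~c/2 in board material
```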


Enough of the technical issues, let’s talk economics.


Cost is, and always has been, the biggest non-technical threat to 2D scaling. Gordon Moore considered his observation to be primarily economic, not technological. In the end, it’s not about how much we can build, but how much we can afford to build. There are several aspects of cost, so let’s look at each.


Cost of fabrication is the most often quoted and best understood. Although precise predictions vary, it's clear that all the breakthroughs required in lithography, transistor design, and other areas will not come cheaply. Nor will the facilities and manufacturing equipment necessary to implement these breakthroughs. $5B is not an unreasonable estimate to construct and equip a 22nm fab. When it costs $5B to ante up just to get into the game, we're going to see some semiconductor companies fold their hands. We're already seeing consolidation and collaboration in semiconductor fabrication (e.g. Common Platform, Global Foundries) and this will increase. Bernard Meyerson even spoke of a concept he called radical collaboration, in which competitors collaborate on and share the cost of the expensive basic science and R&D required to develop these new foundries and processes. We're going to need to be creative.


Cost of design is also becoming a challenge. Larger chips mean larger chip design projects. Although I’ve not seen any hard data to back this up, I’ve seen $100M mentioned as the cost to develop a current state-of-the-art SoC. Assuming most of the cost is labor, that’s equivalent to over 200 engineer-years of labor! What will this be in 5 years? Obviously, a small startup cannot raise this much money to place a single bet on the roulette wheel, and larger companies will only be willing to place the safest bets with this type of investment. They will have to be high-margin high-volume applications, and how many of those applications will exist?


In the end, this all boils down to financial risk. Will semiconductor manufacturers be willing to take the risk of generating enough revenue to cover the cost of a $5B+ fab? Will semiconductor companies be willing to take the risk of generating enough revenue to cover the cost of a $100M+ SoC? For that matter, will there be many applications that draw $100M in revenue altogether? For more and more players, the answer will be "no".


Despite all these increasing chip costs, it is important to take a step up and consider the costs at the system level. Although it may be true that a 32nm 100M-gate chip is more expensive than a 90nm 10M-gate chip, the total system costs are certainly reduced due to the higher level of integration. Maybe 5 chips become 1 chip with higher performance and lower power. That reduces the packaging and product design cost. Perhaps other peripherals can now be incorporated that were previously separate. This will of course depend on each individual application; however, the point is that we should not stay myopically focused on the chip when we are ultimately designing systems. System performance is the new metric, not chip performance.

In the next blog post in this series, I’ll finish up the discussion on 2D scaling by looking at the alternatives and by making some predictions.

harry the ASIC guy


Is 2D Scaling Dead? Looking at Transistor Design

June 23rd, 2010

(Part 3 in the series Which Direction For EDA: 2D, 3D, or 360?)

Replica of the First Transistor

In the last blog post, I started to examine the question "is 2D scaling really dead or just mostly dead?" I looked at the most challenging issue for 2D scaling, lithography. But even if we can somehow draw the device patterns on the wafer at smaller and smaller geometries, that does not necessarily mean the circuits will deliver the performance (speed, area, power) improvements that Moore's Law has delivered in the past. Indeed, as transistors get smaller (gate length and width), they also get thinner (oxide thickness). There are limits to the improvements we can gain in power and speed. We'll talk about those next.

Transistor Design

First, consider what has made 2D scaling effective to date. The move to smaller geometries has allowed us to produce transistors that have shorter channels, operate at lower supply voltages, and switch less current. The shorter channel results in lower gate capacitance and higher drive, which means faster devices. And the lower supply voltage and lower current result in lower dynamic power. All good.

At the same time, these shorter channels have higher sub-threshold and source-drain leakage currents, and the thinner gate oxide results in greater gate leakage. At the start of Moore's Law, leakage was small, so exponential increases were not a big deal. But at current and future geometries, leakage power is on par with, and soon exceeding, dynamic power. And we care more today about static power, due to the proliferation of portable devices that spend most of their time in standby mode.

[Figure: leakage power vs. dynamic power]

The reduction in dynamic power is also reaching a limit. Most of the dynamic power reduction of the last decade was due to voltage scaling. For instance, scaling from 3.3V to 1.0V reduces power by 10x alone. But reductions below 0.8V are problematic due to the inherent drop across a transistor and device threshold voltages. Noise margins are fast eroding, and that will cause new problems.
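That 10x comes straight out of the switching-power equation P = aCV^2f: with capacitance, activity, and frequency held constant, the savings scale as the square of the voltage ratio. A quick sketch:

```python
# Dynamic power P = a * C * V^2 * f, so at fixed a, C, and f the power
# ratio between two supply voltages is just (V_old / V_new)^2.
def dynamic_power_savings(v_old, v_new):
    return (v_old / v_new) ** 2

print(f"3.3 V -> 1.0 V: {dynamic_power_savings(3.3, 1.0):.1f}x")   # ~10.9x, the "10x alone" above
print(f"1.0 V -> 0.8 V: {dynamic_power_savings(1.0, 0.8):.2f}x")   # ~1.56x, diminishing returns
```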

Still, as with lithography, we haven’t thrown in the towel yet.

Strained silicon is a technique that has been in use since the 90nm and 65nm nodes. It involves stretching apart the silicon atoms to allow better electron mobility, and hence faster devices (up to 35% faster) at lower power consumption.

Hi-k dielectrics (k being the dielectric constant of the gate oxide) can reduce leakage current. The silicon dioxide is replaced with a material such as hafnium dioxide with a larger dielectric constant, thereby reducing leakage for an equivalent capacitance. This technique is often implemented together with another modification: replacing the polysilicon gate with a lower-resistance metal gate, hence increasing speed. Together, the use of hi-k dielectrics with metal gates is often referred to by the acronym HKMG and is common at 45nm and beyond.
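One way to see the benefit is through the equivalent oxide thickness (EOT): a high-k film of physical thickness t behaves, capacitance-wise, like SiO2 (k = 3.9) of thickness t * 3.9 / k, so the physical film can be much thicker and tunneling leakage drops accordingly. The k of about 25 used in this sketch is a commonly quoted ballpark for hafnium-based dielectrics, not a specific process number:

```python
# Equivalent oxide thickness: how thin would SiO2 (k = 3.9) have to be to
# match the gate capacitance of a high-k film of thickness t_phys_nm?
def eot_nm(t_phys_nm, k):
    return t_phys_nm * 3.9 / k

# ~3 nm of a k~25 hafnium-based dielectric acts like ~0.47 nm of SiO2,
# but is physically thick enough to suppress tunneling leakage.
print(f"{eot_nm(3.0, 25.0):.2f} nm EOT")
```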

A set of techniques commonly referred to as FinFET or Multi-gate FET (MuGFET) breaks the gate of a single transistor into several gates in a single device. How? Basically by flipping the transistor on its side. The net effect is a reduction in effective channel width and device threshold with the same leakage current; i.e. faster devices with lower dynamic power and the same leakage power. But this technique is not a simple "tweak"; it's a fundamental change in the way we build devices. To quote Bernard Meyerson of IBM, "to go away from a planar device and deal with a non-planar one introduces unimaginable complexities." Don't expect this to be easy or cheap.

Multigate FET - Trigate

A more mainstream technology that has been around a while, Silicon-on-Insulator (SOI), is also an attractive option for very high performance ICs such as those found in game consoles. In SOI ICs, a thick layer of an insulator (usually silicon dioxide) lies below the devices instead of silicon as in normal bulk CMOS. This reduces device capacitance and results in a speed-power improvement of 2x-4x, although with more expensive processing and a slightly more complex design process. You can find a ton of good information at the SOI Consortium website.

In summary, we are running into a brick wall for transistor design. Although there are new design techniques that can get us over the wall, none of them are easy and all of them are expensive. And the new materials used in these processes create new kinds of defects, hence reducing yield. With some work, the techniques above may get us to 16nm or maybe a little bit further. Beyond that, they're talking about graphene transistors and carbon nanotubes, pretty far out stuff.

In my next post, I’ll look at some of the other considerations regarding 2D scaling, not the least of which is the extraordinary cost.

harry the ASIC guy


Is 2D Scaling Really Dead or Just Mostly Dead?

June 20th, 2010

(Part 2 in the series Which Direction For EDA: 2D,3D, or 360?)

"Well, it just so happens that your friend here is only mostly dead. There's a big difference between mostly dead and all dead." - Miracle Max to Inigo Montoya, The Princess Bride

__________

In the film The Princess Bride, Westley lies motionless and apparently dead on a table in the cottage of Miracle Max. After some complaining from Max and nagging from Max's wife, Max devises a chocolate-covered Miracle Pill that revives the mostly dead Westley so he can save his true love Buttercup from the evil Prince Humperdink.

In our story, 2D scaling is the mostly dead Westley, the semiconductor manufacturers are Miracle Max trying to create a Miracle Pill, and Gordon Moore is Buttercup waiting to be rescued. (I’m not sure who’s Prince Humperdink and Max’s wife, but if you have an idea, please let me know.)

Moore's law (the colloquial term for 2D scaling) states that certain metrics of semiconductor technology performance improve at a rate of approximately 2x every 2 years. It can refer to transistor size (channel length), density (gates / sq. mm), cost ($/gate), power (nA/gate), capacity (gates), or a combination of these. Indeed, over the last 2 decades we can draw a pretty straight line (on a log scale) to track the progress of these various metrics. Here is one such curve below that Michael Keating presented at SNUG 2010:

2-D Scaling

To be sure, achieving Moore’s law has not just been a matter of driving down the scaling road with the top down and the tunes on. There have been numerous roadblocks in the past that could have ended the trip, but we’ve always been able to find a detour or new road to keep the trip on schedule. Some are emboldened by our previous innovations and say “yeah, they’ve predicted the end of Moore’s law before and we always find a new way.” Others say that this time is different, that we are hitting barriers of physics; that we are running out of road to drive and will need to find a new means of transportation altogether. So, let’s look at some of these barriers and what may be able to get us beyond.

Lithography

Most silicon-based semiconductors are produced with light that has a wavelength of 193nm, which is over 4x the minimum feature size of the current 45nm production technology. That does not seem possible, but tricks such as optical proximity correction (OPC) actually allow it to work. However, according to most experts in this field, those techniques will stop working well very soon and new techniques will be needed.

Immersion lithography is already in use (e.g. TSMC 45nm) and will likely be needed starting at 32nm. Rather than air between the lens and the wafer, a liquid (currently very pure water) with an index of refraction > 1 is used to focus the beam and achieve smaller feature sizes. As you can imagine, this is not a simple process extension, since the water needs to be contained, free of air bubbles, and then cleaned up without damaging the wafer or interfering with other aspects of the process. It can be done, but it adds to the cost.
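The payoff shows up directly in the Rayleigh resolution criterion, minimum half-pitch = k1 * wavelength / NA, because immersion lets the numerical aperture exceed 1. The sketch below uses representative textbook values (0.93 dry NA, 1.35 water-immersion NA, a k1 floor of 0.25 for single exposure); these are assumptions, not any particular vendor's numbers:

```python
# Rayleigh criterion: minimum printable half-pitch = k1 * lambda / NA.
def half_pitch_nm(wavelength_nm, na, k1=0.25):
    return k1 * wavelength_nm / na

for label, na in (("dry 193 nm      ", 0.93),
                  ("immersion 193 nm", 1.35)):
    print(f"{label}: ~{half_pitch_nm(193, na):.0f} nm half-pitch")
# ~52 nm dry vs ~36 nm with water immersion: roughly one extra process node
# from the same 193 nm light source.
```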

Double patterning is another technique currently in use and which will likely be required starting at 22nm. Instead of shrinking the patterns on the reticle to smaller feature sizes, the wafer is exposed to two (or more) different reticles each offset from the other to achieve a net effect of smaller feature size. After each exposure the wafer is etched, so this increases the number of process steps and hence the cost as does the need for multiple reticles for each layer.

Going beyond 22nm will likely require a smaller wavelength of light. Extreme Ultraviolet (EUV) has a wavelength of 13.5nm and will likely take us to the 4nm node, but this technology is just now under development and production solutions may be very expensive when commercialized. Bernard Meyerson of IBM recently cited the cost of one such machine as $100M.

One technique that may actually reduce cost is called Electron-Beam Direct Write (EBDW), which is based upon traditional E-beam lithography that has been around since the early 1990s. Instead of using an optical reticle to define the patterns to be exposed on a wafer, E-beam lithography uses an electron beam to draw features on the wafer directly. This technology can be much more precise in feature size than optical methods, but it is slower since the beam needs to traverse the entire wafer. Methods are being developed to utilize a massive number of beams to speed up processing. Previous E-beam systems were very expensive and these will be no exception; on the other hand, there will be up-front cost savings since no masks are needed for EBDW. Another benefit: shorter fab runs will be more economically feasible since there won't be as large an upfront mask cost to amortize.

There are some other next-generation lithography approaches being considered that you might wish to look at. All of these approaches have been proven to some extent, but some will require much more refinement to be technically and cost effective. Given our resourcefulness in the past, it's likely that one or more methods will emerge to allow us to reach the 4nm node in about 12 years. But that may be the end of the line. As Michael Keating noted at SNUG this year, at 4nm you're switching 3 electrons, so it does not seem you can get much further without some advances in transistor design.

Next Blog Post: Is 2D Scaling Dead? - Looking at Transistor Design

harry the ASIC guy


Brian Bailey on Unconventional Blogging

June 15th, 2010


(Photo courtesy of Ron Ploof)

I had the pleasure yesterday of interviewing Brian Bailey on the Synopsys Conversation Central stage at DAC. We discussed his roots in verification, working with the initial developers of digital simulation tools, and his blogging experiences these past few years. There are, of course, even a few comments on the difference between journalists and bloggers ;)

You can listen to this half hour interview at the Synopsys Blog Talk Radio site. I’d be interested in your comments on the show and the format as well. It was pretty fun, especially in front of a live audience.

At 12:30 PDT today, I’ll be doing another interview on Security Standards for the Cloud. You can tune in live on your computer or mobile device by going to the main Synopsys Blog Talk Radio Page. So, even if you’re not here at DAC, you can still partake.

harry the ASIC guy


Where in the DAC is harry the ASIC guy?

June 11th, 2010

Last year's Design Automation Conference was kind of quiet and dull, muted by the impact of the global recession, with low attendance and just not a lot of really interesting new developments. This year looks very different; I'm actually having to make some tough choices about which sessions to attend. And with all the recent acquisitions by Cadence and Synopsys, the landscape is changing all around, which will make for some interesting discussion.

I’ll be at the conference Monday through Wednesday. As a rule, I try to keep half of my schedule open for meeting up with friends and colleagues and for the unexpected. So if you want to chat, hopefully we can find some time. Here are the public events that I have lined up:

Monday

10:30 - 11:00 My good friend Ron Ploof will be interviewing Peggy Aycinena on the Synopsys Conversation Central stage, so I can't miss that. They both ask tough questions, so that one may get chippy. (Or you can participate remotely live here)

11:30 - 12:00 I’ll be on that same Synopsys Conversation Central stage interviewing Verification Consultant and Blogger Extraordinaire Brian Bailey. Audience questions are encouraged, so please come and participate. (Or you can participate remotely live here)

3:00 - 4:00 I’ll be at the Atrenta 3D Blogfest at their booth. It should be an interesting interactive discussion and a good chance to learn about one of the 3 directions EDA is moving in.

6:00 - Cadence is having a Beer for Bloggers event, but I'm not sure where. For the record, beer does not necessarily mean I'll write good things. (This event was canceled since the Denali party is that night.)

Tuesday

8:30 - 10:15 For the 2nd straight year, a large fab, Global Foundries (last year it was TSMC), will be presenting its ideas on how the semiconductor design ecosystem should change in "From Contract to Collaboration: Delivering a New Approach to Foundry".

10:30 - 12:00 I'll be at a panel discussion on "EDA Challenges and Options: Investing for the Future". Wally Rhines is the lead panelist, so it should be interesting as well.

12:30 - 1:00 I’ll be back at the Synopsys Conversation Central stage interviewing James Wendorf (IEEE) and Jeff Green (McAfee) about standards for cloud computing security, one of the hot topics.

Wednesday

10:30 - 11:30 I’ll be at the Starbucks outside the convention floor with Xuropa and Sigasi. We’ll be giving out Belgian Chocolate and invitations to use the Sigasi-Xilinx lab on Xuropa.

2:00 - 4:00 James Colgan, CEO of Xuropa, and representatives from Amazon, Synopsys, Cadence, Berkeley, and Altera will be on a panel discussion, "Does IC Design Have a Future in the Cloud?" You know what I think!

This is my plan. Things might change. I hope I run into some of you there.

harry the ASIC guy


Oasys for FPGA Synthesis? Hmmmm….

June 9th, 2010

A friend asked me what I thought about Oasys’ announcement last week that Juniper Networks was now a customer of theirs. I’ll admit that I was lukewarm. On the one hand, a large high-end networking chip is exactly the sweet spot for a fast synthesis tool. On the other hand, it did not change the fact that the number of these large designs is dwindling and that the industry is looking more towards the front-end of the design cycle than the back.

So, today he asked me what I thought about Oasys' announcement of its partnership with Xilinx. Now this was interesting. Here is what I wrote back:

__________

I’m not surprised. I had heard from some people that they had funding from Xilinx all along. Of course, they don’t say that in the press release :)

Truthfully, I think the FPGA market may be a better play than ASIC for a few reasons:

  1. FPGA design starts are growing while ASIC starts are shrinking
  2. They do not have to compete with Synopsys and Cadence for market share. Those would be bloody battles requiring a lot of resources that Oasys does not have, and Synopsys would win by attrition.
  3. FPGA synthesis is truly a bottleneck for FPGA designs. The debug loop for most people is design => synthesize/P&R => debug => fix error => synthesize/P&R => … It's not uncommon for there to be dozens of these loops to get an FPGA working, and synthesis on a large FPGA can be an overnight run. If they can turn that into a half hour, that changes the whole method of debug and can save weeks of schedule.

On the downside, ASPs for FPGA synthesis tools are essentially $0, since Xilinx and Altera give theirs away for free, although Synopsys (Synplicity) and Mentor do sell FPGA synthesis tools. This was discussed very recently on Olivier Coudert's blog.

Will be interesting to watch.

__________

What do you think?

harry the ASIC guy

P.S. Oasys, can you get some real blogging software on your blog so people can leave their comments and thoughts there on your site and not on my blog? I don't mind the traffic, but you are missing out on building a sizable following. Just some friendly advice.


DAC Yesterday, Today, and Tomorrow

May 28th, 2010

About a week ago, I got an email from someone I know doing a story on how the Design Automation Conference has changed with respect to bloggers since the first EDA Bloggers Birds-of-a-Feather Session 2 years ago. I gave a thoughtful response and some of it ended up in the story, but I thought it would be nice to share my original full response with you.

Has your perception of the differences between bloggers and press changed since the first BOF?

Forget my perception; many of the press are now bloggers! I don’t mean that in a mean way and I understand that people losing their jobs is never a good thing. But I think the lines have blurred because we all find ourselves in similar positions now. It’s not just in EDA … many, if not most, journalists also have a blog that they write on the side.

Ultimately, I think either the traditional "press" or a blog is just a channel between someone with knowledge and people who want information they can trust. What determines trust is the reliability of the source. In the past, the trust was endowed by the reputation of the publication. Now, we all have to earn that trust.

As for traditional investigative journalism (a la All the President's Men) and reporting the facts (the 5 Ws), I think there is still a role for that, but most readers are looking for insight, not just the facts.

What do you think of DAC’s latest attempts to address these differences, e.g. Blog-sphere on the show floor, press room in the usual location?

Frankly, I'm not sure exactly what DAC is doing along these lines this year. Last year bloggers had much the same access to the press room and other facilities as journalists. It was nice to be able to find a quiet place to sit, but since most bloggers are not under deadline to file stories, it is not as critical. Wireless technology is making a lot of this obsolete, since we can pretty much work from anywhere. Still, having the snacks is nice :)

What does the future hold for blogging at DAC?

Two years ago, blogging was the "new thing" at DAC. Last year, blogging was mainstream and Twitter was the new thing. This year blogging will probably be old skool and there will be another "new thing". For instance, I think we're all aware of and even involved in Synopsys' radio show. This stuff moves so fast. So, I think the future at DAC is not so much about blogging as it is about multiple channels of all kinds, controlled not only by "the media", but also by the vendors, independents, etc. Someone attending DAC will be able to use his wireless device to tap into many channels, some in real-time.

Next year, I predict that personalized and location-aware services will be a bigger deal. When you come near a booth, you may get an invitation for a free demo or a latte if your profile indicates you are a prospective customer. You'll be able to hold up your device and see a "Google Goggles"-like view of the show floor. You may even be able to tell who among your contacts is at the show and where they are. Who knows? It will be interesting.

harry the ASIC guy
