Archive for March, 2009

The Weakest Lynx

Thursday, March 26th, 2009

Earlier this week I wrote about the strengths of the new Synopsys Lynx flow offering. Today, the weakest Lynx.

1. Limited 3rd-Party Tool Support.

Synopsys states that Lynx is “flexible and inherently configurable to readily incorporate 3rd-party technology”. And it is true that they have done nothing to prevent you from incorporating 3rd-party tools. They have also done little to help you incorporate them. In most cases, incorporating 3rd-party tools means digging in to understand the guts of how Lynx works. That means getting down and dirty with makefiles and Tcl scripts and mimicking all the special behavior of the standard Lynx scripts for Synopsys tools. For instance, here are a few of the things you might need to do:

  • Break up your Tcl scripts into several smaller scripts in separate directories for the project and block
  • Access special Lynx environment variables by name in your makefiles and Tcl scripts
  • Have your tool output special SNPS_INFO messages, formatted for the metrics GUI to parse out of the log file (see the sketch after this list)
  • Update your scripts for new versions of Lynx whenever any of these formats change.
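
To make the last two items concrete, here is a minimal sketch of what a wrapper script for a 3rd-party tool might look like. Every name in it is a guess on my part: the actual environment variables and the exact SNPS_INFO message format are precisely the sort of thing you would have to mine out of the standard Lynx scripts.

    # Hypothetical wrapper for a 3rd-party tool task in a Lynx-style flow.
    # All variable names, tool names, and message fields are invented for
    # illustration; the real formats live in the standard Lynx scripts.

    # Pick up the block/step context that the flow's makefiles would export.
    set block $env(LYNX_BLOCK)   ;# hypothetical environment variable
    set step  $env(LYNX_STEP)    ;# hypothetical environment variable

    # Run the 3rd-party tool and capture its log.
    exec my_3rd_party_tool -design $block.v -out $block.out > $block.log

    # Emit a metrics message in a guessed-at SNPS_INFO style so the
    # metrics GUI could parse it out of the log file.
    puts "SNPS_INFO METRIC=timing.wns VALUE=-0.12 STEP=$step BLOCK=$block"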

If you are motivated, I’m sure you can hack through the scripts and figure it out. However, to my knowledge (I looked on SolvNet; correct me if I am wrong), there is no application note that clearly documents the steps needed or the relevant files, variables, and message formats to use.

It’s not surprising that Synopsys does not want to make this too easy. One goal of offering the Lynx flow is to encourage the use of an all Synopsys tool flow. If it were truly a wide open flow with easy addition of 3rd-party tools, then there would be less of a hook to use Synopsys tools. (Personally, I disagree with this approach and think Synopsys would be better off offering a truly open flow, but that’s the next post).

As a result, Lynx will be used only by customers running a predominantly Synopsys tool flow. I think that is OK by Synopsys. They’d rather sell a few fewer Lynx licenses than support the use of 3rd-party tools. Unfortunately, for designers using other tools, Lynx does not currently have much to offer.

2. Lynx Is Difficult To Upgrade

One of the complaints about Lynx’s predecessors was that they were not easy to upgrade from version to version. That is because the product is a set of template scripts, not an installed flow. What do I mean?

When you create a project using Lynx, a set of template scripts is copied from a central area and configured for you based on your input. Let’s call this V1.0. As you go through the design process, you customize the flow by changing settings in the GUI, which in turn changes the local scripts that you copied from the central area. Now, let’s say that you want to upgrade to V1.1 because there are some bug fixes or new capabilities you need. You can’t do that easily. You have 2 alternatives:

  1. Create a new project using V1.1 and try to replicate any customizations from the V1.0 project in the V1.1 project. I hope you kept good notes.
  2. Diff the new scripts against the old scripts and then update your V1.0 scripts by hand to get to V1.1.

Admittedly, Synopsys provides release notes that identify what has changed, which will help with approach #2. And they try to avoid making gratuitous variable name changes. Even so, the upgrade process is manual and error-prone. In most cases, for any one project, customers will just stick with the version of Lynx that they started with in order to avoid this mess. Then they’ll upgrade between projects. That negates the benefit of having a flow that is always “up-to-date”.

In my humble opinion, a better way to architect the flow would have been to have a set of global scripts that are untouchable and a set of local scripts that can be customized to override or supplement the global scripts. In that case, a new version of Lynx would replace the global scripts, but the local scripts, where all the customization is done, could remain unchanged.
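
Here is a minimal sketch of that idea. To be clear, none of this is how Lynx actually works; the file names, variables, and hook point are all my invention.

    # Hypothetical global task script, e.g. <global_flow>/scripts/syn.tcl,
    # installed read-only and replaced wholesale on every Lynx upgrade.
    # All names here are mine, not Lynx's.

    # Flow defaults live in the global script.
    set clock_period 2.0
    set compile_args {-map_effort high}

    # Hook point: pull in the project's local overrides, if any exist.
    if {[file exists $env(LOCAL_FLOW)/overrides/syn.tcl]} {
        source $env(LOCAL_FLOW)/overrides/syn.tcl
    }

    # ... the rest of the script runs with the (possibly overridden) settings.

A new version of the flow would then simply replace the global tree, while the overrides directory, where all the customization lives, never gets touched.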

3. Debugging Is Difficult

Have you ever tried to debug a set of scripts that someone else wrote? Even worse, scripts that are broken up into multiple scripts in multiple directories. Even worse, by looking at log files that do not echo directly what commands were executed. Even worse, you were told you never had to worry about the scripts in the first place. And worst of all, when you called for support, nobody knew what you were talking about.

That is what debugging was like in Pilot, Lynx’s most recent predecessor.

I’ve been told that Synopsys has tried to address these issues in Lynx. There is now a switch that will echo the commands to the log files. The Runtime Manager can supposedly locate the errors in the log files and correlate them to the scripts. And now that Lynx is a supported product, the Support Center and ACs should know how to help. Still, I’ll believe it when I see it. From what I understand, many of these features are still a bit flaky, and almost all the Synopsys consultants, the largest user base for the flow, do not use the new GUIs yet.

__________

In summary, Lynx’s main weakness is that it was not originally architected as the forward-compatible, open flow for novice tool users that it is now being positioned as. In fact, it started out as a set of scripts written by Avanti tool experts for Avanti tool experts to use with Avanti tools. Synopsys has done a lot to morph the flow into something that accommodates 3rd-party tools, upgrades more easily, and eases debug, but the inherent architecture limits what can be done.

So, what should have been added to make Lynx better? You’ll want to read the next in the series: The Missing Lynx.

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 3 - Strongest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Strongest Lynx

Monday, March 23rd, 2009

I know. I know. I know.

I said that I was going to publish the final post in a 3-part series on Synopsys Lynx last Friday. However, as I put my notes together, I realized how much there is to say. So, I’m breaking up the last post into 3 separate posts: The Strongest Lynx, The Weakest Lynx, and The Missing Lynx (clever, huh?). First, the Strongest Lynx.

I think that the best way to understand the strengths of Lynx is to consider who the intended customer for this flow offering is. After all, the offering is designed for them. In that regard, since adopting Synopsys Lynx is such a big change in methodology, Synopsys is looking for customers who are already planning some sort of major transition, including:

  • Startup companies who have no design flow to begin with
  • Companies making a significant transition to a new technology node or process (e.g. 90nm => 45nm)
  • Companies expanding their existing design capabilities (e.g. ASIC => COT)
  • Companies moving towards an all Synopsys flow (e.g. vendor consolidation)
  • Companies downsizing their in-house CAD teams

These companies are already committed to some sort of change in design flow, so Synopsys offers them a “buy” alternative to making it in-house. Synopsys Lynx is attractive to the extent that it accelerates that transition and allows the design teams to be productive faster. In that light, the 3 greatest strengths of Lynx seem to be:

1. 75-90% of a working design flow out-of-the-box.

I’m sure methodology experts and tool experts, if given a chance to dig into the Lynx scripts, would find areas to improve the flow. Nonetheless, I can say from experience managing teams using earlier versions (Tiger, Pilot) that Lynx provides a complete design flow that requires very little customization “out-of-the-box”. It is the same design flow that has evolved from almost 10 years of delivering design services and is being used by Synopsys’ own design services organization, so indeed they “eat their own dog food”. Synopsys says that they are doing more regression testing, which should increase quality. There is documentation and training, and Synopsys offers 1 week of on-site assistance to install the flow and start customizing it.

2. A design flow that optimizes across the various tools.

It’s well understood that smaller process geometries require the various tools in the flow to work together more closely, exchanging data and anticipating what the other tools will do downstream. Synopsys claims to have implemented several such flows in Lynx, particularly highlighting their low-power flow. If true, this should yield better results than a point-tool approach.

3.  Ease-of-use features that help average tool users be productive more easily.

Design flows for very small geometries (65nm, 45nm, 32nm) are extremely complex and demand a depth of expertise across all the tools that is difficult to find. As a result, there is a need to simplify the design process and tool usage so that “average” design teams can still implement these chips effectively. The Runtime Manager supposedly frees the designer from having to edit makefiles or Tcl scripts, allowing control of all the appropriate variables through GUI settings and menus, and debug of script errors through the GUI. Similarly, the Management Cockpit promises to provide valuable metrics without digging through log files and reports. If they deliver on these promises, the Runtime Manager and Management Cockpit will make average designers more productive more quickly. I have some doubts, though, especially since these GUIs are brand new and have not had extensive testing. I’d be interested to know whether they run as smoothly as advertised or whether there are issues getting them to deliver.

In summary, Lynx’s strength is in providing a 75-90% complete Synopsys design flow that optimizes across the tools to increase the design quality and provides graphical capabilities to make the flow easier to use for the average non-tool-expert designer. To my knowledge, none of the other major EDA vendors offer anything similar, either in scope or maturity.

If this sounds like an endorsement, you’ll want to read the next in the series: The Weakest Lynx.

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 4 - The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Lynx Design System? - It’s The Flow, Stupid!

Wednesday, March 18th, 2009

(This is the 2nd in a 3-part series on the newly introduced Synopsys Lynx Design System. You can find Part 1 here.)

When Pilot was introduced some years back, one of the bigger discussion points concerned what to call this thing. I’m not talking about whether to call it Pilot or some other name. I’m talking about what the heck it is.

  • Is it a flow?
  • Is it an environment?
  • Is it a system?
  • Is it a platform?

In the end, the marketing folks decided that it was an environment, which included a flow and other stuff like:

  • Tools for prepping IP and libraries
  • A configuration GUI
  • A metrics reporting GUI

Lynx adds a Runtime Manager to the product, so now it is no longer an environment. It’s a Design System. Well, with all due respect to the marketing folks who wrung their hands making this decision, I’d like to say one thing:

It’s the flow, stupid!

Sure, the metrics GUI can create pretty color-coded dashboards that even a VP can understand. “We’re red, dammit. Why aren’t we green?” And the Runtime Manager can configure the flow, launch jobs, and monitor progress, also with pretty colors. And the “Foundry Ready System” … well, I’m still trying to figure out what that even means, even though I know what it is. But it’s the flow at the core of Lynx (nee Pilot, nee Tiger, nee Bamboo) that is the real guts of the product and the reason you’d want to buy it or not. It’s the engine that makes Lynx run. So let’s take a tour.

At the core, the Lynx flow is a set of makefiles and Perl scripts that invoke Synopsys tools with standardized Tcl scripts. (Clarification: all the scripts in the flow are Tcl – one tiny bit of Perl that comes with ICC-DP is re-used, but anything the user touches is going to be in Tcl.) Together, these scripts implement a flow that has been designed to produce very good results across a large number of designs. The flow operates with a standard set of naming conventions and standard directory structures. In all, the Lynx flow covers all the steps from RTL to GDSII implementation.

There are actually 5 major “steps” in the flow:

  1. Synthesis
  2. Design-for-Test (may now be combined with #1)
  3. Design Planning
  4. Place and Route Optimization
  5. Chip Finishing

Lynx Flow

Each of these steps is further broken down into smaller tasks. For instance, Place and Route might be divided into:

  • Placement Optimization
  • Clock Optimization
  • Clock Routing
  • Routing
  • Routing Optimization
  • Post Route Optimization
  • Signal Integrity Optimization

The scripts also implement the analysis tasks such as parasitic extraction, static timing analysis, formal verification, IR drop analysis, etc. In all, they cover everything a design team needs to go from an RTL design to tapeout. If there is a task that is missing (e.g. logic BIST insertion), you can add an entirely new step to the flow by modifying the makefiles by hand, or by using the GUI to create a new task, as sketched below. If you want it to use a 3rd-party tool, you can do that too by having the makefile call that tool. Third-party tools actually are a little more complicated than that, but that gives you the idea. (Clarification from Synopsys: third-party tools can be executed. The system is very open to including other tools – Synopsys just doesn’t promote this loudly. A key thing about the Lynx flow is that you never need to even look at a makefile or execute a make command – “no Makefile knowledge required” – since it is all handled graphically through the GUI. People should not be worried that they have to learn make.)
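
As a rough illustration, here is what the Tcl side of such an added task might look like. The directory layout and variable names below are my own inventions; a real Lynx task would have to follow the flow’s actual conventions.

    # Hypothetical task script for a new "logic BIST insertion" step.
    # The directory layout and variable names are invented for this
    # sketch; they are not Lynx's real conventions.

    set src $env(BLOCK_DIR)/syn/results/$env(BLOCK).v    ;# previous step's output
    set dst $env(BLOCK_DIR)/bist/results/$env(BLOCK).v   ;# this step's output
    file mkdir $env(BLOCK_DIR)/bist/results
    file mkdir $env(BLOCK_DIR)/bist/logs

    # Invoke the (3rd-party or in-house) BIST tool, capturing its log
    # where the flow would expect to find one.
    exec bist_inserter -in $src -out $dst >& $env(BLOCK_DIR)/bist/logs/bist.log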

So, you may ask, what’s the big deal? After all, isn’t this the way that most design teams / CAD teams implement their flows, more or less? What is so special about a set of makefiles and Tcl scripts?

Honestly, you probably could go off and design something very similar on your own. And it might be better or more closely suited to your needs than this standard flow from Synopsys. Only you can make that choice because only you know what is important to you. The advantages of using a flow like Lynx from Synopsys are:

  • 90-95% of what you need you can get off-the-shelf and you can modify the rest if you need to.
  • The flow is being constantly updated with each new major release of the Synopsys tools so you don’t suffer from “flow inertia” and find yourself with an outdated flow.
  • The flow is being constantly tested, not just through regressions, but by the Synopsys consultants using it on real customer projects. So the quality is high.
  • In areas such as power, Synopsys can optimize the flow across multiple tasks, steps, and tools, something that would be hard for non-tool-experts to do.
  • You can use the engineers who would have been designing your flow to do real work.

Of course there are disadvantages as well:

  • It’s an all Synopsys flow, so you have to use Synopsys tools to get the most benefit. If you currently use other 3rd party tools, then the benefit is reduced proportionately. Or you can convert to the Synopsys tool, but that costs money and time and maybe it’s not the best-of-breed as a point tool.
  • The scripts are actually divided into many smaller scripts that call each other. Although very modular, this can be confusing for a novice user who is trying to modify the flow or debug a problem.
  • Lynx has a “strongly preferred” directory structure that is very deep. They do this for some good reasons, but this might go against the norm at your company and ignite a “religious feud”.
  • Once you’ve invested in this flow by training up your organization and building your own scripts and utilities on top, you’re pretty committed to Synopsys. Not a problem if Synopsys is your long term partner, but if you want to fool around or have a fear of commitment, not so good.

The bottom-line is that Lynx provides you with the same tool flow that the Synopsys consultants use on their own projects. If you are using an all or predominantly Synopsys flow, then I think it’s worth looking at.

(Friday: What I like. What I don’t like. And What Could Be Better)

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 3 - Strongest Lynx

Part 4 - The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

A lot of paper…

Tuesday, March 17th, 2009

(The following is the text of an email I received this afternoon from a friend of mine in the Bay Area. I thought it was great and so I am sharing it with you, with his permission. If you would like to help him “unload his burden”, please let me know and I can put you in touch).

__________

Hi all,

Ok, so let me preface this by saying that I know I have a very deep and very hard-to-cope-with mental illness.  Somehow I feel that makes this more acceptable. As you may or may not know, we are moving.  I have decided that the boxes and boxes of IEEE and ACM journals will not be moving out of my storage and to our new home.  This is very hard for me.  It kills me to think about all the work and energy that went into fighting the universe’s entropy to come up with these things, and I CANNOT just take them to the dump (which I know is what I ought to do in a very real and cathartic sense.)

I know they are all available online and will forever be, at this point.  Years from now, I will not have the lone surviving issue of an incredibly important research paper otherwise to be lost to history.  I know that.  When I was younger, I had visions of one day having them all bound into annual editions and putting them in my library with oak or mahogany lined walls, overstuffed burgundy furniture, and a pool table with red felt in the middle of the room.  It’s time to put away my childish things and stop carrying this load.

As I said, it is very difficult for me.  What I would most like is to find a good home for them where they will be shelved, appreciated, and used.  The problem is that I think all the engineering libraries in the Bay Area have as many (or even more) than they would like.  If any of you want to fill out a company library, I would be happy to give them to you.  I have about 25 years worth… the prized parts of the collection include IEEE Computer, IEEE Transactions on Computers, and IEEE Transactions on Pattern Analysis and Machine Intelligence… amongst lots of others. It will be hard to dump the Computer issues back to 1985… that seemed to be a glorious time in computer architecture and design.  A bygone era.

I hate reading these things online.  My first inclination when I see an article I want to read online is to print it.  I’d much rather have it on a shelf and look it up that way saving myself the time to print, but I know it’s crazy, and I can no longer afford to keep hauling around this paper.

I believe I am going to fail in finding a home for these things.  This is my last ditch effort to find someone to take them.  I suppose the next best thing to the dump is taking them to an actual paper recycling plant.  I suppose that is at least one step more green than doing the landfill thing, which I truly find distasteful.

I am open to any and all suggestions.  Sorry for this long e-mail.  I hope it was at least a little entertaining looking into another person’s deep dementia.  I know I have issues.  Over twenty-five years’ worth…

-al

Synopsys Lynx Design System Debuts at SNUG

Monday, March 16th, 2009

Lynx Design Flow

This morning at the Synopsys Users Group (SNUG) Conference in San Jose, Aart de Geus will be announcing Synopsys’ latest generation of design flow / environment / system, called Lynx. This is a big deal for Synopsys for a variety of reasons. It’s also of particular interest to me for 3 reasons:

  1. First, when I was a program manager at Synopsys, I managed several projects that used previous generations of Lynx and was closely involved with the introduction of the most recent predecessor of Lynx, known as Pilot.
  2. Second, I have written about and believe in the importance of having an industry standard interoperable design system to enable collaboration.
  3. Third, I still keep in touch with some members of the flow team at Synopsys who developed and will support Lynx, so it’s good to see what they have come out with.

Given this full disclosure, you might be wondering if my opinion is objective. Probably not entirely. But those at Synopsys who worked with me on flows can tell you that I was often a rather vocal critic. I know the strengths and I also know the warts. So don’t expect this to be a sales pitch for Lynx, but as honest an assessment as I would have given internally were I still at Synopsys.

There’s a lot to cover, so I’m going to break this up into 5 separate posts over the course of the next 2 weeks. Today I’ll cover some of the history of flows at Synopsys and how they got to the Lynx flow that they have today. Wednesday I’ll cover what I consider the important nuts and bolts of Lynx, particularly what is new and exciting. Next week, I’ll give my opinions as to what I like and don’t like and what can be made even better.

Since I won’t be covering nuts and bolts until Wednesday, I’ll include at the end of this post some links (no pun intended) to the requisite gratuitous shiny marketing collateral from Synopsys. Please take a look … I’m sure they spent a lot of money having it produced.

__________

A (not so) Brief History of Flows At Synopsys

SDE

The development of standardized tool flows dates back at least 12 years to the days when Synopsys was mainly a synthesis company. Some consultants in the professional services group decided that their life would be easier if they could standardize the synthesis scripts that they brought out to customers to do consulting work. The Synopsys Design Environment (SDE) was a set of Make, Perl, and Design Compiler scripts that implemented a hierarchical “compile-characterize-recompile” synthesis methodology. Although it was used extensively by those who created the scripts and some others, it never caught on broadly and no replacement came about for some time.
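
For readers who never saw it, a compile-characterize-recompile pass looked roughly like this in dc_shell. This is a from-memory sketch rather than the actual SDE scripts, and the design and instance names are made up.

    # From-memory sketch of one compile-characterize-recompile pass in
    # dc_shell Tcl mode; TOP, U_CORE, and CORE are made-up names, and
    # the real SDE scripts certainly did more than this.

    current_design TOP
    compile                          ;# first-pass compile of the whole design

    # Capture the boundary conditions actually seen by the sub-block
    # instance and apply them as constraints on its design.
    characterize [get_cells U_CORE]

    # Recompile the sub-block in its real context.
    current_design CORE
    compile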

Bamboo

In 2000, Synopsys acquired a small design services company based in Austin called The Silicon Group (TSG). Primarily acquired to implement turnkey design services to GDSII (this was prior to the acquisition of Avanti), TSG had developed an internal tool flow to standardize and automate the use of the Avanti tools. This “Bamboo” flow was the genesis of Lynx.

TIGER

After Synopsys acquired Avanti in 2002, Synopsys Professional Services ramped up on its backend design services to GDSII, causing a broad deployment of the Bamboo flow across the organization. Renamed TIGER (for Tool InteGration EnviRonment), this flow was originally optional for design teams to use on consulting engagements, then became “strongly encouraged” and finally “mandatory” for any turnkey projects.

As you might expect, as a flow that originated in another company and was being required by management, TIGER met with some resistance. There were certainly aspects of TIGER that could be (and have been) improved, but primarily there was the predictable “not-invented-here” resistance and “I’m doing fine, just leave me be”. I managed several projects that used TIGER, and it usually took 2-4 weeks for a new consultant to get familiar enough with the flow to stop complaining. After that, however, he would usually start to feel comfortable, and by the end of the project he would be a TIGER advocate.

For me as a project manager, the biggest benefit was standardization. A project could hit the ground running without the need for the team to arm wrestle over what design flow to use and then to develop it. If I suddenly needed more consultants to help, I knew they would also be able to hit the ground running as far as the design flow was concerned. Over time, various aspects of TIGER became part of the vernacular and culture (e.g. I’m at the “syn” step), making communication that much more efficient.

Pilot

In 2005, I became involved in an effort to introduce TIGER as a complete “service offering” to Synopsys customers. As you can imagine, there was a lot that had to be done before taking to market scriptware that had previously been used only internally, and this took over a year. Scripts had to be brought to a higher level of quality, and a regression suite was created to ensure that the flow ran properly across a wide variety of designs and libraries. A support organization within professional services was created solely to support customers using the flow. A metrics GUI was created to allow design and runtime metrics to be viewed graphically and reports to be created. Eventually, a flow editor was created to allow customers to modify flows without editing makefiles.

On the business side, there was a lot of discussion about how to offer this flow. There were those, myself included, who advocated making it available as “open source”. Personally, my feeling was that hundreds of customer designers could maintain and enhance the flow better and at less cost than a handful of Synopsys flow developers. And once adopted broadly, this flow would become the de facto standard, and Synopsys would benefit greatly from that leadership position. There were downsides to that approach, however, and in the end the “Synopsys Pilot Design Environment” debuted just before SNUG 2006 as a “service offering”.

With the move to outside customers, several new concerns arose:

  1. Customers wanted support for non-Synopsys tools, most notably Mentor’s Calibre, which enjoyed a dominant market share. Pilot allowed 3rd-party tools to be added to the flow, but it was up to the customer to do so.
  2. Despite the GUIs that were developed, there was still a fair amount of Make and Perl knowledge that the designer needed to be really effective, especially for debug. Many customer engineers did not feel comfortable with the intricacies of these scripting languages.
  3. There was confusion with other Synopsys methodologies offered by the Synopsys business units and application consultants (e.g. the Recommended Astro Methodology) and by Synopsys partners (the TSMC and ARM Reference Flows). How were they different? Why were some free and Pilot a service offering?
  4. Customers resisted getting “locked-in” to an “all Synopsys” flow and forgoing (for some time at least) the best-of-breed approach.

Despite these concerns, it seems that Pilot has gotten a fair amount of deployment, largely by companies going through some sort of major transition (e.g. moving to a new process node, moving from ASIC to COT, consolidating on Synopsys tools). Although I am no longer with Synopsys, my estimate is that there are probably about 2 dozen or so companies using Pilot in some fashion, mostly with some degree of customization for their particular needs.

Lynx

Lynx is the next in the series of design flow offerings from Synopsys. As with the others, it attempts to incrementally address issues and concerns with Pilot and to add new capabilities to increase adoption. In short, Lynx is intended to be a full-fledged product, supported through normal channels (support center and applications consultants).

(Note: I have not seen Lynx yet in action, so the following is based on Synopsys claims).

Among the key aspects of Lynx, as I’ve been told, are:

  • A Runtime Manager GUI has been added that supposedly frees designers from ever having to see or edit a makefile or Perl script. It also allows debugging of errors and more configuration control.
  • Synopsys has migrated the metrics reporting to be web-based and hence accessible to any internet device (e.g. iPhone). The metrics can now cut across several projects instead of just one. And the GUI has been improved.
  • The GUIs in general are supposedly the same style as other Synopsys tools.
  • A much larger set of regressions runs at Synopsys, which should translate into better quality. Also, due to this regression automation, Synopsys claims they can release a version of Lynx concurrent with a new tool release. With Pilot, there was a 3-month lag.
  • Automated and semi-automated tapeout checks based on a set of internal guidelines that Synopsys has used for years on turnkey backend design projects.
  • Rather than having several competing methodologies and flows, Synopsys has decided to put all its eggs in the Lynx basket. This should result in greater focus and support to customers of Lynx.

My understanding is that Lynx will be sold as a perpetual site license with a separate user license for each user. Synopsys would not share the pricing with me, but I have strong reason to believe it is close to other mid-level Synopsys products.

__________

If you want to access more information on Lynx, here are the “links” to go to:

Official Synopsys Lynx Webpage

Brief Lynx Video

Also, if you are at SNUG this week, you can get more info on Lynx at the following:

I very much regret not being able to go to SNUG this week. So I’d like to ask a favor. Please be my eyes and ears. If you attend the keynote or any of the Lynx related events, please post a comment with your thoughts here on this blog post. If you have thoughts on other aspects of SNUG, and you use Twitter, then please use the Twitter #SNUG hashtag in your tweets and I’ll feel like I’m there.

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 3 - Strongest Lynx

Part 4 - The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

set_max_area 0

Friday, March 13th, 2009

I stopped by a lunchtime presentation yesterday given by the local Synopsys AC. He was updating my client on what was new in Design Compiler and other tools when he put up a slide that said something like this:

set_max_area 0 (now default setting)

For those who don’t know what this means, it tells the synthesis engine to try to make the design area as small as possible, which is obviously a desirable goal. Why would anyone ever want to set their area goal higher? If you’ve used Design Compiler before, you know that this has been somewhat of a running joke, a command that was in each and every synthesis script ever written, as follows:

set_max_area 0
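
It sat in scripts that looked something like this (a generic sketch of a legacy Design Compiler script; the design and clock names are made up):

    # A generic legacy Design Compiler script of the era; the design and
    # clock names are made up, but the commands are the usual suspects.

    read_verilog my_chip.v
    current_design my_chip
    link

    create_clock -period 5.0 [get_ports clk]   ;# 200 MHz target

    set_max_area 0    ;# the ritual line: make it as small as possible

    compile
    report_area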

So it got me thinking. Were there any other artifacts of a bygone EDA era that were still hanging around, like a running joke, that had served their purpose and needed to be put to rest? Of course there were, or I would not be writing this post. Here are 3:

1) Perpetual licenses. As Paul McClellan points out on his excellent EDA Graffiti blog, in the early days of EDA “the business model was the same business model as most hardware was sold: you bought the hardware, digitizers, screens and so on… And you paid an annual maintenance contract for them to keep it all running which was about 15-20% of the hardware cost per year.” EDA companies loved perpetual licenses for 2 reasons.

  1. They got to recognize all the revenue for the purchase at the time of the sale, so they were able to show better numbers on the books quicker.
  2. Once you “bought” the software, you only paid a small fee each year for maintenance. If you wanted to switch to a competitor’s tool, you’d need to pay that up-front perpetual license cost again, which was a real disincentive to switch. Basically, they could lock you in.

Even though most EDA companies have gone to a subscription license model, some still predominantly license software as perpetual. With the advent of short term licensing like Cadence’s eDaCard and Synopsys’ e-licensing, the perpetual model is as outdated as Sarah Palin.

2) Large direct sales teams. I need to be really careful here, because I worked in various customer-facing roles at Synopsys for almost 15 years and I still have several friends who work in direct sales at various EDA companies. Many of them are very skilled and I don’t want to cause them to lose their jobs. But the fact is that all of us rely on “the Web” to get information on all things, including EDA tools, much more than we rely on salespeople. I’m part of the older generation (although I don’t feel or act that way), but the newer generation of customers views the internet in all its forms (static web pages, social networks, blogs, podcasts, Twitter) like the air that they breathe. They can’t live without it. And if you think they are going to want to schedule a visit from a “salesperson” to get access to a tool they are interested in, then you don’t have a clue about what these people expect. They expect to go to a web page, log in (maybe), and be off and running. And if your tool ain’t accessible that way, sorry, they’re not interested. Of course, that sounds shortsighted, but that’s the way it is and will be, like it or not.

This does not mean that direct sales has no use or value. After all, a company is writing the check, not an individual with a credit card. And sophisticated customers will still (for now) want to install software and use it, so they will still need support. So there will still be a need for some direct sales and support, but much of the early stages of the sales process will move to the Web.

3) Closed tool suites and solutions. As I stated in a previous post, most EDA companies seek to fence customers in rather than provide streams to nourish them. With all due regard to folks like Karen Bartleson and Dennis Brophy, who have unselfishly worked to promote standards, we fall far short of the goals of the CAD Framework Initiative, which sought to enable true plug-n-play interoperability between EDA tools. It’s definitely getting better, due mostly to customer pressure. But we still have a long way to go before we have truly standard standards that enable collaboration between EDA suppliers. So, if you’re an EDA company, get with the standards.

That’s just 3. I’m sure there are more. Let me know if you come up with others.

harry the ASIC guy

Community Based Tweeting

Monday, March 9th, 2009

A few weeks ago, Seth Godin reminded us to be careful what we say online because Google never forgets.

Yesterday, Ron Ploof reminded us that we can “sift extraordinary insight out of ordinary” Twitter traffic if we know how to look.

So today, I thought I’d keep the ball rolling. I’d like to share with you an interesting Twitter thread concerning online communities for electronic design. It started last Friday and really heated up today. It’s amazing what you can find with a little effort :-)

(Note: I have reversed the usual “most-recent-first” ordering of Twitter Tweets to make this easier to read.)

JL Gray (@jlgray): Fiddling around with the Cadence online lab on Xuropa… Still don’t get the community part of Xuropa but the VNC demo is cool. (9:52 PM Mar 6th from TweetDeck)

loucovey (@loucovey): @jlgray do you get the community part of DVCon? How about DAC? Same thing w/o hotel rooms and sore feet. (10:27 PM Mar 7th from twitterrific)

JL Gray (@jlgray): @loucovey Not sure there are enough folks on Xuropa to have a robust community. Why not just use Twitter/Facebook/Verif Guild/OVM World… (about 14 hours ago from TweetDeck)

JL Gray (@jlgray): @loucovey What’s on Xuropa to motivate me to build YASN (Yet Another Social Network)? (about 14 hours ago from TweetDeck)

Paul Marriott (@pmarriott): @jlgray Too many communities cause fragmentation. I only have time for a few “quality” areas. I can’t be in all places at all times (about 14 hours ago from TweetDeck)

Dave_59 (@dave_59): @pmarriott @jlgray I like Plaxo and LinkedIn tie-in to social networks. I can see where people are posting from one site. Needs more tie-ins (about 13 hours ago from web)

david lin (@dltweeting): @jlgray @loucovey don’t know if it’s xuropa or YASN, but I for one would like to see an independent online chip-design community evolve. (about 12 hours ago from TweetDeck)

Paul Marriott (@pmarriott): @dltweeting It’s hard to have any chip-design community that’s truly independent. Everyone has some kind of axe … (about 11 hours ago from web)

david lin (@dltweeting): @pmarriott maybe independent is too strong. how about “balanced”? something like DAC, EDAC, or GSA could potentially pull it off. (about 10 hours ago from TweetDeck)

Paul Marriott (@pmarriott): @dltweeting “Balanced” like USA Today editorials? Yuck. I want opinion, not PC mediocre rubbish. At least opinion spurs debate (about 10 hours ago from TweetDeck)

david lin (@dltweeting): @pmarriott haha. not interested in PC rubbish either. balanced in that we get all perspectives. don’t need one view dominating convo. (about 10 hours ago from TweetDeck)

Tommy Kelly (@tommykelly): @pmarriott “PC mediocre rubbish”? SO get a Mac d00d. PC. Mac. Geddit? … OK, maybe not. (about 10 hours ago from TweetDeck)

Paul Marriott (@pmarriott): @tommykelly Hope Steve Jobs is paying you commission Mr Macintosh (about 10 hours ago from TweetDeck)

Tommy Kelly (@tommykelly): @pmarriott The Lord Steve (May He Live Forever) doesn’t need to pay his willing minions. We work for love (and shiny objects). (about 9 hours ago from TweetDeck)

JL Gray (@jlgray): @dltweeting One could say there is a chip-design community building here which is controlled by no one! (about 9 hours ago from TweetDeck)

JL Gray (@jlgray): @pmarriott If past history holds, in a couple of weeks, @tommykelly will be pushing the benefits of PCs with input from Lord Gates :-). (about 9 hours ago from TweetDeck)

david lin (@dltweeting): @jlgray yes, but discovering voices/people -> too tedious. content disaggregated -> hard to follow convos. hashtags antiquated. (about 8 hours ago from TweetDeck)

Paul Marriott (@pmarriott): @jlgray @tommykelly maybe a PC with Lord Torvalds is the best solution. No Micro$oft, no problem :) (about 8 hours ago from TweetDeck)

david lin (@dltweeting): anyone ever try friendfeed? (about 8 hours ago from TweetDeck)

Tommy Kelly (@tommykelly): @dltweeting http://friendfeed.com/tommy… . Not completely sure yet what the point is, other than an excuse for more social notworking. (about 8 hours ago from TweetDeck)

david lin (@dltweeting): @tommykelly me neither, but they have a friendfeed “room” … can aggregate tweets, blogs, pics, linkedin updates, etc. (about 8 hours ago from TweetDeck)

John Ford (@john_m_ford): @tommykelly: re: “social notworking” LOL!! (about 7 hours ago from BeTwittered)

david lin (@dltweeting): @john_m_ford @tommykelly hah! completely missed that! not working indeed! (about 7 hours ago from TweetDeck)

Mentor Graphics (@mentor_graphics): Mentor Graphics Community FAQ http://tinyurl.com/atl8b3 #Mentor (about 4 hours ago from web)

James Colgan (@sfojames): Social Networks Presage Professional Network Growth? http://bit.ly/8v8nV (about 3 hours ago from TweetDeck)

JL Gray (@jlgray): @dltweeting But on the bright side, you get to channel William Shatner when writing short tweets! (about 3 hours ago from TweetDeck)

EDA Is Only “Mostly Dead”

Wednesday, March 4th, 2009

Last Wednesday at DVCon, Peggy Aycinena MC’ed what used to be known as the Troublemakers Panel, formerly MC’ed by John Cooley. The topic: “EDA: Dead or Alive?” Well, having attended Aart’s keynote address immediately preceding it and Peggy’s panel discussion that followed, I can answer that question in the immortal words of Miracle Max: “EDA is only MOSTLY dead”. But first, some background.

Back in the mid 90s, I attended a Synopsys field conference where Aart delivered a keynote addressing the challenges of achieving higher and higher productivity in the face of increasing chip size. The solution, he predicted, would be design reuse in the form of intellectual property. Although most of us had only the faintest idea of what design reuse entailed and could barely fathom such a future, Aart’s prediction has indeed come true. Today, there is hardly a chip designed without some form of soft or hard IP and many chips are predominantly IP.

Some years later, he delivered a similar keynote preaching the coming future of embedded software. This was before the term SoC was coined to designate a chip with embedded processors running embedded software. Again, only a handful understood or could fathom this future, but again Aart was correct.

So, this year, immediately preceding Peggy’s panel, Aart delivered another very entertaining and predictive keynote. After describing the current economic crisis in engineering terms using amplifiers and feedback loops, he moved to the real meat of the presentation, which addressed the growing amount of software content in today’s SoCs. He described how project schedules are often paced by embedded software development and validation. How products are increasingly differentiated based on software, not hardware. And he predicted a day when chips would contain custom hardware only for functions that could not be performed by programmable software. In essence, he described a future with little electronic design as we know it today, where hardware designers are largely replaced by programmers.

Immediately following Aart’s keynote was Peggy’s panel. (If you want to know exactly what occurred, there is no place better to go than Mike Demler’s blow-by-blow account.) Peggy did her best to challenge the EDA execs to defend why EDA would not die out. She kept coming back to that same question in different ways, and the execs kept avoiding answering it directly, choosing instead to offer philosophical logic such as: “If EDA is dead, then semiconductors are dead. If semiconductors are dead, then electronics are dead. And since electronics will never die, EDA will never die”.

On the surface, logic such as this is certainly comforting. After all, who can imagine a future without electronics? Upon closer inspection, however, and in light of Aart’s keynote, there is plenty of reason for skepticism.

Just as Aart was right about design reuse and IP…

Just as Aart was right about embedded software …

I believe that Aart is right about hardware design being replaced by software development.

As processors and co-processors become faster and more capable of handling tasks formerly delegated to hardware…

As time-to-market drives companies to sell products that can be upgraded or fixed later via software patches…

As fewer and fewer companies can afford the cost of chip design at 32nm and below…

More companies will move capabilities to software running on standard chips.

With that, what becomes of the current EDA industry? Will it adapt to embrace software as part of its charter? Or will it continue to focus on chip development?

Personally, I think Aart is right again. Hardware will increasingly become software. And an EDA industry focused on hardware will be increasingly “mostly dead”.

harry the ASIC guy