Posts Tagged ‘Lynx’

Thoughts On Synopsys’ Q2 2009 Earnings Call

Thursday, May 21st, 2009

Last night you may have watched the NBA Playoff game in which the Orlando Magic came back to defeat the heavily favored Cleveland Cavaliers. Great game!!!

Or the finale of American Idol in which Kris Allen came back to defeat the heavily favored Adam Lambert. Great show!!!

What did I do last night? I listened to the Q2 2009 Synopsys earnings call. Great conference call!!!

(OK … I’ll admit it wasn’t as exciting and nail-biting as either of the other viewing options. Just think of it like this: I took on the work of listening to the call and summarizing it for you, in order to free you up to watch the game or Idol. You can thank me later :-) )

Here’s the summary. (You can read the full transcript here if you like).


On the up side, Synopsys had a good Q2, beating their revenue and earnings per share guidance slightly. On the down side, Synopsys lowered its revenue and cash flow guidance slightly for the rest of the year, allowing for potential customer bankruptcies, late payments, and reduced bookings. Customers are approaching Synopsys to “help them right now through this downturn”, i.e. to reduce their cost of software. It looks like the recession is finally catching up to them.

As I finish off this post on Thursday morning, it looks like the analysts agree. Synopsys shares are down 10%, so it seems they are getting punished for revising their forecast. 

Still, Synopsys is in very good financial health, with $877M in cash and short-term investments. Their cash flow is going to go down the rest of the year, so they will eat into this fund, but they will still have plenty to selectively acquire strong technology that might add to their portfolio, as they did with the MIPS Analog Business Group.


There were 2 themes or phrases that kept recurring in the call that I am sure were points of emphasis for Aart.

First, the word “momentum” was used 6 times (by my count) during the call. Technology momentum. Customer momentum. Momentum in the company. Clearly, Synopsys is trying to portray an image of the company building up steam while the rest of the industry wallows in the recession.

Second, customers are “de-risking their supplier relationships”, i.e. looking to consolidate with an EDA vendor with strong financials who’ll still be there when the recession ends. Again, Synopsys is trying to portray itself as the safe choice for customers, hoping to woo customers away from less financially secure competitors like Cadence and Magma. This ties in with the flurry of “primary EDA vendor” relationships that Synopsys has announced recently.

The opportunity for Synopsys (and danger for the competition) is to pick up market share during this downturn, and it looks like that may be happening as companies “de-risk” by going with the company with the “momentum” and an “extraordinarily strong position”. Or at least that’s the message that Synopsys is sending.


Aart did rattle off the usual laundry list of technology that he wanted to highlight, including some introduced last year (e.g. Z-route). Of note were the following:

  • Multi-core technology in VCS with 2x speedup (is 2x a lot?)
  • Custom Designer, which Aart called “a viable alternative to the incumbent” (ya know marketing didn’t pick the word “viable”)
  • Analog IP via the MIPS Analog Business Group acquisition, especially highlighting how that complements the Custom Designer product (do I see “design kits” in the future?)
  • The Lynx Design System (see my 5-part series)
  • IC-Validator (smells like DRC fixing in IC Compiler - Webinar today, I’ll find out more)


In summary, Synopsys had a good quarter, but they have finally acknowledged that they are not immune to the downturn and they expect to be impacted over the next few quarters.

harry the ASIC guy

TSMC Challenges Lynx With Flow Of Their Own

Wednesday, May 6th, 2009

About a month and a half ago, I wrote a 5 part series of blog posts on the newly introduced Lynx Design System from Synopsys:

One key feature, the inclusion of pre-qualified technology and node specific libraries in the flow, was something I had pushed for when I was previously involved with Lynx (then called Pilot). These libraries would have made Lynx into a complete out-of-the-box foundry and node specific design kit … no technology specific worries. Indeed, everyone thought that it was a good idea and would have happened had it not been for resistance from the foundries that were approached. Alas!

In the months before the announcement of Lynx, I heard that Synopsys had finally cracked that nut and that foundry libraries would be part of Lynx after all. Whilst speaking to Synopsys about Lynx in preparation for my posts, I asked whether this was the case. Given my expectations, I was rather surprised when I was told that no foundry libraries would be included as part of Lynx or as an option.

The explanation was that it proved too difficult to handle the many options that customers used. High Vt and low Vt. Regular and low power process. IO and RAM libraries from multiple vendors like ARM and Virage. Indeed, this was a very reasonable explanation to me since my experience was that all chips used some special libraries along the way. How could one QA a set of libraries for all the combinations? So, I left it at that. Besides, Synopsys offered a script that would build the Lynx node from the DesignWare TSMC Foundry Libraries.

Two weeks ago, at the TSMC Technology Symposium in San Jose, TSMC announced their own Integrated Sign-off Flow that competes with the Lynx flow, this one including their libraries. Now it seems to make sense. TSMC may have backed out of providing libraries to Synopsys to use with Lynx since they were cooking up a flow offering of their own. I don’t know this to be a fact, but I think it’s a reasonable explanation.

So, besides the libraries, how does the TSMC flow compare to the Synopsys Lynx flow? I’m glad you asked. Here are the salient details of the TSMC offering:

  • Complete RTL to GDSII flow much like Lynx
  • Node and process specific optimizations
  • Uses multiple EDA vendors’ tools (Synopsys mostly, but also Cadence, Mentor, and Azuro)
  • Available only for TSMC 65nm process node (at this time)
  • No cost (at least to early adopters … the press release is unclear whether TSMC will charge in the future)
  • And of course, libraries are included.

In comparison to Synopsys’ Lynx Design System, there were some notable features missing from the announcement:

  • No mention of anything like a Management Cockpit or Runtime Manager
  • No mention of how this was going to be supported
  • No mention of any chips or customers that have been through the flow

To be fair, just because these were not mentioned does not mean that they are really missing. I have not seen a demo of the flow or spoken to TSMC (you know how to reach me), and that would help a lot in evaluating how this compares to Lynx. Still, from what I know, I’d like to give you my initial assessment of the strength of these offerings.

TSMC Integrated Signoff Flow

  • The flow includes EDA tools from multiple vendors. There is an assumption that TSMC has created a best-of-breed flow by picking the tool that performed each step in the flow the best and making all the tools work together. Synopsys will claim that their tools are all best-of-breed and that other tools can be easily integrated. But, TSMC’s flow comes that way with no additional work required. (Of course, you still need to go buy those other tools).
  • Integrated libraries, as I’ve described above. Unfortunately, if you are using any 3rd-party libraries, it seems you’ll need to integrate them yourself.
  • Node and process specific optimizations should provide an extra boost in quality of results.
  • Free (at least for now)

Synopsys Lynx Design System

  • You can use the flow with any foundry or technology node. A big advantage unless you are set on TSMC 65nm (which a lot of people are).
  • Other libraries and tools are easier to integrate into the flow I would think. It’s not clear whether TSMC even supports hacking the flow for other nodes.
  • Support from the Synopsys field and support center. Recall, this is now a full-fledged product. Presumably, the price customers pay for Lynx will fund the support costs. If there is no cost for the TSMC flow, how will they fund supporting it? Perhaps they will take on the cost to get the silicon business, but that’s a business decision I am not privy to. And don’t underestimate the support effort. This is much like a flow that ASIC vendors (TI, Motorola/Freescale, LSI Logic), not foundries, would have offered. They had whole teams developing and QA’ing their flows. And then they would be tied to a specific set of tool releases and frozen.
  • Runtime Manager and Management Cockpit. Nice to have features.
  • Been used to create real chips before. As I said, the core flow in Lynx dates back almost 10 years and has been updated continuously. It’s not clear what the genesis of the new TSMC flow is. Is it a derivative of the TSMC reference flows? Is it something that has been used to create chips? Again, I don’t know, but I’ve got to give Synopsys the nod in terms of “production proven”.

So, what do I recommend? Well, if you are not going to TSMC 65nm with TSMC standard cell libraries, then there is not much reason to look at the TSMC flow. However, if you are using the technology that TSMC currently supports, the appeal of a turnkey, optimized, and FREE flow is pretty strong. I’d at least do my due diligence and look at the TSMC flow. It might help you get better pricing from TSMC.

If anyone out there has actually seen or touched the TSMC flow, please add a comment below. Everyone would love to know what you think first hand.

harry the ASIC guy

The Weakest Lynx

Thursday, March 26th, 2009

Earlier this week I wrote about the strengths of the new Synopsys Lynx flow offering. Today, the weakest Lynx.

1. Limited 3rd-Party Tool Support.

Synopsys states that Lynx is “flexible and inherently configurable to readily incorporate 3rd-party technology”. And it is true that they have done nothing to prevent you from incorporating 3rd-party tools. They also have done little to help you incorporate these tools. In most cases, incorporating 3rd party tools means digging in to understand the guts of how Lynx works. That means getting down and dirty with makefiles and tcl scripts and mimicking all the special behavior of the standard Lynx scripts for Synopsys tools. For instance, here are a few of the things you might need to do:

  • Break up your tcl scripts into several tcl scripts in separate directories for the project and block
  • Access special Lynx environmental variables by name in your makefiles and TCL scripts
  • Have your tool output special SNPS_INFO messages formatted for the metrics GUI to parse out of the log file
  • Update your scripts for new versions of Lynx if any of these formats have changed.

If you are motivated, I’m sure you can hack through the scripts and figure it out. However, to my knowledge (I looked in SolvNet; correct me if I am wrong), there is no application note that clearly documents the steps needed and the relevant files, variables, and message formats to use.
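For concreteness, here is a sketch of the kind of wrapper you end up writing yourself. This is my own illustration in plain shell, and the SNPS_INFO message format shown is invented for illustration, since the real format is exactly the undocumented part:

```shell
# Hypothetical sketch of wrapping a 3rd-party tool for a Lynx-style task.
# The SNPS_INFO message format below is invented -- the real one would
# have to be reverse-engineered from actual Lynx log files.

run_with_metrics() {
  logfile=$1; shift
  "$@" > "$logfile" 2>&1          # run the actual tool, capture its log
  status=$?
  lines=$(grep -c '' "$logfile")  # count the tool's log lines before we append
  # Append metrics in a made-up SNPS_INFO-like format for the GUI to parse
  echo "SNPS_INFO METRIC | NAME=tool.exit_status | VALUE=$status" >> "$logfile"
  echo "SNPS_INFO METRIC | NAME=tool.log_lines | VALUE=$lines" >> "$logfile"
  return $status
}

# Stand-in for a real 3rd-party tool invocation:
run_with_metrics calibre.log echo "third-party tool output"
grep SNPS_INFO calibre.log
```

The real work, of course, is figuring out what the metrics GUI actually expects to parse.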

It’s not surprising that Synopsys does not want to make this too easy. One goal of offering the Lynx flow is to encourage the use of an all Synopsys tool flow. If it were truly a wide open flow with easy addition of 3rd-party tools, then there would be less of a hook to use Synopsys tools. (Personally, I disagree with this approach and think Synopsys would be better off offering a truly open flow, but that’s the next post).

As a result, Lynx will be used only by customers using a predominantly Synopsys tool flow. I think that is OK by Synopsys. They’d rather sell a few fewer Lynx licenses than support the use of 3rd-party tools. Unfortunately for designers using other tools, Lynx does not currently have that much to offer.

2. Lynx Is Difficult To Upgrade

One of the complaints about Lynx’s predecessors was that they were not easy to upgrade from version to version. That is because Lynx is a set of template scripts, not an installed flow. What do I mean?

When you create a project using Lynx, a set of template scripts are copied from a central area and configured for you based on your input. Let’s call this V1.0. As you go through the design process, you customize the flow by changing settings in the GUI, which in turn changes the local scripts that you copied from the central area. Now, let’s say that you want to upgrade to V1.1 because there are some bug fixes or new capabilities you need to use. You can’t do that easily. You have 2 alternatives:

  1. Create a new project using v1.1 and try to replicate any customizations from the v1.0 project in the v1.1 project. I hope you kept good notes.
  2. Diff the new scripts and the old scripts and then update your version 1.0 scripts to manually upgrade to v1.1.

Admittedly, Synopsys provides release notes that identify what has changed and that will help with approach #2. And they try to avoid making gratuitous variable name changes. Even then, the upgrade process is error-prone and manual. In most cases, for any one project, customers will just stick with the version of Lynx that they started with in order to avoid this mess. Then they’ll upgrade between projects. That negates the benefit of having a flow that is always “up-to-date”.

In my humble opinion, a better way to architect the flow would have been to have a set of global scripts that are untouchable and a set of local scripts that can be customized to override or supplement the global scripts. In that case, a new version of Lynx would replace the global scripts, but the local scripts, where all the customization is done, could remain unchanged.
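To illustrate what I mean, here is a minimal sketch of that pattern in plain shell. This is my own toy illustration, not how Lynx is actually structured:

```shell
# Toy sketch of the global/local override pattern (my suggestion, not
# Lynx's actual architecture). Release-owned scripts set defaults; an
# optional project-local file overrides them and is never touched by an
# upgrade, which simply replaces the global tree.

run_task() {
  task=$1
  . "$GLOBAL_DIR/$task.conf"                                  # release defaults
  [ -f "$LOCAL_DIR/$task.conf" ] && . "$LOCAL_DIR/$task.conf" # project overrides
  echo "task=$task effort=$EFFORT"
}

# Demo with throwaway directories standing in for the two script trees.
mkdir -p demo/global demo/local
echo 'EFFORT=medium' > demo/global/syn.conf   # shipped with the release
echo 'EFFORT=high' > demo/local/syn.conf      # the project's customization
GLOBAL_DIR=demo/global
LOCAL_DIR=demo/local
run_task syn   # -> task=syn effort=high
```

An upgrade would then just replace the global tree; everything under the local tree, where all the customization lives, survives untouched.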

3. Debugging Is Difficult

Have you ever tried to debug a set of scripts that someone else wrote? Even worse, scripts that are broken up into multiple scripts in multiple directories. Even worse, by looking at log files that do not echo directly what commands were executed. Even worse, you were told you never had to worry about the scripts in the first place. And worst of all, when you called for support, nobody knew what you were talking about.

That is what debugging was like in Pilot, Lynx’s most recent predecessor.

I’ve been told that Synopsys has tried to address these issues in Lynx. They now have a switch that will echo the commands to the log files. The Runtime Manager can supposedly locate the errors in the log files and correlate them to the scripts. And now that Lynx is a supported product, the Support Center and ACs should know how to help. Still, I’ll believe it when I see it. From what I understand, many of these features are still a bit flaky, and almost all the Synopsys consultants, the largest user base for the flow, do not use the new GUIs yet.
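In the meantime, most users fall back on something home-grown to find failures across the task logs. A generic sketch, nothing Lynx-specific:

```shell
# Home-grown fallback for locating failures in a tree of task logs --
# generic shell, nothing Lynx-specific. Prints file:line:text for each
# match so you can trace back to the script that produced it.

scan_logs() {
  dir=${1:-.}
  find "$dir" -name '*.log' -exec grep -Hn 'Error:' {} +
}

# Demo with a fake log tree:
mkdir -p logs
printf 'ok\nError: net u1/n3 unconnected\n' > logs/route.log
printf 'all clean\n' > logs/syn.log
scan_logs logs   # -> logs/route.log:2:Error: net u1/n3 unconnected
```

Crude, but it at least tells you which log and which line to start from.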


In summary, Lynx’s main weakness is that it was not originally architected as a forward compatible, open flow for novice tools users, which is what it is being positioned as. In fact, it started out as a set of scripts written by Avanti tool experts for Avanti tool experts to use with Avanti tools. Synopsys has done a lot to try to morph the flow into something that allows 3rd-party tools, upgrades more easily, and eases debug, but the inherent architecture limits what can be done.

So, what should have been added to make Lynx better? You’ll want to read the next in the series: The Missing Lynx.

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 2 -  Lynx Design System? - It’s The Flow, Stupid!

Part 3 -  Strongest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Strongest Lynx

Monday, March 23rd, 2009

I know. I know. I know.

I said that I was going to publish the final post in a 3-part series on Synopsys Lynx last Friday. However, as I put my notes together, I realized how much there is to say. So, I’m breaking up the last post into 3 separate posts: The Strongest Lynx, The Weakest Lynx, and The Missing Lynx (clever, huh?).  First, the Strongest Lynx.

I think that the best way to understand the strengths of Lynx is to consider who is Synopsys’ intended customer for this flow offering. After all, the offering is designed for them. In that regard, since adopting Synopsys Lynx is such a big change in methodology, Synopsys is looking for customers who are already planning some sort of major transition, including:

  • Startup companies who have no design flow to begin with
  • Companies making a significant transition to a new technology node or process (e.g. 90nm => 45nm)
  • Companies expanding their existing design capabilities (e.g. ASIC => COT)
  • Companies moving towards an all Synopsys flow (e.g. vendor consolidation)
  • Companies downsizing their in-house CAD teams

These companies are already committed to some sort of change in design flow, so Synopsys offers them a “buy” alternative to making it in-house. Synopsys Lynx is attractive to the extent that it accelerates that transition process and allows the design teams to be productive faster. In that light, the 3 greatest strengths of Lynx seem to be:

1. 75-90% of a working design flow out-of-the-box.

I’m sure methodology experts and tool experts, if given a chance to dig into the Lynx scripts, would find areas to make improvements to the flow. Nonetheless, I can say from experience managing teams using earlier versions (Tiger, Pilot), that Lynx provides a complete design flow that requires very little customization “out-of-the-box”. It is the same design flow that has evolved from almost 10 years of delivering design services and is being used by Synopsys’ own design services organization, so indeed they “eat their own dog food”. Synopsys says that they are doing more regression testing, which should increase quality. There is documentation and training, and Synopsys offers 1 week of on-site assistance to install and start customizing it.

2. A design flow that optimizes across the various tools.

It’s well understood that smaller process geometries require the various tools in the flow to work together more closely, exchanging data and anticipating what the other tools will do downstream. Synopsys claims to have implemented several such flows in Lynx, particularly highlighting their low-power flow. If this is true, that should result in better results than a point-tool approach.

3. Ease-of-use features that help average tool users be productive more easily.

Design flows for very small geometries (65nm, 45nm, 32nm) are extremely complex and demand a depth of expertise across all the tools that is difficult to find. As a result, there is a need to simplify the design process and tool usage so “average” design teams can still implement these chips effectively. The Runtime Manager supposedly frees the designer from having to edit makefiles or tcl scripts, allowing control over all the appropriate variables through GUI settings and menus and debug of script errors through the GUI. Similarly, the Management Cockpit promises to provide valuable metrics without digging through log files and reports. If they deliver on these promises, the Runtime Manager and Management Cockpit will make average designers more productive more quickly. I have some doubts, though, especially since these GUIs are brand new and have not had extensive testing. I’d be interested to know if these run as smoothly as advertised or if there are issues getting them to deliver.

In summary, Lynx’s strength is in providing a 75-90% complete Synopsys design flow that optimizes across the tools to increase the design quality and provides graphical capabilities to make the flow easier to use for the average non-tool-expert designer. To my knowledge, none of the other major EDA vendors offer anything similar, either in scope or maturity.

If this sounds like an endorsement, you’ll want to read the next in the series : The Weakest Lynx.

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 2 -  Lynx Design System? - It’s The Flow, Stupid!

Part 4 -  The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Lynx Design System? - It’s The Flow, Stupid!

Wednesday, March 18th, 2009

(This is the 2nd in a 3-part series on the newly introduced Synopsys Lynx Design System. You can find Part 1 here.)

When Pilot was introduced some years back, one of the bigger discussion points concerned what to call this thing. I’m not talking about whether to call it Pilot or some other name. I’m talking about what-the-heck-is it.

  • Is it a flow?
  • Is it an environment?
  • Is it a system?
  • Is it a platform?

In the end, the marketing folks decided that it was an environment, which included a flow and other stuff like:

  • Tools for prepping IP and libraries
  • A configuration GUI
  • A metrics reporting GUI

Lynx adds a Runtime Manager to the product, so now it is no longer an environment. It’s a Design System. Well, with all due respect to the marketing folks who wrung their hands making this decision, I’d like to say one thing:

It’s the flow, stupid!

Sure, the metrics GUI can create pretty color-coded dashboards that even a VP can understand. “We’re red, dammit. Why aren’t we green?” And the Runtime Manager can configure the flow, and launch jobs, and monitor progress, also with pretty colors. And the “Foundry Ready System” … well, I’m still trying to figure out what that even means, even though I know what it is. But it’s the flow at the core of Lynx (nee Pilot nee Tiger nee Bamboo) that is the real guts of the product and the reason you’d want to buy it or not. It’s the engine that makes Lynx run. So let’s take a tour.

At the core, the Lynx flow is a set of makefiles and Perl scripts that invoke Synopsys tools with standardized tcl scripts. (Clarification: All the scripts in the flow are tcl – one tiny bit of perl which comes with ICC-DP is re-used, but anything the user touches is going to be in Tcl). Together, these scripts implement a flow that has been designed to produce very good results across a large number of designs. The flow operates with a standard set of naming conventions and standard directory structures. In all, the Lynx flow covers all the steps from RTL to GDSII implementation.

There are actually 5 major “steps” in the flow:

  1. Synthesis
  2. Design-for-Test (may now be combined with #1)
  3. Design Planning
  4. Place and Route Optimization
  5. Chip Finishing

Lynx Flow

Each of these steps is further broken down into smaller tasks. For instance, Place and Route might be divided into:

  • Placement Optimization
  • Clock Optimization
  • Clock Routing
  • Routing
  • Routing Optimization
  • Post Route Optimization
  • Signal Integrity Optimization

The scripts also implement the analysis tasks such as parasitic extraction, static timing analysis, formal verification, IR drop analysis, etc. In all, they cover everything a design team needs to go from an RTL design to tapeout. If there is a task that is missing (e.g. logic BIST insertion), you can add an entire new step to the flow by modifying the makefiles by hand, or by using the GUI to create a new task. If you want it to use a 3rd-party tool, you can do that too by having the makefile call that tool. Third-party tools actually are a little more complicated than that, but that gives you the idea. (Clarification from Synopsys: Third-party tools can be executed. The system is very open to including other tools – Synopsys just doesn’t promote this loudly. A key thing about the Lynx flow is that you do not ever need to even look at a makefile, or execute a make command – “no Makefile knowledge required” – since it is all handled graphically through the GUI – people should not be worried that they have to learn make.)
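For readers who have never seen a flow like this, here is a toy sketch of the overall shape: each step consumes the previous step’s output, produces its own, and logs to one standard place. It is a shell stand-in with invented task and file names, not actual Lynx makefiles:

```shell
# Toy sketch of how a make-style flow chains steps -- a shell stand-in
# with invented task and hand-off file names, not actual Lynx makefiles.

WORK=work
run_step() {
  step=$1; prev=$2
  mkdir -p "$WORK/$step"
  # A real flow would invoke a tool with a standard Tcl script here;
  # we only record the hand-off to show the structure.
  echo "input: $WORK/$prev/out.db" > "$WORK/$step/out.db"
  echo "$step done" >> "$WORK/flow.log"
}

mkdir -p "$WORK/rtl"
echo "rtl source" > "$WORK/rtl/out.db"
prev=rtl
for step in syn dft dp pnr finish; do   # the 5 major steps, in order
  run_step "$step" "$prev"
  prev=$step
done
cat "$WORK/flow.log"
```

Adding a new step (say, logic BIST insertion) is just another entry in the chain, which is precisely why the makefile-plus-standard-scripts approach scales.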

So, you may ask, what’s the big deal? After all, isn’t this the way that most design teams / CAD teams implement their flows, more or less? What is so special about a set of makefiles and tcl scripts?

Honestly, you probably could go off and design something very similar on your own. And it might be better or more closely suited to your needs than this standard flow from Synopsys. Only you can make that choice because only you know what is important to you. The advantages of using a flow like Lynx from Synopsys are:

  • 90-95% of what you need you can get off-the-shelf and you can modify the rest if you need to.
  • The flow is being constantly updated with each new major release of the Synopsys tools so you don’t suffer from “flow inertia” and find yourself with an outdated flow.
  • The flow is being constantly tested, not just through regressions, but by the Synopsys consultants using it on real customer projects. So the quality is high.
  • In areas such as power, Synopsys can optimize the flow across multiple tasks and steps and tools, something that it would be hard for non tool experts to do.
  • You can use the engineers who would have been designing your flow to do real work.

Of course there are disadvantages as well:

  • It’s an all Synopsys flow, so you have to use Synopsys tools to get the most benefit. If you currently use other 3rd party tools, then the benefit is reduced proportionately. Or you can convert to the Synopsys tool, but that costs money and time and maybe it’s not the best-of-breed as a point tool.
  • The scripts are actually divided into many pieces of scripts that call each other. Although very modular, this can be confusing for a novice user if he is trying to modify the flow or debug a problem.
  • Lynx has a “strongly preferred” directory structure that is very deep. They do this for some good reasons, but this might go against the norm at your company and ignite a “religious feud”.
  • Once you’ve invested in this flow by training up your organization and building your own scripts and utilities on top, you’re pretty committed to Synopsys. Not a problem if Synopsys is your long term partner, but if you want to fool around or have a fear of commitment, not so good.

The bottom-line is that Lynx provides you with the same tool flow that the Synopsys consultants use on their own projects. If you are using an all or predominantly Synopsys flow, then I think it’s worth looking at.

(Friday: What I like. What I don’t like. And What Could Be Better)

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 3 -  Strongest Lynx

Part 4 -  The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Synopsys Lynx Design System Debuts at SNUG

Monday, March 16th, 2009

Lynx Design Flow

This morning at the Synopsys Users Group (SNUG) Conference in San Jose, Aart de Geus will be announcing Synopsys’ latest generation of design flow / environment / system, called Lynx. This is a big deal for Synopsys for a variety of reasons. It’s also of particular interest to me for 3 reasons:

  1. First, when I was a program manager at Synopsys, I managed several projects that used previous generations of Lynx and was closely involved with the introduction of the most recent predecessor of Lynx, known as Pilot.
  2. Second, I have written about and believe in the importance of having an industry standard interoperable design system to enable collaboration.
  3. Third, I still keep in touch with some members of the flow team at Synopsys who developed and will support Lynx, so it’s good to see what they have come out with.

Given this full disclosure, you might be wondering if my opinion is objective. Probably not entirely. But those at Synopsys who worked with me in regards to flows can tell you that I was often a rather vocal critic. I know about the strengths and I also know the warts. So don’t expect this to be a sales pitch for Lynx, but as honest an assessment as I would have given internally were I still at Synopsys.

There’s a lot to cover, so I’m going to break this up into 5 separate posts over the course of the next 2 weeks. Today I’ll cover some of the history of flows at Synopsys and how they got to the Lynx flow that they have today. Wednesday I’ll cover what I consider the important nuts and bolts of Lynx, particularly what is new and exciting. Next week, I’ll give my opinions as to what I like and don’t like and what can be made even better.

Since I won’t be covering nuts and bolts until Wednesday, I’ll include at the end of this post some links (no pun intended) to the requisite gratuitous shiny marketing collateral from Synopsys. Please take a look … I’m sure they spent a lot of money having it produced.


A (not so) Brief History of Flows At Synopsys


The development of standardized tool flows dates back at least 12 years to the days when Synopsys was mainly a synthesis company. Some consultants in the professional services group decided that their life would be easier if they could standardize the synthesis scripts that they brought out to customers to do consulting work. The Synopsys Design Environment (SDE) was a set of Make, Perl, and Design Compiler scripts that implemented a hierarchical “compile-characterize-recompile” synthesis methodology. Although it was used extensively by those who created the scripts and some others, it never caught on broadly and no replacement came about for some time.


In 2000, Synopsys acquired a small design services company based in Austin called The Silicon Group (TSG). Primarily acquired to implement turnkey design services to GDSII (this was prior to the acquisition of Avanti), TSG had developed an internal tool flow to standardize and automate the use of the Avanti tools. This “Bamboo” flow was the genesis of Lynx.


After Synopsys acquired Avanti in 2002, Synopsys Professional Services ramped up on its backend design services to GDSII, causing a broad deployment of the Bamboo flow across the organization. Renamed TIGER (for Tool InteGration EnviRonment), this flow was originally optional for design teams to use on consulting engagements, then became “strongly encouraged” and finally “mandatory” for any turnkey projects.

As you might expect, as a flow that originated in another company and was being required by management, TIGER met with some resistance. There were certainly aspects of TIGER that could be (and have been) improved, but primarily there was the predictable “not-invented-here” resistance and “I’m doing fine, just leave me be”. I managed several projects that used TIGER and it usually took 2-4 weeks for a new consultant to get familiar enough with the flow to stop complaining. After that however, he would usually start to feel comfortable and by the end of the project, would be a TIGER advocate.

As a project manager, the biggest benefit was standardization. A project could hit the ground running without the need for the team to arm wrestle over what design flow to use and then to develop it. If I needed more consultants to help suddenly, I knew they would also be able to hit the ground running as far as the design flow was concerned. Over time, various aspects of TIGER became part of the vernacular and culture (e.g. I’m at the “syn” step), making communication that much more efficient.


In 2005, I became involved in an effort to introduce TIGER as a complete “service offering” to Synopsys customers. As you can imagine, there was a lot that had to be done before taking to market scriptware that was previously used internally, and this took over a year. Scripts had to be brought to a higher level of quality and a regression suite created to ensure that the flow ran properly across a wide variety of designs and libraries. A support organization within professional services was created solely to support customers using the flow. A metrics GUI was created to allow design and runtime metrics to be viewed graphically and reports created. Eventually, a flow editor was created to allow customers to modify flows without editing makefiles.

On the business side, there was a lot of discussion on how to offer this flow. There were those, myself included, who advocated making it available as “open source”. Personally, my feeling was that hundreds of customer designers could maintain and enhance the flow better, and at less cost, than a handful of Synopsys flow developers. And once adopted broadly, the flow would become the de facto standard, and Synopsys would benefit greatly from that leadership position. There were downsides to that approach, however, and in the end the “Synopsys Pilot Design Environment” debuted just before SNUG 2006 as a “service offering”.

With the move to outside customers, several new concerns arose:

  1. Customers wanted support for non-Synopsys tools, most notably Mentor’s Calibre, which enjoyed a dominant market share. Pilot allowed third-party tools to be added to the flow, but it was up to the customer to do so.
  2. Despite the GUIs that were developed, a designer still needed a fair amount of Make and Perl knowledge to be really effective, especially for debug. Many customer engineers did not feel comfortable with the intricacies of these scripting languages.
  3. There was confusion with other Synopsys methodologies offered by the Synopsys business units and applications consultants (e.g. the Recommended Astro Methodology) and by Synopsys partners (the TSMC and ARM Reference Flows). How were they different? Why were some free while Pilot was a service offering?
  4. Customers resisted getting “locked in” to an “all Synopsys” flow and forgoing (for some time at least) a best-of-breed approach.

Despite these concerns, Pilot seems to have gotten a fair amount of deployment, largely by companies going through some sort of major transition (e.g. moving to a new process node, moving from ASIC to COT, consolidating on Synopsys tools). Although I am no longer with Synopsys, my estimate is that roughly two dozen companies are using Pilot in some fashion, most with some degree of customization for their particular needs.


Lynx is the next in this series of design flow offerings from Synopsys. As with each of its predecessors, it attempts to incrementally address the issues and concerns with the previous flow and to add new capabilities to increase adoption. In short, Lynx is intended to be a full-fledged product, supported through the normal channels (support center and applications consultants).

(Note: I have not seen Lynx yet in action, so the following is based on Synopsys claims).

Among the key aspects of Lynx, as I’ve been told, are:

  • A runtime manager GUI has been added that supposedly frees designers from ever having to see or edit a makefile or Perl script. It also allows for debugging of errors and more configuration control.
  • Synopsys has migrated the metrics reporting to be web-based and hence accessible from any internet device (e.g. iPhone). The metrics can now cut across several projects instead of just one. And the GUI has been improved.
  • The GUIs in general are supposedly the same style as other Synopsys tools.
  • A much larger set of regressions is run at Synopsys, which should translate into better quality. Also, thanks to this regression automation, Synopsys claims it can release a version of Lynx concurrent with a new tool release. With Pilot, there was a 3 month lag.
  • Automated and semi-automated tapeout checks based on a set of internal guidelines that Synopsys has used for years on turnkey backend design projects.
  • Rather than having several competing methodologies and flows, Synopsys has decided to put all its eggs in the Lynx basket. This should result in greater focus and support to customers of Lynx.

My understanding is that Lynx will be sold as a perpetual site license with a separate user license for each user. Synopsys would not share the pricing with me, but I have strong reason to believe it is close to that of other mid-level Synopsys products.


If you want to access more information on Lynx, here are the “links” to go to:

Official Synopsys Lynx Webpage

Brief Lynx Video

Also, if you are at SNUG this week, you can get more info on Lynx at the keynote and the various Lynx-related events.

I very much regret not being able to go to SNUG this week. So I’d like to ask a favor. Please be my eyes and ears. If you attend the keynote or any of the Lynx related events, please post a comment with your thoughts here on this blog post. If you have thoughts on other aspects of SNUG, and you use Twitter, then please use the Twitter #SNUG hashtag in your tweets and I’ll feel like I’m there.

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 3 - Strongest Lynx

Part 4 - The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy