Posts Tagged ‘Pilot’

TSMC Challenges Lynx With Flow Of Their Own

Wednesday, May 6th, 2009

About a month and a half ago, I wrote a 5 part series of blog posts on the newly introduced Lynx Design System from Synopsys:

One key feature, the inclusion of pre-qualified technology- and node-specific libraries in the flow, was something I had pushed for when I was previously involved with Lynx (then called Pilot). These libraries would have made Lynx into a complete out-of-the-box foundry- and node-specific design kit … no technology-specific worries. Indeed, everyone thought it was a good idea, and it would have happened had it not been for resistance from the foundries that were approached. Alas!

In the months before the announcement of Lynx, I heard that Synopsys had finally cracked that nut and that foundry libraries would be part of Lynx after all. Whilst speaking to Synopsys about Lynx in preparation for my posts, I asked whether this was the case. Given my expectations, I was rather surprised when I was told that no foundry libraries would be included as part of Lynx or as an option.

The explanation was that it proved too difficult to handle the many options that customers used. High Vt and low Vt. Regular and low power process. IO and RAM libraries from multiple vendors like ARM and Virage. Indeed, this was a very reasonable explanation to me since my experience was that all chips used some special libraries along the way. How could one QA a set of libraries for all the combinations? So, I left it at that. Besides, Synopsys offered a script that would build the Lynx node from the DesignWare TSMC Foundry Libraries.

Two weeks ago, at the TSMC Technology Symposium in San Jose, TSMC announced their own Integrated Sign-off Flow that competes with the Lynx flow, this one including their libraries. Now it seems to make sense. TSMC may have backed out of providing libraries to Synopsys to use with Lynx since they were cooking up a flow offering of their own. I don’t know this to be a fact, but I think it’s a reasonable explanation.

So, besides the libraries, how does the TSMC flow compare to the Synopsys Lynx flow? I’m glad you asked. Here are the salient details of the TSMC offering:

  • Complete RTL to GDSII flow much like Lynx
  • Node and process specific optimizations
  • Uses multiple EDA vendors’ tools (Synopsys mostly, but also Cadence, Mentor, and Azuro)
  • Available only for TSMC 65nm process node (at this time)
  • No cost (at least to early adopters … the press release is unclear whether TSMC will charge in the future)
  • And of course, libraries are included.

In comparison to Synopsys’ Lynx Design System, there were some notable features missing from the announcement:

  • No mention of anything like a Management Cockpit or Runtime Manager
  • No mention of how this was going to be supported
  • No mention of any chips or customers that have been through the flow

To be fair, just because these were not mentioned does not mean that they are really missing. I have not seen a demo of the flow or spoken to TSMC (you know how to reach me), and either would help a lot in evaluating how this compares to Lynx. Still, from what I know, I’d like to give you my initial assessment of the strengths of these offerings.

TSMC Integrated Signoff Flow

  • The flow includes EDA tools from multiple vendors. There is an assumption that TSMC has created a best-of-breed flow by picking the tool that performed each step in the flow the best and making all the tools work together. Synopsys will claim that their tools are all best-of-breed and that other tools can be easily integrated. But, TSMC’s flow comes that way with no additional work required. (Of course, you still need to go buy those other tools).
  • Integrated libraries, as I’ve described above. Unfortunately, if you are using any 3rd party libraries, it seems you’ll need to integrate them yourself.
  • Node and process specific optimizations should provide an extra boost in quality of results.
  • Free (at least for now)

Synopsys Lynx Design System

  • You can use the flow with any foundry or technology node. A big advantage unless you are set on TSMC 65nm (which a lot of people are).
  • Other libraries and tools are easier to integrate into the flow, I would think. It’s not clear whether TSMC even supports hacking the flow for other nodes.
  • Support from the Synopsys field and support center. Recall, this is now a full-fledged product. Presumably, the price customers pay for Lynx will fund the support costs. If there is no cost for the TSMC flow, how will they fund supporting it? Perhaps they will take on the cost to get the silicon business, but that’s a business decision I am not privy to. And don’t underestimate the support effort. This is much like a flow that ASIC vendors (TI, Motorola/Freescale, LSI Logic), not foundries, would have offered. They had whole teams developing and QA’ing their flows. And then they would be tied to a specific set of tool releases and frozen.
  • Runtime Manager and Management Cockpit. Nice to have features.
  • Been used to create real chips before. As I’d said, the core flow in Lynx dates back almost 10 years and has been updated continuously. It’s not clear what the genesis of the new TSMC flow is. Is it a derivative of the TSMC reference flows? Is it something that has been used to create chips? Again, I don’t know, but I’ve got to give Synopsys the nod in terms of “production proven”.

So, what do I recommend? Well, if you are not going to TSMC 65nm with TSMC standard cell libraries, then there is not much reason to look at the TSMC flow. However, if you are using the technology that TSMC currently supports, the appeal of a turnkey, optimized, and FREE flow is pretty strong. I’d at least do my due diligence and look at the TSMC flow. It might help you get better pricing from TSMC.

If anyone out there has actually seen or touched the TSMC flow, please add a comment below. Everyone would love to know what you think first hand.
harry the ASIC guy

The Missing Lynx - The ASIC Cloud

Friday, April 3rd, 2009

My last blog post, entitled The Weakest Lynx, got a lot of attention from the Synopsys Lynx CAEs and Synopsys marketing. Please go see the comments on that post for a response from Chris Smith, the lead support person for Lynx at Synopsys. Meanwhile, the final part of this series … The Missing Lynx.

About 7 months ago, I wrote a blog post entitled Birth of an EDA Revolution in which I first shared my growing excitement over the potential for cloud computing and Software-as-a-Service (SaaS) to transform EDA. About a week later, Cadence announced a SaaS offering that provides their reference flows, their software, and their hardware for rent to projects on a short-term basis. About a week after that, I wrote a third post on this topic, asking WWSD (what will Synopsys do) in response to Cadence.

In that last post, I wrote the following:

Synopsys could probably go one better and offer a superior solution if it wanted to, combining their DesignSphere infrastructure and Pilot Design Environment. In fact, they have done this for select customers already, but not as a standard offering. There is some legwork that they’d need to do, but the real barrier is Synopsys itself. They’ve got to decide to go after this market and put together a standard offering like Cadence has … And while they are at it, if they host it on a secure cloud to make it universally accessible and scalable, and if they offer on-demand licensing, and if they make it truly open by allowing third party tools to plug into their flow, they can own the high ground in the upcoming revolution.

Although I wrote this over 6 months ago, I don’t think I could have written it better today. The only difference is that Pilot has now become Lynx. “The ASIC Cloud”, as I call it, would look something like this:

The ASIC Cloud

As I envision it, Synopsys Lynx will be the heart of The ASIC Cloud and will serve to provide the overall production design flow. The Runtime Manager will manage the resources including provisioning of additional hardware (CPU and storage) and licenses, as needed. The management cockpit will provide real-time statistics on resource utilization so the number of CPUs and licenses can be scaled on-the-go. Since The ASIC Cloud is accessible through any web browser, this virtual design center is accessible to large corporate customers and to smaller startups and consultants. It’s also available to run through portable devices such as netbooks and smartphones.

If you think I’m insane, you may be right, I may be crazy. But it just might be a lunatic you’re looking for. To show you that this whole cloud computing thing is not just my fever (I have been sick this past week), take a look at what this one guy in Greece did with Xilinx tools. He basically pays < $1 per hour to access hardware to run Xilinx synthesis tools on the Amazon Elastic Compute Cloud. Now, this is nothing like running an entire RTL2GDSII design flow, but he IS running EDA tools on the cloud, taking advantage of pay-as-you-go CPU and storage resources, and taking advantage of multiple processors to speed up his turnaround time. The ASIC Cloud will be similar and on a much greater scale.

It may take some time for Synopsys to warm up to this idea, especially since it is a whole new business model for licensing software. But for a certain class of customers (startups, design services providers) it has definite immediate benefits. And many of these customers are also potential Lynx customers.

So, Synopsys, if you want to talk, you know where to find me.


That wraps up my 5-part series on Synopsys Lynx. If you want to find the other 4 parts, here they are:

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 3 - Strongest Lynx

Part 4 - The Weakest Lynx

harry the ASIC guy

Lynx Design System? - It’s The Flow, Stupid!

Wednesday, March 18th, 2009

(This is the 2nd in a 3-part series on the newly introduced Synopsys Lynx Design System. You can find Part 1 here.)

When Pilot was introduced some years back, one of the bigger discussion points concerned what to call this thing. I’m not talking about whether to call it Pilot or some other name. I’m talking about what-the-heck-is it.

  • Is it a flow?
  • Is it an environment?
  • Is it a system?
  • Is it a platform?

In the end, the marketing folks decided that it was an environment, which included a flow and other stuff like:

  • Tools for prepping IP and libraries
  • A configuration GUI
  • A metrics reporting GUI

Lynx adds a Runtime Manager to the product, so now it is no longer an environment. It’s a Design System. Well, with all due respect to the marketing folks who wrung their hands making this decision, I’d like to say one thing:

It’s the flow, stupid!

Sure, the metrics GUI can create pretty color-coded dashboards that even a VP can understand. “We’re red, dammit. Why aren’t we green”. And the Runtime Manager can configure the flow, and launch jobs, and monitor progress, also with pretty colors. And the “Foundry Ready System” … well, I’m still trying to figure out what that even means, even though I know what it is. But it’s the flow at the core of Lynx (nee Pilot nee Tiger nee Bamboo) that is the real guts of the product and the reason you’d want to buy it or not buy it. It’s the engine that makes Lynx run. So let’s take a tour.

At the core, the Lynx flow is a set of makefiles and Perl scripts that invoke Synopsys tools with standardized Tcl scripts. (Clarification: all the scripts in the flow are Tcl – one tiny bit of Perl which comes with ICC-DP is re-used, but anything the user touches is going to be in Tcl.) Together, these scripts implement a flow that has been designed to produce very good results across a large number of designs. The flow operates with a standard set of naming conventions and standard directory structures. In all, the Lynx flow covers all the steps from RTL to GDSII implementation.
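To make the naming-convention idea concrete, here is a minimal sketch; the layout and step names are my own illustration, not the actual (and reportedly much deeper) Lynx directory tree:

```shell
#!/bin/sh
# Illustrative sketch of a standardized per-step directory layout like the
# one described above. Names are hypothetical, not real Lynx conventions.
set -e

DESIGN="my_chip"

# One directory per flow step, each with the same internal structure,
# so every task knows where to find its scripts, logs, and results.
for step in syn dft dp pnr finish; do
    mkdir -p "${DESIGN}/${step}/scripts" \
             "${DESIGN}/${step}/logs" \
             "${DESIGN}/${step}/results"
done

ls "${DESIGN}"
```

The payoff of a rigid layout like this is exactly what the post describes: any engineer (or script) dropped into the project already knows where everything lives.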

There are actually 5 major “steps” in the flow:

  1. Synthesis
  2. Design-for-Test (may now be combined with #1)
  3. Design Planning
  4. Place and Route Optimization
  5. Chip Finishing

Lynx Flow
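The ordering of the steps above can be sketched as a simple runner. This is a hedged illustration only: the step names and the `run_step` helper are my own, not actual Lynx task names, and the real flow drives each step through make and standardized Tcl scripts rather than a shell loop:

```shell
#!/bin/sh
# Sketch of the five major RTL-to-GDSII steps run in order.
# Step names and commands are illustrative, not real Lynx conventions.
set -e

run_step() {
    # A real task would invoke a Synopsys tool with a standardized
    # Tcl script here, e.g.: dc_shell -f scripts/$1.tcl
    echo "running step: $1"
}

for step in synthesis dft design_planning place_and_route chip_finishing; do
    run_step "$step"
done
```

In a make-based flow each step would also declare its predecessor as a prerequisite, so re-running a late step only re-executes what changed.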

Each of these steps is further broken down into smaller tasks. For instance, Place and Route might be divided into:

  • Placement Optimization
  • Clock Optimization
  • Clock Routing
  • Routing
  • Routing Optimization
  • Post Route Optimization
  • Signal Integrity Optimization

The scripts also implement the analysis tasks such as parasitic extraction, static timing analysis, formal verification, IR drop analysis, etc. In all, they cover everything a design team needs to go from an RTL design to tapeout. If there is a task that is missing (e.g. Logic BIST insertion), you can add an entire new step to the flow by modifying the makefiles by hand or by using the GUI to create a new task. If you want it to use a 3rd party tool, you can do that too by having the makefile call that tool. Third party tools actually are a little more complicated than that, but that gives you the idea. (Clarification from Synopsys: Third party tools can be executed. The system is very open to including other tools – Synopsys just doesn’t promote this loudly. A key thing about the Lynx flow is that you never need to look at a Makefile or execute a make command – “no Makefile knowledge required” – since it is all handled graphically through the GUI, so people should not be worried that they have to learn make.)
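As a concrete illustration of adding a missing task, here is a hedged sketch; the task name, tool invocation, and paths are all hypothetical assumptions, not actual Lynx or vendor conventions:

```shell
#!/bin/sh
# Hypothetical sketch of wrapping a third-party tool as a new flow task,
# e.g. logic BIST insertion. All names and paths are illustrative.
set -e

TASK="logic_bist"
IN_NETLIST="build/dft/design.v"
OUT_DIR="build/${TASK}"

mkdir -p "${OUT_DIR}" logs

# A real task would call the vendor tool here, e.g.:
#   third_party_bist -in "${IN_NETLIST}" -out "${OUT_DIR}/design.v"
# For this sketch we just record that the task ran.
echo "task ${TASK}: ${IN_NETLIST} -> ${OUT_DIR}/design.v" > "logs/${TASK}.log"
cat "logs/${TASK}.log"
```

The point is only that a new task is a wrapper with well-defined inputs, outputs, and logs; once it follows the flow’s conventions, it slots in like any built-in step.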

So, you may ask, what’s the big deal? After all, isn’t this the way that most design teams / CAD teams implement their flows, more or less? What is so special about a set of makefiles and Tcl scripts?

Honestly, you probably could go off and design something very similar on your own. And it might be better or more closely suited to your needs than this standard flow from Synopsys. Only you can make that choice because only you know what is important to you. The advantages of using a flow like Lynx from Synopsys are:

  • 90-95% of what you need you can get off-the-shelf and you can modify the rest if you need to.
  • The flow is being constantly updated with each new major release of the Synopsys tools so you don’t suffer from “flow inertia” and find yourself with an outdated flow.
  • The flow is being constantly tested, not just through regressions, but by the Synopsys consultants using it on real customer projects. So the quality is high.
  • In areas such as power, Synopsys can optimize the flow across multiple tasks and steps and tools, something that it would be hard for non tool experts to do.
  • You can use the engineers who would have been designing your flow to do real work.

Of course there are disadvantages as well:

  • It’s an all Synopsys flow, so you have to use Synopsys tools to get the most benefit. If you currently use other 3rd party tools, then the benefit is reduced proportionately. Or you can convert to the Synopsys tool, but that costs money and time and maybe it’s not the best-of-breed as a point tool.
  • The scripts are actually divided into many smaller scripts that call each other. Although very modular, this can be confusing for a novice user who is trying to modify the flow or debug a problem.
  • Lynx has a “strongly preferred” directory structure that is very deep. They do this for some good reasons, but this might go against the norm at your company and ignite a “religious feud”.
  • Once you’ve invested in this flow by training up your organization and building your own scripts and utilities on top, you’re pretty committed to Synopsys. Not a problem if Synopsys is your long term partner, but if you want to fool around or have a fear of commitment, not so good.

The bottom-line is that Lynx provides you with the same tool flow that the Synopsys consultants use on their own projects. If you are using an all or predominantly Synopsys flow, then I think it’s worth looking at.

(Friday: What I like. What I don’t like. And What Could Be Better)

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 3 -  Strongest Lynx

Part 4 -  The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Synopsys Lynx Design System Debuts at SNUG

Monday, March 16th, 2009

Lynx Design Flow

This morning at the Synopsys Users Group (SNUG) Conference in San Jose, Aart de Geus will be announcing Synopsys’ latest generation of design flow / environment / system called Lynx. This is a big deal for Synopsys for a variety of reasons. It’s also of particular interest to me for 3 reasons:

  1. First, when I was a program manager at Synopsys, I managed several projects that used previous generations of Lynx and was closely involved with the introduction of the most recent predecessor of Lynx, known as Pilot.
  2. Second, I have written about and believe in the importance of having an industry standard interoperable design system to enable collaboration.
  3. Third, I still keep in touch with some members of the flow team at Synopsys who developed and will support Lynx, so it’s good to see what they have come out with.

Given this full disclosure, you might be wondering if my opinion is objective. Probably not entirely. But those at Synopsys who worked with me in regards to flows can tell you that I was often a rather vocal critic. I know about the strengths and I also know the warts. So don’t expect this to be a sales pitch for Lynx, but as honest an assessment as I would have given internally were I still at Synopsys.

There’s a lot to cover, so I’m going to break this up into 5 separate posts over the course of the next 2 weeks. Today I’ll cover some of the history of flows at Synopsys and how they got to the Lynx flow that they have today. Wednesday I’ll cover what I consider the important nuts and bolts of Lynx, particularly what is new and exciting. Next week, I’ll give my opinions as to what I like and don’t like and what can be made even better.

Since I won’t be covering nuts and bolts until Wednesday, I’ll include at the end of this post some links (no pun intended) to the requisite gratuitous shiny marketing collateral from Synopsys. Please take a look … I’m sure they spent a lot of money having it produced.


A (not so) Brief History of Flows At Synopsys


The development of standardized tool flows dates back at least 12 years to the days when Synopsys was mainly a synthesis company. Some consultants in the professional services group decided that their life would be easier if they could standardize the synthesis scripts that they brought out to customers to do consulting work. The Synopsys Design Environment (SDE) was a set of Make, Perl, and Design Compiler scripts that implemented a hierarchical “compile-characterize-recompile” synthesis methodology. Although it was used extensively by those who created the scripts and some others, it never caught on broadly and no replacement came about for some time.


In 2000, Synopsys acquired a small design services company based in Austin called The Silicon Group (TSG). Primarily acquired to implement turnkey design services to GDSII (this was prior to the acquisition of Avanti), TSG had developed an internal tool flow to standardize and automate the use of the Avanti tools. This “Bamboo” flow was the genesis of Lynx.


After Synopsys acquired Avanti in 2002, Synopsys Professional Services ramped up on its backend design services to GDSII, causing a broad deployment of the Bamboo flow across the organization. Renamed TIGER (for Tool InteGration EnviRonment), this flow was originally optional for design teams to use on consulting engagements, then became “strongly encouraged” and finally “mandatory” for any turnkey projects.

As you might expect, as a flow that originated in another company and was being required by management, TIGER met with some resistance. There were certainly aspects of TIGER that could be (and have been) improved, but primarily there was the predictable “not-invented-here” resistance and “I’m doing fine, just leave me be”. I managed several projects that used TIGER and it usually took 2-4 weeks for a new consultant to get familiar enough with the flow to stop complaining. After that however, he would usually start to feel comfortable and by the end of the project, would be a TIGER advocate.

As a project manager, the biggest benefit was standardization. A project could hit the ground running without the need for the team to arm wrestle over what design flow to use and then to develop it. If I needed more consultants to help suddenly, I knew they would also be able to hit the ground running as far as the design flow was concerned. Over time, various aspects of TIGER became part of the vernacular and culture (e.g. I’m at the “syn” step), making communication that much more efficient.


In 2005, I became involved in an effort to introduce TIGER as a complete “service offering” to Synopsys customers. As you can imagine, there was a lot that had to be done before taking to market scriptware that was previously used internally, and this took over a year. Scripts had to be brought to a higher level of quality and a regression suite created to ensure that the flow ran properly across a wide variety of designs and libraries. A support organization within professional services was created solely to support customers using the flow. A metrics GUI was created to allow design and runtime metrics to be viewed graphically and reports created. Eventually, a flow editor was created to allow customers to modify flows without editing makefiles.

On the business side, there was a lot of discussion on how to offer this flow. There were those, myself included, who advocated to make it available as “open source”. Personally, my feeling was that hundreds of customer designers can maintain and enhance the flow better and at less cost than a handful of Synopsys flow developers. And once adopted broadly, this flow would become the de facto standard, and Synopsys would benefit greatly from that leadership position. There were downsides to that approach, however, and in the end the “Synopsys Pilot Design Environment” debuted just before SNUG 2006 as a “service offering”.

With the move to outside customers, several new concerns arose:

  1. Customers wanted support for non-Synopsys tools, most notably Mentor’s Calibre which enjoyed a dominant market share. Pilot allowed for 3rd party tools to be added to the flow, but it was up to the customer to do so.
  2. Despite the GUIs that were developed, there was still a fair amount of Make and Perl knowledge that the designer needed to be really effective, especially for debug. Many customer engineers did not feel comfortable with the intricacies of these scripting languages.
  3. There was confusion with other Synopsys methodologies offered by the Synopsys business units and application consultants (e.g. Recommended Astro Methodology) and by Synopsys partners (TSMC and ARM reference flows). How were they different? Why were some free and Pilot a service offering?
  4. Customers resisted getting “locked-in” to an “all Synopsys” flow and forgoing (for some time at least) the best-of-breed approach.

Despite these concerns, it seems that Pilot has gotten a fair amount of deployment, largely by companies going through some sort of major transition (e.g. moving to a new process node, moving from ASIC to COT, consolidating on Synopsys tools). Although I am no longer with Synopsys, my estimate is that there are probably about 2 dozen or so companies using Pilot in some fashion, mostly with some degree of customization for their particular needs.


Lynx is the next in the series of design flow offerings from Synopsys. As with the others, it attempts to incrementally address issues and concerns with Pilot and to add new capabilities to increase adoption. In short, Lynx is intended to be a full-fledged product, supported through normal channels (support center and applications consultants).

(Note: I have not seen Lynx yet in action, so the following is based on Synopsys claims).

Among the key aspects of Lynx, as I’ve been told, are:

  • A runtime manager GUI has been added that supposedly frees designers from ever having to see or edit a makefile or Perl script. It also allows debugging of errors and more configuration control.
  • Synopsys has migrated the metrics reporting to be web-based and hence accessible to any internet device (e.g. iPhone). The metrics can now cut across several projects instead of just one. And the GUI has been improved.
  • The GUIs in general are supposedly the same style as other Synopsys tools.
  • A much larger set of regressions runs at Synopsys, which should translate into better quality. Also, due to this regression automation, Synopsys claims they can release a version of Lynx concurrent with a new tool release. With Pilot, there was a 3 month lag.
  • Automated and semi-automated tapeout checks based on a set of internal guidelines that Synopsys has used for years on turnkey backend design projects.
  • Rather than having several competing methodologies and flows, Synopsys has decided to put all its eggs in the Lynx basket. This should result in greater focus and support to customers of Lynx.

My understanding is that Lynx will be sold as a perpetual site license with a separate user license for each user. Synopsys would not share the pricing with me, but I have strong reason to believe it is close to other mid-level Synopsys products.


If you want to access more information on Lynx, here are the “links” to go to:

Official Synopsys Lynx Webpage

Brief Lynx Video

Also, if you are at SNUG this week, you can get more info on Lynx at the following:

I very much regret not being able to go to SNUG this week. So I’d like to ask a favor. Please be my eyes and ears. If you attend the keynote or any of the Lynx related events, please post a comment with your thoughts here on this blog post. If you have thoughts on other aspects of SNUG, and you use Twitter, then please use the Twitter #SNUG hashtag in your tweets and I’ll feel like I’m there.

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 3 -  Strongest Lynx

Part 4 -  The Weakest Lynx

Part 5 - The Missing Lynx - The ASIC Cloud

harry the ASIC guy

Upon Further Review and W.W.S.D

Sunday, September 21st, 2008

At the end of last Sunday’s Chargers-Broncos game, Referee Ed Hochuli blew a call that cost the San Diego Chargers the football game. Here’s a somewhat comical look at what happened:

Probably not so comical if you’re a Chargers fan :-(

Well, last week I got a chance to do some more “research” into the Cadence announcement of a SaaS offering. Although I got the substance of the call correct, in haste I also got one important detail incorrect. (As Mark Twain once said, “a lie can travel halfway around the world while the truth is putting on its shoes.” Today, a lie can travel around the world several hundred times while you put on your shoes).

I had inferred from the use of the term “Software-as-a-Service” that the Hosted Design offering would include a “pay per use … pay as you go” or similar on-demand licensing model. Upon further review … this is not the case. Here are some of the things I found out:

  1. No on-demand licensing, no eDACard … only monthly granularity for licensing. If you want to scale the size of the hosted environment, several weeks lead time may be needed to obtain and configure the additional CPUs unless they are otherwise available.
  2. The “flows” that are offered are the Cadence reference flows (e.g. Low-Power Design Flow), not a production flow that Cadence may or may not be developing.
  3. Cadence says that it can host any third party EDA software … just license it to Cadence’s hostid.

Despite some limitations, this is still a big step. Small companies can now obtain the necessary hardware, software, and IT support to do chip design at a lower initial cost than building their own infrastructure. The VCs should like that.

But there are some limitations.  First, although the Cadence VCAD chamber provides security, it lacks the instant scalability and on-demand pricing that cloud computing would provide. Second, although reference flows are provided, it lacks a real production design environment that designers can just pick up and use.  Third, despite Cadence’s assurances that they will allow other EDA tools to be hosted, competitive tools likely will be discouraged since the ultimate objective is to further lock customers into an all Cadence tool flow.

So, the question now is … What Will Synopsys Do (W.W.S.D)?

Before that, we have to ask What “Has” Synopsys Done?  You see, Synopsys tried and then abandoned a similar idea about 7 years ago. At the time, companies were not “comfortable with the idea that their computers and data were in a remote building operated by a third party”.  But they are now (at least more than before).  At that time, Synopsys had no production design environment available to offer. They do now.

Synopsys could probably go one better and offer a superior solution if it wanted to, combining their DesignSphere infrastructure and Pilot Design Environment. In fact, they have done this for select customers already, but not as a standard offering. There is some legwork that they’d need to do, but the real barrier is Synopsys itself. They’ve got to decide to go after this market and put together a standard offering like Cadence has.

And while they are at it, if they host it on a secure cloud to make it universally accessible and scalable, and if they offer on-demand licensing, and if they make it truly open by allowing third party tools to plug into their flow, they can own the high ground in the upcoming revolution.

What do you think?

harry the ASIC guy