(This is the second in a series on the newly introduced Synopsys Lynx Design System. You can find Part 1 here.)
When Pilot was introduced some years back, one of the bigger discussion points concerned what to call this thing. I’m not talking about whether to call it Pilot or some other name. I’m talking about what-the-heck-is-it.
- Is it a flow?
- Is it an environment?
- Is it a system?
- Is it a platform?
In the end, the marketing folks decided that it was an environment, which included a flow and other stuff like:
- Tools for prepping IP and libraries
- A configuration GUI
- A metrics reporting GUI
Lynx adds a Runtime Manager to the product, so now it is no longer an environment. It’s a Design System. Well, with all due respect to the marketing folks who wrung their hands making this decision, I’d like to say one thing:
It’s the flow, stupid!
Sure, the metrics GUI can create pretty color-coded dashboards that even a VP can understand. “We’re red, dammit. Why aren’t we green?” And the Runtime Manager can configure the flow, and launch jobs, and monitor progress, also with pretty colors. And the “Foundry Ready System” … well, I’m still trying to figure out what that even means, even though I know what it is. But it’s the flow at the core of Lynx (née Pilot, née Tiger, née Bamboo) that is the real guts of the product and the reason you’d want to buy it or not buy it. It’s the engine that makes Lynx run. So let’s take a tour.
At the core, the Lynx flow is a set of makefiles and Perl scripts that invoke Synopsys tools with standardized Tcl scripts. (Clarification: All the scripts in the flow are Tcl – one tiny bit of Perl which comes with ICC-DP is re-used, but anything the user touches is going to be in Tcl.) Together, these scripts implement a flow that has been designed to produce very good results across a large number of designs. The flow operates with a standard set of naming conventions and standard directory structures. In all, the Lynx flow covers all the steps from RTL to GDSII implementation.
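To make that concrete, here’s a minimal sketch of what a make-driven flow of this style looks like. To be clear, the target names, paths, and variables below are my own invention for illustration – they are not actual Lynx conventions, and the GUI normally hides all of this from you:

```make
# Hypothetical sketch of a makefile-driven flow step (not actual Lynx files).
# Each target runs a standardized Tcl script inside the appropriate
# Synopsys shell, with dependencies expressing the step ordering.
DESIGN   = my_chip
WORK_DIR = work/$(DESIGN)

# Synthesis step: RTL in, netlist out, driven by a standard Tcl script
$(WORK_DIR)/syn/$(DESIGN).ddc: rtl/$(DESIGN).v scripts/syn.tcl
	dc_shell -f scripts/syn.tcl | tee logs/syn.log

# Place-and-route step: depends on the synthesis result
$(WORK_DIR)/pnr/$(DESIGN).gds: $(WORK_DIR)/syn/$(DESIGN).ddc scripts/pnr.tcl
	icc_shell -f scripts/pnr.tcl | tee logs/pnr.log
```

The point is that each step is just a target whose recipe launches a tool on a standardized Tcl script, with make’s dependency tracking enforcing the step order and the standard directory structure keeping the results organized.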
There are actually 5 major “steps” in the flow:
- Synthesis
- Design-for-Test (may now be combined with Synthesis)
- Design Planning
- Place and Route Optimization
- Chip Finishing
Each of these steps is further broken down into smaller tasks. For instance, Place and Route might be divided into:
- Placement Optimization
- Clock Optimization
- Clock Routing
- Routing Optimization
- Post Route Optimization
- Signal Integrity Optimization
The scripts also implement the analysis tasks such as parasitic extraction, static timing analysis, formal verification, IR drop analysis, etc. In all, they cover everything a design team needs to go from an RTL design to tapeout. If there is a task that is missing (e.g. Logic BIST insertion), you can add an entire new step to the flow by modifying the makefiles by hand, or by using the GUI to create a new task. If you want it to use a third-party tool, you can do that too by having the makefile call that tool. Third-party tools actually are a little more complicated than that, but that gives you the idea. (Clarification from Synopsys: Third-party tools can be executed. The system is very open to including other tools – Synopsys just doesn’t promote this loudly. A key thing about the Lynx flow is that you never need to even look at a makefile or execute a make command – “no Makefile knowledge required” – since it is all handled graphically through the GUI. People should not be worried that they have to learn make.)
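For the curious, bolting a third-party task onto a make-based flow could, in the simplest case, look something like this. Again, this is a purely illustrative sketch under my own invented names – the tool, flags, and paths are hypothetical, not real Lynx (or vendor) syntax, and in practice you’d create the task through the GUI rather than editing makefiles by hand:

```make
# Hypothetical new step: Logic BIST insertion via a third-party tool.
# Tool name, flags, and file names are invented for illustration.
$(WORK_DIR)/dft/$(DESIGN)_lbist.v: $(WORK_DIR)/syn/$(DESIGN).v
	third_party_lbist -in $< -out $@ | tee logs/lbist.log
```

As long as the new target consumes and produces files in the flow’s standard locations, make wires it into the step ordering like any native task; the real-world complication is translating data formats and settings between the third-party tool and the Synopsys tools on either side of it.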
So, you may ask, what’s the big deal? After all, isn’t this the way that most design teams / CAD teams implement their flows, more or less? What is so special about a set of makefiles and Tcl scripts?
Honestly, you probably could go off and design something very similar on your own. And it might be better or more closely suited to your needs than this standard flow from Synopsys. Only you can make that choice because only you know what is important to you. The advantages of using a flow like Lynx from Synopsys are:
- You can get 90-95% of what you need off-the-shelf, and you can modify the rest if you need to.
- The flow is being constantly updated with each new major release of the Synopsys tools so you don’t suffer from “flow inertia” and find yourself with an outdated flow.
- The flow is being constantly tested, not just through regressions, but by the Synopsys consultants using it on real customer projects. So the quality is high.
- In areas such as power, Synopsys can optimize the flow across multiple tasks and steps and tools, something that would be hard for anyone who isn’t a tool expert to do.
- You can use the engineers who would have been designing your flow to do real work.
Of course there are disadvantages as well:
- It’s an all-Synopsys flow, so you have to use Synopsys tools to get the most benefit. If you currently use third-party tools, then the benefit is reduced proportionately. Or you can convert to the Synopsys tools, but that costs money and time, and maybe they’re not the best-of-breed as point tools.
- The scripts are actually divided into many small pieces that call each other. Although very modular, this can be confusing for a novice user trying to modify the flow or debug a problem.
- Lynx has a “strongly preferred” directory structure that is very deep. They do this for some good reasons, but this might go against the norm at your company and ignite a “religious feud”.
- Once you’ve invested in this flow by training up your organization and building your own scripts and utilities on top, you’re pretty committed to Synopsys. Not a problem if Synopsys is your long term partner, but if you want to fool around or have a fear of commitment, not so good.
The bottom-line is that Lynx provides you with the same tool flow that the Synopsys consultants use on their own projects. If you are using an all or predominantly Synopsys flow, then I think it’s worth looking at.
(Friday: What I like. What I don’t like. And What Could Be Better)
Part 3 - Strongest Lynx
Part 4 - The Weakest Lynx
Part 5 - The Missing Lynx - The ASIC Cloud
harry the ASIC guy