Posts Tagged ‘Cadence’

Altium Looking to Gain Altitude in the Cloud

Sunday, January 30th, 2011

Over the holiday break, I came across an interview with Altium CIO Alan Perkins that caught my eye. Sramana Mitra has been profiling interesting cloud-based businesses, and this interview focused on how this EDA company was planning to move into the cloud. I wasn’t able to talk to Alan Perkins directly, but I was able to find out more through their folks in the US (the company is based in Australia). It was interesting enough to warrant a post.

I knew very little about Altium before seeing this interview, and maybe you don’t know much about them either, so here is a little background. Based in Australia, Altium is a small (~$50M) EDA company focused primarily on the design of printed circuit boards with FPGAs and embedded software. It was formed from a company called Protel about 10 years ago and most recently gained attention when it acquired Morfik, a company that offers an IDE for developing web apps (more on that later). According to some data I saw and from what they told me, they added 1700 new customers (companies, not seats) in 2010 just in the US! So, they may be the best kept secret in a long while. (Coincidentally, the next day at work after I spoke to Altium, I spoke to someone at another company that was using Altium to design a PC board for us.)

According to Altium, their big differentiator is that they have a database-centric offering, as compared to tool-flow-centric offerings like Cadence’s OrCAD and Allegro and Mentor’s Board Station and Expedition and related tools. I’m not an EDA developer, so I won’t pretend to understand the nuances of one versus the other. However, when I think of “database-centric”, I think of “frameworks”. I know it’s been almost 20 years since those days, and things have changed, so maybe database-centric makes a lot of sense now. OpenAccess is certainly a good thing for the industry, but that is because it’s an “open standard” while Altium’s database is not. Anyway, enough on this matter because, as I said, I’m not an EDA developer and don’t want to get in too deep here.

A few years ago, I wrote a blog post entitled “Is IP a 4-Letter Word?”. The main thrust of that post was that IP quality is rather poor in general and there needs to be some sort of centralized authority to grade IP quality and to certify its use. So, when Altium told me they plan to enable a marketplace for design IP by creating “design vaults” in the cloud, my first question was “who is going to make sure this IP is any good?” Is this going to be the iPhone app model, where Apple vets and approves every app? Or is it going to be the Android model: caveat emptor?

To Altium’s credit, they have similar concerns, which is why they are planning to move slowly. With the introduction of Altium Designer 10, Altium will first provide its own vetted IP in the cloud. In the past, this IP was distributed to tool users on their site, but having it in the cloud will make it easier to distribute (pull, instead of push) and also allow for asynchronous releases and updates. The tools will automatically detect if you are using an IP that has been revved, and ask you if you want to download the new version.
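
To picture the pull model, here is a minimal sketch of what a client-side update check against a cloud vault could look like. Everything in it is hypothetical: the endpoint, the JSON layout, and the function names are invented for illustration and are not Altium’s actual API.

```python
# Hypothetical sketch of a pull-based IP update check against a cloud vault.
# The endpoint, JSON layout, and field names are invented for illustration;
# this is NOT Altium's actual API.

import json
from urllib.request import urlopen

VAULT_URL = "https://vault.example.com/ip"  # hypothetical endpoint

def check_for_update(ip_name, local_revision):
    """Ask the vault for the latest revision of an IP block and offer to pull it."""
    with urlopen(f"{VAULT_URL}/{ip_name}/latest") as resp:
        latest = json.load(resp)            # e.g. {"revision": 7, "url": "..."}
    if latest["revision"] > local_revision:
        answer = input(f"{ip_name} rev {latest['revision']} is available "
                       f"(you have rev {local_revision}). Download? [y/n] ")
        if answer.lower().startswith("y"):
            return latest["url"]            # caller fetches the new release
    return None
```

The point is simply that the client asks the vault when it needs to, rather than waiting for the vendor to push a release.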

Once they have this model understood, Altium then plans to open the model up to 3rd party IP, which could be offered for free, or licensed, or maybe even traded for credits (like Linden dollars in Second Life). It’s an interesting idea which requires some pretty significant shifts in personal and corporate cultures. I think that sharing of small “jelly bean” type IP is achievable because none of it is very differentiated. But once you get to IP that required significant time to design, why share it unless IP is your primary business? The semiconductor industry is still fiercely competitive and I think that will be a significant barrier. Not to mention that it takes something like 4x-5x as much effort to create an IP that is easily reusable as compared to creating it to be used just once.

Being a tool for the design of FPGAs is an advantage for Altium, since the cost of repairing an FPGA bug is so much less than for an SoC or ASIC. For FPGAs, the rewards may be greater than the risks, especially for companies that are doing chip design for the first time. And this is the market that Altium is aiming for … the thousands of companies that will have to design their products to work on the internet-of-things. Companies that design toasters that have never had any digital electronics and now have to throw something together. They will be the ones that will want to reuse these designs because they don’t have the ability to design them in-house.

Which brings us to Morfik, the company that Altium acquired that makes an IDE for web apps. It’s those same companies designing internet-enabled toasters that will also need to design a web app for their customers to access the toaster. So if Altium sells the web app and the IP that lets the toaster talk to the web app, then Altium provides a significant value to the toaster company. That’s the plan.

Still, the cloud aspect is what interests me the most. Even if designers are reluctant to enter this market, the idea of having this type of central repository is best enabled by the cloud. The cloud can enable collaboration and sharing much better than any hosted environment. And it can scale as large and as quickly as needed. It allows a safe sort of DMZ where IP can be evaluated by a customer while still protecting the IP from theft.

This is not by any means a new idea either. OpenCores has been around for more than a decade offering a repository for designers to share and access free IP. I spoke with them a few years ago and at the time the site was used mainly by universities and smaller companies, but their OpenRISC processor has seen some good usage, so it’s a model that can work.

I’m anxious to see what happens over time with this concept. Eventually, I think this sort of sharing will have to happen and it will be interesting to see how this evolves.

harry the ASIC guy

Where in the DAC is harry the ASIC guy?

Friday, June 11th, 2010

Last year’s Design Automation Conference was kind of quiet and dull, muted by the impact of the global recession, with low attendance and just not a lot of really interesting new developments. This year looks very different; I’m actually having to make some tough choices about which sessions to attend. And with all the recent acquisitions by Cadence and Synopsys, the landscape is changing all around, which will make for some interesting discussion.

I’ll be at the conference Monday through Wednesday. As a rule, I try to keep half of my schedule open for meeting up with friends and colleagues and for the unexpected. So if you want to chat, hopefully we can find some time. Here are the public events that I have lined up:

Monday

10:30 - 11:00 My good friend Ron Ploof will be interviewing Peggy Aycinena on the Synopsys Conversation Central stage, so I can’t miss that. They both ask tough questions, so that one may get chippy. (Or you can participate remotely live here)

11:30 - 12:00 I’ll be on that same Synopsys Conversation Central stage interviewing Verification Consultant and Blogger Extraordinaire Brian Bailey. Audience questions are encouraged, so please come and participate. (Or you can participate remotely live here)

3:00 - 4:00 I’ll be at the Atrenta 3D Blogfest at their booth. It should be an interesting interactive discussion and a good chance to learn about one of the 3 directions EDA is moving in.

6:00 - Cadence is having a Beer for Bloggers event, but I’m not sure where. For the record, beer does not necessarily mean I’ll write good things. (This event was canceled since the Denali party is that night.)

Tuesday

8:30 - 10:15 For the 2nd straight year, a large fab, GlobalFoundries (last year it was TSMC), will be presenting its ideas on how the semiconductor design ecosystem should change: From Contract to Collaboration: Delivering a New Approach to Foundry.

10:30 - 12:00 I’ll be at a panel discussion on EDA Challenges and Options: Investing for the Future. Wally Rhines is the lead panelist so it should be interesting as well.

12:30 - 1:00 I’ll be back at the Synopsys Conversation Central stage interviewing James Wendorf (IEEE) and Jeff Green (McAfee) about standards for cloud computing security, one of the hot topics.

Wednesday

10:30 - 11:30 I’ll be at the Starbucks outside the convention floor with Xuropa and Sigasi. We’ll be giving out Belgian Chocolate and invitations to use the Sigasi-Xilinx lab on Xuropa.

2:00 - 4:00 James Colgan, CEO of Xuropa, and representatives from Amazon, Synopsys, Cadence, Berkeley and Altera will be on a panel discussion, “Does IC Design Have a Future in the Cloud?” You know what I think!

This is my plan. Things might change. I hope I run into some of you there.

harry the ASIC guy

Which Direction for EDA - 2D, 3D, or 360?

Sunday, May 23rd, 2010

A hiker comes to a fork in the road and doesn’t know which way to go to reach his destination. Two men are at the fork, one of whom always tells the truth while the other always lies. The hiker doesn’t know which is which. He may ask one of the men only one question to find his way.

Which man does he ask, and what is the question?

__________

There’s been lots of discussion over the last month or two about the direction of EDA going forward. And I mean, literally, the “direction” of EDA. Many semiconductor industry folks and proponents have been telling us to hold off on that obituary for 2D scaling and Moore’s law. Others have been doing quiet innovation in the technologies needed for 3D die and wafer stacks. And Cadence has recently unveiled its holistic 360-degree vision for EDA that has us developing apps first and silicon last.

I’ll examine each of these orthogonal directions in the next few posts. In this post, I’ll start with the problem that is forcing us to make these choices.

The Problem

One of the great things about writing this blog is that I know that you all are very knowledgeable about the industry and technology and I don’t need to start with the basics. So I’ll just summarize them here for clarity:

  • Smaller semiconductor process geometries are getting more and more difficult to achieve and are challenging the semiconductor manufacturing equipment, the EDA tools, and even the physics. No doubt there have been and always will be innovations and breakthroughs that will move us forward, but we can no longer see clearly the path to the next 3 or 4 process geometries down the road. Even if you are one of the people who feels there is no end to the road, you’d have to admit that it certainly is getting steeper.
  • The cost to create fabs for these process nodes is increasing drastically, forcing consolidation in the semiconductor manufacturing industry. Some predict there will be only 3 or 4 fabs in a few years. This cost is passed on to the cost of the semiconductor device. Net cost per gate may not be rising, but the cost to ante up with a set of masks at a new node certainly is.
  • From a device physics and circuit design perspective, we are hitting a knee in the curve where smaller geometries are not able to deliver the full speed increases and power reductions achieved at larger nodes without new “tricks” being employed.
  • Despite these challenges, ICs are still growing in complexity and so are the development costs, some say as high as $100M. Many of these ICs are complex SoCs with analog and digital content, multiple processor cores, and several 3rd party IP blocks. Designing analog and digital circuits in the same process technology is not easy. The presence of embedded processors means that software and hardware have intersected and need to be developed harmoniously … no more throwing the hardware over-the-wall to software. And all this 3rd party IP means that our success is increasingly dependent on the quality of work of others that we have never met.
  • FPGAs are eating away at ASIC market share because of all the factors above. The break-even quantity between ASIC and FPGA is increasing, which means more of the lower volume applications will choose FPGAs (see the back-of-the-envelope sketch after this list). Nonetheless, these FPGAs are still complex SoCs requiring verification methods similar to ASICs, including concurrent hardware and software development.
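
To make the break-even point concrete, here is a back-of-the-envelope sketch. All of the dollar figures are invented for illustration; real NRE and unit costs vary widely by node, design, and volume.

```python
# Toy ASIC-vs-FPGA break-even calculation. All dollar figures are invented
# for illustration; real NRE and unit costs vary widely by node and design.

def break_even_units(asic_nre, asic_unit_cost, fpga_unit_cost):
    """Volume above which the ASIC's lower unit cost pays back its NRE."""
    return asic_nre / (fpga_unit_cost - asic_unit_cost)

# As mask-set and development (NRE) costs climb at each new node,
# the break-even volume climbs with them, pushing low-volume designs to FPGAs.
for nre in (2e6, 10e6, 30e6):
    units = break_even_units(nre, asic_unit_cost=10.0, fpga_unit_cost=60.0)
    print(f"ASIC NRE ${nre/1e6:.0f}M -> break-even at {units:,.0f} units")
```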

There are no doubt many other factors, but these are the critical ones in my mind. So, then, what does all this mean for semiconductor design and EDA?

At the risk of overusing the metaphor, many feel we are at a “fork in the road”. One path leads straight ahead, continuing 2D scaling with new process and circuit innovations. Another path leads straight up, moving Moore’s law into the third dimension with die stacks in order to cost-effectively manage increasing complexity. And one path turns us 180 degrees around, asking us to look at the applications and software stack first and the semiconductor last. Certainly, 3 separate directions.

Which is the best path? Is there another path to move in? Perhaps a combination of these paths?

I’ll try to examine these questions in the next few posts. Next Post: Is 2D Scaling Really Dead or Just Mostly Dead?

__________

Answer to Riddle: Either man should be asked the following question: “If I were to ask you if this is the way I should go, would you say yes?” While asking the question, the hiker should be pointing at one of the directions leading from the fork. The nesting is what makes it work: the truth-teller reports his truthful answer, while the liar lies about the lie he would have told, so both men answer “yes” exactly when the hiker is pointing the right way.

harry the ASIC guy

My Obligatory TOP 10 for 2009

Thursday, December 31st, 2009

(Image: “2009 To 2010”, http://www.flickr.com/photos/optical_illusion/ / CC BY 2.0)

What’s a blog without some sort of obligatory year-end TOP 10 list?

So, without further ado, here is my list of the TOP 10 events, happenings, occurrences, and observations that I will remember from 2009, from my own perspective. Here goes:

  1. Verification Survey - Last February, as DVCon was approaching, I thought it would be interesting to post a quickie survey to see what verification languages and methodologies were being used. Naively, I did not realize to what extent the fans of the various camps would go to rig the results in their favor. Nonetheless, the results ended up very interesting and I learned a valuable lesson on how NOT to do a survey.
  2. DVCon SaaS and Cloud Computing EDA Roundtable - One of the highlights of the year was definitely the impromptu panel that I assembled during DVCon to discuss Software-as-a-Service and Cloud Computing for EDA tools. My thanks to the panel guests, James Colgan (CEO @ Xuropa), Jean Brouwers (Consultant to Xuropa), Susan Peterson (Verification IP Marketing Manager @ Cadence), Jeremy Ralph (CEO @ PDTi), Bill Alexander (VP Marketing @ Blue Pearl Software), and Bill Guthrie (VP Marketing @ Numetrics). Unfortunately, the audio recording of the event was not of high enough quality to post, but you can read about it from others at the following locations:

    > 3 separate blog posts from Joe Hupcey (1, 2, 3)

    > A nice mention from Peggy Aycinena

    > Numerous other articles and blog posts throughout the year that were set in motion, to some extent, by this roundtable

  3. Predictions to the contrary, Magma is NOT dead. Cadence was NOT sold. Oh, and EDA is NOT dead either.
  4. John Cooley IS Dead - OK, he’s NOT really dead. But this year was certainly a turning point for his influence in the EDA space. It started off with John’s desperate attempt at a Conversation Central session at DAC to tell bloggers that their blogs suck and to convince them to just send him their thoughts. Those who took John up on his offer waited 4 months to see their thoughts finally posted by John in his December DAC trip report. I had a good discussion on this topic with John earlier this year, which he asked me to keep “off the record”. Let’s just say, he just doesn’t get it and doesn’t want to get it.
  5. The Rise of the EDA Bloggers.
  6. FPGA Taking Center Stage - It started back in March when Gartner issued a report stating that there were 30 FPGA design starts for every ASIC start. That number seemed very high to me and to others, but that did not stop this 30:1 ratio from being quoted as fact in all sorts of FPGA marketing materials throughout the year. On the technical side, it was a year where the issues of verifying large FPGAs came front-and-center and where a lot of ASIC people started transitioning to FPGA.
  7. Engineers Looking For Work - This was one of the more unfortunate trends that I will remember from 2009 and hopefully 2010 will be better. Personally, I had difficulty finding work between projects. DAC this year seemed to be as much about finding work as finding tools. A good friend of mine spent about 4 months looking for work until he finally accepted a job at 30% less pay and with a 1.5 hour commute because he “has to pay the bills”. A lot of my former EDA sales and AE colleagues have been laid off. Some have been looking for the right position for over a year. Let’s hope 2010 is a better year.
  8. SaaS and Cloud Computing for EDA - A former colleague of mine, now a VP of Sales at one of the small but growing EDA companies, came up to me in the bar during DAC one evening and stammered some thoughts regarding my predictions of SaaS and Cloud Computing for EDA. “It will never happen”. He may be right and I may be a bit biased, but this year I think we started to see some of the beginnings of these technologies moving into EDA. On a personal note, I’m involved in one of those efforts at Xuropa. Look for more developments in 2010.
  9. Talk of New EDA Business Models - For years, EDA has bemoaned the fact that the EDA industry captures so little of the value ($5B) of the much larger semiconductor industry ($250B) that it enables. At the DAC Keynote, Fu-Chieh Hsu of TSMC tried to convince everyone that the solution for EDA is to become part of some large TSMC ecosystem in which TSMC would reward the EDA industry like some sort of charitable tax deduction. Others talked about EDA companies having more skin in the game with their customers and being compensated based on their ultimate product success. And of course there is the SaaS business model I’ve been talking about. We’ll see if 2010 brings any of these to fruition.
  10. The People I Got to Meet and the People Who Wanted to Meet Me - One of the great things about having a blog is that I got to meet so many interesting people that I would never have had an opportunity to even talk to otherwise. I’ve had the opportunity to talk with executives at Synopsys, Cadence, Mentor, SpringSoft, GateRocket, Oasys, Numetrics, and a dozen other EDA companies. I’ve even had the chance to interview some of them. And I’ve met so many fellow bloggers and now realize how much they know. On the flip side, I’ve been approached by PR people, both independent and in-house. I was interviewed 3 separate times: once by email by Rick Jamison, once by Skype by Liz Massingill, and once live by Dee McCrorey. EETimes added my blog as a Trusted Source. For those who say that social media brings people together, I can certainly vouch for that.

harry the ASIC guy

Synopsys Synphony Synopsis

Monday, October 12th, 2009

I was contacted a few weeks ago by Synopsys’ PR agency to see if I’d be interested in covering an upcoming product announcement. I usually ignore these “opportunities” since the information provided is usually carefully wordsmithed marketing gobbledygook and not enough for me to really form an opinion. However, it turned out that this announcement was on a subject I know a little bit about, so I took them up on their offer.

The announcement was “embargoed“, that is, I was not to make it public until today. Embargoes are a vestige of the days when traditional journalism ruled the roost and when PR departments thought they could control the timing of their message. I don’t think embargoes benefit companies anymore since news is reported at light speed (literally) and people will write what they want when they want. Still, I consider it a sort of gentleman’s agreement so I’m not writing about it until today.

I also waited a little bit until the “mainstream press” wrote their articles. That lets me point you to the best of them and conserve the space here for my own views, rather than regurgitating the press release and nuts and bolts.

(Update: Here is a very good description of the Synphony flow from Ron Wilson).

Today, Synopsys announced a new product called Synphony High Level Synthesis. You can read about it here. Basically, Synopsys is introducing a high-level synthesis (aka behavioral synthesis) product that takes as its input Matlab M-code and produces RTL code, a cycle-accurate C model, and a testbench for simulation. Since I have not used the tool, I cannot comment on its capabilities or quality of results or compare it to other tools on the market. However, I have had some past experience with tools like Matlab (specifically SPW) and Synphony (specifically Behavioral Compiler). So, here are the thoughts, observations, and opinions that come to mind.

  1. Synopsys, once the leader in behavioral synthesis, is now the follower - When Synopsys introduced Behavioral Compiler over a decade ago, they were the first to preach the gospel of high-level synthesis and all the associated benefits. Architectural optimization. Faster simulation. Bridging the gap between system design and ASIC design. Smaller and easier to understand code. Dogs and cats living together. The promises never fully materialized and Synopsys seemingly moved out of the market. Meanwhile, Mentor introduced Catapult C, Cadence introduced C-to-Silicon, and several others including Forte, Agility, Bluespec, Synfora, ChipVision, and AutoESL introduced their own high-level synthesis tools. Now, having picked up Synplify DSP through its acquisition of Synplicity, Synopsys is finally re-entering the market (at least for ASIC design) with Synphony. The hunted have become the hunters.
  2. Synphony takes M-code from Matlab as its only source - Whereas most (but not all) other high-level synthesis tools input C-like languages, Synopsys has chosen to input M-code only, at least for now. According to Chris Eddington, Director of Product Marketing for System-Level Products at Synopsys (according to his LinkedIn profile), approximately 60% of those who say they do “high-level design” are using M-code or some form of C (ANSI C, C++, SystemC) for some portion of their design activities. Of those, slightly more use the C variants than M-code, which means that somewhere close to 25% of all ASIC designers could be a possible market for this tool.
  3. Synopsys can try to leverage the Matlab installed base - As mentioned above, Synopsys estimates that close to 25% of ASIC designers could use the Synphony tool, which is a pretty big market. By targeting mainly algorithmic design, not control logic, Synopsys can try to serve the Matlab installed base with a more narrowly targeted offering, which should make it easier to support. It also allows Synopsys to avoid a bloody battle over C dominance and to pursue a blue ocean strategy with Matlab’s installed base. Interestingly, though, there is no partnership with MathWorks implied by this announcement.
  4. Synphony leverages existing IP libraries - Libraries of many common functions already exist from the Synplify DSP tool, and these library elements are available for Synphony as well, allowing the designer to specify functionality using this library or using M-code as the source.
  5. An FPGA tool is being adapted for ASIC - This is probably one of the first times that a tool initially developed for FPGAs (Synplify DSP) is being adapted for ASICs. It’s usually the other way around (e.g. FPGA Compiler grew out of Design Compiler). It should be interesting to see if the FPGA tool can “cut it” in the ASIC world.
  6. Ties to implementation are seemingly tenuous - A tool that can take M-code as its input and produce RTL and C and do all the other things is all fine and good. But for Synphony to become more than an experimentation tool, it has to produce results (speed, area, power) as good as or better than hand-coded RTL. However, the ties to the implementation tool (Design Compiler) are not as direct as even Behavioral Compiler’s were a decade ago. It seems that Synphony takes an approach where it pre-compiles and estimates timing for various blocks (kind of like building DesignWare libraries), but it assembles the design outside of Design Compiler without all the associated timing views and engines necessary for true design and timing closure. It’s hard to understand how this can reliably produce results that consistently meet timing, but perhaps there is something that I am not aware of.
  7. Focus on “algorithmic design”, not control - As mentioned above, Synopsys is going after the folks using Matlab. And those designers are developing algorithms, not state machines. In essence, Synphony can focus on the fairly straightforward problem of scheduling mathematical operations to hit throughput and latency goals and not deal with more complex control logic. Much simpler.
  8. Conversion from Floating Point to Fixed Point - Anyone who has designed a filter or any DSP function knows that the devil is in the details, specifically the details of fixed-point bit widths. One choice of bit width affects downstream choices, and you have to decide whether to round or truncate; these decisions can introduce unexpected artifacts into your signal (see the sketch after this list). Synphony converts the floating-point Matlab model into a fixed-point implementation. Supposedly, it then allows you to easily fiddle with the bit widths to tweak the performance. Some earlier Synopsys products did this (Cossap, System Studio) and it’s a nice feature that can save time. We’ll see how useful it really is over time.
  9. Synphony produces real RTL, as well as C code and a testbench - One of the drawbacks of Behavioral Compiler was that it never produced a human-readable form of RTL code. This made it hard to simulate and debug the RTL. Synphony supplies readable RTL (or so I am told) as well as cycle-accurate C code for system simulation and a testbench for block simulation. This should help facilitate full-chip simulations for chip integration, since Synphony will probably only be used on blocks, not entire chips.
  10. Couldn’t Synopsys come up with a better reference than Toyon Research Corporation? - No offense to Toyon, but they are hardly a household name. It makes me wonder how many partners Synopsys has engaged in this development and how well tested this flow is. I’m not saying it isn’t well tested, just that Synopsys is making me wonder. Gimme a name I’ve heard of, please.
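
On the fixed-point issue in item 8, here is a minimal sketch of the round-versus-truncate trade-off. It is not Synphony’s algorithm; the signal and bit widths are made up purely to show how truncation biases every sample toward zero.

```python
# Minimal float-to-fixed quantization demo: rounding vs. truncation.
# Not Synphony's algorithm; signal and bit widths are made up for illustration.

import math

def to_fixed(x, frac_bits, mode="round"):
    """Quantize x to a fixed-point value with frac_bits fractional bits."""
    scaled = x * (1 << frac_bits)
    q = round(scaled) if mode == "round" else math.trunc(scaled)
    return q / (1 << frac_bits)

# Quantize one period of a sine wave two ways. Truncation's worst-case error
# is twice rounding's, and its bias toward zero distorts the signal.
samples = [math.sin(2 * math.pi * n / 16) for n in range(16)]
for frac_bits in (4, 8):
    err_round = max(abs(s - to_fixed(s, frac_bits, "round")) for s in samples)
    err_trunc = max(abs(s - to_fixed(s, frac_bits, "trunc")) for s in samples)
    print(f"{frac_bits} fractional bits: max error, round={err_round:.5f}, "
          f"truncate={err_trunc:.5f}")
```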

Only time will tell if Synphony is truly music to our ears, or if it is just SYNthesis that is PHONY.

harry the ASIC guy

DAC Theme #3 - “Increasing Clouds Over SF Bay”

Sunday, August 16th, 2009

It was easy to spot the big themes at DAC this year. This was the “Year of ESL” (again). The state of the economy and the future of EDA was a constant backdrop. Analog design was finally more than just Cadence Virtuoso. And social media challenged traditional media.

It was harder to spot the themes that were not front and center, that were not spotlighted by the industry beacons, that were not reported by press or bloggers. Still, there were important developments if you looked in the right places and noticed what was changing. At least one of those themes came across to me loud and clear. This was the year that the clouds started forming over EDA.

If you’ve read my blog for a while, you know I’m not talking about the weather or some metaphor for the health of the EDA industry. You know I am talking about cloud computing, which moved from crazy idea of deluded bloggers to solidly in the early-adopter category. Though this technology is still “left of the chasm”, many companies were talking about sticking their toes in the waters of cloud computing and some even had specific plans to jump in. Of note:

  • Univa UD - Offering a “hybrid cloud” approach to combine on-premise hardware and public cloud resources. Many view this as the first step into the cloud since it is incremental to existing on-premise hardware.
  • Imera Systems - Offering a product called EDA Remote Debug that enables an EDA company to place a debug version of their software on a customer’s site in order to debug a tool issue. This reduces the need to send an AE on site or to have the customer package up a testcase.
  • R Systems - A spinoff from the National Center for Supercomputing Applications (best known for Telnet and Mosaic), they were wandering the floor pitching their own high performance computing resources (that they steadfastly insisted were “not a cloud”) available remotely or brought to your site to increase your computing capacity.
  • Cadence - One of the first (after PDTi) to have an official Hosted Design Solutions offering, they host their software and your data in a secure datacenter and are looking at the cloud as well for the future.

And then there’s Xuropa.

Before I cover Xuropa, I need to take a brief digression. You see, July 27th was not just the first day of DAC. It was also my first official day working for Xuropa, now one of my clients. I’ll be doing social media consulting (blogging, tweeting, other online social community stuff) and also helping their customers get their tools on the Xuropa platform. This is very exciting for me, something I’ll blog about specifically on the Xuropa Blog and also here. In the meantime, in full disclosure, you’ve now been told. You can factor in the appropriate amount of skepticism to what I have to say about cloud computing, hosted design, Software-as-a-Service, and Xuropa.

  • Xuropa - Offering to EDA companies and IP providers the ability to create secure online labs in the cloud for current and prospective customers to test drive a tool, do tool training, etc. They also have plans to make the tools available for “real work”.

These companies and technologies are very exciting on their own. Still, the cloud computing market is very new and there is a lot of churn so it is very difficult to know what will survive or become the standard. Perhaps something not even on this list will emerge.

Even though the technology side is cloudy (pun intended), the factors driving companies to consider using the cloud are very clear. They all seem to come down to one economic requirement: doing more with less. Whenever I speak to people about cloud computing (and I do that a lot), they always seem to “get it” when I speak in terms of doing more with less. Here are some examples:

  • I spoke to an IT person from a large fabless semiconductor company that is looking at cloud computing as a way to access more IT resources with less on-premise datacenter hardware.
  • Cadence told me that their Hosted Design Solutions are specifically targeted at smaller companies that want to be able to access a complete EDA design environment (hardware, software, IT resources) without making any long-term commitment to the infrastructure.
  • EDA and IP companies of all sizes are looking to reduce the cost of customer support while providing more immediate and accessible service.
  • EDA and IP companies are looking to go global (e.g. US companies into Europe and Asia) without hiring a full on sales and support team.
  • Everyone is trying to reduce their travel budgets.

Naysayers point out that we’ve seen this trend before. EDA companies tried to put their tools in datacenters. There were Application Service Providers trying to sell Software-as-a-Service. These attempts failed or the companies moved into other offerings. And so they ask (rightly) “what is different now?”

There is certainly a lot of new technology (as you see above) that helps to make this all more secure and convenient than it was in the past. We live in a time of cheap computing and storage and ubiquitous internet access, which makes this all so much more affordable and accessible than before. And huge low-cost commodity hardware datacenters like those at Amazon and Google never existed before now. But just because all this technology exists doesn’t mean it will be used.

What is different is the economic imperative to do more with less. That is why this will happen. If cloud computing did not exist, we’d have to invent it.

harry the ASIC guy

DAC Theme #2 - “Oasys Frappe”

Monday, August 10th, 2009

Sean Murphy has the best one-sentence description of DAC that I have ever read:

The emotional ambience at DAC is what you get when you pour the excitement of a high school science fair, the sense of the recurring wheel of life from the movie Groundhog Day, and the auld lang syne of a high school reunion, and hit frappe.

That perfectly describes my visit with Oasys Design Systems at DAC.

Auld Lang Syne

When I joined Synopsys in June of 1992, the company had already gone public, but it still felt like a startup. Logic synthesis was going mainstream, challenging schematic entry for market dominance. ASICs (they were actually called gate arrays back then) were heading towards 50K-gate capacity using 0.35 µm technology. And we were aiming to change the world by knocking off Joe Costello’s Cadence as the #1 EDA company.

As I walked through the Oasys booth at DAC, I recognized familiar faces. A former Synopsys sales manager, now a sales consultant for Oasys. A former Synopsys AE, now managing business development for Oasys. And not to be forgotten, Joe Costello, ever the Synopsys nemesis, now an Oasys board member. Even the company’s tag line “the chip synthesis company” is a takeoff on Synopsys’ original tag line “the synthesis company”. It seemed like 1992 all over again … only 17 years later.

Groundhog Day

In the movie Groundhog Day, Bill Murray portrays Phil, a smug, self-centered, yet popular TV reporter who is consigned by the spirits of Groundhog Day to relive Feb 2nd over and over. After many tries, Phil is finally able to live a “perfect day” that pleases the spirits and he is able to move on, as a better person, to Feb 3rd.

As I mentioned in a previous post, I’ve seen this movie before. In the synthesis market, there was Autologic on Groundhog Day #1. Then Ambit on Groundhog Day #2. Then Get2Chip on Groundhog Day #3. Compass had a synthesis tool in there somewhere as well. (I’m sure Paul McLellan could tell me when that was.) None of these tools, some of which had significant initial performance advantages, were able to knock off Design Compiler as market leader. This Groundhog Day it’s Oasys’ turn. Will this be the day they finally “get it right”?

Science Fair

A good science fair project is part technology and part showmanship. Oasys had the showmanship covered with a pre-recorded 7-minute rock medley featuring “Bass ‘n’ Vocal Monster” Joe Costello, Sanjiv “Tropic Thunder” Kaul, and Paul “Van Halen” van Besouw. Does anyone know if this has been posted on YouTube yet?

On the technology side, I had one main mission at the Oasys booth … to find out enough about the RealTime Designer product to make my own judgment on whether it was “too good to be true”. In order to do this, I needed to get a better explanation of the algorithms working “under the hood”, which I was able to get from founder Paul van Besouw.

For the demo, Paul ran on a Dell laptop with a 2.2 GHz Core Duo processor, although he claimed that only one CPU was used. The demo design was a 1.6M-instance design based on multiple instantiations of the open-source SPARC T1 processor. The target technology was the open-source 45 nm Nangate library. Parts of the design flow ran in real time as we spoke about the tool, but unfortunately we did not run through the entire chip synthesis on his laptop in the 30 minutes I was there, so I cannot confirm the actual performance of the tool. Bummer.

Paul did describe, though, in some detail, the methods that enable their tool to achieve such fast turnaround time and high capacity. For some context, you need to go back in time to the origins and evolution of logic synthesis.

At 0.35 µm, gate delays were 80%+ of the path delay and the relatively small wire delays could be estimated accurately enough using statistical wire load models. At 0.25 µm, wire delays grew as a percentage of the path delay. The Synopsys Floorplan Manager tool allowed front-end designers to create custom wire load models from an initial floorplan. This helped maintain some accuracy for a while, but eventually it was also too inaccurate. At 180 nm and 130 nm, Physical Compiler (now part of IC Compiler) came along to do actual cell placement and estimate wire lengths based on a global route. At 90 nm and 65 nm came DC-Topographic and DC-Graphical, further addressing the issues of wire delay accuracy and also layout congestion.

These approaches seem to work well, but certain drawbacks are starting to appear:

  1. Much of the initial logic optimization takes place prior to placement, so the real delays (now heavily dependent on placement) are not available yet.
  2. The capacity is limited because the logic optimization problem scales faster than O(n). Although Synopsys has come out with methods to address the turnaround time issue, such as automatic chip synthesis, these approaches amount to not much more than divide and conquer (i.e. budget and compile).
  3. The placement developed by the front-end synthesis tool (e.g. DC-Topographic) is not passed on to the place and route tool. As a result, once you place the design again in the place and route tool, the timing has changed.

According to Paul van Besouw, Oasys decided to take an approach they call “place first”. That is, rather than spend a lot of cycles in logic optimization before even getting to placement, they do an initial placement of the design as soon as possible so they are working with real interconnect delays from the start. Because of this approach, RealTime Designer can get to meaningful optimizations almost immediately in the first stage of optimization.

A second key strategy, according to van Besouw, is the RTL partitioning, which chops the design up into RTL blocks that are floorplanned and placed on the chip. The partitions are fluid, sometimes splitting apart, sometimes merging with other partitions during the optimization process as the design demands. The RTL can be revisited and restructured during the optimization as well. Since the RTL partitions are higher-level than gates, the number of design objects is much smaller, leading to faster runtime with a lower memory footprint, according to van Besouw. Exactly how Oasys does the RTL partitioning and optimizations is the “secret sauce”, so don’t expect to hear a lot of detail.

Besides this initial RTL optimization and placement, there are 2 more phases of synthesis in which the design is further optimized and refined to a legal placement. That final placement can be taken into any place and route tool and gives you better results than a starting-point netlist from another tool, says van Besouw.

In summary, Oasys claims to achieve faster turnaround time and higher capacity by working at a higher level of abstraction (RTL vs. gates). They claim a better starting point for, and better timing correlation with, place and route because they use actual placement from the start and pass that placement on to the place and route tool. And place and route itself runs faster because it starts from a better placement and converges sooner.
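
To see why placing first matters, consider this toy sketch of the estimation gap it removes. The two-pin nets, die size, and delay numbers are all invented; this is not Oasys’ algorithm, just an illustration of how a single statistical wire load estimate diverges from placement-based delays.

```python
# Toy illustration of the "place first" motivation: a single statistical
# wire load estimate vs. delays derived from an actual placement.
# All numbers and the two-pin net model are invented for illustration.

import random

random.seed(1)

GATE_DELAY_PS = 30      # assumed fixed cell delay
WIRELOAD_EST_PS = 15    # assumed statistical per-net wire delay
PS_PER_UM = 0.5         # assumed wire delay per micron of Manhattan length

# Ten two-pin nets placed randomly on a 1000x1000 um die.
nets = [((random.uniform(0, 1000), random.uniform(0, 1000)),
         (random.uniform(0, 1000), random.uniform(0, 1000)))
        for _ in range(10)]

for i, (src, dst) in enumerate(nets):
    manhattan = abs(src[0] - dst[0]) + abs(src[1] - dst[1])
    placed_ps = GATE_DELAY_PS + PS_PER_UM * manhattan
    wireload_ps = GATE_DELAY_PS + WIRELOAD_EST_PS
    print(f"net {i}: wireload estimate {wireload_ps:.0f} ps, "
          f"placed {placed_ps:.0f} ps")
```

An optimizer working against the single wireload number will over- or under-fix nearly every long net; one working against placed delays is optimizing what the router will actually see.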

What Does Harry Think?

Given the description that I got from Oasys at DAC, I am now convinced that it is “plausible” that Oasys can do what they claim. Although the gory detail is still missing, the technical approach described above sounds exactly right, almost obvious when you think about it. Add to that the advantage of starting from scratch with modern coding languages and methods, and not being tied to a 20-year-old code base, and you can achieve quite a bit of improvement.

However, until I see the actual tool running for myself in a neutral environment on a variety of designs and able to demonstrate faster timing closure through the place and route flow, I remain a skeptic. I’m not saying it is not real, just that I need to see it.

There are several pieces of the solution that were not addressed adequately, in my opinion:

  1. Clock tree synthesis - How can you claim to have a netlist and placement optimized to meet timing until you have a clock tree with its unique slew and skew? CTS is not addressed in this solution. (To be fair, it’s not addressed directly in Design Compiler either.)
  2. A robust interface to the backend - Oasys has no backend tools in-house, which means that the work they have done integrating with 3rd party place and route has been at customer sites, either by them or by the customer. How robust can those flows be unless Oasys brings the tools in-house (and joins the respective partner programs)?
  3. Bells and whistles - RealTime Designer can support multi-voltage, but not multi-mode optimization. Support for low power design is not complete. What about UPF? CPF? All of these are important in a real flow and it is not clear what support Oasys has.
  4. Tapeouts - This is probably the key question. For as long as EDA has existed, tapeouts have been the gold standard by which to evaluate a tool and its adoption. When I asked Paul if there are any tapeouts to date, he said “probably”. That seems odd to me. He should know.

However, if Oasys can address these issues, this might actually be the game changer that gets us out of the Groundhog Day rut and onto a new day.

harry the ASIC guy

DAC Theme #1 - “The Rise of the EDA Bloggers”

Sunday, August 2nd, 2009

Harry Gries at Conversation Central

(Photo courtesy of J.L. Gray)

Last year, at the Design Automation Conference, there were only a couple dozen individuals who would have merited the title of EDA blogger. Of those, perhaps a dozen or so wrote regularly and had any appreciable audience. In order to nurture this fledgling group, JL Gray (with the help of John Ford, Sean Murphy, and yours truly) scrounged a free room after-hours in the back corner of the Anaheim Convention Center in which to hold the first-ever EDA Bloggers Birds-of-a-Feather session. At this event, attended by both bloggers and traditional journalists, as John Ford put it, us bloggers got our collective butts sniffed by the top-dog journalists.

My, how things have changed in just one year.

This year at DAC, us EDA bloggers (numbering 233 according to Sean Murphy) and other new media practitioners took center stage:

  • Bloggers were literally on stage at the Denali party as part of the “EDA’s Next Top Blogger” competition.
  • Bloggers were literally center stage at the exhibits, in the centrally located Synopsys booth, engaging in lively conversation regarding new media.
  • Atrenta held a Blogfest.
  • There was a Pavilion Panel dedicated to tweeting and blogging.
  • And most conspicuously, there was the 14-foot Twitter Tower streaming DAC related tweets.

Meanwhile, the traditional journalists who were still covering DAC seemed to fall into 2 camps: those who embraced the bloggers as part of the media and those who didn’t. Those who did, like Brian Fuller, could be found in many of the sessions and venues I mentioned above. Those who did not could be found somewhere down the hall between the North and South halls of Moscone in their own back-corner room. I know this because I was given access to the press room this year and I did indeed find that room to be very valuable … I was able to print out my boarding pass on their printer.

Here’s my recap of the new media events:

I had mixed feelings regarding the Denali Top Blogger competition, as I know others did as well. JL, Karen, and I all felt it was kind of silly, parading like beauty queens to be judged, especially since blogging is such a collaborative, rather than competitive, medium. So often we reference and riff off of each other’s blog posts. Still, I think it was good recognition and publicity for blogging in EDA, and one could not argue with the legitimacy of the blogger representatives, all first-hand experts in the areas that they cover. Oh, by the way, congratulations to Karen Bartleson for winning the award.

Conversation Central, hosted by Synopsys, was my highlight of DAC. It was a little hard to find (they should have had a sign), located in a little frosted-glass room on the left front corner of the Synopsys booth. But if you could find your way there, it was well worth the search. I’m a little biased, since I hosted conversations there Monday through Wednesday on “Job Search: How Social Media Can Help Job Seekers & Employers”. The sessions were a combination of specific advice and lively discussions and debates. I was fortunate to have a recruiter show up one day and a hiring manager another day to add their unique perspectives. I think that was the real power of this intimate, kitchen-table-style format. Everybody felt allowed, even encouraged, to participate and add their views to the discussions. This is very different from a formal presentation or even a panel discussion.

Unfortunately, I was not able to clone myself in order to attend all the sessions there, many of which I heard about afterwards from others or in online writeups. I did attend the session by Ron Ploof entitled “Objectivity is Overrated: Corporate Bloggers Aren’t Journalists, & Why They Shouldn’t Even Try”. Interestingly enough, no journalists showed up to the session. Still, it was a lively discussion, the key point being that bloggers don’t just talk the talk, they walk the walk, and therefore bring to the table a deeper understanding and experience with EDA and design than a journalist, even one that was previously a designer.

I also attended Rick Jamison’s session on “Competitors in Cyberspace: Why Be Friends?”, which attracted several Cadence folks (Joe Hupcey, Adam Sherer, Bob Dwyer) and some Mentor folks. Although they are competitors for their respective companies, there was a sense of fraternity, and a lot of the discussion concerned what is “fair play” with regard to blog posting and commenting. The consensus was that advocacy is acceptable and even expected from the partisans, as long as it can be backed up by fact and kept within the bounds of decorum (i.e. no personal attacks). EDA corporate bloggers have been very fair in this regard, in contrast to some rather vitriolic “discussions” in other industries.

The Atrenta Blogfest sounded very interesting and I was very disappointed that I could not attend because it conflicted with my Conversation Central discussion. Mike Demler has a brief summary on his blog as does Daniel Nenni on his blog.

Late Wednesday, Michael Sanie hosted a DAC Pavilion Panel entitled “Tweet, Blog or News: How Do I Stay Current?” Panelists Ron Wilson (Practical Chip Design in EDN), John Busco (John’s Semi-Blog) and Sean Murphy (his blog) shared insights into the ways they use social media to stay current with events in the industry, avoid information overload, and separate fact from fiction. Ron Wilson commented that social networks are taking the place of the socialization that engineers used to get from attending conferences and from the shared experience of reading the same traditional media news. John Busco, the recognized first EDA blogger, shared how he keeps his private life and his job at NVIDIA separate from his blogging life. And Sean Murphy gave perspective on how blogging has grown within EDA and will continue to grow, to his projection of 500 EDA bloggers in 2011.

Last, but not least, there was the Twitter Tower, located next to the Synopsys booth. Previous conferences, such as DVCon, attempted to use hashtags (#DVCon) to aggregate conference-related tweets. The success was limited, attracting perhaps a few dozen tweets at most. This time, Karen Bartleson had a better idea: appeal to people’s vanity. The Twitter Tower displayed a realtime snapshot of all tweets containing “#46DAC”, the hashtag designated for the 46th DAC. If one stood in front of the tower and tweeted with this hashtag, the tweet would show up within seconds on the tower. How cool is that? Sure, it was a little gimmicky, but it made everyone who passed by aware of this new standard. As I write this, there have been over 1500 tweets using the #46DAC hashtag.

If you want to read more, Sean Murphy has done the not-so-glamorous but oh-so-valuable legwork of compiling a pretty comprehensive roundup of the DAC coverage by bloggers and traditional press. (Thanks Sean!)

harry the ASIC guy

Oasys or Mirage?

Monday, July 20th, 2009

That’s the question that everyone was asking last week when Oasys Design Systems came out of stealth mode with a “chip synthesis” tool they claim leaves Synopsys’ Design Compiler and other synthesis tools in the dust. According to Sanjiv Kaul, Chairman of Oasys and former VP of Synopsys’ Implementation Business Unit, RealTime Designer can synthesize full chips of up to 100 million gates in a single run, and do so 20x faster, with smaller memory requirements and better quality of results. Oh, and it also produces a legalized cell placement that can be taken forward into detailed routing.

Well, I had 3 different reactions to these claims:

1. “Too good to be true!”

This was also the most common reaction I heard from fellow designers when I told them of the Oasys claims. It was my own reaction a month or so ago when I first spoke to Oasys about their technology. (To tell the truth, I was wondering what they were smoking.) Paul McLellan, as of last week a blogger for Oasys, indicated that disbelief was the most common reaction from people Oasys talks to about this product. Steve Meier, former VP of R&D for IC Compiler at Synopsys, said the same thing on Twitter and added some specific questions for Oasys to answer. Even one of the Oasoids (is it too early to coin that phrase?) acknowledged to me privately that he was incredulous when he was first approached months ago to join the team. I guess he was convinced enough to join.

2. “I’ve seen this movie before, and I know how it ends.”

That was my second reaction. After all, there have been Synopsys killers before. Ambit (out of which, by the way, came most of the developers of the Oasys tool) was the first big threat. They had better QOR (quality of results) by many accounts, but Synopsys responded quickly to stave them off. Then came Get2Chip. Similar story. Cadence’s RTL Compiler, which combines technology from both Ambit and Get2Chip, is well regarded by many, but it still has a very small market share. Bottom line: nobody ever got fired for choosing Design Compiler, so it’s hard to imagine a mass migration. Still, if the Oasys claims are true, they’d have a much more compelling advantage than Ambit or Get2Chip ever had.

3. “Synthesis? Who cares about synthesis?”

That’s my third reaction. Verification is the #1 problem for ASIC design teams. DFM is a critical issue. ESL and C-synthesis are starting to take off. RTL synthesis addresses none of these big problems or opportunities. It’s a solved problem. Indeed, many design flows just do a “quick and dirty” synthesis in order to get a netlist into place and route, where real timing can be seen and a good placement performed. I hear very few people complaining about synthesis, so I wonder who is going to spend money in a tight economy on something that just “ain’t broken”. True, synthesis may be a bottleneck for 100M-gate ASICs, but how many companies are doing those, and can those companies alone support Oasys? If you talk to Oasys, however, they feel that the availability of such fast synthesis will change the way people design, creating a “new platform”. I’m not sure I see that, but perhaps they are smarter than me.

__________

OK, so those are my first 3 reactions regarding Oasys. I’ll be getting a better look at them at DAC and will share what I learn in some upcoming blog posts. Please feel free to share your thoughts here as well. Between us, we can hopefully decide if this Oasys is real or a mirage.

harry the ASIC guy

What Makes DAC 2009 different from other DACs?

Sunday, July 12th, 2009

By Narendra (Nari) Shenoy, Technical Program Co-Chair, 46th DAC

Each year, around this time, the electronic design industry and academia meticulously prepare to showcase the latest research and technologies at the Design Automation Conference. For the casual attendee, after a few years the differences between one conference and the next begin to blur. If you are one of them, allow me to dispel this notion and invite you to look at what is different this year.

For starters, we will be in the beautiful city of San Francisco from July 26-31. The DAC 2009 program, as in previous years, has been thoughtfully composed using two approaches. The bottom-up approach selects technical papers from a pool of submissions using a rigorous review process. This ensures that only the best technical submissions are accepted. For 2009, we see an increasing focus on research towards system-level design, low-power design and analysis, and physical design and manufacturability. This year, a special emphasis for the design community has been added to the program, with a User Track that runs throughout the conference. The new track, which focuses on the use of EDA tools, attracted 117 submissions, reviewed by a committee made up of experienced tool users from industry. The User Track features front-end and back-end sessions and a poster session that provides a perfect opportunity to interact with presenters and other DAC attendees. In addition to the traditional EDA professionals, we invite all practitioners in the design community – design tool users, hardware and software designers, application engineers, consultants, and flow/methodology developers – to come join us.

This first approach is complemented by a careful top-down selection of themes and topics in the form of panels, special sessions, keynote sessions, and Management Day events. The popular CEO panel returns to DAC this year as a keynote panel. The captains of the EDA industry, Aart de Geus (Synopsys), Lip-Bu Tan (Cadence) and Walden Rhines (Mentor), will explore what the future holds for EDA. The keynote on Tuesday by Fu-Chieh Hsu (TSMC) will discuss alignment of business and technology models to overcome design complexity. William Dally (NVIDIA and Stanford) will present the challenges and opportunities that throughput computing provides to the EDA world in his keynote on Wednesday. Eight panels on relevant areas are spread across the conference. One panel explores whether the emphasis on Design for Manufacturing is a differentiator or a distraction. Other panels focus on a variety of themes such as confronting hardware-dependent software design, analog and mixed-signal verification challenges, and various system prototyping approaches. The financial viability of Moore’s law is explored in one panel, while another examines the role of statistical analysis in several fields, including EDA. Lastly, we have a panel exploring the implications of recent changes in the EDA industry from an engineer’s perspective.

Special technical sessions will deal with a wide variety of themes such as preparing for design at 22nm, designing circuits in the face of uncertainty, verification of large systems on chip, bug-tracking in complex designs, novel computation models and multi-core computing. Leading researchers and industry experts will present their views on each of these topics.

Management Day includes topics that tackle challenges and decision making in a complex technology and business environment. The current “green” trend is reflected in a slate of events during the afternoon of Thursday, July 30th. We start with a special plenary that explores green technology and its impact on system design, public policy, and our industry. A special panel investigates the system-level power design challenge, and finally a special session considers technologies for data centers.

Rather than being a hindrance to attendance, the prolonged economic malaise this year should provide a fundamental reason to participate in DAC. If you are a participant in the technical program, DAC offers an opportunity to share your research and win peer acclaim. If you are an exhibitor, it is an ideal environment to demonstrate your technology and advance your business agenda. And as an attendee, you cannot afford to miss the event where “electronic design meets”. DAC provides everyone an unparalleled chance to network and learn about advances in electronic design. Won’t you join us at the Moscone Center at the end of the month?

__________

This year’s DAC will be held July 26-31 at the Moscone Center in San Francisco. Register today at www.dac.com. Note also that there are 600 free DAC passes being offered courtesy of the DAC Fan Club (Atrenta, Denali, SpringSoft) for those who have no other means to attend.