Posts Tagged ‘Cloud Computing’

Dunbar’s Number and #48DAC

Tuesday, June 14th, 2011

DAC Badges

My apologies for the recent hiatus in my blog posting. The past few months have been a difficult time for me personally, dealing with family illnesses. Hopefully, I can get things going again.

With all that I had going on, it was a relief to escape last week for a few days to DAC in San Diego. After several years attending as a blogger (what DAC calls “independent media”), it was exciting to be on the floor representing Xuropa at the Synopsys Cloud Partners Booth. I still got to see several friends, like JL Gray, who wrote up what he heard from us, and Peggy Aycinena, who accused me of being a sellout since I was in the Synopsys Cloud booth and had a Synopsys badge lanyard. And of course, no DAC would be complete without Eric Thune of AtopTech telling me that cloud will never work for EDA.

One of the downsides of being in the booth was not being able to attend a lot of the other sessions. I missed The Woz, and the Logan & McLellan show, and Gary Smith, and a lot of the panel discussions. I was, however, able to sneak away for the EDA Cloud Computing Panel discussion, featuring the usual suspects and a few new ones. A highlight was when John Bruggeman of Cadence offered to buy John Chilton of Synopsys a beer at the Denali Party and work out a joint Synopsys/Cadence solution on the cloud. No word yet on how that turned out. Another highlight was the audience poll at the end, in which 1/3 of the audience felt that most of EDA would be on the cloud within 3 years. I don’t know if they’re right, but this is the 3rd year we’ve had a cloud panel at DAC, and each year the expectations increase. Richard Goering has a good writeup on the panel.

One booth I did visit, and where I got an interesting demo, was Duolog’s. Duolog is a Xuropa customer (you can try out their tool here), which is why I knew a little about them going in. They have a tool called Socrates Bitwise that does register management for processor-based designs. In the tool, you specify all the processor-accessible registers, their types (RO, RW, etc.), and their locations (base and offset), and the tool automatically generates the RTL, verification code (OVM, UVM, etc.), register package, C APIs, and documentation. If something needs to change, you change it in one place and all the downstream files are regenerated, correct by construction. With many designs having hundreds or thousands of registers to manage, this is a growing problem worth solving. Duolog has a few competitors as well, but their biggest competition is home-grown in-house scripts.
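
To make the single-source idea concrete, here is a minimal sketch in Python of what correct-by-construction register generation looks like. The register table, the generated Verilog fragment, and the C header format are all my own invention for illustration; Duolog’s actual formats and outputs will differ.

```python
# Hypothetical single-source register spec: name, offset, access, reset value.
REGISTERS = [
    ("CTRL",     0x00, "RW", 0x0000),
    ("STATUS",   0x04, "RO", 0x0001),
    ("IRQ_MASK", 0x08, "RW", 0xFFFF),
]

BASE_ADDR = 0x40000000  # assumed base address of the register block

def gen_verilog(regs):
    """Emit a trivial read-mux for the register block (the RTL view)."""
    lines = ["always @(*) begin", "  case (addr)"]
    for name, offset, access, _reset in regs:
        lines.append(f"    32'h{offset:08X}: rdata = {name.lower()}_q;")
    lines += ["    default: rdata = 32'h0;", "  endcase", "end"]
    return "\n".join(lines)

def gen_c_header(regs):
    """Emit #defines for the firmware view of the same registers."""
    return "\n".join(
        f"#define REG_{name} (0x{BASE_ADDR + offset:08X}u)  /* {access} */"
        for name, offset, access, _reset in regs
    )

print(gen_verilog(REGISTERS))
print(gen_c_header(REGISTERS))
```

Move IRQ_MASK to a new offset in that one table and rerun, and the RTL and firmware views update together; that consistency is the whole appeal over hand-edited files.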

Of course, there were my 150 closest friends I know from years gone by, too numerous to mention, lest I leave someone out. I’m reminded of Sean Murphy’s perfect description of DAC:

“The emotional ambience at DAC is what you get when you pour the excitement of a high school science fair, the sense of the recurring wheel of life from the movie Groundhog Day, and the auld lang syne of a high school reunion, and hit frappe.”

An overall impression I, and many others, had was that the show floor was smaller and there were fewer attendees than in the past. The official preliminary numbers, however, indicate that DAC was larger than last year, so I’m not sure whether to believe my eyes or the numbers.

For me personally, it was my annual chance to connect with the entire industry, so I got a lot out of it. At a minimum, it provided me with a lot of good ideas that I can work on for the next year.

harry the ASIC guy

761 Days

Tuesday, March 29th, 2011

Clouds over San Francisco

761 days.

That’s 2 years, 1 month, and 3 days.

761 days ago, I hosted a small group of interested EDA folks, journalists, and bloggers in a small room at the Doubletree hotel one evening during DVCon.

Most of the discussion that year was around OVM and VMM: which methodology was going to win out, which was really open, and which simulator supported more of the SystemVerilog language. Well, all that is now put to bed. This year at DVCon, 733 days later, we all sang Kumbaya as we sat around and our hearts were warmed by the UVM campfire.

But, back to that small group that I hosted 761 days ago. Those that attended this conclave had shrugged off all the OVM and VMM hoopla and decided to come hear a strange discussion about Cloud Computing and SaaS for EDA tools. Some, no doubt, thought there was going to be free booze served, and they were certainly disappointed. Those that stayed, however, heard a fiery discussion between individuals who were either visionaries or lunatics. For many, this was the first time they had heard the term cloud computing explained, and their heads spun as they tried to imagine what, if anything, would come of it for the EDA industry.

Over the 761 days since, the voices speaking of cloud computing for EDA, once very soft, grew slowly in volume. All the reasons that it would not work were thrown about like arrows, and those objections continue. But slowly, over time, the voices in support of this model have grown to the point where the question is no longer “if” but “when”.

761 days, that’s when.

Yesterday, to the shock of many at SNUG San Jose, including many in attendance from Synopsys, Aart de Geus personally answered the question asked 761 days earlier. Indeed, those individuals gathered in that small room at the Doubletree were visionaries, not lunatics.

There are many reasons why Synopsys should not be offering its tools on the cloud via SaaS:

  • Customers will never let their precious proprietary data off-site
  • It will cannibalize longer term license sales
  • The internet connection is too slow and unreliable
  • There’s too much data to transfer
  • The cloud is not secure
  • It’s more expensive
  • It just won’t work

But, as it turns out, there are better reasons to do it:

  • Customers want it

Sure, there are some other reasons. The opportunity to increase revenue by selling higher-priced, short-term, pay-as-you-go licenses. Taking advantage of the parallelism inherent in the cloud. Serving a new customer base that has very peaky needs.

But in the end, Aart did what he does best. He put on his future vision goggles, gazed into the future, saw that the cloud was inevitable, and decided that Synopsys should lead and not follow.

761 days. Now the race is on.

Altium Looking to Gain Altitude in the Cloud

Sunday, January 30th, 2011

Altium Enterprise Vault System

Over the holiday break, I came across an interview with Altium CIO Alan Perkins that caught my eye. Sramana Mitra has been focusing on interesting cloud-based businesses, and this interview focused on how this EDA company was planning to move into the cloud. I wasn’t able to talk to Alan Perkins directly, but I was able to find out more through their folks in the US (the company is based in Australia). It was interesting enough to warrant a post.

I knew very little about Altium before seeing this interview, and maybe you don’t know much about them either, so here is a little background. Based in Australia, Altium is a small (~$50M) EDA company focused primarily on the design of printed circuit boards with FPGAs and embedded software. They were formed from a company called Protel about 10 years ago and most recently gained attention when they acquired Morfik, a company that offers an IDE for developing web apps (more on that later). According to some data I saw, and from what they told me, they added 1700 new customers (companies, not seats) in 2010 just in the US! So, they may be the best-kept secret in a long while. (Coincidentally, the next day at work after I spoke to Altium, I spoke to someone at another company that was using Altium to design a PC board for us.)

According to Altium, their big differentiator is that they have a database-centric offering, as compared to tool-flow-centric offerings like Cadence’s OrCAD and Allegro and Mentor’s Board Station and Expedition and related tools. I’m not an EDA developer, so I won’t pretend to understand the nuances of one versus the other. However, when I think “database-centric”, I think “frameworks”. I know it’s been almost 20 years since those days, and things have changed, so maybe database-centric makes a lot of sense now. OpenAccess is certainly a good thing for the industry, but that is because it’s an open standard, while Altium’s database is not. Anyway, enough on this matter because, as I said, I’m not an EDA developer and don’t want to get in too deep here.

A few years ago, I wrote a blog post entitled “Is IP a 4-Letter Word?”. The main thrust of that post was that IP quality is rather poor in general and there needs to be some sort of centralized authority to grade IP quality and to certify its use. So, when Altium told me they plan to enable a marketplace for design IP by creating “design vaults” in the cloud, my first question was “who is going to make sure this IP is any good?” Is this going to be the iPhone app model, where Apple vets and approves every app? Or is it going to be the Android model: caveat emptor?

To Altium’s credit, they have similar concerns, which is why they are planning to move slowly. With the introduction of Altium Designer 10, Altium will first provide its own vetted IP in the cloud. In the past, this IP was distributed to tool users on their site, but having it in the cloud will make it easier to distribute (pull, instead of push) and also allow for asynchronous releases and updates. The tools will automatically detect if you are using an IP that has been revved, and ask you if you want to download the new version.
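
As a sketch of what that rev-check might look like from the tool side, here are a few lines of Python. The vault endpoint, the JSON shape, and the IP name are all hypothetical; I have no visibility into Altium’s actual protocol.

```python
# Minimal sketch of an "is my IP stale?" check against a cloud vault.
# The URL and response format below are assumptions, not Altium's API.
import json
import urllib.request

VAULT_URL = "https://vault.example.com/ip"  # hypothetical endpoint

def check_for_update(ip_name: str, local_version: str) -> bool:
    """Return True if the vault holds a newer revision of this IP."""
    with urllib.request.urlopen(f"{VAULT_URL}/{ip_name}/latest") as resp:
        latest = json.load(resp)["version"]
    return latest != local_version

if check_for_update("spi_master", local_version="1.2"):
    answer = input("A newer revision of spi_master is available. Download? [y/n] ")
    if answer.lower() == "y":
        print("Pulling new revision from the vault...")
```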

Once they have this model understood, Altium then plans to open it up to 3rd party IP, which can be offered for free, or licensed, or maybe even traded for credits (like Linden dollars in Second Life). It’s an interesting idea, but one that requires some pretty significant shifts in personal and corporate cultures. I think that sharing of small “jelly bean” type IP is achievable because none of it is very differentiated. But once you get to IP that took significant time to design, why share it unless IP is your primary business? The semiconductor industry is still fiercely competitive and I think that will be a significant barrier. Not to mention that it takes something like 4x-5x as much effort to create an IP that is easily reusable as compared to creating it to be used once.

Being a tool for the design of FPGAs is an advantage for Altium, since the cost of repairing an FPGA bug is so much less than that of an SoC or ASIC bug. For FPGAs, the rewards may be greater than the risks, especially for companies doing this kind of design for the first time. And that is the market Altium is aiming for … the thousands of companies that will have to design their products to work on the internet-of-things. Companies that design toasters, which have never had any digital electronics in them, and now have to throw something together. They will be the ones that will want to reuse these designs because they don’t have the ability to design them in-house.

Which brings us to Morfik, the company Altium acquired that makes an IDE for web apps. Those same companies designing internet-enabled toasters will also need to design a web app for their customers to access the toaster. So if Altium sells the web app tooling and the IP that lets the toaster talk to the web app, then Altium provides significant value to the toaster company. That’s the plan.

Still, the cloud aspect is what interests me the most. Even if designers are reluctant to enter this market, this type of central repository is best enabled by the cloud. The cloud can enable collaboration and sharing much better than any hosted environment, and it can scale as large and as quickly as needed. It also allows a safe sort of DMZ where IP can be evaluated by a customer while still protecting the IP from theft.

This is not by any means a new idea either. OpenCores has been around for more than a decade offering a repository for designers to share and access free IP. I spoke with them a few years ago and at the time the site was used mainly by universities and smaller companies, but their OpenRISC processor has seen some good usage, so it’s a model that can work.

I’m anxious to see what happens over time with this concept. Eventually, I think this sort of sharing will have to happen and it will be interesting to see how this evolves.

harry the ASIC guy

Scott Clark on EDA Clouds

Sunday, August 8th, 2010

Although I had heard his name mentioned quite often, it wasn’t until this year at DAC that I finally met Scott Clark for the first time. Scott was describing how, as Director of Engineering Infrastructure at Broadcom, he led a project to virtualize Broadcom’s internal data center in order to transform it into a private cloud. It was a great discussion. We had lunch a few weeks later to talk about his new business, Deopli, a company that he has founded to help other semiconductor and EDA companies improve their compute infrastructure operations in similar fashion.

So, when I saw Dan Nenni’s blog post on cloud computing and some of the responses, I thought I’d contact Scott. You see, as opposed to most of those commenting on Dan’s post, Scott has actually taken EDA tools and moved them to the cloud, so he knows what he’s talking about. Scott was kind enough to contribute a blog post on the subject, so please enjoy.

__________

Harry the ASIC Guy pointed me to Dan Nenni’s Silicon Valley Blog to take a look at this post regarding Daniel Suarez’s books Daemon and Freedom. His post intrigued me enough to download the first book to my iPad to get a feel for the style and atmosphere. That was good enough that I plan to read both. You can read Dan’s post to see his overview of the books, but at the end of his post, he poses a question that seemed to spark lots of conversation and varying opinions. His question was “Who can be trusted to secure Darknet (Cloud Computing)?”

I think Dan was making reference to concepts in the book where all the data in the world comes to be controlled by a finite set of service providers, which creates an exposure based on the singularity of the solution. His references hit pretty close to home with Apple, Microsoft, and Google, but that did not seem to be the focus of the responses. Because Dan’s background (and blog) is primarily in the EDA / Semiconductor space, the responses mostly fell into the category of “Should semiconductor companies use Cloud Computing?” and the opinions seemed to align at the two ends of the spectrum. There were a few respondents who felt that EDA would never ever move into the Cloud, or who gave somewhat skewed definitions of “cloud” to say “it’s impossible”, but for the most part it was refreshing to see some open-minded views of what was possible and how things could work. I was particularly intrigued by Dan’s comment that he felt foundries would venture into the cloud hosting space. Given the history of the fabless semiconductor space, how can that not make perfect sense! The lead-up to the creation of foundries was that internal manufacturing was growing in capacity and complexity to the point that it made more sense to have it done externally. The same dynamics are happening in the datacenter space for chip design today.

Some of the comments rang very true in my experience, so let me highlight a few (please read the blog for specifics so I don’t misquote). Daniel Payne made the observation that semiconductor companies will start by creating their own private clouds, and that is exactly where we are today (compute clusters really are private clouds). James Colgan injected sanity throughout and made some very astute observations about the functional dynamics and applicability of the cloud to certain parts of a design flow. I can’t overstate how much I agree with Kevin Cameron’s comments on security; cloud has the potential to be a huge boost in security for the industry. Tom Anderson indicated that he is already doing chip design using Amazon EC2 resources, and I think there are many more like Tom out there. One of the last postings to date is by Lou Covey, and his opinion is that the cloud for this industry is inevitable - I happen to agree with that. It’s not that we “have to” but more that “this is the right answer for the business, and we should do the right thing”.

One of the missing concepts that I noticed is that this discussion looks at generic cloud solutions, and not industry-specific solutions. You will see the development of EDA-specific cloud solutions that are very focused on EDA customers; in the beginning these will be private clouds with technology added for elastic expansion. That said, looking at the cloud for the EDA industry, there are still going to be several roadblocks to adoption that will need to be addressed:

  • Ego – getting around the perception that IT is a core competency of chip design companies. The core competency of a chip design company should be … chip design.
  • Cost – getting around the expectation that cloud should cost ½ as much as what I am currently paying. There are many economies of scale and efficiencies that cloud brings. Cloud is an opportunity for cost avoidance as time goes forward, not a refund policy.
  • Trust – letting go of what is a critical function / resource and having confidence that you can still get the results necessary. This industry has a very powerful model to refer to here: how the fabs were spun off and successful partnerships were formed.
  • Control – how to let go of a critical resource, and still maintain control over the resources, costs, schedules, and dynamics of capacity / priority decisions.
  • Security – probably the most wielded blade in the “you can’t do it” arsenal, but also probably the most misunderstood.
  • Performance – the final roadblock, which is the one with the most technical merit, is performance. There are many different facets to performance, but it will primarily fall into “internal cluster performance” and “display performance”.

From my perspective, the ego part we can get around. Current conversations with many EDA companies indicate they are already leaning in this direction, which is a good sign.

The cost issue is far more ambiguous. There are as many expectations of cloud as there are definitions, but invariably the expectations are rooted in economics. Given that, the only answer seems to be to create a realistic model for cost, present the data, and let nature take its course. There really is a cost benefit, so companies will want to realize it.

Trust seems like it should be the easy part for this industry, but it is proving to be more stubborn than that. I think that is mostly because of the implied threat to the job security of the people currently performing the tasks (who are usually the people receiving the presentation about outsourcing their jobs). EDA companies should examine their own history to see what to do and how to do it.

The control front falls into the same category as trust. The same way that fabless semiconductor companies created internal organizations and positions for managing their foundry partners, that model should be applied to the outsourcing of computational infrastructure. That is not to say there will not be contention issues around capacity and priority. The cloud suppliers will need to make sure they have enough resources to provide sufficient capacity to their customers, or they will not be suppliers for long. Again, the foundries are a great model to look at for this.

On the security front, the cloud will at a minimum give us data points to show how weak internal security has been historically. Applying best security practices in a consistent manner should actually help evolve an industry-specific cloud security solution that better addresses security issues. And for the time being, we can simply avoid the multi-tenant aspects of security by maintaining isolation – private clouds with shared dynamic resources.

And finally, given that we are talking about EDA-specific clouds, they will be specifically designed to have “internal cluster performance” appropriate for EDA. Each will be designed exactly like we would design a cluster for a company’s private datacenter. The tricky part will be addressing display performance for functions like custom layout and board design, where network latency impacts the engineer’s working style.

So really this boils down to proper execution by the EDA cloud providers, plus one technical hurdle, display latency, which has many possible solutions. There is a lot of money and attention being aimed at these issues and this industry, and no real reason why it will not succeed. Some companies may adopt at a slower rate than others, but I believe this is the direction everyone goes eventually. Thanks Dan for a great read and thanks Harry for pointing me at it.

__________

Scott Clark has been an infrastructure solution provider in the EDA/Semiconductor industry for the last 20 years, working for companies like Western Digital, Conexant, and Broadcom. He holds a bachelor of science in applied mathematics from San Diego State University and is currently President and CEO of Deopli Corporation. You can follow Scott on his blog at HPC in the Clouds.

My Obligatory TOP 10 for 2009

Thursday, December 31st, 2009

2009 To 2010

http://www.flickr.com/photos/optical_illusion/ / CC BY 2.0

What’s a blog without some sort of obligatory year end TOP 10 list?

So, without further ado, here is my list of the TOP 10 events, happenings, occurrences, observations that I will remember from 2009. This is my list, from my perspective, of what I will remember. Here goes:

  1. Verification Survey - Last February, as DVCon was approaching, I thought it would be interesting to post a quickie survey to see what verification languages and methodologies were being used. Naively, I did not realize to what extent the fans of the various camps would go to rig the results in their favor. Nonetheless, the results ended up very interesting and I learned a valuable lesson on how NOT to do a survey.
  2. DVCon SaaS and Cloud Computing EDA Roundtable - One of the highlights of the year was definitely the impromptu panel that I assembled during DVCon to discuss Software-as-a-Service and Cloud Computing for EDA tools. My thanks to the panel guests: James Colgan (CEO @ Xuropa), Jean Brouwers (Consultant to Xuropa), Susan Peterson (Verification IP Marketing Manager @ Cadence), Jeremy Ralph (CEO @ PDTi), Bill Alexander (VP Marketing @ Blue Pearl Software), and Bill Guthrie (VP Marketing @ Numetrics). Unfortunately, the audio recording of the event was not of high enough quality to post, but you can read about it from others at the following locations:

    > 3 separate blog posts from Joe Hupcey (1, 2, 3)

    > A nice mention from Peggy Aycinena

    > Numerous other articles and blog posts throughout the year that were set in motion, to some extent, by this roundtable

  3. Predictions to the contrary, Magma is NOT dead. Cadence was NOT sold. Oh, and EDA is NOT dead either.
  4. John Cooley IS Dead - OK, he’s NOT really dead. But this year was certainly a turning point for his influence in the EDA space. It started off with John’s desperate attempt, at a Conversation Central session at DAC, to tell bloggers that their blogs suck and to convince them to just send him their thoughts instead. Those who took John up on his offer waited 4 months to see their thoughts finally posted in John’s December DAC trip report. I had a good discussion on this topic with John earlier this year, which he asked me to keep “off the record”. Let’s just say, he just doesn’t get it and doesn’t want to get it.
  5. The Rise of the EDA Bloggers.
  6. FPGA Taking Center Stage - It started back in March when Gartner issued a report stating that there were 30 FPGA design starts for every ASIC start. That number seemed very high to me and to others, but that did not stop the 30:1 ratio from being quoted as fact in all sorts of FPGA marketing materials throughout the year. On the technical side, it was a year when the issues of verifying large FPGAs came front-and-center and when a lot of ASIC people started transitioning to FPGA.
  7. Engineers Looking For Work - This was one of the more unfortunate trends that I will remember from 2009 and hopefully 2010 will be better. Personally, I had difficulty finding work between projects. DAC this year seemed to be as much about finding work as finding tools. A good friend of mine spent about 4 months looking for work until he finally accepted a job at 30% less pay and with a 1.5 hour commute because he “has to pay the bills”. A lot of my former EDA sales and AE colleagues have been laid off. Some have been looking for the right position for over a year. Let’s hope 2010 is a better year.
  8. SaaS and Cloud Computing for EDA - A former colleague of mine, now a VP of Sales at one of the small but growing EDA companies, came up to me in the bar during DAC one evening and stammered some thoughts regarding my predictions of SaaS and Cloud Computing for EDA. “It will never happen”. He may be right and I may be a bit biased, but this year I think we started to see some of the beginnings of these technologies moving into EDA. On a personal note, I’m involved in one of those efforts at Xuropa. Look for more developments in 2010.
  9. Talk of New EDA Business Models - For years, EDA has bemoaned the fact that the EDA industry captures so little of the value ($5B) of the much larger semiconductor industry ($250B) that it enables. At the DAC Keynote, Fu-Chieh Hsu of TSMC tried to convince everyone that the solution for EDA is to become part of some large TSMC ecosystem in which TSMC would reward the EDA industry like some sort of charitable tax deduction. Others talked about EDA companies having more skin in the game with their customers and being compensated based on their ultimate product success. And of course there is the SaaS business model I’ve been talking about. We’ll see if 2010 brings any of these to fruition.
  10. The People I Got to Meet and the People Who Wanted to Meet Me - One of the great things about having a blog is that I got to meet so many interesting people that I would never have had an opportunity to even talk to. I’ve had the opportunity to talk with executives at Synopsys, Cadence, Mentor, Springsoft, GateRocket, Oasys, Numetrics, and a dozen other EDA companies. I’ve even had the chance to interview some of them. And I’ve met all the fellow bloggers and now realize how much they know. On the flip side, I’ve been approached by PR people, both independent and in-house. I was interviewed 3 separate times: once by email by Rick Jamison, once by Skype by Liz Massingill, and once live by Dee McCrorey. EETimes added my blog as a Trusted Source. For those who say that social media brings people together, I can certainly vouch for that.

harry the ASIC guy

Two Blog or Not Two Blog?

Monday, September 7th, 2009

I got an email last week from one of the readers of this blog, observing that “it would be interesting to learn how to manage both blogs while doing justice to your readers.” He was of course referring to my new blog on Xuropa that I write in addition to this one.

Indeed, this was a concern of mine that I had considered carefully before embarking on the other blog … or so I thought. The other day I wrote a new blog post about how designers want to actually use tools hands-on rather than just listen to product marketing pitches, or webinars, or podcasts. I originally wrote the post for this blog, then decided that it made more sense for the Xuropa blog, and ended up publishing it there (here’s the link). But it could really have gone on either one with small adjustments. I can see that this is now going to be more difficult than I thought.

I did a little research online to see how other bloggers handle writing multiple blogs. One of the suggestions was to set down the objectives of each of the blogs so I could be more clear in my own mind and to the readers. I think that’s a good idea. So here goes:

  • The Xuropa blog will be focused on ways that EDA companies can do more with less, like cloud computing, online tool access, and software-as-a-service. It will also be written for an audience of EDA sales and marketing professionals. If you are in EDA, you’ll want to subscribe to that blog.
  • The harry the ASIC guy blog will include lots of other content and is hopefully valuable for people in all aspects of the semiconductor industry. I’ll discuss general engineering trends, quarterly reports from EDA companies, technical topics, and industry news. If you are a designer, you’ll want to subscribe to this blog.

I’m guessing that many of you will be interested in both topic areas, so it is OK to subscribe to the Xuropa blog and subscribe to this blog. You have my permission. After some time you may find that you are only interested in one of the blogs. That’s OK too; just unsubscribe from the one that doesn’t meet your needs.

Another suggestion was to set realistic expectations for how frequently I’d be publishing a new post. I think that is a good idea as well. I will continue to post on this blog roughly once per week as I have in the past. For some time I was actually closer to 2 posts per week but I have fallen back to once a week and that is about what I can handle now. The other blog is shared with some other folks from Xuropa so I will probably publish there every other week. We’ll see how that goes.

I’d like to ask you each a favor as well. Please help me keep to my commitment. I’ve already made this commitment a matter of public record here, so that alone will provide some pressure. But if I start to post too infrequently, or the quality slips or goes off track, let me know. Leave a comment or send me an email.

I would also like to make this blog a little more fresh and collaborative. I’ve said in the past that I learn more from you folks than you learn from me. You are working in hundreds of companies with thousands of years of collective experience. I’d like to see if we can tap into that for all our benefit. So here’s the deal:

  • If you have an idea for a blog post, let me know. Leave it as a comment or send me an email. I’ll make sure I give you full credit (unless you want to be anonymous) and link back to your website or LinkedIn profile.
  • If you’d like to write a guest blog post, I’m open to that as well. The more viewpoints the better.

Of course, not every suggestion will be used and not every offer of a guest blog post will be accepted. I’ll still make that decision to make sure the content is of high quality. But I won’t censor anything just because I disagree.

Well, I guess that’s it. We’re going to try this 2 blog thing and see how it goes. Wish me luck.

harry the ASIC guy

DAC Theme #3 - “Increasing Clouds Over SF Bay”

Sunday, August 16th, 2009

Clouds over San Francisco

It was easy to spot the big themes at DAC this year. This was the “Year of ESL” (again). The state of the economy and the future of EDA was a constant backdrop. Analog design was finally more than just Cadence Virtuoso. And social media challenged traditional media.

It was harder to spot the themes that were not front and center, that were not spotlighted by the industry beacons, that were not reported by press or bloggers. Still, there were important developments if you looked in the right places and noticed what was changing. At least one of those themes came across to me loud and clear. This was the year that the clouds started forming over EDA.

If you’ve read my blog for a while, you know I’m not talking about the weather or some metaphor for the health of the EDA industry. You know I am talking about cloud computing, which moved from a crazy idea of deluded bloggers to solidly in the early adopter category. Though this technology is still “left of chasm”, many companies were talking about sticking their toes in the waters of cloud computing, and some even had specific plans to jump in. Of note:

  • Univa UD - Offering a “hybrid cloud” approach to combine on-premise hardware and public cloud resources. Many view this as the first step into the cloud since it is incremental to existing on-premise hardware.
  • Imera Systems - Offering a product called EDA Remote Debug that enables an EDA company to place a debug version of their software on a customer’s site in order to debug a tool issue. This reduces the need to send an AE on site or to have the customer package up a testcase.
  • R Systems - A spinoff from the National Center for Supercomputing Applications (best known for Telnet and Mosaic), they were wandering the floor pitching their own high performance computing resources (that they steadfastly insisted were “not a cloud”) available remotely or brought to your site to increase your computing capacity.
  • Cadence - One of the first (after PDTi) to have an official Hosted Design Solutions offering, they host their software and your data in a secure datacenter and are looking at the cloud as well for the future.

And then there’s Xuropa.

Before I cover Xuropa, I need to take a brief digression. You see, July 27th was not just the first day of DAC. It was also my first official day working for Xuropa as one of my clients. I’ll be doing social media consulting (blogging, tweeting, other online social community stuff) and also helping their customers get their tools on the Xuropa platform. This is very exciting for me, something I’ll blog about specifically on the Xuropa Blog and also here. In the meantime, under full disclosure, you’ve now been told. You can factor in the appropriate amount of skepticism to what I have to say about cloud computing, hosted design, Software-as-a-Service and Xuropa.

  • Xuropa - Offering to EDA companies and IP providers the ability to create secure online labs in the cloud for current and prospective customers to test drive a tool, do tool training, etc. They also have plans to make the tools available for “real work”.

These companies and technologies are very exciting on their own. Still, the cloud computing market is very new and there is a lot of churn so it is very difficult to know what will survive or become the standard. Perhaps something not even on this list will emerge.

Even though the technology side is cloudy (pun intended), the factors driving companies to consider using the cloud are very clear. They all seem to come down to one economic requirement: doing more with less. Whenever I speak to people about cloud computing (and I do that a lot), they always seem to “get it” when I speak in terms of doing more with less. Here are some examples:

  • I spoke to an IT person from a large fabless semiconductor company that is looking at cloud computing as a way to access more IT resources with less on-premise datacenter hardware.
  • Cadence told me that their Hosted Design Solutions are specifically targeted at smaller companies that want to be able to access a complete EDA design environment (hardware, software, IT resources) without making any long-term commitment to the infrastructure.
  • EDA and IP companies of all sizes are looking to reduce the cost of customer support while providing more immediate and accessible service.
  • EDA and IP companies are looking to go global (e.g. US companies into Europe and Asia) without hiring a full on sales and support team.
  • Everyone is trying to reduce their travel budgets.

Naysayers point out that we’ve seen this trend before. EDA companies tried to put their tools in datacenters. There were Application Service Providers trying to sell Software-as-a-Service. These attempts failed or the companies moved into other offerings. And so they ask (rightly) “what is different now?”

There is certainly a lot of new technology (as you see above) that helps make this all more secure and convenient than it was in the past. We live in a time of cheap computing and storage and ubiquitous internet access, which makes this all so much more affordable and accessible than before. And huge low-cost commodity hardware datacenters like those at Amazon and Google never existed before now. But just because all this technology exists so that it can be done, doesn’t mean it will be done.

What is different is the economic imperative to do more with less. That is why this will happen. If cloud computing did not exist, we’d have to invent it.

harry the ASIC guy

What To Do With 1000 CPUs - The Answers

Wednesday, April 15th, 2009

I recall taking a course called The Counselor Salesperson when I was an AE at Synopsys. The course was very popular across the industry and was the basis for the book Win-Win Selling. It advocated a consultative approach to sales, one in which the salesperson tries to understand the customer’s problem first and provide a solution second. Sounds obvious, but how often do you encounter a salesperson who knows he has what you need and then tries to convince you that you have a problem?

One of the techniques in the process is called the “Magic Wand”, wherein the salesperson asks the customer “What would it be like if …”. This open-ended type of question is designed to free the customer’s mind to imagine solutions that he’d otherwise not consider due to real or imagined constraints. That’s the type of question I asked last week: What would you do with 1000 CPUs? And boy did it free your minds!

Before I go into the responses, you may be wondering what my point was in asking the question in the first place. Well, not so surprisingly, I’m looking to better understand the possible applications of cloud computing to EDA and ASIC design. If a designer, design team, or company can affordably access a large number of CPUs for a short period of time, as needed, what would that mean? What would they be able to do with this magic wand that they would not even have thought of otherwise?

I received 8 separate responses, some of them dripping with humor, sarcasm, and even disdain. Good stuff! I’ve looked them over and noticed that they seem to fall into 4 groups, each of which highlights a different aspect or issue of this question.

“Rent Them Out”

Gabe Moretti had the best response along these lines: “(I’d) heat my house and pool while selling time to shivering engineers”. Jeremy Ralph of PDTi put a dollar value on the proposition, calculating that he could make $8.25M per month sub-licensing the licenses and CPUs. And Gaurav Jalan pointed out that I’d also need to provide bandwidth to support this “pay-as-you-use” batch farm.

The opportunity is to aggregate users together to share hardware and software resources. If I buy a large quantity of hardware and software on a long-term basis at discounted rates, then I can rent it out on a shorter-term basis at higher rates and make money. The EDA company wins because they get a big sale at a low cost-of-sales. The customers win because they get access to tools on a pay-as-you-go basis at lower cost without a long-term commitment. And I win because I get to pocket the difference for taking the risk.
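
Here is that arbitrage as a back-of-the-envelope model in Python. Every number is invented for illustration (Jeremy’s $8.25M figure came from his own assumptions, not these):

```python
# Toy rental-arbitrage model: buy long-term at a discount, rent short-term
# at a premium. All inputs are assumed values for illustration only.
cpus              = 1000
license_seats     = 1000
monthly_cost_cpu  = 100    # $/CPU/month at long-term discounted rates (assumed)
monthly_cost_seat = 1500   # $/seat/month amortized from a term deal (assumed)
hourly_rate       = 7.50   # $/CPU-hour charged to short-term users (assumed)
utilization       = 0.40   # fraction of hours actually rented (assumed)

hours_per_month = 24 * 30
revenue = cpus * hours_per_month * utilization * hourly_rate
cost    = cpus * monthly_cost_cpu + license_seats * monthly_cost_seat
print(f"revenue ${revenue:,.0f}  cost ${cost:,.0f}  margin ${revenue - cost:,.0f}")
# revenue $2,160,000  cost $1,600,000  margin $560,000 -> the spread between
# long-term and short-term pricing, times utilization, decides if this works.
```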

“Philanthropy”

One of the reasons that Karen Bartleson and I get along so well is that we’ve both been around the EDA industry for some time (we’ll leave it at that). As a result, we not only feel connected to the industry, but also some sense of responsibility to give back. Karen would train university students on designing SoCs. I’d train displaced workers on tools that can help them find a new job.

Even though this is not really a business model, I think it is still something that the EDA vendors should consider. Mentor is already very active in promoting its Displaced Worker Program. Autodesk and SolidWorks are giving away free licenses to the unemployed. This type of program should be universal. Using cloud computing resources is an easy way to make it happen without investing in lots of hardware.

(On a side note: PLEASE, PLEASE encourage anyone you know at Synopsys and Cadence to follow Mentor’s lead. Synopsys did this in 2001 and Cadence once had a “Retool-To-Work” program that was similar. I truly believe that both companies have that same sense of corporate responsibility as Mentor has, but for some reason they have not felt the urgency of the current situation. I am personally going to issue a daily challenge on Twitter to Synopsys and Cadence to follow suit until it happens. Please Retweet.)

“Do Nothing”

John Eaton pointed out that it is very difficult to use any additional capability offered as “pumpkinware” if you know it will evaporate within a month; it would take that long just to set up a way to use it. And John McGehee stated that his client already has all the “beer, wine, and sangria” they can drink (New Yorkers - do you remember Beefsteak Charlie’s?), so he’d pass. John: Can you hook me up with your client :-) ?

Seriously, it certainly requires some planning to take advantage of this type of horsepower. You don’t just fire off more simulations or synthesis runs or place and route jobs without a plan. For design teams that might have access to this type of capability, it’s important to figure out ahead of time how you will use it and for how long you will need it. If you will be running more sims, which sims will they be? How will you randomize them? How will you target them at the riskiest parts of the design?

“Run Lots of Experiments”

Which brings us to Jeremy Ralph’s 2nd response. This one wins the prize as best response because it was well thought out and also addressed the intention of the magic wand question: what problem could you solve that you otherwise could not have solved? Jeremy would use the resources to explore many different candidate architectures for his IP (aka chiplet) and select the best one.

One of the key benefits of the cloud is that anyone can have affordable access to 1000 CPUs on demand. If that is the case, what sorts of new approaches could EDA tools take in addressing design challenges? Could we implement place and route on 1000 CPUs and have it finish in an hour on a 100M gate design? Could we partition formal verification problems into smaller problems and solve what was formerly unsolvable? Could we run lots more simulations to find the one key bug that would kill our chip? The cloud opens up a whole new set of possibilities.
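
As a sketch of the simplest of these, fanning a randomized regression out across a big pool, here is what the dispatch could look like in Python. The run_sim.sh wrapper and the seed argument are placeholders I made up; a real 1000-CPU farm would sit behind a scheduler like LSF or SGE rather than one machine’s process pool.

```python
# Fan 1000 randomized simulations out over a pool of workers and collect
# the failing seeds. The simulator invocation below is a made-up wrapper.
import subprocess
from multiprocessing import Pool

SEEDS = range(1000)  # one randomized sim per CPU

def run_sim(seed):
    """Launch one simulation with its own random seed; return (seed, rc)."""
    result = subprocess.run(
        ["./run_sim.sh", f"+ntb_random_seed={seed}"],  # hypothetical wrapper
        capture_output=True,
    )
    return seed, result.returncode

if __name__ == "__main__":
    with Pool(processes=32) as pool:  # scale to the farm's width in practice
        failures = [s for s, rc in pool.map(run_sim, SEEDS) if rc != 0]
    print(f"{len(failures)} failing seeds: {failures[:10]}")
```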

__________

I’ve learned a lot from your responses. Some were expected and some were not. That’s what’s fun about doing this type of research … finding the unexpected. I’ll definitely give it some thought.

harry the ASIC guy

What would you do with 1000 CPUs?

Tuesday, April 7th, 2009

If I gave you 1000 CPUs to use for a month … and 1000 licenses of any EDA tool you want … what would you do?

What would it be worth?

harry the ASIC guy

The Missing Lynx - The ASIC Cloud

Friday, April 3rd, 2009

My last blog post, entitled The Weakest Lynx, got a lot of attention from the Synopsys Lynx CAEs and Synopsys marketing. Please go see the comments on that post for a response from Chris Smith, the lead support person for Lynx at Synopsys. Meanwhile, the final part of this series … The Missing Lynx.

About 7 months ago, I wrote a blog post entitled Birth of an EDA Revolution in which I first shared my growing excitement over the potential for cloud computing and Software-as-a-Service (SaaS) to transform EDA. About a week later, Cadence announced a SaaS offering that provides their reference flows, their software, and their hardware for rent to projects on a short-term basis. About a week after that, I wrote a third post on this topic, asking WWSD (what will Synopsys do) in response to Cadence.

In that last post, I wrote the following:

Synopsys could probably go one better and offer a superior solution if it wanted to, combining their DesignSphere infrastructure and Pilot Design Environment. In fact, they have done this for select customers already, but not as a standard offering. There is some legwork that they’d need to do, but the real barrier is Synopsys itself. They’ve got to decide to go after this market and put together a standard offering like Cadence has … And while they are at it, if they host it on a secure cloud to make it universally accessible and scalable, and if they offer on-demand licensing, and if they make it truly open by allowing third party tools to plug into their flow, they can own the high ground in the upcoming revolution.

Although I wrote this over 6 months ago, I don’t think I could have written it better today. The only difference is that Pilot has now become Lynx. “The ASIC Cloud”, as I call it, would look something like this:

The ASIC Cloud

As I envision it, Synopsys Lynx will be the heart of The ASIC Cloud and will provide the overall production design flow. The Runtime Manager will manage the resources, including provisioning of additional hardware (CPU and storage) and licenses as needed. The management cockpit will provide real-time statistics on resource utilization so the number of CPUs and licenses can be scaled on the go. Since The ASIC Cloud is accessible through any web browser, this virtual design center is available to large corporate customers as well as smaller startups and consultants. It can even be reached from portable devices such as netbooks and smartphones.
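
Here is a sketch of how I imagine the Runtime Manager’s scaling loop; the helper functions are stand-ins for whatever cloud provisioning and license-server APIs a real implementation would call.

```python
# Toy scaling loop: match provisioned CPUs/licenses to the job backlog.
# All helpers simulate state in memory; real ones would call cloud and
# license-daemon APIs.
import random
import time

state = {"cpus": 100}   # currently provisioned CPUs (and matching licenses)
JOBS_PER_CPU = 2        # target queue depth per CPU (an assumed policy knob)

def pending_jobs():
    return random.randint(0, 4000)  # stand-in for querying the flow's job queue

def provision(n):
    state["cpus"] += n
    print(f"provisioned {n} CPUs + licenses")

def release(n):
    state["cpus"] -= n
    print(f"released {n} CPUs + licenses")

def runtime_manager_loop(iterations=5):
    for _ in range(iterations):
        want = max(1, pending_jobs() // JOBS_PER_CPU)
        if want > state["cpus"]:
            provision(want - state["cpus"])   # scale up for a synthesis/P&R burst
        elif want < state["cpus"]:
            release(state["cpus"] - want)     # shed cost once the burst drains
        time.sleep(1)                         # a real manager might poll every minute

runtime_manager_loop()
```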

If you think I’m insane, you may be right, I may be crazy. But it just might be a lunatic you’re looking for. To show you that this whole cloud computing thing is not just my fever talking (I have been sick this past week), take a look at what this one guy in Greece did with Xilinx tools. He basically pays < $1 per hour for the hardware to run Xilinx synthesis tools on the Amazon Elastic Compute Cloud. Now, this is nothing like running an entire RTL2GDSII design flow, but he IS running EDA tools on the cloud, taking advantage of pay-as-you-go CPU and storage resources, and taking advantage of multiple processors to speed up his turnaround time. The ASIC Cloud would be similar, just on a much greater scale.
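
For flavor, here is roughly that pay-as-you-go pattern as a script, written with today’s boto3 library (my choice for illustration; he drives EC2 his own way). The AMI ID and the synthesis wrapper are placeholders.

```python
# Launch a disposable EC2 worker that runs a synthesis job and then shuts
# itself down so the metered billing stops. AMI and script are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
/opt/xilinx/run_synthesis.sh s3://my-bucket/design.tar.gz  # hypothetical wrapper
shutdown -h now
"""

resp = ec2.run_instances(
    ImageId="ami-00000000",      # placeholder AMI with tools preinstalled
    InstanceType="c1.xlarge",    # a 2009-era compute-heavy instance type
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    InstanceInitiatedShutdownBehavior="terminate",
)
print("launched", resp["Instances"][0]["InstanceId"])
```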

It may take some time for Synopsys to warm up to this idea, especially since it is a whole new business model for licensing software. But for a certain class of customers (startups, design services providers) it has definite immediate benefits. And many of these customers are also potential Lynx customers.

So, Synopsys, if you want to talk, you know where to find me.

__________

That wraps up my 5-part series on Synopsys Lynx. If you want to find the other 4 parts, here they are:

Part 1 - Synopsys Lynx Design System Debuts at SNUG

Part 2 - Lynx Design System? - It’s The Flow, Stupid!

Part 3 - Strongest Lynx

Part 4 - The Weakest Lynx

harry the ASIC guy