Posts Tagged ‘FPGA’

Altium Looking to Gain Altitude in the Cloud

Sunday, January 30th, 2011

Over the holiday break, I came across an interview with Altium CIO Alan Perkins that caught my eye. Sramana Mitra has been focusing on interesting cloud-based businesses, and this interview focused on how this EDA company plans to move into the cloud. I wasn’t able to talk to Alan Perkins directly, but I was able to find out more through their folks in the US (the company is based in Australia). It was interesting enough to warrant a post.

I knew very little about Altium before seeing this interview, and maybe you don’t know much about them either, so here is a little background. Based in Australia, Altium is a small (~$50M) EDA company focused primarily on the design of printed circuit boards with FPGAs and embedded software. It was formed from a company called Protel about 10 years ago and most recently gained attention when it acquired Morfik, a company that offers an IDE for developing web apps (more on that later). According to some data I saw and what they told me, they added 1700 new customers (companies, not seats) in 2010 in the US alone! So they may be the best kept secret in a long while. (Ironically, the day after I spoke to Altium, I spoke to someone at another company that was using Altium to design a PC board for us.)

According to Altium, their big differentiator is that they have a database-centric offering, as compared to tool-flow-centric offerings like Cadence’s OrCAD and Allegro and Mentor’s Board Station, Expedition, and related tools. I’m not an EDA developer, so I won’t pretend to understand the nuances of one versus the other. However, when I hear “database-centric”, I think “frameworks”. I know it’s been almost 20 years since those days, and things have changed, so maybe database-centric makes a lot of sense now. OpenAccess is certainly a good thing for the industry, but that is because it’s an open standard while Altium’s database is not. Anyway, enough on this matter because, as I said, I’m not an EDA developer and don’t want to get in too deep here.

A few years ago, I wrote a blog post entitled “Is IP a 4-Letter Word?”. The main thrust of that post was that IP quality is rather poor in general and that there needs to be some sort of centralized authority to grade IP quality and certify its use. So, when Altium told me they plan to enable a marketplace for design IP by creating “design vaults” in the cloud, my first question was “who is going to make sure this IP is any good?” Is this going to be the iPhone app model, where Apple vets and approves every app? Or is it going to be the Android model: caveat emptor?

To Altium’s credit, they have similar concerns, which is why they are planning to move slowly. With the introduction of Altium Designer 10, Altium will first provide its own vetted IP in the cloud. In the past, this IP was distributed to tool users at their own sites, but having it in the cloud will make it easier to distribute (pull instead of push) and also allow for asynchronous releases and updates. The tools will automatically detect if you are using an IP block that has been revved and ask you whether you want to download the new version.

Once they have this model figured out, Altium plans to open it up to 3rd-party IP, which could be offered for free, licensed, or maybe even traded for credits (like Linden dollars in Second Life). It’s an interesting idea that requires some pretty significant shifts in personal and corporate culture. I think that sharing of small “jelly bean” type IP is achievable because none of it is very differentiated. But once you get to IP that took significant time to design, why share it unless IP is your primary business? The semiconductor industry is still fiercely competitive, and I think that will be a significant barrier. Not to mention that it takes something like 4x-5x as much effort to create an IP block that is easily reusable as compared to creating it to be used just once.

Being a tool for the design of FPGAs is an advantage for Altium, since the cost of repairing an FPGA bug is so much less than that of an SoC or ASIC bug. For FPGAs, the rewards may be greater than the risks, especially for companies that are doing chip design for the first time. And this is the market that Altium is aiming for … the thousands of companies that will have to design their products to work on the internet-of-things. Companies that design toasters that have never had any digital electronics and now have to throw something together. They will be the ones that will want to reuse these designs because they don’t have the ability to design them in-house.

Which brings us to Morfik, the company Altium acquired that makes an IDE for web apps. It’s those same companies designing internet-enabled toasters that will also need to design a web app for their customers to access the toaster. So if Altium sells both the web app development tool and the IP that lets the toaster talk to the web app, then Altium provides significant value to the toaster company. That’s the plan.

Still, the cloud aspect is what interests me the most. Even if designers are reluctant to enter this market, the idea of having this type of central repository is best enabled by the cloud. The cloud can enable collaboration and sharing much better than any internally hosted environment, and it can scale as large and as quickly as needed. It also allows a sort of safe DMZ where IP can be evaluated by a customer while still protecting that IP from theft.

This is not by any means a new idea either. OpenCores has been around for more than a decade offering a repository for designers to share and access free IP. I spoke with them a few years ago and at the time the site was used mainly by universities and smaller companies, but their OpenRISC processor has seen some good usage, so it’s a model that can work.

I’m anxious to see what happens over time with this concept. Eventually, I think this sort of sharing will have to happen and it will be interesting to see how this evolves.

harry the ASIC guy

Which Direction for EDA - 2D, 3D, or 360?

Sunday, May 23rd, 2010

A hiker comes to a fork in the road and doesn’t know which way to go to reach his destination. Two men are at the fork, one of whom always tells the truth while the other always lies. The hiker doesn’t know which is which. He may ask one of the men only one question to find his way.

Which man does he ask, and what is the question?

__________

There’s been lots of discussion over the last month or two about the direction of EDA going forward. And I mean literally the “direction” of EDA. Many semiconductor industry folks have been telling us to hold off on that obituary for 2D scaling and Moore’s law. Others have been doing quiet innovation in the technologies needed for 3D die and wafer stacks. And Cadence has recently unveiled its holistic 360-degree vision for EDA that has us developing apps first and silicon last.

I’ll examine each of these orthogonal directions in the next few posts. In this post, I’ll first examine the problem that is forcing us to make these choices.

The Problem

One of the great things about writing this blog is that I know that you all are very knowledgeable about the industry and technology and I don’t need to start with the basics. So I’ll just summarize them here for clarity:

  • Smaller semiconductor process geometries are getting more and more difficult to achieve and are challenging the semiconductor manufacturing equipment, the EDA tools, and even the physics. No doubt there have been and always will be innovations and breakthroughs that will move us forward, but we can no longer see clearly the path to the next 3 or 4 process geometries down the road. Even if you are one of the people who feels there is no end to the road, you’d have to admit that it certainly is getting steeper.
  • The costs to create fabs for these process nodes are increasing drastically, forcing consolidation in the semiconductor manufacturing industry. Some predict there will only be 3 or 4 fabs in a few years. This cost is passed on to the cost of the semiconductor device. Net cost per gate may not be rising, but the cost to ante up with a set of masks at a new node certainly is.
  • From a device physics and circuit design perspective, we are hitting a knee in the curve where smaller geometries are not able to deliver the full speed increases and power reductions achieved at larger nodes without new “tricks” being employed.
  • Despite these challenges, ICs are still growing in complexity and so are the development costs, some say as high as $100M. Many of these ICs are complex SoCs with analog and digital content, multiple processor cores, and several 3rd party IP blocks. Designing analog and digital circuits in the same process technology is not easy. The presence of embedded processors means that software and hardware have intersected and need to be developed harmoniously … no more throwing the hardware over-the-wall to software. And all this 3rd party IP means that our success is increasingly dependent on the quality of work of others that we have never met.
  • FPGAs are eating away at ASIC market share because of all the factors above. The break-even quantity between ASIC and FPGA is increasing, which means more of the lower-volume applications will choose FPGAs. Nonetheless, these FPGAs are still complex SoCs requiring verification methods similar to ASICs, including concurrent hardware and software development.

There are no doubt many other factors, but these are the critical ones in my mind. So, then, what does all this mean for semiconductor design and EDA?

At the risk of using a metaphor, many feel we are at a fork in the road. One path leads straight ahead, continuing 2D scaling with new process and circuit innovations. Another path leads straight up, moving Moore’s law into the third dimension with die stacks in order to cost-effectively manage increasing complexity. And one path turns us 180 degrees around, asking us to look at the applications and software stack first and the semiconductor last. Certainly, 3 separate directions.

Which is the best path? Is there another path to move in? Perhaps a combination of these paths?

I’ll try to examine these questions in the next few posts. Next Post: Is 2D Scaling Really Dead or Just Mostly Dead?

__________

Answer to Riddle: Either man can be asked the following question: “If I were to ask you if this is the way I should go, would you say yes?” While asking the question, the hiker should point at either of the directions leading from the fork. The truth-teller reports his honest answer, and the liar lies about the lie he would have told, so both will say “yes” exactly when the hiker is pointing the right way.

harry the ASIC guy

The Burning Platform

Monday, March 1st, 2010

Although I was unable to attend DVCon last week and missed Jim Hogan and Paul McLellan presenting “So you want to start an EDA Company? Here’s how“, I was at least able to sit in on an interesting webinar offered by RTM Consulting entitled Achieving Breakthrough Customer Satisfaction through Project Excellence.

As you may recall, I wrote a previous blog post about a Consulting Soft Skills training curriculum developed by RTM in conjunction with Mentor Graphics for their consulting organization. Since that time, I’ve spoken on and off with RTM CEO Randy Mysliviec. During a recent conversation, he made me aware of this webinar and offered me one of the slots. I figured it would be a good refresher at a minimum, and if I came out of it with at least one new nugget or perspective, I was ahead of the game. So I accepted.

I decided to “live tweet” the webinar. That is to say, I posted tweets of anything interesting that I heard, all using the hashtag #RTMConsulting. If you want to view the tweets from that webinar, go here.

After 15 years in the consulting biz, I had certainly learned a lot, and the webinar was indeed a good refresher on some of the basics of managing customer satisfaction. There was a lot of material for the 2 hours that we had, and there were no real breaks, so it was very dense. The only downside is that I wish there had been some more time for discussion or questions, but that’s a minor nit to pick.

I did get a new insight out of the webinar, so I guess I’m ahead of the game. I had never heard of the concept of the “burning platform” before, especially as it applies to projects. The story goes that there was an oil rig in the North Sea that caught fire and was bound to be destroyed. One of the workers had to decide whether to stay on the rig or jump into the freezing waters. The fall might kill him and he’d face hypothermia within minutes if not rescued, but he decided to jump anyway, since probable death was better than certain death. According to the story, the man survived and was rescued. Happy ending.

The instructor observed that many projects are like burning platforms, destined for destruction unless radically rethought. In thinking back, I immediately thought of 2 projects I’d been involved with that turned out to be burning platforms.

The first was a situation where a design team was trying to reverse engineer an asynchronously designed processor in order to port it to another process. The motivation was that the processor (I think it was an ADSP 21 something or other) was being retired by the manufacturer and this company wanted to continue to use it nonetheless. We were called in when the project was already in trouble, significantly over budget and schedule and with no clear end in sight. After a few weeks of looking at the situation, we decided that there was no way they would ever be able to verify the timing and functionality of the ported design. We recommended that they kill this approach and start over with a standard processor core that could do the job. There was a lot of resistance, especially from the engineer whose idea it was to reverse engineer the existing processor. But, eventually the customer made the right choice and redesigned using an ARM core.

Another group at the same company also had a burning platform. They were on their 4th version of a particular chip and were still finding functional bugs. Each time they developed a test plan and executed it, there were still more bugs that they had missed. Clearly their verification methodology was outdated and insufficient, depending on directed tests and FPGA prototypes rather than more current measurable methods. We tried to convince them to use assertions, functional coverage, constrained random testing, etc. But they were convinced that they just had to fix the few known bugs and they’d be OK. From their perspective, it wasn’t worth all the time and effort to develop and execute a new plan. They never did take our recommendations and I lost track of that project. I wonder if they ever finished.
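For readers who haven’t worked with these techniques, here is a minimal SystemVerilog sketch of what “measurable” verification looks like. The transaction fields, constraints, and coverage bins are hypothetical, just to show the shape of constrained-random stimulus plus functional coverage:

    // Hypothetical transaction: constrained-random stimulus
    class pkt_txn;
      rand bit [7:0] length;
      rand bit [1:0] kind;                   // 0 = data, 1 = control, 2 = error
      constraint c_len  { length inside {[1:64]}; }
      constraint c_kind { kind dist {2'd0 := 8, 2'd1 := 1, 2'd2 := 1}; }
    endclass

    module tb;
      // Functional coverage: did we actually exercise the cases we care about?
      covergroup pkt_cov with function sample(pkt_txn t);
        cp_len  : coverpoint t.length { bins small = {[1:8]};
                                        bins big   = {[57:64]}; }
        cp_kind : coverpoint t.kind;
        len_x_kind : cross cp_len, cp_kind;  // e.g. the "big error packet" corner
      endgroup

      pkt_txn t;
      pkt_cov cov;

      initial begin
        t   = new();
        cov = new();
        repeat (1000) begin
          void'(t.randomize());              // random stimulus within the constraints
          cov.sample(t);                     // record what was actually hit
          // ... drive t into the DUT and check the response here ...
        end
      end
    endmodule

The exit criterion becomes coverage closure (did we hit every bin and cross we said mattered?) rather than “our directed tests passed”, which is exactly the measurability this team was missing.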

As I think about these 2 examples, I realize that “burning platform” projects have some characteristics in common. And they align with the 3 key elements of a project. To tell if you have a “burning platform” on your hands, you might ask yourself the following 3 questions:

  1. Scope - Are you spending more and more time every week managing issues and risks? Is the list growing, rather than shrinking?
  2. Schedule - Are you on a treadmill with regards to schedule? Do you update the schedule every month only to realize that the end date has moved out by a month, or more?
  3. Resources - Are the people that you respect the most trying to jump off the project? Are people afraid to join you?

If you answered yes to at least 2 of these, then you probably have a burning platform project on your hands. It’s time to jump in the water. That is, it’s time to scrap the plan, rethink your project from a fresh perspective, and come up with a new plan. Of course, this is not a very scientific way of identifying an untenable project, but I think it’s a good rule of thumb.

There are other insights that I had from the webinar, but I thought I’d share just this one. I don’t know if this particular webinar was recorded, but there are 2 more upcoming sessions that you can attend. If you do, please feel free to live tweet the event like I did, using the #RTMConsulting hashtag.

But please, no “flaming” :-)

harry the ASIC guy

My Obligatory TOP 10 for 2009

Thursday, December 31st, 2009

2009 To 2010 (image: http://www.flickr.com/photos/optical_illusion/ / CC BY 2.0)

What’s a blog without some sort of obligatory year end TOP 10 list?

So, without further ado, here is my list of the TOP 10 events, happenings, and observations from 2009, from my own perspective. Here goes:

  1. Verification Survey - Last February, as DVCon was approaching, I thought it would be interesting to post a quickie survey to see what verification languages and methodologies were being used. Naively, I did not realize the lengths to which the fans of the various camps would go to rig the results in their favor. Nonetheless, the results ended up very interesting, and I learned a valuable lesson on how NOT to do a survey.
  2. DVCon SaaS and Cloud Computing EDA Roundtable - One of the highlights of the year was definitely the impromptu panel that I assembled during DVCon to discuss Software-as-a-Service and Cloud Computing for EDA tools. My thanks to the panel guests, James Colgan (CEO @ Xuropa), Jean Brouwers (Consultant to Xuropa),  Susan Peterson (Verification IP Marketing Manager @ Cadence), Jeremy Ralph (CEO @ PDTi), Bill Alexander (VP Marketing @ Blue Pearl Software), Bill Guthrie (VP Marketing @ Numetrics). Unfortunately, the audio recording of the event was not of high enough quality to post, but you can read about it from others at the following locations:

    > 3 separate blog posts from Joe Hupcey (1, 2, 3)

    > A nice mention from Peggy Aycinena

    > Numerous other articles and blog posts throughout the year that were set in motion, to some extent, by this roundtable

  3. Despite predictions to the contrary, Magma is NOT dead. Cadence was NOT sold. Oh, and EDA is NOT dead either.
  4. John Cooley IS Dead - OK, he’s NOT really dead. But this year was certainly a turning point for his influence in the EDA space. It started off with John’s desperate attempt, at a Conversation Central session at DAC, to tell bloggers that their blogs suck and to convince them to just send him their thoughts instead. Those who took John up on his offer waited 4 months to see their thoughts finally posted in his December DAC trip report. I had a good discussion on this topic with John earlier this year, which he asked me to keep “off the record”. Let’s just say, he just doesn’t get it and doesn’t want to get it.
  5. The Rise of the EDA Bloggers.
  6. FPGA Taking Center Stage - It started back in March when Gartner issued a report stating that there were 30 FPGA design starts for every ASIC start. That number seemed very high to me and to others, but that did not stop this 30:1 ratio from being quoted as fact in all sorts of FPGA marketing materials throughout the year. On the technical side, it was a year where the issues of verifying large FPGAs came front and center and where a lot of ASIC people started transitioning to FPGA.
  7. Engineers Looking For Work - This was one of the more unfortunate trends that I will remember from 2009 and hopefully 2010 will be better. Personally, I had difficulty finding work between projects. DAC this year seemed to be as much about finding work as finding tools. A good friend of mine spent about 4 months looking for work until he finally accepted a job at 30% less pay and with a 1.5 hour commute because he “has to pay the bills”. A lot of my former EDA sales and AE colleagues have been laid off. Some have been looking for the right position for over a year. Let’s hope 2010 is a better year.
  8. SaaS and Cloud Computing for EDA - A former colleague of mine, now a VP of Sales at one of the small but growing EDA companies, came up to me in the bar during DAC one evening and stammered some thoughts regarding my predictions of SaaS and Cloud Computing for EDA: “It will never happen”. He may be right and I may be a bit biased, but this year I think we started to see the beginnings of these technologies moving into EDA. On a personal note, I’m involved in one of those efforts at Xuropa. Look for more developments in 2010.
  9. Talk of New EDA Business Models - For years, EDA has bemoaned the fact that the EDA industry captures so little of the value ($5B) of the much larger semiconductor industry ($250B) that it enables. At the DAC Keynote, Fu-Chieh Hsu of TSMC tried to convince everyone that the solution for EDA is to become part of some large TSMC ecosystem in which TSMC would reward the EDA industry like some sort of charitable tax deduction. Others talked about EDA companies having more skin in the game with their customers and being compensated based on their ultimate product success. And of course there is the SaaS business model I’ve been talking about. We’ll see if 2010 brings any of these to fruition.
  10. The People I Got to Meet and the People Who Wanted to Meet Me - One of the great things about having a blog is that I got to meet so many interesting people that I would never otherwise have had an opportunity to even talk to. I’ve had the opportunity to talk with executives at Synopsys, Cadence, Mentor, Springsoft, GateRocket, Oasys, Numetrics, and a dozen other EDA companies. I’ve even had the chance to interview some of them. And I’ve met many fellow bloggers and now realize how much they know. On the flip side, I’ve been approached by PR people, both independent and in-house. I was interviewed 3 separate times: once by email by Rick Jamison, once by Skype by Liz Massingill, and once live by Dee McCrorey. EETimes added my blog as a Trusted Source. For those who say that social media brings people together, I can certainly vouch for that.

harry the ASIC guy

An ASIC Guy Visits An FPGA World - Part II

Monday, June 22nd, 2009

Altera FPGA

I mentioned a few weeks ago that I am wrapping up a project with one of my clients and beating the bushes for another project to take its place. As part of my search, I visited a former colleague who works at a small company in Southern California. This company designs a variety of products that use FPGAs exclusively (no ASICs), so I got a chance to understand a little bit more about the differences between ASIC and FPGA design. Here, then, is the follow-on to my previous post, An ASIC Guy Visits An FPGA World.

Recall that the first 4 observations from my previous visit to FPGA World were:

Observation #1 - FPGA people put their pants on one leg at a time, just like me.

Observation #2 - I thought that behavioral synthesis had died, but apparently it was just hibernating.

Observation #3 - Physical design of FPGAs is getting like ASICs.

Observation #4 - Verification of FPGAs is getting like ASICs.

Now for the new observations:

Observation #5 - Parts are damn cheap - According to the CTO of this company, Altera Cyclone parts can cost as little as $10-$20 each in sufficient quantities. A product that requires thousands or even tens of thousands of parts will still spend less on FPGAs than on a 90nm mask set. For many non-consumer products with quantities in this range, FPGAs are compelling from a cost standpoint.
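To make the break-even logic concrete (the numbers below are illustrative assumptions on my part, not figures from the company): if a 90nm mask set and related NRE run on the order of $1M, and the FPGA costs, say, $15 more per unit than the equivalent ASIC would, then

    break-even volume ≈ NRE / (FPGA unit cost - ASIC unit cost)
                      ≈ $1,000,000 / $15
                      ≈ 67,000 units

Below that volume the FPGA wins on total cost, which is exactly the regime most non-consumer products live in.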

True, the high-end parts can cost thousands or even tens of thousands of dollars each (e.g. the latest Xilinx Virtex-6). But considering that a Virtex-6 part is built on a 45nm process and has the gate-count equivalent of almost 10M logic gates, what would an equivalent ASIC cost?

Observation #6 - FPGA verification is different (at least for small to medium sized FPGAs) - Since it is so easy, fast, and inexpensive (compared to an ASIC) to synthesize and place-and-route an FPGA, much more of the functional verification is done in the lab on real hardware. Simulation is typically used to get a “warm and fuzzy” that the design is mostly functional, and then the rest is done in the lab with the actual FPGA. Tools like Xilinx ChipScope allow logic-analyzer-like access into the device, providing some, but not all, of the visibility that exists in a simulation. And once bugs are found, they can be fixed with an RTL change and a reprogram of the FPGA.

One unique aspect of FPGA verification is that it can be done in phases or “spirals”. Perhaps only some of the requirements for the FPGA are complete or only part of the RTL is available. No problem. One can implement just that part of the design that is complete (for instance just the dataplane processing) and program the part. Since the same part can be used over and over, the cost to do this is basically $0. Once the rest of the RTL is available, the part can be reprogrammed again.
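A hedged sketch of how that partial build might look in RTL; the module names, ports, and the INCLUDE_CONTROL_PLANE parameter are all made up for illustration, not taken from this company’s design:

    // Stand-in blocks, just so the sketch elaborates; real RTL would go here
    module dataplane (input  logic clk, rst_n,
                      input  logic [31:0] rx_data,
                      output logic [31:0] tx_data);
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) tx_data <= '0;
        else        tx_data <= rx_data;   // placeholder for real processing
    endmodule

    module control_plane (input logic clk, rst_n);
      // not written yet on this spiral
    endmodule

    // Top level: only the finished pieces go into this spiral's bitstream
    module fpga_top #(parameter bit INCLUDE_CONTROL_PLANE = 0) (
      input  logic clk, rst_n,
      input  logic [31:0] rx_data,
      output logic [31:0] tx_data
    );
      dataplane u_dp (.clk, .rst_n, .rx_data, .tx_data);   // complete, always built

      if (INCLUDE_CONTROL_PLANE) begin : g_ctrl            // flipped on a later spiral
        control_plane u_cp (.clk, .rst_n);
      end
    endmodule

Each spiral is just another synthesis and place-and-route run with the parameter flipped, and since reprogramming the same part is free, there is no penalty for building the design up in stages.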

Observation #7 - FPGA design tools are all free or dirt cheap - I think everybody knows this already, but it really hit home talking to this company. Almost all the tools they use for design are free or very inexpensive, yet they are more than capable of getting the job done. In fact, the company probably could not operate in the black if it had to make the kind of investment that ASIC design tools require.

Observation #8 - Many tools and methods common in the ASIC world are still uncommon in this FPGA world - For this company, there is no such thing as logical equivalence checking. Formal property verification, SystemVerilog simulation, OVM, VMM … not used at all. Perhaps they’ll be used for the larger designs, but right now the team is getting along fine without them.

__________

FPGA verification is clearly the area that is the most controversial. In one camp are the “old skool” FPGA designers that want to get the part in the lab as soon as possible and eschew simulation. In the other camp are the high-level verification proponents who espouse the merits of coverage-driven and metric-driven verification and recommend achieving complete coverage in simulation. I think it would really be fun to host a panel discussion with representatives from both camps and have them debate these points. I think we’d learn a lot.

Hmmm…

harry the ASIC guy

An ASIC Guy Visits An FPGA World

Thursday, June 4th, 2009

I hear so often nowadays that FPGAs are the new ASICs. So I decided to take off half a day and attend a Synopsys FPGA seminar just down the street from where I’m working (literally a 5-minute walk). I would like to share some observations as an ASIC guy amongst FPGA guys and gals.

Observation #1 - FPGA people put their pants on one leg at a time, just like me. (Actually, I sometimes do both legs at the same time, but that’s another story). I had been led to believe that there was some sort of secret cabal of FPGA people who all knew a magic language of FPGAs that nobody else knew. Not the case. Although there is certainly a unique set of terminology and acronyms in the FPGA arena (LUTs, DCMs, block RAM), they are all fairly straightforward once you know them.

Observation #2 - I thought that behavioral synthesis had died, but apparently it was just hibernating. There is behavioral synthesis capability in some of the higher-level FPGA tools. I’ve never used it, so I can’t say one way or the other. But it sure was a blast from the past (circa 2000). Memories of SPW, Behavioral Compiler, Cossap, Monet, Matisse.

Observation #3 - Physical design of FPGAs is getting like ASICs. There are floorplanning tools, tools that back-annotate placement back into synthesis, tools that perform synthesis and placement together, tools for doing pre-route and post-route timing analysis. Made me think of Floorplan Manager, Physical Compiler, and IC Compiler.

Observation #4 - Verification of FPGAs is getting like ASICs. It can take a day to resynthesize and route a large FPGA to get back into the lab for debugging. That’s an unacceptable turnaround time for an FPGA with lots of bugs. Assertions (SVA, PSL), high-level verification languages and methodologies (SystemVerilog / OVM / VMM), and clock domain crossing checkers are methods being stolen from the ASIC design world to address large FPGA verification. The trick is deciding when there has been enough simulation to start debug in the lab.
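For FPGA readers who haven’t seen SVA, an assertion is just a design-intent check that the simulator (and some formal tools) evaluate on every clock. A minimal sketch, with made-up signal and module names for a simple request/grant handshake:

    // Hypothetical check: every request must be granted within 4 cycles
    module req_gnt_checker (input logic clk, rst_n, req, gnt);
      property p_req_gets_gnt;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] gnt;
      endproperty

      a_req_gets_gnt : assert property (p_req_gets_gnt)
        else $error("req was not granted within 4 cycles");
    endmodule

    // Attached to the design without touching the RTL, e.g.:
    //   bind my_arbiter req_gnt_checker u_chk (.clk, .rst_n, .req, .gnt);

Put a handful of these at the block interfaces and a random test that wanders into an illegal corner fails loudly at the point of the violation, instead of surfacing days later as a mystery in the lab.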

After this session, I think this ASIC guy is going to feel right at home in the FPGA world of the future.

harry the ASIC guy

(Read Part II of this series here)