Posts Tagged ‘IP’

Altium Looking to Gain Altitude in the Cloud

Sunday, January 30th, 2011

Over the holiday break, I came across an interview with Altium CIO Alan Perkins that caught my eye. Sramana Mitra has been focusing on interesting cloud-based businesses, and this interview covered how this EDA company plans to move into the cloud. I wasn’t able to talk to Alan Perkins directly, but I was able to find out more through their folks in the US (the company is based in Australia). It was interesting enough to warrant a post.

I knew very little about Altium before seeing this interview, and maybe you don’t know much about them either, so here is a little background. Based in Australia, Altium is a small (~$50M) EDA company focused primarily on the design of printed circuit boards with FPGAs and embedded software. They formed from a company called Protel about 10 years ago and most recently gained attention when they acquired Morfik, a company that offers an IDE for developing web apps (more on that later). According to some data I saw and from what they told me, they added 1700 new customers (companies, not seats) in 2010 just in the US! So, they may be the best kept secret in a long while. (Ironically, the next day at work after I spoke to Altium, I spoke to someone at another company that was using Altium to design a PC board for us.)

According to Altium, their big differentiator is that they have a database-centric offering as compared to tool-flow-centric offerings like Cadence OrCAD and Allegro and Mentor’s Board Station and Expedition and related tools. I’m not an EDA developer, so I won’t pretend to understand the nuances of one versus the other. However, when I hear “database-centric”, I think “frameworks”. I know it’s been almost 20 years since those days, and things have changed, so maybe database-centric makes a lot of sense now. OpenAccess is certainly a good thing for the industry, but that is because it’s an “open standard” while Altium’s database is not. Anyway, enough on this matter because, as I said, I’m not an EDA developer and don’t want to get in too deep here.

A few years ago, I wrote a blog post entitled “Is IP a 4-Letter Word?”. The main thrust of that post was that IP quality is rather poor in general and there needs to be some sort of centralized authority to grade IP quality and to certify its use. So, when Altium told me they plan to enable a marketplace for design IP by creating “design vaults” in the cloud, my first question was “who is going to make sure this IP is any good?” Is this going to be the iPhone app model, where Apple vets and approves every app? Or is it going to be the Android model: caveat emptor?

To Altium’s credit, they have similar concerns, which is why they are planning to move slowly. With their introduction of Altium Designer 10, Altium will first provide its own vetted IP in the cloud. In the past, this IP was distributed to tool users on their site, but having it in the cloud will make it easier to distribute (pull, instead of push) and also allow for asynchronous releases and updates. The tools will automatically detect if you are using an IP that has been revved, and ask you if you want to download the new version.
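A minimal sketch of that pull-based revision check might look like the following. To be clear, the manifest format, block names, and function names here are my own illustrative assumptions, not Altium's actual API:

```python
def find_updates(local_manifest, remote_manifest):
    """Compare local IP revisions against the cloud vault's manifest
    and return the blocks that have been revved."""
    updates = []
    for name, local_rev in local_manifest.items():
        remote_rev = remote_manifest.get(name)
        if remote_rev is not None and remote_rev > local_rev:
            updates.append((name, local_rev, remote_rev))
    return updates

# Hypothetical manifests: {IP block name: revision number}
local = {"uart_core": 3, "spi_master": 7}
remote = {"uart_core": 4, "spi_master": 7, "i2c_slave": 1}

for name, old, new in find_updates(local, remote):
    print(f"{name}: rev {old} -> rev {new} available. Download?")
```

The point of the pull model is visible in the shape of the code: the client asks the vault what is current, rather than the vendor pushing releases out on its own schedule.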

Once they have this model understood, Altium then plans to open the model up to 3rd party IP which can be offered for free, or licensed, or maybe even traded for credits (like Linden dollars in Second Life). It’s an interesting idea which requires some pretty significant shifts in personal and corporate cultures. I think that sharing of small “jelly bean” type IP is achievable because none of it is very differentiated. But once you get to IP that required some significant time to design, why share it unless IP is your primary business? The semiconductor industry is still fiercely competitive and I think that will be a significant barrier. Not to mention that it takes something like 4x-5x as much effort to create an IP that is easily reusable as compared to creating it just to be used once.

Being a tool for the design of FPGAs is an advantage for Altium, since the cost of repairing an FPGA bug is so much less than an SoC or ASIC. For FPGAs, the rewards may be greater than the risks, especially for companies that are doing digital design for the first time. And this is the market that Altium is aiming for … the thousands of companies that will have to design their products to work on the internet-of-things. Think of companies that design toasters, have never had any digital electronics in their products, and now have to throw something together. They will be the ones that will want to reuse these designs because they don’t have the ability to design them in-house.

Which brings us to Morfik, the company Altium acquired that makes IDEs for web apps. It’s those same companies designing internet-enabled toasters that will also need to design a web app for their customers to access the toaster. So if Altium sells the web app and the IP that lets the toaster talk to the web app, then Altium provides a significant value to the toaster company. That’s the plan.

Still, the cloud aspect is what interests me the most. Even if designers are reluctant to enter this market, the idea of having this type of central repository is best enabled by the cloud. The cloud can enable collaboration and sharing much better than any hosted environment. And it can scale as large and as quickly as needed. It allows a safe sort of DMZ where IP can be evaluated by a customer while still protecting the IP from theft.

This is not by any means a new idea either. OpenCores has been around for more than a decade offering a repository for designers to share and access free IP. I spoke with them a few years ago and at the time the site was used mainly by universities and smaller companies, but their OpenRISC processor has seen some good usage, so it’s a model that can work.

I’m anxious to see what happens over time with this concept. Eventually, I think this sort of sharing will have to happen and it will be interesting to see how this evolves.

harry the ASIC guy

Synopsys’ Digital to Analog Conversion

Tuesday, May 12th, 2009

Last Thursday, the same day that Synopsys announced its acquisition of MIPS’ Analog Business Group (ABG) for $22M in cash, I had a long overdue lunch with a former colleague of mine at Synopsys. We spent most of the time talking about family, how our jobs were going, the economy, and the industry in general.

At some point, the discussion got around to Aart de Geus and his leadership qualities. My friend, who plays bass guitar with Aart on occasion, shared with me his observations of Synopsys’ CEO outside of work. “He’s a born leader, even when he’s playing music,” my friend said as he related one story of how Aart led the band in an improvisational session with the same infectious enthusiasm he brings to Synopsys.

While driving back from lunch, I recalled a field conference from the mid 1990s where Aart introduced the notion of “Synopsys 2”. Synopsys 2 was to be a new company (figuratively, not literally) that would obsolete Synopsys 1 and take a new leadership role in a transforming industry. At that time, Synopsys 1 was the original “synthesis company” along with some test and simulation tools. The industry challenge driving Synopsys 2 was the need for increased designer productivity to keep up with chip sizes increasing due to the inexorable and ubiquitous Moore’s Law.

Aart’s vision for this new EDA order was twofold. First, behavioral synthesis would allow designers to work at a higher, more efficient level of abstraction, thereby increasing their productivity. In fact, yours truly helped develop and deliver the very first DAC floor demo of Behavioral Compiler. I also developed a very simple but elegant presentation of the power of behavioral synthesis that was used throughout Synopsys, garnered the praise of Aart himself, and sits in my desk as a memento of my time at Synopsys. Unfortunately, behavioral synthesis never really caught on at the time. Oh well. So much for that.

The second part of Aart’s productivity vision was design reuse. Needless to say, that vision has come true in spades. I don’t have reliable numbers at my fingertips, but I would guess that there is hardly a chip designed without some sort of implementation or verification IP reuse. Some chips are almost entirely reusable IP, with the only custom logic stitching it all together. I can’t imagine designing 100M gate chips without design reuse.

Design teams looking for digital IP were faced with a straightforward make vs. buy decision. On the one hand, most design teams could design the IP themselves given enough time and money. They could even prototype and verify the IP on an FPGA to make sure it would work. But could they do it faster and cheaper than buying the IP, and could they do it with a higher level of quality? The design teams that decided they could do a better, faster, cheaper job themselves did so. The others bought the IP.

But analog and mixed signal IP is very different. Whereas most design teams have the skills and ability to design digital IP, they usually do not have the expertise to design complex analog and mixed signal IP. Not only are analog designers more scarce, but the problem keeps getting harder at smaller geometries. Ask any analog designer you know how hard it is to design a PLL at 65 nm or 45 nm. What were 4-corner simulations at 90 nm become 16-corner or even Monte Carlo simulations at 45 nm and below. Not only is analog design difficult, but it often requires access to foundry-specific information available only to close partners of the foundries. And even if you can get the info and design the IP, there is no quick FPGA prototype to prove it out. You need to fab a test chip (which takes several months), complete with digital noise sources to stress the IP in its eventual environment. The test chip can cost several million dollars (much more than an FPGA prototype for digital IP) and you’d better count on at least one respin to get it right.
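The corner arithmetic above is simple exponential growth: sweep each independent process/voltage/temperature parameter at its two extremes and the simulation count doubles with every parameter added. A quick sketch (the parameter names are illustrative, not a real foundry corner list):

```python
from itertools import product

def corners(params):
    """Enumerate every combination of each parameter's two extremes:
    2**len(params) corners in total."""
    return list(product(*[(p + "_min", p + "_max") for p in params]))

# Two swept parameters -> 2^2 = 4 corners (the 90 nm case);
# four swept parameters -> 2^4 = 16 corners (the 45 nm case).
print(len(corners(["voltage", "temperature"])))                    # prints 4
print(len(corners(["nmos", "pmos", "voltage", "temperature"])))    # prints 16
```

Monte Carlo goes further still: instead of extremes, each parameter is drawn from its statistical distribution, so the run count is bounded only by how much confidence you need.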

That is why Synopsys’ acquisition of the MIPS ABG IP is such a good move. The “value proposition” for analog IP is so much greater than for digital IP. It’s not a matter of whether the customer can design the IP faster, better, and cheaper; it’s whether they can design it at all. By expanding its analog IP portfolio at a bargain price, Synopsys is well positioned to provide much of the analog and mixed signal IP at 65 nm and below. In addition, this acquisition gives Synopsys a real analog design team with which they can perform design services, something they have coveted but lacked for some time.

Once again, it looks like Aart is taking the leadership role. Look for other companies to follow the leader.

harry the ASIC guy

EDA Is Only “Mostly Dead”

Wednesday, March 4th, 2009

Last Wednesday at DVCon, Peggy Aycinena MC’ed what used to be known as the Troublemakers Panel, formerly MC’ed by John Cooley. The topic, “EDA: Dead or Alive?” Well, having attended Aart’s Keynote address immediately preceding and having attended Peggy’s panel discussion, I can answer that question in the immortal words of Miracle Max, “EDA is only MOSTLY dead”. But first, some background.

Back in the mid 90s, I attended a Synopsys field conference where Aart delivered a keynote addressing the challenges of achieving higher and higher productivity in the face of increasing chip size. The solution, he predicted, would be design reuse in the form of intellectual property. Although most of us had only the faintest idea of what design reuse entailed and could barely fathom such a future, Aart’s prediction has indeed come true. Today, there is hardly a chip designed without some form of soft or hard IP and many chips are predominantly IP.

Some years later, he delivered a similar keynote preaching the coming future of embedded software. This was before the term SoC was coined to designate a chip with embedded processors running embedded software. Again, only a handful understood or could fathom this future, but Aart was correct again.

So, this year, immediately preceding Peggy’s Panel, Aart delivered another very entertaining and predictive keynote. After describing the current economic crisis in engineering terms using amplifiers and feedback loops, he moved to the real meat of the presentation which addressed the growing amount of software content in today’s SoCs. He described how project schedules are often paced by embedded software development and validation. How products are increasingly differentiated based on software, not hardware. And he predicted a day when chips would only have custom hardware to implement functions that could not be performed with programmable software. In essence, he described a future with little electronic design as we know it today, where hardware designers are largely replaced by programmers.

Immediately following Aart’s keynote was Peggy’s panel. (If you want to know exactly what occurred, there is no place better to go than Mike Demler’s blow-by-blow account.) Peggy did her best to challenge the EDA execs to defend why EDA would not die out. She kept coming back to that same question in different ways and the execs kept avoiding directly answering it, choosing instead to offer philosophical logic such as: “If EDA is dead, then semiconductors are dead. If semiconductors are dead, then electronics are dead. And since electronics will never die, EDA will never die”.

On the surface, logic such as this is certainly comforting. After all, who can imagine a future without electronics? Upon closer inspection, however, and in light of Aart’s keynote, there is plenty of reason for skepticism.

Just as Aart was right about design reuse and IP…

Just as Aart was right about embedded software …

I believe that Aart is right about hardware design being replaced by software development.

As processors and co-processors become faster and more capable of handling tasks formerly delegated to hardware…

As time-to-market drives companies to sell products that can be upgraded or fixed later via software patches…

As fewer and fewer companies can afford the cost of chip design at 32nm and below…

More companies will move capabilities to software running on standard chips.

With that, what becomes of the current EDA industry? Will it adapt to embrace software as part of its charter? Or will it continue to focus on chip development?

Personally, I think Aart is right again. Hardware will increasingly become software. And an EDA industry focused on hardware will be increasingly “mostly dead”.

harry the ASIC guy

Big DAC Attack

Tuesday, May 20th, 2008

OK … I’m registered to go to DAC for at least one day, maybe two. I’ll definitely be there on Tuesday and probably Wednesday evening for a Blogging “Birds-of-a-Feather” session that JL Gray is setting up. Besides hitting the forums and other activities, I’ll have about half a day to attack the exhibit floor or the “suites” to look at some new technology. If you want to meet up, drop me an email and we can arrange something.

Cadence won’t be there and I already talk to Synopsys and Mentor on a regular basis, so I’m planning on focusing on smaller companies with new technology. Here’s what’s on my list so far…

Nusym - They have some new “Path Tracing” technology that finds correlations between a constrained random testbench and hard-to-hit functional coverage points. With this knowledge, they claim to be able to modify the constraints to guide the simulation to hit the coverage points. The main benefit is in getting those last few percent of functional coverage that can be difficult to reach with unguided constrained random patterns.
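To illustrate the general idea of coverage feedback (this is emphatically not Nusym's actual technology, just a toy sketch): run unguided random stimulus, see which coverage bins stay unhit, then tighten the constraint to aim directly at an unhit bin.

```python
import random

def bin_of(v, width=16):
    """Map a stimulus value to its coverage bin (lo, hi)."""
    lo = v // width * width
    return (lo, lo + width)

BINS = {bin_of(v) for v in range(256)}  # 16 value bins over 0..255

def run(trials, lo=0, hi=255, seed=None):
    """Draw random stimulus within the constraint [lo, hi] and
    return the set of coverage bins that were hit."""
    rng = random.Random(seed)
    return {bin_of(rng.randint(lo, hi)) for _ in range(trials)}

hit = run(20, seed=0)             # phase 1: unguided constrained random
for lo, hi in sorted(BINS - hit): # phase 2: retarget each unhit bin
    hit |= run(5, lo, hi - 1, seed=1)

print(f"{len(hit)}/{len(BINS)} bins covered")  # prints 16/16 bins covered
```

The real problem is of course vastly harder: in a real testbench the relationship between constraints and coverage points is buried in the design's logic, which is presumably what the path-tracing part is for.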

Chip Estimate - Having been around for a few years and recently bought by Cadence, they are basically a portal where you can access 3rd party IP and use the information to do a rough chip floorplan. This allows you to estimate area, power, yield, etc. I’m really curious about their business model and why Cadence bought them. At a minimum, it should be entertaining to see the hyper-competitive IP vendors present back-to-back at half-hour intervals on the DAC floor.

I have a few others on my list, but there are so many small companies that it’s hard to go through them all and decide what to see. That’s where I need your help.

What would you recommend seeing and why?