Posts Tagged ‘ASIC’

Which Direction for EDA - 2D, 3D, or 360?

Sunday, May 23rd, 2010

A hiker comes to a fork in the road and doesn’t know which way to go to reach his destination. Two men are at the fork, one of whom always tells the truth while the other always lies. The hiker doesn’t know which is which. He may ask one of the men only one question to find his way.

Which man does he ask, and what is the question?

__________

There’s been lots of discussion over the last month or 2 about the direction of EDA going forward. And I mean literally, the “direction” of EDA. Many semiconductor industry folks and proponents have been telling us to hold off on that obituary for 2D scaling and Moore’s law. Others have been doing quiet innovation in the technologies needed for 3D die and wafer stacks. And Cadence has recently unveiled its holistic 360 degree vision for EDA that has us developing apps first and silicon last.

I’ll examine each of these orthogonal directions in the next few posts. In this post, I’ll first examine the problem that is forcing us to make these choices.

The Problem

One of the great things about writing this blog is that I know that you all are very knowledgeable about the industry and technology and I don’t need to start with the basics. So I’ll just summarize them here for clarity:

  • Smaller semiconductor process geometries are getting more and more difficult to achieve and are challenging the semiconductor manufacturing equipment, the EDA tools, and even the physics. No doubt there have been and always will be innovations and breakthroughs that will move us forward, but we can no longer see clearly the path to the next 3 or 4 process geometries down the road. Even if you are one of the people who feels there is no end to the road, you’d have to admit that it certainly is getting steeper.
  • The cost of creating fabs for these process nodes is increasing drastically, forcing consolidation in the semiconductor manufacturing industry. Some predict there will be only 3 or 4 fabs in a few years. This cost is passed on in the cost of the semiconductor device. Net cost per gate may not be rising, but the cost to ante up with a set of masks at a new node certainly is.
  • From a device physics and circuit design perspective, we are hitting a knee in the curve where lower geometries are not able to deliver on the full speed increases and power reductions achieved at larger nodes without new “tricks” being employed.
  • Despite these challenges, ICs are still growing in complexity and so are the development costs, some say as high as $100M. Many of these ICs are complex SoCs with analog and digital content, multiple processor cores, and several 3rd party IP blocks. Designing analog and digital circuits in the same process technology is not easy. The presence of embedded processors means that software and hardware have intersected and need to be developed harmoniously … no more throwing the hardware over-the-wall to software. And all this 3rd party IP means that our success is increasingly dependent on the quality of work of others that we have never met.
  • FPGAs are eating away at ASIC market share because of all the factors above. The break-even quantity between ASIC and FPGA is increasing, which means more of the lower volume applications will choose FPGAs. Nonetheless, these FPGAs are still complex SoCs requiring verification methods similar to those used for ASICs, including concurrent hardware and software development.

There are no doubt many other factors, but these are the critical ones in my mind. So, then, what does all this mean for semiconductor design and EDA?

At the risk of using a metaphor, many feel we are at a “fork in the road”. One path leads straight ahead, continuing 2D scaling with new process and circuit innovations. Another path leads straight up, moving Moore’s law into the third dimension with die stacks in order to cost-effectively manage increasing complexity. And one path turns us 180 degrees around, asking us to look at the applications and software stack first and the semiconductor last. Certainly, 3 separate directions.

Which is the best path? Is there another path to move in? Perhaps a combination of these paths?

I’ll try to examine these questions in the next few posts. Next Post: Is 2D Scaling Really Dead or Just Mostly Dead?

__________

Answer to Riddle: Either man should be asked the following question: “If I were to ask you if this is the way I should go, would you say yes?” While asking the question, the hiker should be pointing at either of the directions going from the fork. The trick is that the two answers cancel out: the truth-teller answers honestly, while the liar is forced to lie about the lie he would have told, so a “yes” from either man means the road being pointed at is the right one.

harry the ASIC guy

The Burning Platform

Monday, March 1st, 2010

Although I was unable to attend DVCon last week, and I missed Jim Hogan and Paul McLellan presenting “So you want to start an EDA Company? Here’s how”, I was at least able to sit in on an interesting webinar offered by RTM Consulting entitled Achieving Breakthrough Customer Satisfaction through Project Excellence.

As you may recall, I wrote a previous blog post about a Consulting Soft Skills training curriculum developed by RTM in conjunction with Mentor Graphics for their consulting organization. Since that time, I’ve spoken on and off with RTM CEO Randy Mysliviec. During a recent conversation he made me aware of this webinar and offered one of the slots for me to attend. I figured it would be a good refresher, at a minimum, and if I came out of it with at least one new nugget or perspective, I was ahead of the game. So I accepted.

I decided to “live tweet” the webinar. That is to say, I posted tweets of anything interesting that I heard, all using the hash tag #RTMConsulting. If you want to view the tweets from that webinar, go here.

After 15 years in the consulting biz, I certainly had learned a lot, and the webinar was indeed a good refresher on some of the basics of managing customer satisfaction. There was a lot of material for the 2 hours that we had, and there were no real breaks, so it was very dense. The only downside is that I wish there had been some more time for discussion or questions, but that’s really a minor nit to pick.

I did get a new insight out of the webinar, and so I guess I’m ahead of the game. I had never heard of the concept of the “burning platform” before, especially as it applies to projects. The story goes that there was an oil rig in the North Sea that caught fire and was bound to be destroyed. One of the workers had to decide whether to stay on the rig or jump into the freezing waters. The fall might kill him and he’d face hypothermia within minutes if not rescued, but he decided to jump anyway, since probable death was better than certain death. According to the story, the man survived and was rescued. Happy ending.

The instructor observed that many projects are like burning platforms, destined for destruction unless radically rethought. In thinking back, I immediately thought of 2 projects I’d been involved with that turned out to be burning platforms.

The first was a situation where a design team was trying to reverse engineer an asynchronously designed processor in order to port it to another process. The motivation was that the processor (I think it was an ADSP 21 something or other) was being retired by the manufacturer and this company wanted to continue to use it nonetheless. We were called in when the project was already in trouble, significantly over budget and schedule and with no clear end in sight. After a few weeks of looking at the situation, we decided that there was no way they would ever be able to verify the timing and functionality of the ported design. We recommended that they kill this approach and start over with a standard processor core that could do the job. There was a lot of resistance, especially from the engineer whose idea it was to reverse engineer the existing processor. But, eventually the customer made the right choice and redesigned using an ARM core.

Another group at the same company also had a burning platform. They were on their 4th version of a particular chip and were still finding functional bugs. Each time they developed a test plan and executed it, there were still more bugs that they had missed. Clearly their verification methodology was outdated and insufficient, depending on directed tests and FPGA prototypes rather than more current measurable methods. We tried to convince them to use assertions, functional coverage, constrained random testing, etc. But they were convinced that they just had to fix the few known bugs and they’d be OK. From their perspective, it wasn’t worth all the time and effort to develop and execute a new plan. They never did take our recommendations and I lost track of that project. I wonder if they ever finished.

As I think about these 2 examples, I realize that “burning platform” projects have some characteristics in common. And they align with the 3 key elements of a project. To tell if you have a “burning platform” on your hands, you might ask yourself the following 3 questions:

  1. Scope - Are you spending more and more time every week managing issues and risks? Is the list growing, rather than shrinking?
  2. Schedule - Are you on a treadmill with regards to schedule? Do you update the schedule every month only to realize that the end date has moved out by a month, or more?
  3. Resources - Are the people that you respect the most trying to jump off of the project? Are people afraid to join you?

If you answered yes to at least 2 of these, then you probably have a burning platform project on your hands. It’s time to jump in the water. That is, it’s time to scrap the plan and rethink your project from a fresh perspective and come up with a new plan. Of course, this is not a very scientific way of identifying an untenable project, but I think it’s a good rule-of-thumb.
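As a crude illustration of that rule of thumb, here is a minimal sketch in Python. The function name and the yes/no inputs are hypothetical, made up for this example rather than taken from the RTM webinar:

    # Minimal sketch of the "burning platform" rule of thumb above:
    # answer the three questions (scope, schedule, resources) and flag
    # the project if at least two answers are yes.
    def is_burning_platform(scope_growing: bool,
                            schedule_slipping: bool,
                            people_jumping_ship: bool) -> bool:
        """Return True if at least 2 of the 3 warning signs are present."""
        return sum([scope_growing, schedule_slipping, people_jumping_ship]) >= 2

    # Example: the issues list keeps growing and the end date slips every month.
    print(is_burning_platform(True, True, False))  # True -> time to jump in the water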

There are other insights that I had from the webinar, but I thought I’d only share just the one. I don’t know if this particular webinar was recorded, but there are 2 more upcoming that you can attend. If you do, please feel free to live tweet the event like I did, using the #RTMConsulting hash tag.

But please, no “flaming” :-)

harry the ASIC guy

My Obligatory TOP 10 for 2009

Thursday, December 31st, 2009

[Image: “2009 To 2010” - photo by optical_illusion on Flickr (http://www.flickr.com/photos/optical_illusion/), CC BY 2.0]

What’s a blog without some sort of obligatory year end TOP 10 list?

So, without further ado, here is my list of the TOP 10 events, happenings, occurrences, and observations that I will remember from 2009. This is my list, from my perspective. Here goes:

  1. Verification Survey - Last February, as DVCon was approaching, I thought it would be interesting to post a quickie survey to see what verification languages and methodologies were being used. Naively, I did not realize the lengths to which fans of the various camps would go to rig the results in their favor. Nonetheless, the results ended up very interesting and I learned a valuable lesson on how NOT to do a survey.
  2. DVCon SaaS and Cloud Computing EDA Roundtable - One of the highlights of the year was definitely the impromptu panel that I assembled during DVCon to discuss Software-as-a-Service and Cloud Computing for EDA tools. My thanks to the panel guests, James Colgan (CEO @ Xuropa), Jean Brouwers (Consultant to Xuropa),  Susan Peterson (Verification IP Marketing Manager @ Cadence), Jeremy Ralph (CEO @ PDTi), Bill Alexander (VP Marketing @ Blue Pearl Software), Bill Guthrie (VP Marketing @ Numetrics). Unfortunately, the audio recording of the event was not of high enough quality to post, but you can read about it from others at the following locations:

    > 3 separate blog posts from Joe Hupcey (1, 2, 3)

    > A nice mention from Peggy Aycinena

    > Numerous other articles and blog posts throughout the year that were set in motion, to some extent, by this roundtable

  3. Despite predictions to the contrary, Magma is NOT dead. Cadence was NOT sold. Oh, and EDA is NOT dead either.
  4. John Cooley IS Dead - OK, he’s NOT really dead. But this year was certainly a turning point for his influence in the EDA space. It started off with John’s desperate attempt at a Conversation Central session at DAC to tell bloggers that their blogs suck and to convince them to just send him their thoughts. Those who took John up on his offer by sending their thoughts had to wait 4 months to see them finally posted by John in his December DAC Trip report. I had a good discussion on this topic with John earlier this year, which he asked me to keep “off the record”. Let’s just say, he just doesn’t get it and doesn’t want to get it.
  5. The Rise of the EDA Bloggers.
  6. FPGA Taking Center Stage - It started back in March when Gartner issued a report stating that there were 30 FPGA design starts for every ASIC start. That number seemed very high to me and to others, but that did not stop this 30:1 ratio from being quoted as fact in all sorts of FPGA marketing materials throughout the year. On the technical side, it was a year where the issues of verifying large FPGAs came front and center and where a lot of ASIC people started transitioning to FPGA.
  7. Engineers Looking For Work - This was one of the more unfortunate trends that I will remember from 2009 and hopefully 2010 will be better. Personally, I had difficulty finding work between projects. DAC this year seemed to be as much about finding work as finding tools. A good friend of mine spent about 4 months looking for work until he finally accepted a job at 30% less pay and with a 1.5 hour commute because he “has to pay the bills”. A lot of my former EDA sales and AE colleagues have been laid off. Some have been looking for the right position for over a year. Let’s hope 2010 is a better year.
  8. SaaS and Cloud Computing for EDA - A former colleague of mine, now a VP of Sales at one of the small but growing EDA companies, came up to me in the bar during DAC one evening and stammered some thoughts regarding my predictions of SaaS and Cloud Computing for EDA. “It will never happen”. He may be right and I may be a bit biased, but this year I think we started to see some of the beginnings of these technologies moving into EDA. On a personal note, I’m involved in one of those efforts at Xuropa. Look for more developments in 2010.
  9. Talk of New EDA Business Models - For years, EDA has bemoaned the fact that the EDA industry captures so little of the value ($5B) of the much larger semiconductor industry ($250B) that it enables. At the DAC Keynote, Fu-Chieh Hsu of TSMC tried to convince everyone that the solution for EDA is to become part of some large TSMC ecosystem in which TSMC would reward the EDA industry like some sort of charitable tax deduction. Others talked about EDA companies having more skin in the game with their customers and being compensated based on their ultimate product success. And of course there is the SaaS business model I’ve been talking about. We’ll see if 2010 brings any of these to fruition.
  10. The People I Got to Meet and the People Who Wanted to Meet Me - One of the great things about having a blog is that I got to meet so many interesting people that I would never have had an opportunity to even talk to otherwise. I’ve had the opportunity to talk with executives at Synopsys, Cadence, Mentor, Springsoft, GateRocket, Oasys, Numetrics, and a dozen other EDA companies. I’ve even had the chance to interview some of them. And I’ve met so many fellow bloggers and now realize how much they know. On the flip side, I’ve been approached by PR people, both independent and in-house. I was interviewed 3 separate times, once by email by Rick Jamison, once by Skype by Liz Massingill, and once live by Dee McCrorey. EETimes added my blog as a Trusted Source. For those who say that social media brings people together, I can certainly vouch for that.

harry the ASIC guy

An ASIC Guy Visits An FPGA World - Part II

Monday, June 22nd, 2009

[Image: Altera FPGA]

I mentioned a few weeks ago that I am wrapping up a project with one of my clients and beating the bushes for another project to take its place. As part of my search, I visited a former colleague who works at a small company in Southern California. This company designs a variety of products that utilize FPGAs exclusively (no ASICs), so I got a chance to understand a little bit more about the differences between ASIC and FPGA design. Here’s the follow-on then to my previous post An ASIC Guy Visits An FPGA World.

Recall that the first 4 observations from my previous visit to FPGA World were:

Observation #1 - FPGA people put their pants on one leg at a time, just like me.

Observation #2 - I thought that behavioral synthesis had died, but apparently it was just hibernating.

Observation #3 - Physical design of FPGAs is getting like ASICs.

Observation #4 - Verification of FPGAs is getting like ASICs.

Now for the new observations:

Observation #5 - Parts are damn cheap - According to the CTO of this company, Altera Cyclone parts can cost as little as $10-$20 each in sufficient quantities. A product that requires thousands or even tens of thousands of parts will still spend less on FPGAs than the cost of a 90nm mask set. For many non-consumer products with quantities in this range, FPGAs are compelling from a cost standpoint.

True, the high-end parts can cost thousands or even tens of thousands each (e.g. for the latest Xilinx Virtex 6). But considering that a Virtex 6 part is 45nm and has the gate-count equivalent of almost 10M logic gates, what would an equivalent ASIC cost?
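To put some rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are placeholders I made up for illustration, not Altera or Xilinx pricing, and the formula ignores real-world factors like board NRE, power, and performance:

    # Back-of-the-envelope break-even quantity between an FPGA and an ASIC.
    # All dollar figures below are illustrative placeholders, not vendor quotes.

    def break_even_units(asic_nre, asic_unit_cost, fpga_unit_cost):
        """Quantity at which total ASIC cost (NRE + units) equals total FPGA cost."""
        if fpga_unit_cost <= asic_unit_cost:
            return float("inf")  # the FPGA never costs more in total
        return asic_nre / (fpga_unit_cost - asic_unit_cost)

    # Hypothetical numbers: $1M of masks/NRE, $5 per ASIC, $15 per Cyclone-class FPGA.
    q = break_even_units(asic_nre=1_000_000, asic_unit_cost=5.0, fpga_unit_cost=15.0)
    print(f"Break-even at roughly {q:,.0f} units")  # ~100,000 units

Below that quantity the FPGA wins on cost alone, which is exactly why the break-even point keeps moving up as mask sets get more expensive.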

Observation #6 - FPGA verification is different (at least for small to medium sized FPGAs) - Since it is so easy, fast, and inexpensive (compared to an ASIC) to synthesize and place and route an FPGA, much more of the functional verification is done in the lab on real hardware. Simulation is typically used to get a “warm and fuzzy” that the design is mostly functional, and then the rest is done in the lab with the actual FPGA. Tools like Xilinx ChipScope allow logic-analyzer-like access into the device, providing some, but not all, of the visibility that exists in a simulation. And once bugs are found, they can be fixed with an RTL change and by reprogramming the FPGA.

One unique aspect of FPGA verification is that it can be done in phases or “spirals”. Perhaps only some of the requirements for the FPGA are complete or only part of the RTL is available. No problem. One can implement just that part of the design that is complete (for instance just the dataplane processing) and program the part. Since the same part can be used over and over, the cost to do this is basically $0. Once the rest of the RTL is available, the part can be reprogrammed again.

Observation #7 - FPGA design tools are all free or dirt cheap - I think everybody knows this fact already, but it really hit home talking to this company. Almost all the tools they use for design are free or very inexpensive, yet the tools are more than capable of “getting the job done”. In fact, the company probably could not operate in the black if it had to make the kind of investment that ASIC design tools require.

Observation #8 - Many tools and methods common in the ASIC world are still uncommon in this FPGA world - For this company, there is no such thing as logical equivalence checking. Tools that perform formal verification of designs (formal proof), SystemVerilog simulation, OVM, VMM … none of these are used at all. Perhaps they’ll be used for the larger designs, but right now the company is getting along fine without them.

__________

FPGA verification is clearly the area that is the most controversial. In one camp are the “old skool” FPGA designers that want to get the part in the lab as soon as possible and eschew simulation. In the other camp are the high-level verification proponents who espouse the merits of coverage-driven and metric-driven verification and recommend achieving complete coverage in simulation. I think it would really be fun to host a panel discussion with representatives from both camps and have them debate these points. I think we’d learn a lot.

Hmmm…

harry the ASIC guy

An ASIC Guy Visits An FPGA World

Thursday, June 4th, 2009

I hear so often nowadays that FPGAs are the new ASICs. So I decided to take off half a day and attend a Synopsys FPGA Seminar just down the street from where I’m working (literally a 5 minute walk). I would like to share some observations as an ASIC guy amongst FPGA guys and gals.

Observation #1 - FPGA people put their pants on one leg at a time, just like me. (Actually, I sometimes do both legs at the same time, but that’s another story). I had been led to believe that there was some sort of secret cabal of FPGA people that all knew the magic language of FPGAs that nobody else knew. Not the case. Although there is certainly a unique set of terminology and acronyms in the FPGA arena (LUTs, DCM, Block RAM) they are all fairly straightforward once you know them.

Observation #2 - I thought that behavioral synthesis had died, but apparently it was just hibernating. There is behavioral synthesis capability in some of the higher-level FPGA tools. I’ve never used it, so I can’t say one way or the other. But it sure was a blast from the past (circa 2000). Memories of SPW, Behavioral Compiler, Cossap, Monet, Matisse.

Observation #3 - Physical design of FPGAs is getting like ASICs. There are floorplanning tools, tools that back-annotate placement back into synthesis, tools that perform synthesis and placement together, tools for doing pre-route and post-route timing analysis. Made me think of Floorplan Manager, Physical Compiler, and IC Compiler.

Observation #4 - Verification of FPGAs is getting like ASICs. It can take a day to resynthesize and route a large FPGA to get back in the lab debugging. That’s an unacceptable turnaround time for debugging an FPGA with lots of bugs. Assertions (SVA, PSL), high-level verification languages and methodologies (SystemVerilog / OVM / VMM), and cross-domain checkers are methods being stolen from the ASIC design world to address large FPGA verification. The trick is deciding when there has been enough simulation to start debug in the lab.

After this session, I think this ASIC guy is going to feel right at home in the FPGA world of the future.

harry the ASIC guy

(Read Part II of this series here)

5 Degrees Of Consultant Twiteration

Thursday, April 23rd, 2009

There is a consultant working with one of my clients with whom I’ve developed a good working relationship. Today he came by and asked me if I knew of someone to help on another project with a different client. The area of expertise, board design, was not one in which I had a lot of contacts. So I decided to Twitter the opportunity:

3:20pm harrytheASICguy: Friend has short term need to design a board for cons elec startup in SoCal. Contact me if you r interested. Please retweet.

The post got retweeted 3 times (to my knowledge). At 7:55pm I got a reference to a board designer and hooked him up with my consultant buddy.

The request went from (1) the customer to (2) my buddy to (3) me to (4) another guy who recommended (5) the board designer. I don’t know the guy, or whether he’ll get the job or work out, but the speed with which a qualified candidate was identified was remarkable: just slightly more than 4 1/2 hours. Of course, it would have been a lot less if I had more board design followers on Twitter, and that is the point.

Twitter, for all of its annoyances (and there are many), provides the fastest way to communicate to a large audience today. For identifying possible candidates to fill job opportunities, permanent or temporary, Twitter seems ideally suited.

So, if you are one of the unlucky ones to be looking for another job or another client, you need to get on Twitter. Here are 20 Tips to Twitter Job Search Success. Good luck.

harry the ASIC guy

A Tale of Two Booths - Certess and Nusym

Tuesday, June 10th, 2008

I had successfully avoided the zoo that is Monday at DAC and spent Tuesday zig-zagging the exhibit halls looking for my target list of companies to visit. (And former EDA colleagues, now another year older, greyer, and heavier). Interestingly enough, the first and last booths I visited on Tuesday seemed to offer opposite approaches to address the same issue. It was the best of times, it was the worst of times.

A well-polished street magician got my attention first at the Certess booth. After a few card tricks, finding the card I had picked out of the deck, he told me that it was as easy for him to find the card as it was for Certess to find the bugs in my design. Very clever!!! Someone must have been pretty proud they came up with that one. In any case, I’d had some exposure to Certess previously and was interested enough to invest 15 minutes.

Certess’ tool does something they call functional qualification. It’s kinda like ATPG fault grading for your verification suite. Basically, it seeds your DUT with potential bugs, then considers a bug “qualified” if the verification suite would cause the bug to be controlled and observed by a checker or assertion. If you have unqualified bugs (i.e. aspects of your design that are not tested), then there are holes in your verification suite.

This is a potentially useful tool since it helps you understand where the holes are in your verification suite. What next? Write more tests and run more vectors to get to those unqualified bugs. Ugh….more tests? I was hoping this would reduce the work, not increase it!!! This might be increasing my confidence, but life was so much simpler when I could delude myself that my test suite was actually complete.
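For anyone who hasn’t seen this kind of tool, here is a toy sketch in Python of the fault-seeding idea described above. It is only an illustration of the concept, not Certess’s actual algorithm; the adder “design”, the mutations, and the test suite are all made up:

    # Toy sketch of "functional qualification": seed a fake bug (mutation) into a
    # stand-in design, re-run the test suite, and call the bug detected only if
    # some check actually fails. Undetected mutations point at verification holes.

    def design(a, b, bug=None):
        """Trivial stand-in for a DUT: an adder, optionally mutated."""
        if bug == "off_by_one":
            return a + b + 1
        if bug == "drop_carry":
            return (a + b) & 0xFF
        return a + b

    def test_suite(dut):
        """Stand-in verification suite: returns True if every check passes."""
        vectors = [(1, 2), (10, 20)]   # note: no vectors that exercise an overflow
        return all(dut(a, b) == a + b for a, b in vectors)

    for bug in ("off_by_one", "drop_carry"):
        detected = not test_suite(lambda a, b: design(a, b, bug=bug))
        print(bug, "->", "qualified (suite catches it)" if detected
                         else "NOT caught: hole in the verification suite")

Here the “drop_carry” mutation survives because the test suite never exercises an overflow, which is exactly the kind of hole this approach is meant to expose.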

Whereas the magician caught my attention at the Certess booth, I almost missed the Nusym booth as it was tucked away in the back corner of the Exhibit Hall. Actually, they did not really have a booth, just a few demo suites with a Nusymian guarding the entrance, armed with nothing more than an RFID reader and a box of Twinkies. (I did not have my camera, so you’ll have to use your imagination). After all the attention they had gotten at DVCon and from Cooley, I was surprised that “harry the ASIC guy” could just walk up and get a demo in the suite.

(Disclaimer: There was no NDA required and I asked if this was OK to blog about and was told “Yup”, so here goes…)

The cool technology behind Nusym is the ability to do on-the-fly (during simulation) coverage analysis and reactively focused vector generation. Imagine a standard SystemVerilog testbench with constrained random generators, checkers, and coverage groups defining your functional coverage goal. Using standard constrained random testing, the generators create patterns independent of what is inside the DUT and what is happening with the coverage monitors. Whether or not you hit actual coverage monitors doesn’t matter. The generators will do what they will do, perhaps hitting the same coverage monitors over and over and missing others altogether. Result: Lots of vectors run, insufficient functional coverage, more tests needed (random or directed).

The Nusym tool (no name yet) understands the DUT and does on-the-fly coverage analysis. It builds an internal model that includes all of the branches in your DUT and all of your coverage monitors. The constraint solver then generates patterns that intentionally try to reach the coverage monitors. In this way, it can get to deeply nested and hard-to-reach coverage points in a few vectors, whereas constrained random may take a long time or never get there. Also, when it triggers a coverage monitor, it crosses that monitor off the list and knows it does not have to hit it again, so the next vectors will try to hit something new. As compared to Certess, this actually reduces the number of tests I need to write. In fact, they recommend just having a very simple generator that defines the basic constraints and focusing most of the energy on writing the coverage monitors. Result: Far fewer vectors run, high functional coverage, no more tests needed.
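To see why that feedback loop matters, here is a toy comparison in Python between blind constrained-random generation and a generator that uses coverage feedback. This is my own illustration of the idea, not Nusym’s solver; the 16 “coverage bins” and the trivial way of solving for a target bin are invented for the example:

    import random

    # Toy contrast: blind constrained-random vs. coverage-feedback generation.
    # Once a bin is hit, the directed generator stops spending stimulus on it
    # and aims at the bins that remain.

    BINS = set(range(16))              # pretend functional coverage: 16 bins

    def bin_of(stimulus):              # which coverage bin a stimulus lands in
        return stimulus % 16

    def blind_random(max_vectors=500):
        hit = set()
        for n in range(1, max_vectors + 1):
            hit.add(bin_of(random.randrange(1_000_000)))
            if hit == BINS:
                return n               # vectors needed to close coverage
        return max_vectors

    def coverage_directed(max_vectors=500):
        hit = set()
        for n in range(1, max_vectors + 1):
            target = min(BINS - hit)                               # pick an unhit bin
            hit.add(bin_of(16 * random.randrange(1000) + target))  # "solve" for it
            if hit == BINS:
                return n
        return max_vectors

    print("blind constrained-random:", blind_random(), "vectors to full coverage")
    print("coverage-directed:       ", coverage_directed(), "vectors to full coverage")

The blind generator typically needs around 50 vectors to stumble into all 16 bins (and real coverage points are far harder to hit by accident), while the directed version closes coverage in exactly 16.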

It sounds too good to be true, but it was obvious that these guys really believe in this tool and that they have something special. They are taking it slow. Nusym does not have a released product yet, but they have core technology that they are working on with a few customers/partners. They are also focusing on the core of the market: Verilog DUTs and SystemVerilog testbenches. I would not throw out my current simulator just yet, but this seems like unique and very powerful technology that can get to coverage closure orders of magnitude faster than current solutions.

If anyone else saw their demo or has any comments, please chime in.

harry the ASIC guy

Is IP a 4-letter Word ???

Friday, May 9th, 2008

As I’ve been thinking a lot about Intellectual Property (IP) lately, I recently recalled a consulting project that I had led several years ago … I think it was 2002. The client was designing a processor chip that had a PowerPC core and several peripherals. The core and some of the peripherals were purchased IP and our job was to help with the verification and synthesis of the chip.

Shaun was responsible for the verification. As he started to verify one of the interfaces, he began to uncover bugs in the associated peripheral, which was purchased IP. We contacted the IP provider and were told most assuredly that it had all been 100% verified and silicon proven. But we kept finding bugs. Eventually, faced with undeniable proof of the poor quality of their IP, they finally fessed up. It seems the designer responsible for verifying the design had left the company halfway through the project. They never finished the verification. Ugh 1!

Meanwhile, Suzanne was helping with synthesis of the chip, including the PowerPC core. No matter what she did, she kept finding timing issues in the core. Eventually, she dug into the PowerPC core enough to figure out what was going on. Latches! They had used latches in order to meet timing. All well and good, but the timing constraints supplied with the design did not reflect any of that. Ugh 2!

About a week later, I was called to a meeting with Gus, who was the client’s project lead’s boss’s boss. As I walked into his office, he said something that I’ll never forget …

“I’m beginning to believe that IP is a 4-letter word”.

How true. Almost every IP I have ever encountered, be it a complex mixed-signal hard IP block, a synthesizable processor core, or an IO library … they all have issues. How can an industry survive when the majority of its products don’t work? Do you think the HDTV market would be around if more than half the TVs did not work? Or any market. Yet this is tolerated for IP.

That is not to say that some IP providers don’t take quality seriously. Synopsys learned its lesson many years ago when it came out with a PCI core that was a quality disaster. To their credit, they took the failure as a learning opportunity, developed a robust reuse methodology along with Mentor Graphics, and reintroduced a PCI core that is still in use today.

Still … no IP is 100% perfect out-of-the-box. IP providers need to have a relationship and business model with their customers that encourages open sharing of design flaws. This is a two-way street. The IP provider must notify its customers when it finds bugs, and the customer must inform the IP provider when it finds bugs. As an example, Synopsys and many other reputable IP providers will inform customers of any design issue immediately, a transparency that I could only have prayed for from the company providing IP to my client. In return, they need their customers’ support in reporting design issues back to them. Sounds simple, right?

Maybe not. I had another client who discovered during verification that there was a bug in a USB Host Controller IP. They had debugged and corrected the problem already, so I asked the project manager if they had informed the IP provider yet. He refused. The rationale? He wanted his competition to have the buggy design while he had the only fix!

We, as users, play a role because we have a responsibility to report bugs for the good of all of us using the product. Karen Bartleson talks about a similar situation with her luggage provider, where customers are encouraged to send back their broken luggage in order to help the company improve their luggage design. The luggage gets better and better as a result.

So, besides reporting bugs and choosing IP carefully, what else can we as designers do to drive IP quality? I have one idea. One day, when I have some free time, I’d like to start an independent organization that would objectively assess and grade IP. We’d take it through all the tools and flows and look at all the views, logical and physical, and come out with an assessment. This type of open grading system would encourage vendors to improve their IP and would allow us to make more informed choices rather than playing Russian Roulette.

I’m half inclined to start one today … anybody with me?

harry the ASIC guy

The Contrary ASIC Designer

Wednesday, April 23rd, 2008

Last Saturday night I went to a family Seder to celebrate the first night of Passover. You know, like in The Ten Commandments with Charlton Heston. As part of the Seder, we read a story of 4 sons, one wise, one contrary, one simple, and one unable to ask a question.

This got me thinking about some of the contrary ASIC designers I’ve worked with through the years … you know the type:

1. If everyone else wants to take road A, he wants to take road B.
2. If everyone else wants to take road B, he wants to take road C.
3. If you’ve got a plan, he’ll tell you why it won’t work.
4. Once he takes a stand on an issue, he’ll never give up.
5. He doesn’t really care what others think about him.
6. Every battle is worth fighting … to the death.

The contrarian ASIC designer can sap the energy and optimism out of a design team with all his negativity. Obviously, not good. So, why would anyone want to work with a contrarian?

Well, I’m here to tell you that the contrarian gets a bad rap and he can be a critical member of the team. First, some background…

Most law schools use a method of contrarian argument based upon the Socratic method that goes something like this:

• A legal decision to consider is chosen
• One student or the professor argues one interpretation
• Another student is assigned to argue the opposite position.

It does not matter what the individuals actually believe. They need to argue their assigned position as vigorously as they can. The goal is not for there to be a winner or loser in the argument. The goal is for the students to gain as complete and thorough an understanding of the issue under consideration as possible. And only by giving both sides equal status can this be done. In the end, the law students emerge better prepared.

So, again, why would anyone want to work with a contrarian? In short, because the contrarian keeps the rest of us honest.

Consider the 6 behaviors of a contrarian that I mentioned earlier. Viewed within the context of law school argument, the contrarian is simply holding up his end of the bargain, to represent the opposite viewpoint. He’s the one most likely to find the holes that would otherwise eventually kill the project. Sure, he may find 9 holes that are not real for every real hole. But the one real hole he finds probably never would have been found by anyone else. In that sense, the contrarian is actually the ultimate optimist, because he’s the one trying the hardest to protect project success.

So, when you see that Contrarian on your project the next time, give him a hug…well, maybe not.

The Revolution Will Not Be Televised!!!

Thursday, April 3rd, 2008

My friend Ron has a knack for recognizing revolutionary technologies before most of us. He was one of the first to appreciate the power of the browser and how it would transform the internet, previously used only by engineers and scientists. He was one of the first and best podcasters. And now he’s become a self-proclaimed New Media Evangelist, preaching the good news of Web 2.0 and making it accessible to “the rest of us”.

Most of us are familiar with mainstream Web 2.0 applications, whether we use them or our friends use them or our kids use them. Social and professional networks such as MySpace, Facebook, and LinkedIn. Podcasts in iTunes. Blogging sites on every topic. Virtual worlds such as Second Life. Collaboration tools such as Wikipedia. File sharing sites such as YouTube and Flickr. Social bookmarking sites such as Digg and Technorati. Open source publishing tools such as WordPress and Joomla. Using these technologies we’re having conversations, collaborating, and getting smarter in ways that were unimaginable just 5 years ago. Imagine, a rock climber in Oregon can share climbing techniques with a fellow climber in Alice Springs. And mostly for free, save for the cost of the internet connection.

When we think of Web 2.0, we tend to think of teenagers and young adults. But this technology was invented by us geeks, and so it’s no surprise that the ASIC design world is also getting on board. Here are some examples from the ASIC design industry:

Social media is networking ASIC designer to ASIC designer, enabling us to get smarter faster. But that’s not all. Many forward-looking companies have recognized the opportunity to talk to their customers directly. About 6 months ago, Synopsys launched several blogs on its microsite. Xilinx also has a User Community and a blog. It’s great that this is happening, but does it really make much of a difference? Consider what I believe could be a watershed event:

A few months ago, JL Gray published a post on his Cool Verification blog entitled The Brewing Standards War - Verification Methodology. As expected, verification engineers chimed in and expressed their ardent opinions and viewpoints. What came next was not expected … stakeholders from Synopsys and Mentor joined the conversation. The chief VMM developer from Synopsys, Janick Bergeron, put forth information to refute certain statements that he felt were erroneous. A marketing manager from Mentor, Dennis Brophy, offered his views on why OVM was open and VMM was not. And Karen Bartleson, who participates in several standards committees for Synopsys, disclosed Synopsys’ plan to encourage a single standard by donating VMM to Accellera.

From what I’ve heard, this was one of the most viewed ASIC related blog postings ever (JL: Do you have any stats you can share?). But did it make a difference in changing the behavior of any of the protagonists? I think it did and here is why:

  • This week at the Synopsys Users Group meeting in San Jose, the VMM / OVM issues were the main topic of questioning for CEO Aart de Geus after his keynote address. And the questions picked up where they left off in the blog post … Will VMM ever be open and not just licensed? Is Synopsys trying to talk to Mentor and Cadence directly? If we have access to VMM, can we run it on other simulators besides VCS?
  • Speaking to several Synopsoids afterwards, I discovered that the verification marketing manager referenced this particular Cool Verification blog posting in an email to an internal Synopsys verification mailing list. It seems he approved of some of the comments and wanted to make others in Synopsys aware of these customer views. Evidently he sees these opinions as valuable and valid. Good for him.
  • Speaking to some at Synopsys who have a say in the future of VMM, I believe that Synopsys’ decision to donate VMM to Accellera has been influenced and pressured, at least in part, by the opinions expressed in the blog posting and the subsequent comments. Good for us.

I’d like to believe that the EDA companies and other suppliers are coming to recognize what mainstream companies have recognized … that the battle for customers is decreasingly being fought with advertisements, press releases, glossy brochures, and animated PowerPoint product pitches. Instead, as my friend Ron has pointed out, I am able to talk to “passionate content creators who know more about designing chips than any reporter could ever learn”, and find out what they think. Consider these paraphrased excerpts from the cluetrain manifesto: the end of business as usual:

  • The Internet is enabling conversations among human beings that were simply not possible in the era of mass media. As a result, markets are getting smarter, more informed, more organized.
  • People in networked markets have figured out that they get far better information and support from one another than from vendors.
  • There are no secrets. The networked market knows more than companies do about their own products. And whether the news is good or bad, they tell everyone.
  • Companies that don’t realize their markets are now networked person-to-person, getting smarter as a result and deeply joined in conversation are missing their best opportunity.
  • Companies can now communicate with their markets directly. If they blow it, it could be their last chance.

In short, this ASIC revolution will not be televised!!!

harry the ASIC guy