Posts Tagged ‘Mentor’

Altium Looking to Gain Altitude in the Cloud

Sunday, January 30th, 2011

Altium Enterprise Vault System

Over the holiday break, I came across an interview with Altium CIO Alan Perkins that caught my eye. Sramana Mitra has been profiling interesting cloud-based businesses, and this interview focused on how this EDA company was planning to move into the cloud. I wasn’t able to talk to Alan Perkins directly, but I was able to find out more through their folks in the US (the company is based in Australia). It was interesting enough to warrant a post.

I knew very little about Altium before seeing this interview, and maybe you don’t know much about them either, so here is a little background. Based in Australia, Altium is a small (~$50M) EDA company focused primarily on the design of printed circuit boards with FPGAs and embedded software. They formed from a company called Protel about 10 years ago and most recently gained attention when they acquired Morfik, a company that offers an IDE for developing web apps (more on that later). According to some data I saw and from what they told me, they added 1700 new customers (companies, not seats) in 2010 just in the US! So, they may be the best kept secret in a long while. (Ironically, the next day at work after I spoke to Altium, I spoke to someone at another company that was using Altium to design a PC board for us.)

According to Altium, their big differentiator is that they have a database-centric offering, as compared to tool-flow-centric offerings like Cadence OrCAD and Allegro and Mentor’s Board Station and Expedition and related tools. I’m not an EDA developer, so I won’t pretend to understand the nuances of one versus the other. However, when I think of “database-centric”, I think of “frameworks”. I know it’s been almost 20 years since those days, and things have changed, so maybe database-centric makes a lot of sense now. OpenAccess is certainly a good thing for the industry, but that is because it’s an “open standard” while Altium’s database is not. Anyway, enough on this matter because, as I said, I’m not an EDA developer and don’t want to get in too deep here.

A few years ago, I wrote a blog post entitled “Is IP a 4-Letter Word?”. The main thrust of that post was that IP quality is rather poor in general and that there needs to be some sort of centralized authority to grade IP quality and to certify its use. So, when Altium told me they plan to enable a marketplace for design IP by creating “design vaults” in the cloud, my first question was “who is going to make sure this IP is any good?” Is this going to be the iPhone app model, where Apple vets and approves every app? Or is it going to be the Android model: caveat emptor?

To Altium’s credit, they have similar concerns, which is why they are planning to move slowly. With their introduction of Altium Designer 10, Altium will first provide its own vetted IP in the cloud. In the past, this IP was distributed to the tool users on their site, but having it in the cloud will make it easier to distribute (pull, instead of push) and also allow for asynchronous releases and updates. The tools will automatically detect if you are using an IP that has been revved, and ask you if you want to download the new version.
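Conceptually, the pull model described above amounts to comparing the versions of the IP used in a design against what the vault currently holds. Here is a minimal sketch of that idea; the names, data structures, and version scheme are purely illustrative assumptions, not Altium’s actual design or API:

```python
# Hypothetical sketch of a pull-based IP update check. The "vault" and the
# design's IP usage are modeled as dicts of {ip_name: version_tuple}.
# None of these names reflect Altium's real data model.

def check_for_updates(used_ip, vault):
    """Return (name, local_version, latest_version) for each IP block
    in the design that has a newer version available in the vault."""
    stale = []
    for name, local_version in used_ip.items():
        latest = vault.get(name)
        if latest is not None and latest > local_version:
            stale.append((name, local_version, latest))
    return stale

# A design using two vetted blocks, one of which has been revved upstream:
vault = {"uart_core": (2, 1), "spi_core": (1, 0)}
design = {"uart_core": (2, 0), "spi_core": (1, 0)}
print(check_for_updates(design, vault))  # [('uart_core', (2, 0), (2, 1))]
```

The point of the sketch is simply that with IP hosted centrally, the tool can discover a new release the moment it is published, rather than waiting for a pushed distribution to reach every site.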

Once they have this model understood, Altium then plans to open the model up to 3rd party IP which can be offered for free, or licensed, or maybe even traded for credits (like Linden dollars in Second Life). It’s an interesting idea, but one that requires some pretty significant shifts in personal and corporate cultures. I think that sharing of small “jelly bean” type IP is achievable because none of it is very differentiated. But once you get to IP that required some significant time to design, why share it unless IP is your primary business? The semiconductor industry is still fiercely competitive and I think that will be a significant barrier. Not to mention that it takes something like 4x-5x as much effort to create an IP that is easily reusable as compared to creating it just to be used once.

Being a tool for the design of FPGAs is an advantage for Altium, since the cost of repairing an FPGA bug is so much less than for an SoC or ASIC. For FPGAs, the rewards may be greater than the risks, especially for companies that are doing ASICs for the first time. And this is the market that Altium is aiming for … the thousands of companies that will have to design their products to work on the internet-of-things. Companies that design toasters that have never had any digital electronics and now have to throw something together. They will be the ones that will want to reuse these designs because they don’t have the ability to design them in-house.

Which brings us to Morfik, the company that Altium acquired that does IDEs for web apps. It’s those same companies designing internet-enabled toasters that will also need to design a web app for their customers to access the toaster. So if Altium sells the web app and the IP that lets the toaster talk to the web app, then Altium provides a significant value to the toaster company. That’s the plan.

Still, the cloud aspect is what interests me the most. Even if designers are reluctant to enter this market, the idea of having this type of central repository is best enabled by the cloud. The cloud can enable collaboration and sharing much better than any hosted environment. And it can scale as large and as quickly as needed. It allows a safe sort of DMZ where IP can be evaluated by a customer while still protecting the IP from theft.

This is not by any means a new idea either. OpenCores has been around for more than a decade offering a repository for designers to share and access free IP. I spoke with them a few years ago and at the time the site was used mainly by universities and smaller companies, but their OpenRISC processor has seen some good usage, so it’s a model that can work.

I’m anxious to see what happens over time with this concept. Eventually, I think this sort of sharing will have to happen and it will be interesting to see how this evolves.

harry the ASIC guy

My Obligatory TOP 10 for 2009

Thursday, December 31st, 2009

2009 To 2010

http://www.flickr.com/photos/optical_illusion/ / CC BY 2.0

What’s a blog without some sort of obligatory year end TOP 10 list?

So, without further ado, here is my list of the TOP 10 events, happenings, occurrences, observations that I will remember from 2009. This is my list, from my perspective, of what I will remember. Here goes:

  1. Verification Survey - Last February, as DVCon was approaching, I thought it would be interesting to post a quickie survey to see what verification languages and methodologies were being used. Naively, I did not realize to what extent the fans of the various camps would go to rig the results in their favor. Nonetheless, the results ended up very interesting and I learned a valuable lesson on how NOT to do a survey.
  2. DVCon SaaS and Cloud Computing EDA Roundtable - One of the highlights of the year was definitely the impromptu panel that I assembled during DVCon to discuss Software-as-a-Service and Cloud Computing for EDA tools. My thanks to the panel guests, James Colgan (CEO @ Xuropa), Jean Brouwers (Consultant to Xuropa),  Susan Peterson (Verification IP Marketing Manager @ Cadence), Jeremy Ralph (CEO @ PDTi), Bill Alexander (VP Marketing @ Blue Pearl Software), Bill Guthrie (VP Marketing @ Numetrics). Unfortunately, the audio recording of the event was not of high enough quality to post, but you can read about it from others at the following locations:

    > 3 separate blog posts from Joe Hupcey (1, 2, 3)

    > A nice mention from Peggy Aycinena

    > Numerous other articles and blog posts throughout the year that were set in motion, to some extent, by this roundtable

  3. Predictions to the contrary, Magma is NOT dead. Cadence was NOT sold. Oh, and EDA is NOT dead either.
  4. John Cooley IS Dead - OK, he’s NOT really dead. But this year was certainly a turning point for his influence in the EDA space. It started off with John’s desperate attempt at a Conversation Central session at DAC to tell bloggers that their blogs suck and to convince them to just send him their thoughts. Those who took John up on his offer by sending their thoughts waited 4 months to see them finally posted by John in his December DAC Trip report. I had a good discussion on this topic with John earlier this year, which he asked me to keep “off the record”. Let’s just say, he just doesn’t get it and doesn’t want to get it.
  5. The Rise of the EDA Bloggers.
  6. FPGA Taking Center Stage - It started back in March when Gartner issued a report stating that there were 30 FPGA design starts for every ASIC start. That number seemed very high to me and to others, but that did not stop this 30:1 ratio from being quoted as fact in all sorts of FPGA marketing materials throughout the year. On the technical side, it was a year where the issues of verification of large FPGAs came front-and-center and where a lot of ASIC people started transitioning to FPGA.
  7. Engineers Looking For Work - This was one of the more unfortunate trends that I will remember from 2009 and hopefully 2010 will be better. Personally, I had difficulty finding work between projects. DAC this year seemed to be as much about finding work as finding tools. A good friend of mine spent about 4 months looking for work until he finally accepted a job at 30% less pay and with a 1.5 hour commute because he “has to pay the bills”. A lot of my former EDA sales and AE colleagues have been laid off. Some have been looking for the right position for over a year. Let’s hope 2010 is a better year.
  8. SaaS and Cloud Computing for EDA - A former colleague of mine, now a VP of Sales at one of the small but growing EDA companies, came up to me in the bar during DAC one evening and stammered some thoughts regarding my predictions of SaaS and Cloud Computing for EDA. “It will never happen”. He may be right and I may be a bit biased, but this year I think we started to see some of the beginnings of these technologies moving into EDA. On a personal note, I’m involved in one of those efforts at Xuropa. Look for more developments in 2010.
  9. Talk of New EDA Business Models - For years, EDA has bemoaned the fact that the EDA industry captures so little of the value ($5B) of the much larger semiconductor industry ($250B) that it enables. At the DAC Keynote, Fu-Chieh Hsu of TSMC tried to convince everyone that the solution for EDA is to become part of some large TSMC ecosystem in which TSMC would reward the EDA industry like some sort of charitable tax deduction. Others talked about EDA companies having more skin in the game with their customers and being compensated based on their ultimate product success. And of course there is the SaaS business model I’ve been talking about. We’ll see if 2010 brings any of these to fruition.
  10. The People I Got to Meet and the People Who Wanted to Meet Me - One of the great things about having a blog is that I got to meet so many interesting people that I would never have had an opportunity to even talk to. I’ve had the opportunity to talk with executives at Synopsys, Cadence, Mentor, Springsoft, GateRocket, Oasys, Numetrics, and a dozen other EDA companies. I’ve even had the chance to interview some of them. And I’ve met so many fellow bloggers and now realize how much they know. On the flip side, I’ve been approached by PR people, both independent and in-house. I was interviewed 3 separate times, once by email by Rick Jamison, once by Skype by Liz Massingill, and once live by Dee McCrorey. EETimes added my blog as a Trusted Source. For those who say that social media brings people together, I can certainly vouch for that.

harry the ASIC guy

Synopsys Synphony Synopsis

Monday, October 12th, 2009

I was contacted a few weeks ago by Synopsys’ PR agency to see if I’d be interested in covering an upcoming product announcement. I usually ignore these “opportunities” since the information provided is usually carefully wordsmithed marketing gobbledygook and not enough for me to really form an opinion. However, it turned out that this announcement was on a subject I know a little bit about, so I took them up on their offer.

The announcement was “embargoed”, that is, I was not to make it public until today. Embargoes are a vestige of the days when traditional journalism ruled the roost and when PR departments thought they could control the timing of their message. I don’t think embargoes benefit companies anymore since news is reported at light speed (literally) and people will write what they want when they want. Still, I consider it a sort of gentleman’s agreement, so I’m not writing about it until today.

I also waited a little bit until the “mainstream press” wrote their articles. That lets me point you to the best of them and conserve the space here for my own views, rather than regurgitating the press release and nuts and bolts.

(Update: Here is a very good description of the Synphony flow from Ron Wilson).

Today, Synopsys announced a new product called Synphony High Level Synthesis. You can read about this here. Basically, Synopsys is introducing a high level synthesis (aka behavioral synthesis) product that takes as its input Matlab M-Code and produces RTL Code, a cycle accurate C-model, and a testbench for simulation. Since I have not used the tool, I cannot comment on the capabilities or the quality of results or compare it to other tools on the market. However, I have had some past experience with tools like Matlab (specifically SPW) and Synphony (specifically Behavioral Compiler). So, here are my thoughts, observations, opinions that come to mind.

  1. Synopsys, once the leader in behavioral synthesis, is now the follower - When Synopsys introduced Behavioral Compiler over a decade ago they were the first to preach the gospel of high-level synthesis and all the associated benefits. Architectural optimization. Faster simulation. Bridging the gap between system design and ASIC design. Smaller and easier to understand code. Dogs and cats living together. The promises never fully materialized and Synopsys seemingly moved out of the market. Meanwhile, Mentor introduced Catapult C, Cadence introduced C-to-Silicon, and several others including Forte, Agility, Bluespec, Synfora, ChipVision, and AutoESL introduced their own high-level synthesis tools. Now, after acquiring Synplify DSP through Synplicity, Synopsys is finally re-entering the market (at least for ASIC design) with Synphony. The hunted have become the hunters.
  2. Synphony takes M-code from Matlab as its only source - Whereas most (but not all) other high-level synthesis tools input C-like languages, Synopsys has chosen to input M-code only, at least for now. According to Chris Eddington, who is Director of Product Marketing for System-Level Products at Synopsys (according to his LinkedIn profile), approximately 60% of those who say they do “high-level design” are using M-code or some form of C (ANSI C, C++, SystemC) for some portion of their design activities. Of those, slightly more use the C variants than M-code, which means that somewhere close to 25% of all ASIC designers could be a possible market for this tool.
  3. Synopsys can try to leverage the Matlab installed base - As mentioned above, Synopsys estimates that 25% of high-level designers could use the Synphony tool which is a pretty big market. By targeting mainly algorithmic design, not control logic, Synopsys can try to serve the Matlab installed base with a more narrowly targeted offering which should make it easier to support. It also allows Synopsys to avoid a bloody battle over C dominance and to pursue a blue ocean strategy with Matlab’s installed base. Interestingly though, there is no partnership with MathWorks implied by this announcement.
  4. Synphony leverages existing IP libraries - Libraries already exist for many common functions that were available for the Synplify DSP tool. The library elements are available as well for Synphony, allowing the designer to specify his functionality using this library or using M-code as the source.
  5. An FPGA tool is being adapted for ASIC - This is probably one of the first times that a tool initially developed for FPGAs (Synplify DSP) is being adapted for ASICs. It’s usually the other way around (e.g. FPGA Compiler grew out of Design Compiler). It should be interesting to see if the FPGA tool can “cut-it” in the ASIC world.
  6. Ties to implementation are seemingly tenuous - A tool that can take M-code as its input and produce RTL and C and do all the other things is all fine and good. But for Synphony to become more than an experimentation tool, it has to produce results (speed, area, power) as good as or better than hand-coded RTL. However, the ties to the implementation tool (Design Compiler) are not as direct as even Behavioral Compiler’s were a decade ago. It seems that Synphony takes an approach where it pre-compiles and estimates timing for various blocks (kind of like building DesignWare libraries), but it assembles the design outside of Design Compiler without all the associated timing views and engines necessary for true design and timing closure. It’s hard to understand how this can reliably produce results that consistently meet timing, but perhaps there is something that I am not aware of?
  7. Focus on “algorithmic design”, not control - As mentioned above, Synopsys is going after the folks using Matlab. And those designers are developing algorithms, not state machines. In essence, Synphony can focus on the fairly straightforward problem of scheduling mathematical operations to hit throughput and latency goals and not deal with more complex control logic. Much simpler.
  8. Conversion from Floating Point to Fixed Point - Anyone who has designed a filter or any DSP function knows that the devil is in the details, specifically the details of fixed point bit width. One choice of bit width affects downstream choices. You have to decide whether to round or truncate and these decisions can introduce unexpected artifacts into your signal. Synphony converts the floating point Matlab model into a fixed point implementation. Supposedly, it then allows you to easily fiddle with the bit widths to tweak the performance. Some earlier Synopsys products did this (Cossap, System Studio) and it’s a nice feature that can save time. We’ll see how useful it really is over time.
  9. Synphony produces real RTL, as well as C-code and a testbench - One of the drawbacks of Behavioral Compiler was that it never produced a human-readable form of RTL code. This made it hard to simulate and debug the RTL. Synphony supplies readable RTL (or so I am told) as well as cycle-accurate C-code for system simulation and a testbench for block simulation. This should help facilitate full-chip simulations for chip integration, since Synphony will probably only be used on blocks, not entire chips.
  10. Couldn’t Synopsys come up with a better reference than Toyon Research Corporation? - No offense to Toyon, but they are hardly a household name. It makes me wonder how many partners Synopsys has engaged in this development and how well tested this flow is. Not saying it isn’t well tested, just that Synopsys is making me wonder. Gimme a name I’ve heard of, please.
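To make the floating-to-fixed-point point above concrete, here is a toy sketch (a generic illustration I wrote, not how Synphony itself does it): quantizing the same coefficient with rounding versus truncation lands on different fixed-point values, and those one-LSB differences are exactly the kind of artifacts that creep into a filter’s response.

```python
import math

# Generic sketch of fixed-point quantization (not Synphony's algorithm):
# represent x with `frac_bits` fractional bits, then convert back to a
# float so the quantization error is visible.
def to_fixed(x, frac_bits, mode="round"):
    scaled = x * (1 << frac_bits)
    q = round(scaled) if mode == "round" else math.floor(scaled)  # truncate = floor
    return q / (1 << frac_bits)

coeff = 0.1  # not exactly representable in any binary fixed-point format
print(to_fixed(coeff, 8, "round"))     # 0.1015625  (26/256)
print(to_fixed(coeff, 8, "truncate"))  # 0.09765625 (25/256)
```

At 8 fractional bits, rounding and truncating this one coefficient already disagree by 1/256, and the choice you make at one stage constrains the widths downstream, which is why being able to fiddle with bit widths quickly and re-check the performance is such a useful feature.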

Only time will tell if Synphony is truly music to our ears, or if it is just SYNthesis that is PHONY.

harry the ASIC guy

DAC Theme #1 - “The Rise of the EDA Bloggers”

Sunday, August 2nd, 2009

Harry Gries at Conversation Central

(Photo courtesy J.L. Gray)

Last year, at the Design Automation Conference, there were only a couple dozen individuals who would have merited the title of EDA blogger. Of those, perhaps a dozen or so wrote regularly and had any appreciable audience. In order to nurture this fledgling group, JL Gray (with the help of John Ford, Sean Murphy, and yours truly) scrounged a free room after-hours in the back corner of the Anaheim Convention Center in which to hold the first ever EDA Bloggers Birds-of-a-Feather session. At this event, attended by both bloggers and traditional journalists, as John Ford put it, us bloggers got our collective butts sniffed by the top dog journalists.

My, how things have changed in just one year.

This year at DAC, us EDA bloggers (numbering 233 according to Sean Murphy) and other new media practitioners took center stage:

  • Bloggers were literally on stage at the Denali party as part of an EDA’s Next Top Blogger competition.
  • Bloggers were literally center stage at the exhibits, in the centrally located Synopsys booth, engaging in lively conversation regarding new media.
  • Atrenta held a Blogfest.
  • There was a Pavilion Panel dedicated to tweeting and blogging.
  • And most conspicuously, there was the 14-foot Twitter Tower streaming DAC related tweets.

Meanwhile, the traditional journalists who were still covering DAC seemed to fall into 2 camps. There were those who embraced the bloggers as part of the media and those that didn’t. Those that did, like Brian Fuller, could be found in many of the sessions and venues I mentioned above. Those that did not, could be found somewhere down the hall between North and South halls of Moscone in their own back corner room. I know this because I was given access to the press room this year and I did indeed find that room to be very valuable … I was able to print out my boarding pass on their printer.

Here’s my recap of the new media events:

I had mixed feelings regarding the Denali Top Blogger competition as I know others did as well. JL, Karen, and I all felt it was kind of silly, parading like beauty queens to be judged. Especially since blogging is such a collaborative, rather than competitive, medium. So often we reference and riff off of each other’s blog posts. Still, I think it was good recognition and publicity for blogging in EDA and one could not argue with the legitimacy of the blogger representatives, all first-hand experts in the areas that they cover. Oh, by the way, congratulations to Karen Bartleson for winning the award.

Conversation Central, hosted by Synopsys, was my highlight of DAC.  It was a little hard to find (they should have had a sign), located in a little frosted glass room on the left front corner of the Synopsys booth. But if you could find your way there, it was well worth the search. I’m a little biased since I hosted conversations there Monday - Wednesday on “Job Search: How Social Media Can Help Job Seekers & Employers”. The sessions were a combination of specific advice and lively discussions and debates. I was fortunate to have a recruiter show up one day and a hiring manager another day to add their unique perspectives. I think that that was the real power of this very intimate kitchen table style format. Everybody felt like they were allowed to and even encouraged to participate and add their views into the discussions. This is very different from a very formal style presentation and even panel discussions.

Unfortunately, I was not able to clone myself in order to attend all the sessions there, many of which I heard about afterwards from others or in online writeups. I did attend the session by Ron Ploof entitled “Objectivity is Overrated: Corporate Bloggers Aren’t Journalists, & Why They Shouldn’t Even Try”. Interestingly enough, no journalists showed up to the session. Still, it was a lively discussion, the key point being that bloggers don’t just talk the talk, they walk the walk, and therefore bring to the table a deeper understanding and experience with EDA and design than a journalist, even one that was previously a designer.

I also attended Rick Jamison’s session on “Competitors in Cyberspace: Why Be Friends?” which attracted several Cadence folks (Joe Hupcey, Adam Sherer, Bob Dwyer) and some Mentor folks. Although competitors for their respective companies, there was a sense of fraternity and a lot of the discussion concerned what is “fair play” with regards to blog posting and commenting. The consensus was that advocacy was acceptable and even expected from the partisans, as long as it could be backed up by fact and kept within the bounds of decorum (i.e. no personal attacks). EDA corporate bloggers have been very fair in this regard, in contrast to some rather vitriolic “discussions” in other industries.

The Atrenta Blogfest sounded very interesting and I was very disappointed that I could not attend because it conflicted with my Conversation Central discussion. Mike Demler has a brief summary on his blog as does Daniel Nenni on his blog.

Late Wednesday, Michael Sanie hosted a DAC Pavilion Panel entitled “Tweet, Blog or News: How Do I Stay Current?” Panelists Ron Wilson (Practical Chip Design in EDN), John Busco (John’s Semi-Blog) and Sean Murphy (his blog) shared insights into the ways they use social media to stay current with events in the industry, avoid information overload, and separate fact from fiction. Ron Wilson commented that social networks are taking the place of the socialization that engineers used to get by attending conferences and of the shared experience of reading the same traditional media news. John Busco, the recognized first EDA blogger, shared how he keeps his private life and his job at NVidia separate from his blogging life. And Sean Murphy gave perspective on how blogging has grown within EDA and will continue to grow, to his projection of 500 EDA bloggers in 2011.

Last, but not least, there was the Twitter Tower, located next to the Synopsys booth. Previous conferences, such as DVCon, attempted to use hashtags (#DVCon) to aggregate conference related tweets. The success was limited, attracting perhaps a few dozen tweets at most. This time, Karen Bartleson had a better idea. Appeal to people’s vanity. The Twitter Tower displayed a realtime snapshot of all tweets containing “#46DAC“, the hashtag designated for the 46th DAC. If one stood in front of the tower and tweeted with this hashtag, the tweet would show up within seconds on the tower. How cool is that? Sure it was a little gimmicky, but it made everyone who passed by aware of this new standard. As I write this, there have been over 1500 tweets using the #46DAC hashtag.

If you want to read more, Sean Murphy has done the not-so-glamorous but oh-so-valuable legwork of compiling a pretty comprehensive roundup of the DAC coverage by bloggers and traditional press. (Thanks Sean!)

harry the ASIC guy

Coffee, Jobs, and DAC

Sunday, July 26th, 2009

Coffeeshop

I’m writing to you today from a Coffee Bean & Tea Leaf in beautiful Southern California. There’s something about the atmosphere at a coffee shop that helps me get my thoughts together. Maybe it’s the white noise of the cappuccino machines or the conversations or music in the background.

I’m not the only one of course. Daniel Nenni and his two Great Danes can often be found at the downtown Danville Starbucks. And like the show Cheers, there are regulars at my local coffee shop that I see most days I am here. Sales people and college students come here a lot. And there has been a noticeable increase in another group: people out of work or “in transition”. In fact, as I glance over to the next table, I see a woman working on her resume. No lie.

Despite the uncertainty, I’ve actually benefited from the opportunity to take a one month break between projects, something I never got as a full-time employee. I’ve been able to catch up with old friends and colleagues on the phone, or over coffee, lunch, or some beers. I’ve also been able to start up some new business opportunities that you’ll be hearing more about in the near future. It never hurts to have multiple irons in the fire, especially in today’s economy.

Which brings me to the topic of jobs. I don’t care what any politician or semiconductor analyst or economist says or what the Dow or NASDAQ is at today. The high tech jobs market sucks. When I ask my very experienced friends and colleagues “what’s happening” they tell me they “can’t find no work, can’t find no job, my friend”. (Marvin Gaye fans will get the reference). Here are some examples:

  • Al Magnani, a friend in the Bay Area with 23 years experience, educated at MIT, USC, and Carnegie-Mellon, an expert in computer architecture, networking, and graphics processing, who’s led dozens of ASIC design developments, who’s been a Director managing a total team of over 50 people, has gone through almost all of his 229 LinkedIn contacts and has not even been able to get an interview in almost 2 months.
  • Jon Atwood, former VP of Sales at Synopsys and a man who has so much EDA experience that he remembers Joe Costello before he played guitar, has been looking for almost 6 months and has started a blog called Job Search 2.0 chronicling his job search adventure. He’s even been on ABC news talking about his employment woes.
  • I’ve received emails from several other very experienced designers, both employees and independent consultants, who tell similar stories of months looking for work.
  • On a personal level, as I have been looking for that “next project”, I have encountered much of the same, and count myself lucky that I actually have a next project to work on.

Having talked to so many of these people and recruiters, here is how I assess the high-tech job situation today:

  • There are a lot more job seekers than jobs out there. OK, that’s obvious. But to give you an idea of the magnitude, my recruiter friend says she receives hundreds of resumes for every job posted and there are usually many, sometimes dozens of, qualified candidates to choose from.
  • Many of the job postings are soft. That is, the employer does not need to hire someone right away but just has the job posted in case the perfect candidate comes along.
  • Employers are looking for the perfect candidate to come along. If they have 10 requirements for the position, and you meet 9 of them, you are probably on the B-list. And not only are they looking for the right experience, they want you to have been doing pretty much the same job very recently, not 2 years ago.
  • Submitting your resume to a corporate website is a waste of time. Even if you are perfectly qualified, recruiters get too many job postings and your resume may not even get looked at because they run out of time and already have many candidates.
  • Experience counts … against you. Many employers are looking for younger people who don’t have high salary expectations and will work long hours and travel. In fact, I spoke to a recruiter retained by a recent chip synthesis startup who told me that he was only looking for candidates with <5 years experience to be an AE at that company. They are not the only ones.
  • Employers hold all the cards. I heard today about someone who accepted a job at 10% less than she was currently making. Don’t expect to make more or even as much as you made before. Don’t expect stock options or signing bonuses. And don’t expect more than 24 hours to make a decision on an offer because there is someone on-deck.

So, with the news that bad, it would be easy to get discouraged. I have been discouraged, for myself and for my friends. Still, here are a few tips that I think will help:

  1. Update your online identity. Every recruiter and hiring manager will do 2 things before they ever pick up the phone and call you. They will Google your name and they will search for you on LinkedIn. Space prohibits me from going into the details of how to do this, but believe me that this is critical. If you want to see an example, you can see my LinkedIn profile.
  2. Find someone in the company who can introduce you or your resume to the hiring manager with a recommendation. This has always been the best way to find a job, but today it is the only way. As I said, the odds of you making it through the corporate website and HR are very low. LinkedIn can help tremendously since you can identify easily who you know at a target company and also whether your contacts know somebody there to whom they can introduce you.
  3. Let your contact refer you before you submit anything to the corporate website. Even in this economy, many companies still give bonuses to employees who refer candidates. If you let your contact get the referral bonus, he will be more likely to help you find the right people in the company to talk to and even sell you to them.
  4. Sign up for job boards. I know that everyone else is using these, but there are still real jobs posted there and you can get an idea which companies are hiring and then use your networking skills to get in the door. Simplyhired and even craigslist are good.
  5. Be willing to take a step back to go forward. You will probably need to take a cut in pay or take on a position with less responsibility or prestige than you currently have. Accept it. I have a friend who refused to look at jobs that paid less than he previously made. He ended up out of work for 6 months and then took a lower paying job anyway. It’s more important that you get a job you can do well and that the company has a good outlook going forward.
  6. Help others find a job. You can file this under good karma, or pay it forward, or just plain being a mentsch. If you come across a position for which someone you know would be a good fit, let them know, help them out. It will make you feel a little better and you’ll have made a loyal friend who may be in a position to help you out one day soon.
  7. Get into social networking. I’ll be talking about this more at DAC, but for now, look for opportunities to get on Twitter. Start reading, commenting on, or even writing a blog. Join relevant LinkedIn groups. Join online communities like those at Synopsys, Mentor, and Cadence or independent ones like OVMWorld or Xuropa.
  8. Keep up your skills. There are so many free webinars and opportunities to keep up-to-date that you have no excuse. Check out the Mentor Displaced Worker program.
  9. Consider doing some free work. I know that does not sound great, but you can possibly learn something new in the process and at least avoid having a gap in your resume (remember how picky employers are).
  10. Decide if you are willing to relocate or travel. If you are only looking for positions within your commuting distance then that limits your opportunities.

For those of you who will be attending DAC this coming week, I will be in the Synopsys Conversation Central booth Monday, Tuesday, and Wednesday at 1:30 hosting a conversation on Using Social Media for Job Seekers and Employers.

Please stop by and we can talk over a cup of coffee.

harry the ASIC guy

What Makes DAC 2009 different from other DACs?

Sunday, July 12th, 2009

By Narendra (Nari) Shenoy, Technical Program Co-Chair, 46th DAC

Each year, around this time, the electronic design industry and academia meticulously prepare to showcase the latest research and technologies at the Design Automation Conference. For the casual attendee, after a few years the difference between the conferences of years past begins to dim. If you are one of them, allow me to dispel this notion and invite you to look at what is different this year.

For starters, we will be in the beautiful city of San Francisco from July 26-31. The DAC 2009 program, as in previous years, has been thoughtfully composed using two approaches. The bottom-up approach selects technical papers from a pool of submissions using a rigorous review process. This ensures that only the best technical submissions are accepted. For 2009, we see an increasing focus on research in system level design, low power design and analysis, and physical design and manufacturability. This year, a special emphasis for the design community has been added to the program, with a User Track that runs throughout the conference. The new track, which focuses on the use of EDA tools, attracted 117 submissions reviewed by a committee made up of experienced tool users from industry. The User Track features front-end and back-end sessions and a poster session that offers a perfect opportunity to interact with presenters and other DAC attendees. In addition to the traditional EDA professionals, we invite all practitioners in the design community – design tool users, hardware and software designers, application engineers, consultants, and flow/methodology developers – to come join us.

This first approach is complemented by a careful top-down selection of themes and topics in the form of panels, special sessions, keynote sessions, and management day events. The popular CEO panel returns to DAC this year as a keynote panel. The captains of the EDA industry, Aart de Geus (Synopsys), Lip-Bu Tan (Cadence) and Walden Rhines (Mentor), will explore what the future holds for EDA. The keynote on Tuesday by Fu-Chieh Hsu (TSMC) will discuss alignment of business and technology models to overcome design complexity. William Dally (Nvidia and Stanford) will present the challenges and opportunities that throughput computing provides to the EDA world in his keynote on Wednesday. Eight panels on relevant areas are spread across the conference. One panel explores whether the emphasis on Design for Manufacturing is a differentiator or a distraction. Other panels focus on a variety of themes such as confronting hardware-dependent software design, analog and mixed signal verification challenges, and various system prototyping approaches. The financial viability of Moore’s law is explored in one panel, while another explores the role of statistical analysis in several fields, including EDA. Lastly, we have a panel exploring the implications of recent changes in the EDA industry from an engineer’s perspective.

Special technical sessions will deal with a wide variety of themes such as preparing for design at 22nm, designing circuits in the face of uncertainty, verification of large systems on chip, bug-tracking in complex designs, novel computation models and multi-core computing. Leading researchers and industry experts will present their views on each of these topics.

Management day includes topics that tackle challenges and decision making in a complex technology and business environment. The current “green” trend is reflected in a slate of events during the afternoon of Thursday July 30th. We start with a special plenary that explores green technology and its impact on system design, public policy and our industry. A special panel investigates the system level power design challenge and finally a special session considers technologies for data centers.

Rather than a hindrance to attendance, the prolonged economic malaise this year should be a fundamental reason to participate in DAC. If you are a participant in the technical program, DAC offers an opportunity to share your research and win peer acclaim. If you are an exhibitor, it is an ideal environment to demonstrate your technology and advance your business agenda. As an attendee, you cannot afford to miss the event where “electronic design meets”. For everyone, DAC provides an unparalleled chance to network and learn about advances in electronic design. Won’t you join us at the Moscone Center at the end of the month?

__________

This year’s DAC will be held July 26-31 at the Moscone Center in San Francisco. Register today at www.dac.com. Note also that there are 600 free DAC passes being offered courtesy of the DAC Fan Club (Atrenta, Denali, Springsoft) for those who have no other means to attend.

Mentor Is Listening

Thursday, June 11th, 2009

My morning routine is pretty, well, routine.

Get up.  Wake the kids.

Check email.  Ask the kids to stop jumping on the couch.

Check Twitter. Tell the kids again to stop jumping on the couch.

Check my Google Reader. Glare at the kids with that “I’ve asked you for the last time” look.

You get the idea.

This Wednesday morning, somewhere in between conversations with my kids, walking the dog, and getting ready for work, I came across the following comment on a friend’s blog:

Ron, we are listening.

http://www.mentor.com/blogs

Ron Fuller
Web Manager, Mentor Graphics

For background, Ron Ploof is the guy who got the crazy idea almost 3 years ago that Synopsys should be doing something in this new world called social media. (Actually, I don’t think the term “social media” had even been coined back then). He evangelized this belief to the VP of Marketing at Synopsys and created for himself a job as Synopsys’ “New Media Evangelist” (actual title on his business card). He launched Synopsys’ first foray into social media, including podcasts, videos, and most prominently, blogs.

Synopsys’ success motivated Cadence to follow suit (something confided to me by Cadence’s former community manager). And it seems, according to the comment on Ron’s blog, it also motivated Mentor’s move into social media.

__________

I wanted to find out more about the Mentor blogs and I was able to set up some time to talk over lunch with Sonia Harrison at Mentor (see her sing at the Denali DAC party). Sonia had helped me set up my previous interview with Paul Hofstadler and had extended me an invitation to attend the Mentor User2User conference (which, unfortunately, I could not attend). As it turns out, Sonia was absolutely the right person to talk to.

Even though I had only just become aware of the Mentor blogs, Mentor had evidently coordinated their launch with the launch of their new website several months ago. Sonia was quite humble, but it seems that she was the driving force behind the blogs and Mentor’s presence in other social media like Twitter. She had been watching what was going on for some time, hesitant to jump in without a good plan, and now was the time.

According to Sonia, Mentor’s motivation for doing the blogs was to extend into a new media their “thought leadership” in the industry, to draw customers in to their website, and to exchange information with customers. Interestingly, Mentor did not hire an outside social media consultant or community manager like Cadence had. Rather, the project was homegrown. Sonia recruited various technical experts and others as bloggers. She developed “common sense” social media guidelines to make sure bloggers were informed of and played by social media rules (e.g. no sensitive or proprietary information, be polite, respect copyrights, give attribution).

According to Sonia, “one of the more difficult things was to get people to commit to blogging regularly. Writing takes time, it’s almost a full time job.” Despite this additional work burden, Mentor has no plans to bring in professional journalists as bloggers like Richard Goering at Cadence. And it doesn’t seem they need to. Simon Favre received a blog of the week award from System Level Design a few weeks ago, so they are doing quite well on their own.

Sonia does not have any specific measurable goals (page views, subscribers, etc.), which I think is a mistake, especially when her upper management comes asking for evidence that these efforts are paying off. My friend Ron likes to tell me that social media is the most measurable media ever and it’s a shame not to use the data.

I started playing with the site later in the afternoon and noticed a few things. First, when I added a comment to one of the blogs without registering, it did not show up right away, nor did I get a message that the comment was being moderated. It did show up later in the day, but it would be nice to at least be told that it was “awaiting moderation”. Better still, why moderate or require registration at all? The likelihood of getting inappropriate comments from engineering professionals is very low, and they can always be removed if need be. Moderation of comments will also stop a hot topic in its tracks. I’ve personally had the experience of publishing a new blog post late at night and waking up to several comments, some addressing other comments. Had I moderated the blog, none of those comments would even have shown up until later in the day.

Second, there was no way to enter a URL or blog address when leaving a comment. It is pretty standard practice to have this feature to allow readers to “check out” the person leaving the comment. Hopefully they can add this.

On the positive side, the most important feature of a blog is the content and the content looks very good, especially the PCB blogs. Also, there is apparently no internal review or censorship of blog posts, so bloggers have the freedom to write whatever they want, within the social media guidelines of course.

 __________

It’s been almost 3 years since Ron made his first pitch to his manager. Who would have thought that the Big 3 and many others would have adopted social media in such a short time? Meanwhile, my kids are still jumping on the couch.

GTG

harry the ASIC guy

TSMC Challenges Lynx With Flow Of Their Own

Wednesday, May 6th, 2009

About a month and a half ago, I wrote a 5-part series of blog posts on the newly introduced Lynx Design System from Synopsys.

One key feature, the inclusion of pre-qualified technology and node specific libraries in the flow, was something I had pushed for when I was previously involved with Lynx (then called Pilot). These libraries would have made Lynx into a complete out-of-the-box foundry and node specific design kit … no technology specific worries. Indeed, everyone thought that it was a good idea and would have happened had it not been for resistance from the foundries that were approached. Alas!

In the months before the announcement of Lynx, I heard that Synopsys had finally cracked that nut and that foundry libraries would be part of Lynx after all. Whilst speaking to Synopsys about Lynx in preparation for my posts, I asked whether this was the case. Given my expectations, I was rather surprised when I was told that no foundry libraries would be included as part of Lynx or as an option.

The explanation was that it proved too difficult to handle the many options that customers used. High Vt and low Vt. Regular and low power process. IO and RAM libraries from multiple vendors like ARM and Virage. Indeed, this was a very reasonable explanation to me since my experience was that all chips used some special libraries along the way. How could one QA a set of libraries for all the combinations? So, I left it at that. Besides, Synopsys offered a script that would build the Lynx node from the DesignWare TSMC Foundry Libraries.

Two weeks ago, at the TSMC Technology Symposium in San Jose, TSMC announced their own Integrated Sign-off Flow that competes with the Lynx flow, this one including their libraries. Now it seems to make sense. TSMC may have backed out of providing libraries to Synopsys to use with Lynx since they were cooking up a flow offering of their own. I don’t know this to be a fact, but I think it’s a reasonable explanation.

So, besides the libraries, how does the TSMC flow compare to the Synopsys Lynx flow? I’m glad you asked. Here are the salient details of the TSMC offering:

  • Complete RTL to GDSII flow much like Lynx
  • Node and process specific optimizations
  • Uses multiple EDA vendors’ tools (Synopsys mostly, but also Cadence, Mentor, and Azuro)
  • Available only for TSMC 65nm process node (at this time)
  • No cost (at least to early adopters … the press release is unclear whether TSMC will charge in the future)
  • And of course, libraries are included.

In comparison to Synopsys’ Lynx Design System, there were some notable features missing from the announcement:

  • No mention of anything like a Management Cockpit or Runtime Manager
  • No mention of how this was going to be supported
  • No mention of any chips or customers that have been through the flow

To be fair, just because these were not mentioned does not mean that they are really missing. I have not seen a demo of the flow or spoken to TSMC (you know how to reach me), and that would help a lot in evaluating how this compares to Lynx. Still, from what I know, I’d like to give you my initial assessment of the strengths of these offerings.

TSMC Integrated Signoff Flow

  • The flow includes EDA tools from multiple vendors. There is an assumption that TSMC has created a best-of-breed flow by picking the tool that performed each step in the flow the best and making all the tools work together. Synopsys will claim that their tools are all best-of-breed and that other tools can be easily integrated. But, TSMC’s flow comes that way with no additional work required. (Of course, you still need to go buy those other tools).
  • Integrated libraries, as I’ve described above. Unfortunately if you are using any 3rd party libraries, you’ll need to integrate them yourself it seems.
  • Node and process specific optimizations should provide an extra boost in quality of results.
  • Free (at least for now)

Synopsys Lynx Design System

  • You can use the flow with any foundry or technology node. A big advantage unless you are set on TSMC 65nm (which a lot of people are).
  • Other libraries and tools are easier to integrate into the flow I would think. It’s not clear whether TSMC even supports hacking the flow for other nodes.
  • Support from the Synopsys field and support center. Recall, this is now a full fledged product. Presumably, the price customers pay for Lynx will fund the support costs. If there is no cost for the TSMC flow, how will they fund supporting it? Perhaps they will take on the cost to get the silicon business, but that’s a business decision I am not privy to. And don’t underestimate the support effort. This is much like a flow that ASIC vendors (TI, Motorola/Freescale, LSI Logic), not foundries, would have offered. They had whole teams developing and QA’ing their flows. And then they would be tied to a specific set of tool releases and frozen.
  • Runtime Manager and Management Cockpit. Nice to have features.
  • Been used to create real chips before. As I’d said, the core flow in Lynx dates back almost 10 years and has been updated continuously. It’s not clear what the genesis of the new TSMC flow is. Is it a derivative of the TSMC reference flows? Is it something that has been used to create chips? Again, I don’t know, but I’ve got to give Synopsys the nod in terms of “production proven”.

So, what do I recommend? Well, if you are not going to TSMC 65nm with TSMC standard cell libraries, then there is not much reason to look at the TSMC flow. However, if you are using the technology that TSMC currently supports, the appeal of a turnkey, optimized, and FREE flow is pretty strong. I’d at least do my due diligence and look at the TSMC flow. It might help you get better pricing from TSMC.

If anyone out there has actually seen or touched the TSMC flow, please add a comment below. Everyone would love to know what you think first hand.

harry the ASIC guy

EDA Merger Poll - What’d Be The Best Merger?

Friday, May 1st, 2009

Rumors are flying concerning some big changes next week in EDA amongst the big players. It first got started by John Blyler on Twitter. Then Magma stock took off this week for no apparent reason. And rumors of a Cadence-Magma merger have been flying around for about a month since Rajeev denied them.

Something may happen or nothing may happen. But it’s always fun to speculate. So, what do you think would be the best merger of the top 4 EDA companies?

Vote here or feel free to leave your comments below. We’ll see who, if anyone, is right :-)

harry the ASIC guy

Soft Skills Aren’t Hard To Learn

Tuesday, April 28th, 2009

It was 1992 and I was supporting the Motorola Iridium project in Chandler, AZ. There was a project lead named Steve who I was tasked to work with. My job was to get certain elements of our DesignWare library working properly to support his ASIC design team.

Steve was a bit of a control freak. Whenever there were technical decisions to be made, Steve wanted to be the one making the decisions. And once he made his decision, there was no changing it. You see, Steve had a big ego and did not like to be wrong, much less wrong in front of his team.

Unfortunately, his decisions were not always the correct decisions and I had no problem telling him that. You see, I had a big ego too.

As you can imagine, Steve and I did not get along very well.

Fortunately, I had a boss who had dealt with Steve before and who gave me some advice that I carry to this day. He suggested that I bring the relevant facts to Steve and present them in such a way that the decision was obvious. Then, I needed to say these words, “I’m not sure what is the best choice. What do you think?”

As hard as it was for me to relinquish control of these decisions, it turned out to be the right way to handle Steve. Instead of feeling like he was put on the spot to win a debate with the local AE, he felt like a respected authority figure. With this pressure removed, Steve usually ended up making the right decision (i.e. the one I would have recommended).

Steve was happier. I was happier. And we got a lot more productive work done as a result!

__________

The soft skills that I describe in the story above do not come naturally to most engineers. As a matter of fact, I’ve often heard it said, “he’s a great engineer, but I’d never take him to a client.” So I was very interested when I came across a press release describing how Mentor Graphics and RTM Consulting collaborated to develop a soft skills training class for Mentor consultants. I sent an email to Paul Hofstadler, VP of Consulting at Mentor, requesting to talk to him about the class, and he graciously accepted.

According to Paul, Mentor’s Services are typically focused on deploying to their clients new working processes around the EDA tools that Mentor sells. That is, they are teaching their clients to fish, rather than selling them fish. As you can imagine, it requires a great deal of influence and political savvy to effectively implement these types of changes in a client’s organization. Unfortunately, these skills don’t necessarily come naturally for most engineers. Indeed, when Mentor went back and examined the projects that had challenges, they discovered that the core issues were not technical, but rather involved corporate politics and communication issues.

Paul decided that he needed to increase the soft skills of his consultants in order to be more effective on projects and to recognize opportunities for more business in a tough economy. “More than half the work in consulting is finding and growing people”.  Rather than building a training program internally, or piecing one together from existing off-the-shelf classes, Paul engaged with RTM Consulting to develop a customized class to meet Mentor’s specific needs. “We didn’t want to pull our best consultants off of time critical customer projects to develop the class. They are the ones guiding our customers through complex projects. In addition, we wanted the outside point of view that RTM brought to the situation.”

Most of the course material came from RTM Consulting. The specific case studies and industry-specific material came from Mentor. Paul had senior consultants help with the development of the material, especially the case studies, which were based on real experiences. The result is a 3-day course that is very hands-on. There is standard lecture time and also several 5-6 person role-play case studies. “The collaboration with Mentor Graphics was key to honing in on customization of the training to give them the best chance at gaining the right skills necessary, and providing a solid return on their educational investment,” according to Randy Mysliviec, CEO of RTM Consulting.

Paul Hofstadler particularly praised the case studies. “The case studies were the most interesting part of the course. I never knew what was going to come out of them. Each group solved the case studies slightly differently using the skills taught in the class.” Even so, Paul resisted the urge to let the consultants bring real customer situations into the class for fear that the entire class would end up working on one real customer case. Instead, Mentor asked consultants to present real case studies after the class, several weeks later, and present them to the internal team. This served as a reinforcement of the material and helped to put the course material into practice.

A 3-day training course for the entire consulting team seems like a big investment. “Ironically, the cost of soft skills training can often be offset by just a single large project overrun or a collection of overruns”, according to Randy Mysliviec. Fortunately, the timing of the class coincided with an end of year lull in delivery, so Mentor was able to implement the training class with minimal customer project impact as well.

Since the training was administered just a few months ago, it is difficult to definitively measure the value. However, there is strong anecdotal evidence that it is working. One senior consultant, who was very skeptical at the beginning, used the techniques in the class to turn around a difficult customer (similar to my story at the beginning of this post). Paul has indicated that “consulting orders this quarter are a lot better than last quarter” and he attributes that in part to the training, particularly the parts that help consultants recognize potential follow-on opportunities for more business.

“In this economy, it is more important than ever to understand the customer’s needs, communicate effectively, and deliver excellent solutions on every engagement” said Paul in summary. “It is clear to me that our projects are running more smoothly after the training. As a bonus, our repeat customer order rate is up indicating that we are continuing to deliver high value to our customers despite the ‘interesting’ times in which we find ourselves.”

Due to the success of the training, Mentor is looking at extending the training to other parts of the consulting organization and to other organizations in Mentor. In the meantime, RTM Consulting is offering the course for other customers, minus the Mentor-specific material, of course. “The soft skills needs at Mentor are certainly not unique in the professional and consulting services world,” says Randy Mysliviec. “Most technology and pure services companies do a good job of teaching their teams about products, services, and technologies they need to know to effectively serve clients. What is most often missed are the soft skills necessary for consultants to effectively interact with their clients.”

Thanks to folks like RTM Consulting, these soft skills aren’t hard to learn after all.

harry the ASIC guy