Thoughts on IC development, EDA, and hardware design languages.

EDA Tool Developer Series: Does Your Tool’s GUI Scale?

Posted by: Mike Lee | Posted on: January 13th, 2015 | 0 Comments


Looking back at my EDA development work, GUI development easily ranks among the most difficult aspects. And not just GUI usability, which is hard enough on its own; in the world of EDA, GUI scalability can also be really tricky. Even something as conceptually simple and common as a design hierarchy browser can cause you conniptions as you scale up from a small testcase, through a block- or subsystem-level design, to a full chip. Let's take a look.

Almost every EDA tool needs a hierarchy browser, and a quick look at the online documentation for any common GUI framework leads to the obvious implementation: just enumerate the instances, load the data structure into memory, and point the widget at it. And quickly, you have something! It works great for your simple SystemVerilog or VHDL testcase. Given the power of today's machines, it even works reasonably well for a block-level design. Things are looking pretty good, but then the chip lead decides to take it for a spin on the full-chip RTL… 45 minutes later, it is still loading! CPU and memory usage are going through the roof as your tool grinds to a halt on the 400,000+ SystemVerilog instances in the top-level design… and you thought this was going to be easy.
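To make the failure mode concrete, here is a minimal sketch of that eager approach: walk the whole design up front and mirror every instance into the widget's model. The design data and the `children_of` query are hypothetical stand-ins for a real elaborated-design database, for illustration only.

```python
# Toy stand-in for a design database query API (hypothetical data).
TOY_DESIGN = {
    "top": ["top.cpu", "top.mem"],
    "top.cpu": ["top.cpu.alu"],
    "top.mem": [],
    "top.cpu.alu": [],
}

def children_of(path):
    """Stand-in for querying the design database for child instances."""
    return TOY_DESIGN[path]

def load_everything(model, path):
    """Recursively mirror the ENTIRE hierarchy into the widget's model."""
    model.append(path)
    for child in children_of(path):
        load_everything(model, child)

model = []
load_everything(model, "top")
print(len(model))  # 4 -- every instance loaded, whether the user ever looks or not
```

With four instances this is instant; with 400,000+ the same recursion has to touch every instance, in both time and memory, before the first pixel is drawn.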

So how do you make this scalable?

Like many GUI issues, the trick is to consider the user. No user can see the entire hierarchy at once anyway, so you don't need to track every instance, just the ones the user wants to see. A much more dynamic hierarchy browser gives users what they want: build what they want to see, when they want to see it. That multicore workstation on your desktop isn't smarter than you, but it's certainly faster. The few extra CPU cycles it takes to build and rebuild the hierarchy tree on demand, rather than caching it all at startup, are likely to go unnoticed. Memory is cheap, but it isn't unlimited, so finding the right balance between memory and CPU usage is key to maintaining scalability.
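The on-demand idea can be sketched framework-agnostically: a tree node queries the design database only when the user expands it, and drops its subtree again on collapse, so memory stays proportional to what is visible. This is a generic illustration, not Invio's implementation; `DESIGN` is hypothetical toy data standing in for a real design query API.

```python
# Toy stand-in for a design database (hypothetical data, illustration only).
DESIGN = {
    "top": ["top.cpu", "top.mem"],
    "top.cpu": ["top.cpu.alu", "top.cpu.regfile"],
    "top.mem": [],
    "top.cpu.alu": [],
    "top.cpu.regfile": [],
}

class LazyNode:
    """One hierarchy-browser row; children are built only when expanded."""

    def __init__(self, path):
        self.path = path
        self.children = None  # None = not yet queried from the design database

    def expand(self):
        # Query the design database only now, when the user opens the node.
        if self.children is None:
            self.children = [LazyNode(p) for p in DESIGN[self.path]]
        return self.children

    def collapse(self):
        # Release the subtree so memory tracks what the user can actually see.
        self.children = None

root = LazyNode("top")
print([c.path for c in root.expand()])  # ['top.cpu', 'top.mem']
root.collapse()
print(root.children)                    # None -- subtree released
```

In a Tk-based GUI the same pattern hangs off the tree widget's expand and collapse events, so each user click costs only one level's worth of database queries, no matter how big the full design is.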

Take the RTL Design Tree Widget included with Invio's GUI Builder. To the user, it presents an intuitive and responsive view into their RTL or netlist design hierarchy, letting them explore their design very quickly. Behind the scenes, though, it is dynamically allocating and releasing memory as tree nodes are expanded and collapsed, although it never feels that way. And since only one level of hierarchy within a single instance is expanded at a time, rebuilding a node on demand is just as responsive, all while keeping resource usage scalable and frugal.

So the Invio Python code would look something like this:
import guiBuilder        # Invio's GUI Builder package
from tkinter import Tk   # GUI Builder widgets live in a standard Tk window

root = Tk()
root.mainloop()          # hand control to the Tk event loop

And for Tcl:
package require guiBuilder
pack [DesignWidget::init .design] -side left
pack [RTLWidget::init .rtl] -side left

Either will give you a responsive, browsable view of the design hierarchy.

With this approach the RTL Design Tree Widget in Invio can easily handle large designs such as the Oracle T2 processor (500M transistors, 1,200 files, and >1M lines of Verilog) and still provide the quick interaction users want.

This "just-in-time" approach is often a critical trick for responsive GUIs, but it is only one of many. Where have you run into issues in your own development work? If you have experience with, or know of, other ways of tackling these kinds of problems, I'd love to hear about it in the comments below!


