Open Design Circuits

Introduction

Open Design Circuits are the chip design counterparts of Open Source Software, with designs (sources) openly shared among developers and users. The open-design circuit approach outlined here captures the true advantages of open-source software, and applies them to hardware. It avoids the large initial investments usually needed for hardware development, and it allows for the rapid design sharing, testing, and user feedback which are key to open-source software success.

Open-design circuits (ODC) offer an approach that differs substantially from other open-design hardware initiatives: The Open Hardware Certification Program, The Open Hardware Specification, Open Souce Hardware Page (sic). Open Design Circuits more closely resemble Open Source Software, the workings of which are most clearly described in The Cathedral and the Bazaar. The most obvious open-source success so far is Linux.

Open-design circuits are very close to reality, even though no development community exists for them at the time of this writing.

About this document

The first part of this document, which describes the ideas for ODC development, is written with generally computer-literate readers in mind. Later on, starting with the technical issues section, some knowledge of hardware and circuit design will help. These more in-depth topics are of interest mostly to people who wish to contribute to the project.

Contacts

Tune in to the open-design circuits mailing list for general discussion on the topic.

Mail circuits@circu.its.tudelft.nl to contact us directly.
Mail webmaster@circu.its.tudelft.nl for comments on the web site.
Mail Reinoud Lamberts to contact the author.

News

[98/06/30] More links added, thanks to Graham Seaman.
[98/03/13] Marnix created an Interesting Links page.
[98/03/13] You can read the mailing list archive on the web now, see the list page.

The problem with open hardware development

The primary problem for open hardware development is the cost associated with manufacturing a design. Manufacturing cannot be used to share and test a design in a way similar to open-source software releases. (Actually, you may need serious financial backing to get a design manufactured even once.)

When considering chips, the fabrication costs are particularly high for the relatively small quantities needed for development versions. Also problematic are the long fabrication delays, and the effort required to get test boards working on many sites. And, last but not least, manufacturers (foundries) are very secretive about their fabrication processes and cell libraries, while documentation is necessary to create open-source design tools.

We can safely conclude that it is best to avoid manufacturing. So how are we going to do hardware development, then?

Looking for solutions

When trying to avoid the above problems, there are two fairly obvious alternatives to manufacturing:

Simulation is a widely used method to test a design, and it is possible to distribute both a simulator and a design as open-source software. For many types of hardware, simulation may well be the best you can do to share a design. However, most simulated hardware is not very useful in practice, and fairly uninteresting to most users. (How useful is a simulated SCSI controller that achieves a whopping 2 kByte/s transfer rate to a hard disk which is itself simulated in RAM?) Worse, a large investment is still needed to manufacture the actual hardware.
FPGAs (Field Programmable Gate Arrays) provide an interesting alternative to fabrication for open chip development. FPGAs are chips that contain large numbers of programmable logic blocks (logic gates and registers) and programmable interconnect. In this respect FPGAs resemble conventional gate array chips; unlike gate arrays, however, they can be programmed without access to a wafer processing plant. For open chip development, the SRAM FPGA is the most interesting type: its configuration is stored in RAM on the chip, allowing for simple, fast, unlimited reprogramming (see the sketch below). Look here for more on FPGAs.
Using FPGAs, it is possible to share and test designs just like software, yet actually achieve hardware performance. This is possible with affordable standard chips and boards, so there is no large investment required to manufacture new hardware.
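
To make this concrete, here is a minimal Python sketch of an SRAM-configured lookup table (LUT), the basic logic block of most FPGAs. It is an illustration only, not any particular vendor's architecture; the point is that the configuration is just bits in RAM, so reprogramming the chip means nothing more than rewriting those bits.

    # Minimal model of an SRAM-configured FPGA logic block. Real devices add
    # flip-flops, carry chains and programmable routing between many blocks.

    class LUT4:
        """A 4-input lookup table: 16 SRAM bits select one of 2**16 functions."""
        def __init__(self, config_bits):
            assert len(config_bits) == 16
            self.sram = list(config_bits)  # the 'configuration', held in RAM

        def eval(self, a, b, c, d):
            # The four inputs form an address into the SRAM truth table.
            return self.sram[a | (b << 1) | (c << 2) | (d << 3)]

    # Configure the block as a 4-input AND, then 'reprogram' it as a 4-input XOR:
    and4 = [1 if addr == 0b1111 else 0 for addr in range(16)]
    xor4 = [bin(addr).count("1") & 1 for addr in range(16)]

    lut = LUT4(and4)
    print(lut.eval(1, 1, 1, 1))  # -> 1
    lut.sram = xor4              # reconfiguration is just new RAM contents
    print(lut.eval(1, 0, 1, 0))  # -> 0

A real device wires thousands of such blocks together through programmable routing; the job of the design tools is to compute all those configuration bits from a circuit description.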

The problem with FPGAs is that, compared to custom chips, their price is relatively high while their performance is often relatively low. For users, or for people who want to tinker with designs occasionally, there is little incentive to obtain FPGA hardware unless there are several interesting uses for it, allowing re-use of the relatively expensive hardware (as we do with microprocessors). For designers, there is little incentive to target a technology that does not provide adequate price/performance for a single design, unless enough users already have it installed. Without a compelling reason to move in that direction, FPGA-based development won't take off.

A desire for low-cost, open chip development by the net community may be just the motivation needed to get FPGA-based development off the ground. Once there, it would provide economical hardware for users and very low-cost development for designers.

In short, to bootstrap open chip development we need at least one of the following:

Availability of several attractive open designs for FPGA
An installed base of FPGA hardware
Better price/performance than FPGA (without losing FPGA advantages)

I expect that all three of the above will be achieved once the open-design circuit development process (see below) is started by a small number of motivated people on the net.

Open Design Circuit Development Process

Below is a tentative description of how open-design circuits could become a viable (and maybe even dominant) integrated circuit development model, following the successful model of open-source software. The process is split into 6 phases, and I expect that when phase one is accomplished, the others will follow (not necessarily in the exact order described here).

Phase one. A small group of people (on the net):

sets goals for open design tools, data formats and hardware,
selects, improves, and integrates existing open tools (see comp.lsi.cad FAQ and Lola),
starts development of new tools (and maybe OS support for the new hardware) if necessary,
selects FPGA devices and boards for initial development (look here for starters),
spreads the word of their activities.

Phase two. With high quality open design tools becoming available, more people join the initiative, and work starts on comprehensive design libraries and large designs. All this is relatively easy to do on FPGA hardware.

Phase three. With several attractive designs becoming available, users and occasionally interested people start buying FPGA boards and start downloading, using, testing and sometimes improving the designs. The number of (potential) designers increases, and the advantages of open development are fully at work.

Phase four. Universities and small companies notice the powerful tools and high quality designs that are openly available, and start using them for courses, in-house applications and product development. A significant opportunity for open-design circuit consultants emerges. FPGA manufacturers start to address the wishes of the open-design community, acknowledging the huge market potential.

Phase five. Tools, design libraries and large designs reach maturity and compare favorably with commercial offerings on technical merit (they always compared favorably on price). The open design community is an established movement now with a large number of designers (home-based, in universities and industry), and becomes a major source for quality designs and new developments. Larger companies take notice and start to use the available open technology for their own tools and designs.

Phase six. World domination :-). Open-design circuits are used routinely as the basis of full-custom chips (thereby achieving top performance for open-design circuits). Open-design tools are used routinely for large commercial design projects, simply because they have become so much better than closed, commercial design software. Open-design compatible high-performance FPGA hardware has become a commodity and is used on a large scale, comparable to microprocessors.

The above may remain wishful thinking – or will it become a self-fulfilling prophecy? Phase six may indeed be taking things a bit far, but we will see. You are all invited to join the discussion and help make it happen.

About the name: Open Design Circuits

People may wonder if ‘Open Design Circuits’ is really the right phrase. Why not ‘Open Design Hardware’? Because that is too general: the ODC approach specifically applies to designing digital circuits for chips, not to e.g. writing device drivers for existing hardware, or to motherboard design. Well then, why not use ‘Open Source Circuits’, and ride on the wave of publicity around ‘Open Source Software’? For one, because that phrase already has a totally unrelated meaning in the context of chip design (it describes circuits with a floating MOS transistor source terminal). Also, unlike software, chip designs are certainly not always represented by sources!

Technical issues

There are several technical issues which have to be addressed when trying to make open-design circuits practical for wide-spread use. Below are some thoughts that may serve as a basis for further discussion.

From here on, the discussion will be fairly technical and address details of circuit and system design.

FPGA architecture and devices. Of course we want a powerful and low-cost FPGA architecture that suits many types of design, with very high gate counts, high speed, support for fast and dense arithmetic functions, fast on-chip RAM, fast reconfigurability and lots of I/O. However, there may be even more important properties than those obvious ones. Consider:

Openness: we need devices with open specifications if we wish to program them with our own tools.
Robustness: a design mistake should not harm the device.
Portability: will a design for one device be portable to another (future) device?

Interestingly, the special wishes for open-design circuits are close to those for reconfigurable computing. At this moment I know of only one device family that meets must-have requirements like openness and robustness: the XC6200. This device lacks on-chip RAM and arithmetic support, and is probably expensive, but it seems to have some very nice properties too. Does anyone know of contenders for this device? Properties that the XC6200 lacks but that I would like to see are low cost, and fast on-chip RAM that can be used both for fast reconfiguration and as embedded (data) RAM. Well, maybe cost is the most important issue.
Board features and standards. Of course we want a cheap development board with a PCI bus-master interface, the FPGA device of our choice, a load of fast RAM, programmable clocks and expansion sockets and connectors. However, we should also consider openness (documentation), and that we may be setting standards for expansion connectors (expect designs for add-on hardware to pop into existence).

Design verification, testing, and reliability. Even if we restrict ourselves to a 'safe' design style (e.g. single-phase synchronous), if a design runs on your FPGA at a certain clock speed, how do you know how fast it will run on someone else's? Are we going to distribute test and characterization routines with every design, will there be standard device characterization support in the tools, or will we distribute device description files and use conservative designs? How can we reliably port designs to new devices? This choice can be quite important for open-design circuits. Not setting standards here will probably invite chaos. And what about power dissipation at high clock rates on different devices?
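
As one possible shape for the device-description-file option: each design carries a technology-independent critical-path estimate, and a tool derates it against a per-device file. Everything in this Python sketch (the fields, the delay numbers, the safety margin) is invented for illustration.

    # Hypothetical sketch of conservative clock-rate checking against per-device
    # description files. The fields, delays and margin below are all invented.

    DEVICE_DB = {
        # device: (logic block delay ns, routing delay per hop ns, safety margin)
        "fpga-a": (2.5, 1.0, 0.85),
        "fpga-b": (4.0, 1.8, 0.85),
    }

    def max_clock_mhz(device, logic_levels, routing_hops):
        """Conservative clock-rate bound for a design's critical path."""
        t_block, t_route, margin = DEVICE_DB[device]
        t_crit = logic_levels * t_block + routing_hops * t_route  # path delay, ns
        return 1000.0 / t_crit * margin  # ns -> MHz, derated for safety

    # The same design (6 logic levels, 10 routing hops) on two devices:
    for dev in DEVICE_DB:
        print(dev, round(max_clock_mhz(dev, 6, 10), 1), "MHz")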

Device and design description formats, portability and scalability. To what extent do we want (tool and OS support for) portability across devices? What FPGA and board-level features can we fit in a standard target description for the tools? We could distribute designs in source form (HDLs, schematics), in a retargetable intermediate format, or as 'binaries' (configuration files) tuned for specific devices. Mixed-format design descriptions can be very convenient too (an HDL description with some hard macros for critical pieces of logic). Maybe it is not so hard to extract any tuned device-specific design and retarget it automatically to any device with at least the capacity of the original one (possibly with some constraints on design style).
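
For illustration, here is a toy Python version of the retargetable-intermediate-format idea: a device-independent summary of a design's resource needs, checked against hypothetical target descriptions. The formats, names and numbers are all made up; a real format would carry the actual netlist, not just totals.

    # Toy retargetable intermediate format: a device-independent resource summary
    # checked against target descriptions. A real format would carry the netlist.

    design = {
        "name": "uart-lite",
        "cells": {"lut4": 420, "register": 310, "ram-bit": 2048},
        "io": 12,
    }

    targets = {
        "small-fpga": {"lut4": 400,  "register": 400,  "ram-bit": 0,    "io": 40},
        "big-fpga":   {"lut4": 4000, "register": 4000, "ram-bit": 8192, "io": 120},
    }

    def fits(design, target):
        """True if every resource the design needs exists on the target device."""
        if design["io"] > target["io"]:
            return False
        return all(target.get(cell, 0) >= n for cell, n in design["cells"].items())

    for name, target in targets.items():
        print(name, "->", "ok" if fits(design, target) else "does not fit")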

Going further, to achieve maximum device-independence, we may want to be able to run designs that exceed the logic capacity of the device at hand. This may be possible by swapping parts of a design in and out of the device, transferring state as necessary.
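
A minimal sketch of that swapping idea, with plain Python functions standing in for partial configurations: the device holds one partition of the design at a time, and register state is carried across swaps in host memory.

    # Sketch of running a design that exceeds device capacity by swapping:
    # the device holds one partition at a time, and register state is carried
    # across swaps. Plain functions stand in for partial FPGA configurations.

    def partition_a(state):
        state["sum"] = state["x"] + state["y"]      # first half of the logic
        return state

    def partition_b(state):
        state["out"] = state["sum"] * state["sum"]  # second half, needs 'sum'
        return state

    def run_swapped(partitions, state):
        for load in partitions:   # 'load' stands in for device reconfiguration
            state = load(state)   # registers live in host memory between swaps
        return state

    print(run_swapped([partition_a, partition_b], {"x": 3, "y": 4}))
    # -> {'x': 3, 'y': 4, 'sum': 7, 'out': 49}

The hard parts the sketch glosses over are partitioning a netlist well and making reconfiguration fast enough to be worthwhile.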

As you may have noticed, these are just some random thoughts on the issue. However, I think it is very important to consider these issues beforehand, when setting up tooling for open-design circuits. A bad choice now may result in massive loss of effort later on.

Integration in OS and software. The issues are similar to those of the item above. However, support for (transparent) hardware acceleration of software may introduce additional constraints and requirements on the design software. Something to keep in mind when making choices.

Patents. There are too many potential issues to list here, and I don’t have the answers. Anyone?

Licensing. Same as above. Could the existing open software licenses be applied to open-design circuits?

Design ideas

Here are some random cool design ideas for open-design circuits. More suggestions are welcome!

Fast network router. Think about what low-cost networking muscle you would have with a Linux box and a fast hardware router… (Note that any decent FPGA board will have fast SRAM on board, which can be used for routing tables; a software model of such a table lookup follows this list.)
Hardware-accelerated GIMP (GNU Image Manipulation Program) image processing.
A foobar interface. Ever wish you could talk to that neat piece of hardware with its foobar interface? You could do it with minimal homebrew hardware (a cable, maybe some buffers), and do the real work with the open-design kit. Once several interface designs are implemented, your FPGA board could become quite a multi-function interface!
Faster and better compression than ever. I sure wouldn’t mind hardware-assisted bzip2!
Tackle DES challenges etc. faster than a room full of the fastest workstations. And of course you can link together over the net to join forces…
Fast hardware-encrypted/compressed file systems.
Processors (as general-purpose core, architecture experiment, for hack value, …). Neat idea: think of what you could do with your own processor design on a board with a PCI bus master interface, the Linux kernel, and portable compilers like gcc/egcs. How would you like to reduce your current CPU to an I/O processor?
Signal processing functions like modems, speech recognition and synthesis, music synthesis, real-time MPEG encoding/decoding.
Automatic transformation of inner loops of suitable (existing?) software to run on the FPGA hardware.
Evolutionary optimization in hardware, or ultra-fast cellular automata, or particle simulations, or neural networks, or …
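
As promised in the router item above, here is a software model of the table lookup that the FPGA version would implement against its SRAM routing tables: longest-prefix matching. The routes and port names are of course hypothetical.

    # Software model of a longest-prefix-match route lookup; a hardware router
    # would keep this table in on-board SRAM. Routes and ports are hypothetical.

    import ipaddress

    ROUTES = [
        (ipaddress.ip_network("10.0.0.0/8"), "if0"),
        (ipaddress.ip_network("10.1.0.0/16"), "if1"),
        (ipaddress.ip_network("0.0.0.0/0"), "default"),
    ]

    def lookup(destination):
        """Return the output port of the most specific matching prefix."""
        addr = ipaddress.ip_address(destination)
        best = max(
            (net for net, _ in ROUTES if addr in net),
            key=lambda net: net.prefixlen,
        )
        return next(port for net, port in ROUTES if net == best)

    print(lookup("10.1.2.3"))   # -> if1 (the /16 wins over the /8)
    print(lookup("192.0.2.1"))  # -> default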

Richard Stallman – On “Free Hardware”


A number of people have asked the GNU Project if we would like to branch out from free software into free hardware designs, and expressed their interest in working on them. Some people have even suggested a project to make free chip designs.

To understand this issue clearly, recall that “free software” is a matter of freedom, not price; broadly speaking, it means that users are free to copy and modify the software. So if we try to apply the same concept to hardware, “free hardware” means hardware that users are free to copy and modify; a “free hardware design” means a design that users are free to copy, modify, and convert into hardware.

Free software is often available for zero price, since it often costs you nothing to make your own copy. Thus the tendency to confuse "free" with "gratis". For hardware, the difference between "free" and "gratis" is more clear-cut; you can't download hardware through the net, and we don't have automatic copiers for hardware. (Maybe nanotechnology will provide that capability.) So you must expect that making a fresh copy of some hardware will cost you, even if the hardware or design is free. The parts will cost money, and only a very good friend is likely to make circuit boards or solder wires and chips for you as a favor.

Because copying hardware is so hard, the question of whether we're allowed to do it is not vitally important. I see no social imperative for free hardware designs like the imperative for free software. Freedom to copy software is an important right because it is easy now: any computer user can do it. Freedom to copy hardware is not as important, because copying hardware is hard to do. Present-day chip and board fabrication technology resembles the printing press. Copying hardware is as difficult as copying books was in the age of the printing press, or more so. So the ethical issue of copying hardware is more like the ethical issue of copying books 50 years ago, than like the issue of copying software today.

However, a number of hardware enthusiasts are interested in developing free hardware designs, either because they have fun designing hardware, or because they want to customize. If you want to work on this, it is a fine thing to do. The GNU volunteer coordinators can put you in touch with other people who share this interest. If organizations are formed for this purpose, the GNU Project will refer interested people to them.

People often ask about the possibility of using the GNU GPL or some other kind of copyleft for hardware designs.

Firmware, such as programs for programmable logic devices or microcoded machines, is software, and can be copylefted like any other software. For actual circuits, though, the matter is more complex.

Circuits cannot be copylefted because they cannot be copyrighted. Definitions of circuits written in an HDL (hardware description language) can be copylefted, but the copyleft covers only the expression of the definition, not the circuit itself. Likewise, a drawing or layout of a circuit can be copylefted, but this covers only the drawing or layout, not the circuit itself. What this means is that anyone can legally draw the same circuit topology in a different-looking way, or write a different HDL definition which produces the same circuit. Thus, the strength of copyleft when applied to circuits is limited. However, copylefting HDL definitions and printed circuit layouts may do some good nonetheless.

It is probably not possible to use patents for this purpose either. Patents do not work like copyrights, and they are very expensive to obtain.

Whether or not a hardware device’s internal design is free, it is absolutely vital for its interface specifications to be free. We can’t write free software to run the hardware without knowing how to operate it. (Selling a piece of hardware, and refusing to tell the customer how to use it, strikes me as unconscionable.) But that is another issue.

Copyright 1999 Richard Stallman
Verbatim copying and redistribution of this entire article is permitted provided this notice is preserved.

Richard Stallman is the founder of the Free Software Foundation, the author of the GNU General Public License (GPL), and the original developer of such notable software as gcc and Emacs.

Code-morphing: Fresh as a DAISY

There's more than one way to guarantee software compatibility between X86-based software and very long instruction word processors.
Transmeta — which made its code-morphing software the centerpiece of its low-power chips — has developed one approach to ensuring compatibility. IBM Research, with its Dynamically Architected Instruction Set from Yorktown (DAISY) translator, is building another.

Last month, the very long instruction word (VLIW) project at IBM's T.J. Watson Research Center released DAISY into open source, under the IBM (NYSE: IBM) open-source license. The DAISY dynamic compiler work is an offshoot of IBM's VLIW initiative, which began in 1986.

Parallelism + power = performance

For its part, Transmeta has said it plans to release an updated version of its code-morphing software in the first quarter of 2001.

Transmeta’s code-morphing software is designed to provide software compatibility between existing Intel X86-based software applications and its own Crusoe instruction set, which provides for parallel processing and, therefore, higher performance.

Transmeta claims that the code-morphing software “continuously learns about and re-optimizes software applications a user is running to improve power usage and performance.”

By providing that compatibility in software, rather than hardware, Transmeta says Crusoe can rely on a simpler hardware design that has been optimized for lower power consumption.

Transmeta has trademarked the term “code-morphing.”

Two paths, one goal

IBM Research's DAISY also is aimed at providing compatibility between X86 software and IBM's VLIW processors. But, according to IBM, DAISY also will provide compatibility between PowerPC, S/390, IBM's Java Virtual Machine and VLIW, and "other novel instruction-level parallelism (ILP) architectures."

Calls to both IBM and Transmeta, requesting comment on how their VLIW-translation strategies differ, were not returned prior to publication.

But one commentator, posting on the open-source enthusiast site Slashdot.org, attempted to explain the differences this way: “According to their white paper, Transmeta uses dynamic binary translation to convert x86 code into code for Transmeta’s internal architecture. This is similar in concept to the current version of DAISY which converts PowerPC code into code for an underlying DAISY VLIW machine.” The poster, Scott Dier, continued: “DAISY was developed at IBM independently of Transmeta. The DAISY research project focuses less on low power and more on achieving instruction level parallelism in a server environment and on convergence of different architectures on a common microprocessor core. A more detailed comparison of the DAISY and Transmeta approaches will be possible after Transmeta publishes their techniques in more detail.”
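
Neither company's internals are public in detail, so the following Python sketch shows only the shared idea of dynamic binary translation, not either product: guest instructions are translated into host code the first time they execute, cached by guest address, and reused afterwards. The three-instruction 'guest ISA' is invented.

    # Toy dynamic binary translation, the idea DAISY and code morphing share:
    # translate guest instructions into host code on first execution, cache the
    # result keyed by guest address, and reuse it on later runs. The guest ISA
    # is invented; real translators handle basic blocks and re-optimize hot code.

    import operator

    HOST_OPS = {"ADD": operator.add, "SUB": operator.sub, "MUL": operator.mul}

    guest_program = [("ADD", 2, 3), ("MUL", 4, 5), ("ADD", 2, 3)]  # (op, a, b)

    translation_cache = {}  # guest PC -> translated host code (a closure here)

    def translate(instr):
        op, a, b = instr
        host_fn = HOST_OPS[op]
        return lambda: host_fn(a, b)  # the 'compiled' host version

    for pc, instr in enumerate(guest_program):
        if pc not in translation_cache:       # translate only on first execution
            translation_cache[pc] = translate(instr)
        print("pc", pc, "->", translation_cache[pc]())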