
The Intel 8080 is a classic microprocessor released in 1974. It was fabricated in an enhancement-mode NMOS process and shows several characteristics unique to that process, such as the requirement for a two-phase clock and three power rails: −5 V, +5 V, and +12 V.

Wikipedia's description of the power pins says:

Pin 2: GND (VSS) - Ground

Pin 11: −5 V (VBB) - The −5 V power supply. This must be the first power source connected and the last disconnected, otherwise the processor will be damaged.

Pin 20: +5 V (VCC) - The +5 V power supply.

Pin 28: +12 V (VDD) - The +12 V power supply. This must be the last connected and first disconnected power source.

I cross-referenced the original datasheet, but the information seems a bit contradictory.

Absolute Maximum:

VCC (+5 V), VDD (+12 V) and VSS (GND) with respect to VBB (−5 V): −0.3 V to +20 V.
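Spelling out the arithmetic from that rating: in normal operation VDD − VBB = +12 V − (−5 V) = +17 V, inside the +20 V limit; and if VBB floats near 0 V, VDD − VBB is only +12 V, even further inside it.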

So even if VBB sits at 0 V while it's unconnected, nothing exceeds the absolute maximum. Is the original claim on Wikipedia, that an Intel 8080 chip will be destroyed if +12 V is connected before −5 V, correct?

If it is correct, what is the exact failure mechanism? Why would the chip be destroyed if +12 V is applied first without −5 V? I suspect it must have something to do with the enhancement-mode NMOS process, but I don't know how semiconductors work.

Could you explain how the power supply is implemented internally in the Intel 8080? Did the same problem exist in other chips of that era built on a similar process?

Also, if I need to design a power supply for the Intel 8080, say using three voltage regulators, how do I prevent damage to the chip if the +12 V rail ramps up before the −5 V rail?

Comments:

  • Back in the day we just ignored what Intel recommended about the power supply sequencing. See the IMSAI MPU-A schematic for how much the young and stupid could get away with. – Dan1138, Sep 2, 2019 at 2:15
  • If I ever saw an Intel application note on this, it was over 40 years ago. As you can see, the designers of the day did not do it. There is no reasonable situation imaginable to use an Intel 8080A in a new design. Be more forthcoming about your application. Crank your search-fu to eleven; Google is your friend. – Dan1138, Sep 2, 2019 at 2:38
  • @Dan1138 The intention is to understand how it worked, not to use it in a new design. Thanks for the tip anyway; it seems a transient violation of the proper sequence didn't turn out to be a problem in practice. I'll try digging through Bitsavers and archive.org, hopefully find some related materials, answer this myself, and update the citation on Wikipedia. (Sep 2, 2019 at 2:44)
  • At the time I used the Intel Intellec Microcomputer Development Systems (MDS), based on boards built to the Intel Multibus card and bus specifications. The CPU cards did not enforce power-start sequencing for the 8080A chip, so the bus specification must be what controlled the power-on sequence. I know for certain that the home-built computer kits of the day (Altair, IMSAI, etc.) did not have main power bus sequencing. – Dan1138, Sep 2, 2019 at 3:16
  • Mind that "not connected" is definitely not the same as "0 V". In any integrated circuit you want the bulk to be tied to a low-impedance source to avoid latch-up, which can absolutely destroy your chip! This early design especially, where the bulk is seemingly connected to a different voltage source than the source/drain, is prone to fail. You most likely won't find anything like this in modern bulk designs (FDSOI doesn't latch up). – michi7x7, Sep 3, 2019 at 6:15

3 Answers

Answer 1 (score 40)

I don't have a complete answer for you, but the 8080 was one of Intel's first chips to use an NMOS process rather than the PMOS process of the 4004, 4040, and 8008 chips. In NMOS, the substrate must be the most negative point in the entire circuit, in order to make sure that the isolating junctions of other circuit elements are properly reverse-biased.

So, I suspect that the -5V supply, among other things, is tied directly to the substrate, and if the other voltages are supplied without this bias present, there are all kinds of unintended conduction paths through the chip, many of which could lead to latch-up and self-destruction.

To answer your last question, if your power supply doesn't have the correct sequencing by design, then you need a separate sequencer — a circuit that itself requires the -5V supply to be present before it allows the other voltages to reach the chip.
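As an illustration only, here is a minimal sketch of such a sequencer done in firmware, assuming a small supervisor microcontroller whose GPIOs drive the enable pins of three modern regulators and whose ADC senses each rail; all the helper names (gpio_set, adc_read_mv, the pin and channel constants) are hypothetical stand-ins, not a real API:

    /* Hypothetical sequencer firmware: -5 V (VBB) comes up first and goes
     * down last; +12 V (VDD) comes up last and goes down first. */
    #include <stdbool.h>
    #include <stdint.h>

    /* Platform stubs -- assumed to be provided elsewhere, not a real API. */
    extern void     gpio_set(int pin, bool on);  /* drive a regulator enable pin */
    extern uint16_t adc_read_mv(int channel);    /* rail magnitude, in millivolts */

    enum { EN_N5V, EN_P5V, EN_P12V };            /* enable outputs */
    enum { ADC_N5V, ADC_P5V, ADC_P12V };         /* sense inputs (|V| scaled to mV) */

    void power_up(void)
    {
        gpio_set(EN_N5V, true);                  /* -5 V (VBB) first */
        while (adc_read_mv(ADC_N5V) < 4750) ;    /* wait for ~95% of |-5 V| */

        gpio_set(EN_P5V, true);                  /* then +5 V (VCC) */
        while (adc_read_mv(ADC_P5V) < 4750) ;

        gpio_set(EN_P12V, true);                 /* +12 V (VDD) strictly last */
    }

    void power_down(void)
    {
        gpio_set(EN_P12V, false);                /* +12 V off first */
        while (adc_read_mv(ADC_P12V) > 1000) ;   /* let the rail bleed down */

        gpio_set(EN_P5V, false);
        while (adc_read_mv(ADC_P5V) > 1000) ;

        gpio_set(EN_N5V, false);                 /* -5 V (VBB) released last */
    }

The power-down waits only terminate if each rail actually decays, which is another reason each output wants a bleeder resistor.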


To echo some of the comments on your question, I don't recall any special care being taken in the actual 8080-based systems of the day.

However, such systems were usually built with four power supplies — or more precisely, two pairs of power supplies: ±5V and ±12V (-12V would have been used in any serial interfaces), each driven from a transformer winding and a bridge rectifier. It would have been natural for the 5V supplies to come up before the 12V supplies — and of those two, -5V would be quicker than +5V, being far less heavily loaded.

So (again I'm guessing), the power supplies either "just worked" in terms of sequencing, or the danger was not really as severe as the datasheet writers would have you believe.

Comments:

  • I didn't see your answer (Firefox didn't scroll to it) and was already writing a comment about the substrate. I'm sure you are correct about why the −5 V supply had to come up as the first low-impedance voltage. PMOS was used earlier because positive charges in the oxide decrease Vth, and NMOS was thus a disaster due to impurity problems. So they were finally learning how to do NMOS as cleanliness finally reached new thresholds. (This was just prior to CMOS successes.) Research showed that the biggest problem was sodium contamination, though potassium and lithium were lesser contributing issues. +1! – jonk, Sep 2, 2019 at 3:23
  • "I suspect that the -5V supply, among other things, is tied directly to the substrate." I think you're right. A strong hint of this is the reference quoted by the OP, where the −5 V rail is labeled VBB; the "B" most probably stands for "body", i.e. the substrate of the NMOS transistors. (Sep 3, 2019 at 13:47)
Answer 2 (score 13)

In the process used for the 8080, +12 V provided the primary voltage for the logic, +5 V supplied the I/O pin logic (which was intended to be TTL-compatible, thus limited to 0–5 V signals), and −5 V was connected to the substrate. The latter voltage ensured that all of the active devices on the IC remained isolated, by maintaining a reverse bias on the PN junctions that separated them from the common silicon substrate.

If any I/O signal went "below" the substrate voltage, it could potentially drive the isolating junction into an SCR-like latch-up condition, with the resulting continuous high current potentially destroying the device. The required sequence for turning the three power supply voltages on and off was intended to minimize this risk.

As a previous answer correctly pointed out, in practice system designers ran fast and loose with this requirement. Basically, the most important things were to power the rest of the system logic from the same +5 V supply that fed the CPU, so that at minimum the voltages applied to CPU input pins would never be greater than the CPU "+5" supply or lower than the CPU "−5" supply, and to ensure that the "+12" supply was equal to or greater than the "+5" supply at all times. A Schottky power diode was sometimes bridged between those two rails (anode on +5, cathode on +12) to maintain that relationship, e.g. during power-down, since it clamps +12 to no more than a diode drop below +5.

Typically, the electrolytic filter cap values for the three supplies were chosen such that −5 and +12 ramped up fairly quickly, and +5 lagged a bit behind.

MOS process refinements allowed later IC designs to be powered solely by +5 V; where a negative substrate voltage was still needed, it was generated on-chip by a small charge-pump circuit (e.g. 2516 EPROM vs. 2508, 8085 CPU vs. 8080).

Answer 3 (score 11)

"If I need to design a power supply for the Intel 8080, say using three voltage regulators, how do I prevent damage to the chip if the +12 V rail ramps up before the −5 V rail?"

With a little care you should be able to avoid that situation. The CPU draws very little current from the −5 V supply, so with an oversized filter capacitor that rail will naturally come up fast and go down slowly.

The +12 V rail can be made to rise more slowly by giving it a lower unregulated input voltage (less 'headroom') and lower capacitance relative to current draw so that it drops faster. A bleeder resistor will ensure that the voltage drops fast enough even with light loading.
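To put rough numbers on the bleeder (values invented for illustration, not taken from the Altair): a filter capacitor C discharging through an effective resistance R falls from V0 to V1 in t = R·C·ln(V0/V1). If a +12 V regulator is fed from about 16 V of unregulated DC on a 2200 µF capacitor, a 470 Ω bleeder gives R·C ≈ 1 s, so the input sags below a 14 V dropout threshold in roughly ln(16/14) ≈ 0.13 s even with no other load.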

I simulated the power supply of the Altair 8800. All supply voltages rose pretty much together within 4 ms of switch-on. At switch-off, the +12 V supply dropped first, followed by the +5 V supply and then the −5 V supply.

Here's the first mains cycle at switch-on:

[simulation plot: all supply rails rising during the first mains cycle]

And here's the switch-off after 60 mains cycles:

[simulation plot: +12 V, then +5 V, then −5 V collapsing after switch-off]

The Altair's −5 V circuit looks like this:

[schematic: the Altair 8800's −5 V supply, originally drawn in CircuitLab]

The combination of high unregulated DC voltage (relative to 5V), large filter capacitance and light loading gives a fast rise time and slow fall time.

The Altair's +12 V supply has a similar circuit, but 12 V is not much less than its ~16 V unregulated input, so the voltage drops below 12 V faster (also helped by the higher current draw from the +12 V supply).
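Using the same t = R·C·ln(V0/V1) relation as above: for equal R·C, falling from 16 V to 12 V takes a factor ln(16/12) ≈ 0.29, while falling from 16 V to 5 V takes ln(16/5) ≈ 1.16, so the +12 V rail runs out of headroom roughly four times sooner than a 5 V rail fed from the same unregulated voltage, before even counting its heavier loading.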

