
The ZX Spectrum was sold with either 16K or 48K of RAM, the extra 32K being provided by an optional memory bank that was achieved in a characteristically (for Sinclair) cleverly frugal way: with half-bad 64kbit DRAM chips, i.e. chips with a defect in one half, leaving the other half usable, so they could be sold at a discount.

This makes so much sense that I'm surprised it didn't become a recurring pattern in later years, but it doesn't seem to have. Tandy did the same thing with the CoCo, but that was around the same time.

In fact, it may be possible to pinpoint the timing more exactly. Acorn designed the Electron as a follow-up to the BBC Micro, just late enough that it wanted to use 64kbit RAM chips instead of 16kbit. Certainly it would have been nice to have 64K in the machine, but the architecture was designed for 32K, existing software wasn't written to take advantage of more, and a bank-switching scheme would have had to be designed. Above all, the Electron was supposed to be cheap enough to compete with the Spectrum, so Acorn ended up using just four 64K×1 chips to provide 32K. That made the memory only four bits wide, so every byte access took two cycles, which left the machine much slower than it could have been.

So this was an ideal case for half-bad chips. Why didn't Acorn use them? Maybe they just didn't think of it in time. But the Spectrum was designed starting in 1981 for release in 1982, while the Electron was designed starting in 1982 for release in 1983. (It was a financial disaster because the Ferranti ULAs still had trouble working reliably at 16 MHz, so volume production didn't arrive until 1984, by which time the market peak had passed and Acorn was left with warehouses full of unsold machines; but that's another matter.)

Did half-bad RAM chips disappear between 81 and 82? Was yield really that close to perfect already? And why didn't they recur in the 256kbit and subsequent generations? Or was something else going on that I'm not taking into account?

  • 11
    At some point memory chip manufacturers realized they could add spare memory cells and route accesses from bad cells to the spares, increasing the yield of sellable chips even when they have some defects. I don't actually know whether this was done in that era, or with DRAM cells, though, so I'm just offering it as a possible explanation.
    – Justme
    Commented Feb 10, 2021 at 17:02
  • 3
    I do think that question has already been asked and answered.
    – Raffzahn
    Commented Feb 10, 2021 at 17:13
  • 2
    I do remember a question that covered exactly this area, something along the lines of why more half-working chips weren't sold, and the answer was that it's simply a game of volume and statistics: the window for making such chips closes fast, and there isn't enough time to design a worthwhile use case around them. And yes, I think it was one of your questions.
    – Raffzahn
    Commented Feb 10, 2021 at 17:26
  • 2
    @rwallace Well, I guess if this is about getting answers, focusing on title wording might be kind of frivolous, don't you think so? Knowledge is about content.
    – Raffzahn
    Commented Feb 10, 2021 at 18:15
  • 2
    Given that Acorn reserved the $8000–$DFFF region for paged memory, and the Electron already takes liberties there (e.g. having BASIC and the keyboard appear twice, and having the keyboard appear at all), I don't think the memory map was an issue. Acorn could easily have added 16 KB or 32 KB of such sideways RAM. Or, I guess, invented shadow RAM a year or two early. This has been a digressive aside; Raffzahn's answer below is, as usual, the proper answer.
    – Tommy
    Commented Feb 10, 2021 at 19:01

4 Answers

19

Sinclair's use was a unique case in a very specific situation that never occurred again.

Production side:

  • There were many more manufacturers of chips back then.
  • The ones that wanted to compete at the forefront used RAMs as gateways to technological development (*1).
  • Anyone not ramping their yield quickly into the upper 90-percent range would lose money.
  • The first mass-produced 64 Ki chips came in 1978/79.
  • The last manufacturer to start production was East Germany, in 1981/82.
  • Only by-1 chips are produced during the ramp-up phase.

System side:

  • 'Half-size' chips only really make sense for ready-built machines (*2).
  • 8-bit machines are the only ones with a use case for 32 Ki by 1.
  • 16-bit systems are designed for memory sizes beyond 64 KiB, so whatever is the maximum (at the time) in a by-1 configuration becomes the design goal.
  • Mixed-RAM-type expansion is a pain in the ass, and no designer would go that way (*3).

Management side:

Sinclair computer products were never about forward thinking, but about hitting the lowest price point possible - and unlike Apple's, they were actually sold at that price point. So every fraction of a penny on the actual product counted, no matter whether that part would still be available a year or two from now.

There had been no 8 Ki by 1 parts before, nor any notable amount of 128 Ki by 1 later. The cost-cutting niche in the 256 Ki days was instead filled by the 64 Ki by 4 configuration, as shown by several later 8-bit computers and cost-reduction redesigns (Amstrad, Commodore).

All of that combined into a unique use case for the Spectrum, a singular point in time... like a few others during the punk days of computing.


Did half-bad RAM chips disappear between 81 and 82?

They only appeared for a very short time, around '80/'81.

Was yield really that close to perfect already?

Any design whose yield doesn't soon climb past 95% isn't worth making. Sure, there may be special (usually military) cases where it doesn't matter if 90% are defective, as long as enough are produced to be sold at extraordinarily high prices. Or competitive reasons, as with some Intel processor designs made purely to keep the 'crown' of being fastest, even though not enough can be made to satisfy demand.

And why didn't they recur in the 256kbit and subsequent generations?

Because the 64/32 Ki episode had already shown that it isn't a sustainable business case.


*1 - (D)RAMs have simple, symmetric structures, so circuit design is rather easy, letting manufacturers focus on process handling.

*2 - I.e. closed boxes, not prepared for internal upgrade, like most home computers.

*3 - Wozniak's Apple II design is an outlier here, as it was made at a time when only 4 Ki chips were available in good numbers and 16 Ki was just coming into the mainstream.

  • 2
    I believe 64kbit DRAM started to use redundancy - spare rows/columns and fuses to select them. This is part of why 64kbit DRAM chips replaced 16kbit ones by being cheaper per bit. The effective yields were much better with some redundancy.
    – Brian
    Commented Feb 10, 2021 at 20:43
  • 1
    @riffraff169 No, the 386SX was a 16-bit-bus version of the 386DX - a different chip. What you may be thinking of is the 486SX, except that only a small number (if any) of those sold had a defective FPU - in nearly all of them it was disabled on purpose for marketing reasons. It enabled Intel to have a low-price device to counter other manufacturers' 486-compatible offerings. Doable at a time when an FPU was still unneeded by most users.
    – Raffzahn
    Commented Feb 10, 2021 at 22:51
  • 2
    @Raffzahn -- Sinclair products in general were about hitting the lowest price point possible. My own personal nemesis was the IC-12.
    – dave
    Commented Feb 10, 2021 at 23:15
  • 1
    I've heard that it's still common enough to produce RAM chips when setting up a new process. Those are not intended to go to market, but it's dead easy to test each bit of RAM on an entire wafer full of chips. And when some bits fail, figuring out where in the process it failed is also easier.
    – MSalters
    Commented Feb 11, 2021 at 16:42
  • 1
    If you sold 8 GB sticks with a few bad bits for quite a bit less, I'd buy them today, provided you could tell me where the bad bits are. I just wouldn't use the surrounding 4K blocks. (Wait, what? There's a BadRAM kernel patch -- if the bad bits aren't in the bottom 64 MB, the kernel can simply avoid using them via a boot parameter; see the sketch after these comments.)
    – Joshua
    Commented Feb 11, 2021 at 23:05
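
A minimal sketch of the boot-parameter idea mentioned in the comment above, using the mainline Linux kernel's memmap= option rather than the BadRAM patch's own syntax; the address here is made up purely for illustration:

    # Reserve 4 KiB of physical memory at the (made-up) address 0x18690000
    # so the kernel never allocates it:
    memmap=4K$0x18690000
    # Some bootloaders, e.g. GRUB 2, need the '$' escaped: memmap=4K\$0x18690000
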
7

The Memotech MTX (1984-5 or so) has a wire link on its circuit board that allows you to use half-good chips. I don't think I've seen any actually being used, but it seems the designer thought it at least possible that they might want to.

4

tl;dr: I suspect the introduction of heavily subsidised Japanese memory fabs was a key factor; once they started to flood the market with their RAM, it simply wasn't worth the extra risk/hassle of buying "half-bad" RAM chips when Japan was dumping out perfectly functional RAM at below cost.

https://apnews.com/article/905e90eaf80859bc4c50bb66ab974d1e

Beyond this, in some ways, Sinclair was the Nintendo of their time (and possibly even helped to inspire Nintendo's "lateral thinking with withered technology" philosophy): they took older and cheaper technology and then used it in "innovative" ways.

By which, I mean they pushed said technology to the absolute limits, and factored a high failure/return rate into their business plans.

E.g. according to Wikipedia, the Sinclair Executive calculator "was around half the price of comparable calculators, but still twice the average weekly wage". https://en.wikipedia.org/wiki/Sinclair_Executive

Given the choice between spending two weeks' or four weeks' worth of wages, most people were happy to wait a few seconds longer for their calculator to spit out an answer, even if return rates were high. The gamble may have worked for his calculators, but the high return rate for the later Black Watch pretty much destroyed Sinclair Radionics, which led to Sir Clive effectively walking away and spinning up a new company that eventually morphed into Sinclair Research.

https://rk.nvg.ntnu.no/sinclair/other/blackwatch.htm

And the same initially applied to the ZX Spectrum: it was far cheaper than its rivals, while being just barely Good Enough.

Unfortunately for Sinclair, the world changed. Moore's law marched on and reduced the price delta. Japan flooded the market with cheap RAM. And the US price war between Commodore/Tandy/Apple/etc drove prices down even further.

Put simply, the market matured and the cost of rival hardware dropped to the point where the risk/benefit of using Sinclair's technology wasn't worth it.

So, the Microdrive was dropped. The QL was pretty much a commercial failure at launch, thanks in equal measure to the cost-cutting measures which crippled the hardware and the bugs which meant early models had to have a ROM dongle attached (shades of the "dead cockroach" hack for early models of the ZX Spectrum!).

And the less said about the C5, the better. Though this did lead to the Spectrum IP being sold to Amstrad, who moved production to Hong Kong and churned out redesigned models which were both cheaper and more reliable than their older brethren.

The era of shonky hardware which was just barely Good Enough was over. Though at least the QL did give a first taste of computing to a certain Linus Torvalds...

However, if memory serves, Sir Clive did continue to look into similar stuff after Sinclair Research. E.g. Anamartic Ltd experimented with wafer-scale integration (i.e. putting all the integrated circuits for a system onto a single wafer, with sufficient redundancy that you can just "switch off" the bits of the wafer which don't work).

https://en.wikipedia.org/wiki/Wafer-scale_integration

But again, that kind of manufacturing is complex; it's simpler (and therefore easier to drive down costs) to use traditional fab processes to mass-produce standalone ICs. Though I suppose you could argue there are some parallels with modern CPU production, as AMD/Intel/etc. have been known to sell CPUs which are underclocked or have cores disabled as a result of production failures...

  • All very interesting, though only the first paragraph seems to address the question (and still doesn't put an actual date on it). Commented Feb 11, 2021 at 17:37
  • 1
    TBH, I'm not sure bad RAM chips ever stopped being available. It's just that over time, the use case for them dwindled. The key thing about Sinclair is that his "high-risk" approach worked well in the 60s/70s because he had no direct competition: you either bought his stuff, or you paid at least double for something else. By 1981/2, he did have competition at the same price point, and buyers started to look at things other than cost. Such as reliability. Then too, everyone who followed after Sinclair Radionics had a pretty graphic example of what happens if your quality control fails!
    – Juice
    Commented Feb 12, 2021 at 14:44
  • 2
    A second point is that the financial benefits of using bad chips decreased over time. E.g. (a contrived example): in 1978, a working IC might cost 30p, while a "bad" one cost 1p. It then cost 5p to get someone to certify the bad IC as fit for purpose. Total saving: 24p! Come 1982, and that working IC only costs 5p, thanks to the advances/price wars outlined above. So now the cost of certifying a bad IC is more than the cost of buying a new one - and it also carries a higher risk of failing quality control or triggering a return request...
    – Juice
    Commented Feb 12, 2021 at 14:53
3

A slightly different situation, but you can buy a brand new MacBook Air right now where only seven out of eight GPU cores are working, and save $50. And all new Macs with an M1 processor are sold with either 8 GB or 16 GB of RAM; they are obviously tested, and there is no reason why a 16 GB part that fails testing wouldn't be sold as a working 8 GB part.

So it seems that Apple tests all its GPUs, and the ones that fail go into this one specific model.

But at the chip level, there is just no market for this. You'd have to sell a half-broken 8 GB chip for a lot less than a working 4 GB chip if it were a separate product, and there wouldn't be enough demand. It only makes sense if the customer doesn't know the difference.

  • 1
    In the world of NAND flash, the normal state of affairs is that devices are specified as having certain regions guaranteed to be good, at most a specified number of regions marked bad, and all regions not marked bad guaranteed to be good - but one should not generally expect that every block on a device will be good. One might luck out and get a defect-free device, but typical devices are likely to contain bad blocks.
    – supercat
    Commented Feb 12, 2021 at 22:04

