3
\$\begingroup\$

More broadly, what is the practical maximum lifetime a modern silicon chip could be expected to have (assuming it operates at some temperature above 77 K and has stable power)?

For example, I expect diffusion of dopant materials to make the chip lose its properties over time. I believe diffusion rates are roughly exponential with temperature, so one might expect an \$e^{77\alpha}\$ as opposed to \$e^{273\alpha}\$, which is roughly \$e^{200\alpha}\$ times slower.
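
As a rough numeric sketch of that comparison (the \$\alpha\$ value and the 1 eV activation energy below are illustrative assumptions only; the Arrhenius form is included because that is the temperature dependence usually quoted for diffusion):

```python
import math

# Model 1: the simplified form used above, rate ~ exp(alpha * T).
alpha = 0.05                     # arbitrary illustrative coefficient, 1/K
simple_ratio = math.exp(alpha * 273) / math.exp(alpha * 77)   # = exp(196 * alpha)

# Model 2: the Arrhenius form usually quoted for diffusion, rate ~ exp(-Ea / (k * T)).
k_eV = 8.617e-5                  # Boltzmann constant in eV/K
Ea = 1.0                         # assumed activation energy in eV (illustrative only)
arrhenius_ratio = math.exp(-Ea / (k_eV * 273)) / math.exp(-Ea / (k_eV * 77))

print(f"simplified-model slowdown at 77 K: {simple_ratio:.3g}")
print(f"Arrhenius-model slowdown at 77 K:  {arrhenius_ratio:.3g}")
```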


To clarify the applications I have in mind, take the Clock of the Long Now, which is being designed to last over 10,000 years, but is made out of massive mechanical components. Another application would be sending probes on interstellar journeys at velocities practically achievable today: it would take about 80,000 years for the Voyager probe to reach Alpha Centauri. Radiation shielding isn't that much of a theoretical problem, since it's a matter of scaling up the shielding mass surrounding the probe (if you can launch and accelerate 1 probe, you can launch and accelerate 1000 shielding probe-masses at 1000x the cost).
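
A quick back-of-the-envelope check on that travel-time figure (the distance and speed below are approximate public values, used only as a sanity check):

```python
# Rough travel time for a Voyager-class probe to Alpha Centauri.
LIGHT_YEAR_KM = 9.461e12              # kilometres per light-year
distance_km = 4.37 * LIGHT_YEAR_KM    # Alpha Centauri is roughly 4.37 light-years away
speed_km_s = 17.0                     # Voyager 1 is leaving the solar system at ~17 km/s

travel_seconds = distance_km / speed_km_s
travel_years = travel_seconds / (3600 * 24 * 365.25)
print(f"travel time: ~{travel_years:,.0f} years")   # on the order of 80,000 years
```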

\$\endgroup\$
9
  • \$\begingroup\$ The problem here is that you need to define "operate": does it imply that if I reset my device 800 years from now, it will still work and execute the software I feed it then, or does it imply that over a span of 800 years of continuous operation there will not be a data error? \$\endgroup\$ Commented Jul 16, 2017 at 9:37
  • \$\begingroup\$ The rule of thumb is "double the lifetime for every 10 °C lower temperature", so your numbers seem off by a few orders of magnitude (a numeric sketch of this rule appears after these comments). \$\endgroup\$
    – Turbo J
    Commented Jul 16, 2017 at 9:40
  • \$\begingroup\$ With a heavy heart I rolled back the edit of @TurboJ – it was, afaik, a good edit, but it might have changed the central point of the question by correcting a core misunderstanding and thus depriving the OP of the chance to get an answer or comment on why the \$\alpha\$ is not in the exponent. Could you please explain why that's the case here, and then undo my rollback? \$\endgroup\$ Commented Jul 16, 2017 at 9:42
  • \$\begingroup\$ @MarcusMüller Good question. I think a good starting point is that, on the scale of millennia, the device shouldn't exceed the data error rate we customarily accept for commercial chips. Let's assume any soft errors that do occur (due to statistics, not hardware degradation) can be managed using some kind of redundancy, such as error-correction coding or a multiple-processor voting scheme. \$\endgroup\$
    – Real
    Commented Jul 16, 2017 at 9:51
  • 1
    \$\begingroup\$ @PaulUszak: because in the case of someone who actually knows they CAN'T last millennia due to aging effects, it is not opinion-based. ARM/Intel chips use technology as close to the bleeding edge as they can on almost all chips. This means the devices are smaller and age more quickly. Using very old, stable technology and designing for reliability over speed or area could make this possible. But current chips, no. Diffusion is not the only factor; a couple of other first-order effects are gate-oxide damage and electromigration. \$\endgroup\$
    – jbord39
    Commented Jul 16, 2017 at 15:53
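
For reference, a minimal sketch of what the "double the lifetime per 10 °C" rule of thumb from the comments implies between 273 K and 77 K (the baseline lifetime is an arbitrary placeholder):

```python
# "Lifetime doubles for every 10 degC lower temperature" rule of thumb,
# applied from 273 K (the question's reference point) down to 77 K.
t_warm_K = 273.0
t_cold_K = 77.0
doublings = (t_warm_K - t_cold_K) / 10.0     # 19.6 doublings
lifetime_factor = 2 ** doublings             # roughly 8e5

baseline_years = 20.0                        # placeholder lifetime at 273 K, purely illustrative
print(f"lifetime multiplier: ~{lifetime_factor:.2g}")
print(f"extrapolated lifetime: ~{baseline_years * lifetime_factor:.2g} years")
```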

2 Answers

2
\$\begingroup\$

I think the broader problem is that the chip is useless in isolation as it requires supporting components in order to make a useful circuit. Many of these components have a failure rate that exceeds that of silicon chips by many orders of magnitude.

In reliability engineering, the failure rate of a series (non-redundant) system is greater than that of its highest-failure-rate component. So focusing attention on what is arguably the most reliable component in a circuit, while interesting in its own right, is not germane to making hyper-long-life, functioning circuits for applications like those proposed in the question.
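
As a concrete illustration of that series-reliability point (the component failure rates below are made-up placeholders, not measured data):

```python
import math

# In a series (non-redundant) system every part must survive, so with constant
# failure rates the system failure rate is the sum of the component rates, and the
# system is always less reliable than its single worst component.
failure_rates_per_hour = {       # made-up placeholder values, for illustration only
    "silicon die":        1e-10,
    "solder joints":      5e-9,
    "ceramic capacitors": 2e-9,
    "connector":          1e-8,
}

lambda_system = sum(failure_rates_per_hour.values())
mission_hours = 10_000 * 365.25 * 24          # a 10,000-year mission

for name, lam in failure_rates_per_hour.items():
    print(f"{name:>18}: survival probability = {math.exp(-lam * mission_hours):.3f}")
print(f"{'whole system':>18}: survival probability = {math.exp(-lambda_system * mission_hours):.3f}")
```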

\$\endgroup\$
6
  • \$\begingroup\$ Well, I suppose it's theoretically possible to design a device in which everything from the power source to the processor is manufactured on the same silicon/semiconductor process, for example with photovoltaic cells or MEMS, in effect making the actual reliability of the device approach that of the semiconductor part itself. \$\endgroup\$
    – Tony K
    Commented Jul 16, 2017 at 12:43
  • \$\begingroup\$ @TonyK A solar cell, while made of silicon, is an example of a supporting component with a significantly higher failure rate. It would be very problematic to integrate everything and survive radiation in the space example. \$\endgroup\$
    – Glenn W9IQ
    Commented Jul 16, 2017 at 12:52
  • \$\begingroup\$ Well, ordinary transistors also tend not to fare well under the extreme conditions in space, especially as the process node shrinks, so this is not something isolated to "supporting components". \$\endgroup\$
    – Tony K
    Commented Jul 16, 2017 at 15:55
  • \$\begingroup\$ Ironically, older, larger process nodes likely perform better under such conditions than the more "modern" chips the OP referred to. \$\endgroup\$
    – Tony K
    Commented Jul 16, 2017 at 15:57
  • \$\begingroup\$ Could you cite components that have much greater fundamental reliability issues than the silicon chips themselves? I can't really picture any fundamental limitations from the likes of resistors, (ceramic) capacitors and large-scale transistors. \$\endgroup\$
    – Real
    Commented Jul 18, 2017 at 6:57
0
\$\begingroup\$

I expect the limiting factor of such systems, and not just electronic systems, to be the ball bearings in the cooling equipment.

For sufficiently long-lived systems, the attendants must have access to machines that can manufacture new ball bearings for the cooling fans.

Thus modern chips will be at the mercy of the cooling system's longevity.

\$\endgroup\$
