38
\$\begingroup\$

Transistors serve multiple purposes in an electrical circuit: they act as switches, amplify electronic signals, allow you to control current, etc.

However, I recently read about Moore's law, among other random internet articles, and learned that modern electronic devices have a huge number of transistors packed into them, in the range of millions, if not billions.

So why exactly would anyone need so many transistors anyway? If transistors work as switches and so on, why would we need such an absurdly large number of them in our modern electronic devices? Are we not able to make things more efficient so that we use far fewer transistors than we do currently?

\$\endgroup\$
12
  • 7
    \$\begingroup\$ I'd suggest going down to what your chip is made of. Adders, Multipliers, Multiplexers, Memory, More Memory... And think of the numbers of these things that need to be present there... \$\endgroup\$
    – Dzarda
    Commented Jul 1, 2014 at 9:14
  • 9
    \$\begingroup\$ Somewhat related (and self-promoting): Why does more transistors = more processing power? \$\endgroup\$
    – user15426
    Commented Jul 1, 2014 at 10:33
  • 1
    \$\begingroup\$ Also, the continuous use of transistors as replacements for most mechanical devices helped shape modern consumer electronics more than anything else. Imagine your phone clacking each time it turns the backlight on or off (whilst being the size and weight of a car). \$\endgroup\$
    – Mark
    Commented Jul 1, 2014 at 12:00
  • 7
    \$\begingroup\$ You ask why we cannot "make things more efficient" to use fewer transistors; you assume that we seek to minimise the number of transistors. But what if power efficiency is improved by adding more for control? Or more notably time efficiency in doing whatever computation? 'Efficiency' is no one thing. \$\endgroup\$
    – OJFord
    Commented Jul 1, 2014 at 18:19
  • 2
    \$\begingroup\$ It's not that we need that many transistors to build a CPU, but since we can make all those transistors, we might as well use them in ways that make the CPU faster. \$\endgroup\$ Commented Jul 2, 2014 at 11:32

12 Answers

48
\$\begingroup\$

Transistors are switches, yes, but switches are more than just for turning lights on and off.

Switches are grouped together into logic gates. Logic gates are grouped together into logic blocks. Logic blocks are grouped together into logic functions. Logic functions are grouped together into chips.

For example, a TTL NAND gate typically uses 2 transistors (NAND gates are considered one of the fundamental building blocks of logic, along with NOR):

[schematic: 2-transistor NAND gate - created using CircuitLab]

As the technology transitioned from TTL to CMOS (which is now the de facto standard) there was basically an instant doubling of transistors. For instance, the NAND gate went from 2 transistors to 4:

[schematic: 4-transistor CMOS NAND gate - created using CircuitLab]

A latch (such as an SR latch) can be made using 2 CMOS NAND gates, so 8 transistors. A 32-bit register could therefore be made using 32 flip-flops, so 64 NAND gates, or 256 transistors. An ALU may have multiple registers, plus lots of other gates as well, so the number of transistors grows rapidly.
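To make that arithmetic concrete, here is a rough tally in Python; a back-of-the-envelope sketch only, using the idealised per-gate counts from this answer rather than figures for any real process:

    # Rough transistor tally using the idealised counts above (illustrative only)
    transistors_per_cmos_nand = 4
    nands_per_latch = 2                      # simple SR latch from cross-coupled NANDs
    latches_per_register = 32                # one latch per bit of a 32-bit register

    transistors_per_latch = nands_per_latch * transistors_per_cmos_nand      # 8
    transistors_per_register = latches_per_register * transistors_per_latch  # 256

    print(transistors_per_register)  # 256 -- and an ALU has many registers plus control logic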

The more complex the functions the chip performs, the more gates are needed, and thus the more transistors.

Your average CPU these days is considerably more complex than, say, a Z80 chip from 30 years ago. It not only uses registers that are 8 times the width, but the actual operations it performs (complex 3D transformations, vector processing, etc.) are far, far more complex than anything the older chips could do. A single instruction in a modern CPU may take many seconds (or even minutes) of computation on an old 8-bitter, and all of that is achieved, ultimately, by having more transistors.

\$\endgroup\$
18
  • \$\begingroup\$ NAND = 4 not 2 Transistors and FF's are more than just 2 NORs \$\endgroup\$ Commented Jul 1, 2014 at 14:16
  • 2
    \$\begingroup\$ Oh my! You really need to rethink that. Show even ONE design that has millions of transistors and is done in bipolar!! ALL of these designs are CMOS. \$\endgroup\$ Commented Jul 1, 2014 at 15:59
  • 2
    \$\begingroup\$ Fair point. Added a second schematic to highlight the difference, and the subsequent doubling of transistors just from that. \$\endgroup\$
    – Majenko
    Commented Jul 1, 2014 at 17:08
  • 4
    \$\begingroup\$ weak vs strong pullup is a completely different issue from TTL vs CMOS. BJTs do come in PNP, after all. CMOS does not involve "doubling of transistors". Large-scale integration does, since transistors are far smaller than pull-up resistors in any ASIC process. \$\endgroup\$
    – Ben Voigt
    Commented Jul 2, 2014 at 3:52
  • 1
    \$\begingroup\$ That is not a TTL NAND gate. That is an RTL logic gate. \$\endgroup\$
    – fuzzyhair2
    Commented Jul 22, 2014 at 12:49
17
\$\begingroup\$

I checked with a local supplier of various semiconductor devices and the biggest SRAM chip they had was 32 Mbit. That's 32 million individual areas where a 1 or a 0 can be stored. Given that "at least" 1 transistor is needed to store 1 bit of information, that's 32 million transistors at an absolute minimum.

What does 32 Mbit get you? That's 4 MB, or about the size of a low-quality 4-minute MP3 music file.


EDIT - according to my googling, an SRAM memory cell looks like this:

[image: 6-transistor SRAM cell]

So, that's 6 transistors per bit and more like 192 million transistors on that chip I mentioned.
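The arithmetic, as a quick sanity check (assuming the 6-transistor cell shown, and ignoring address decoders, sense amplifiers and other overhead):

    # 32 Mbit SRAM chip, 6-transistor cells (decoders, sense amps, etc. not counted)
    bits = 32_000_000
    print(bits // 8)   # 4000000 bytes -> about 4 MB, roughly one low-quality MP3
    print(bits * 6)    # 192000000 -> about 192 million transistors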

\$\endgroup\$
5
  • \$\begingroup\$ ... and now imagine 8GB memory with 68719476736 bits of information \$\endgroup\$
    – Kamil
    Commented Jul 1, 2014 at 9:50
  • 1
    \$\begingroup\$ ... except they don't use transistors in DRAM. \$\endgroup\$
    – Majenko
    Commented Jul 1, 2014 at 9:56
  • 1
    \$\begingroup\$ @Majenko: At least not as many as for other technologies. 1 transistor + 1 capacitor (on microscopic scope obviously) for 1 bit - if I remember correctly. \$\endgroup\$
    – Rev
    Commented Jul 1, 2014 at 9:58
  • 29
    \$\begingroup\$ Each bit of SRAM is at least 4 and often 6 transistors so 128 million transistors or more. DRAM doesn't use transistors for storage - but each bit (stored on a capacitor) has its own transistor switch to charge the cap. \$\endgroup\$
    – user16324
    Commented Jul 1, 2014 at 10:23
  • 6
    \$\begingroup\$ Now imagine the transistors in a 1T SSD (granted, 3 bits/cell, and it's spread over more than one chip): that's still about 2.7 trillion transistors just for the storage, not counting addressing, control, and the allowance for bad bits and wear. \$\endgroup\$ Commented Jul 1, 2014 at 11:59
7
\$\begingroup\$

I think the OP may be confused by electronic devices having so many transistors. Moore's Law is primarily of concern for computers (CPUs, SRAM/DRAM/related storage, GPUs, FPGAs, etc.). Something like a transistor radio might be (mostly) on a single chip, but can't make use of all that many transistors. Computing devices, on the other hand, have an insatiable appetite for transistors for additional functions and wider data widths.

\$\endgroup\$
4
  • 3
    \$\begingroup\$ Radios these days are computing devices, or at the very least contain them. Digital synthesis of FM frequencies, DSP signal processing of the audio (a biggie), digital supervisory control of station switching and so on. For example, the TAS3208 ti.com/lit/ds/symlink/tas3208.pdf \$\endgroup\$ Commented Jul 1, 2014 at 18:15
  • 1
    \$\begingroup\$ You're still not going to see tens or hundreds of million, much less billions, of transistors used for a radio. Sure, they're becoming small special-purpose computers with all that digital function, but nothing on the scale of a multicore 64 bit CPU. \$\endgroup\$
    – Phil Perry
    Commented Jul 2, 2014 at 17:55
  • \$\begingroup\$ @PhilPerry surely a digital radio has something like an ARM in it? Not billions of transistors, but well into the tens of millions. \$\endgroup\$
    – user46779
    Commented Jul 4, 2014 at 7:21
  • \$\begingroup\$ Well, if you've crossed "the line" from analog radio to a computer that (among other things) receives radio signals, you'll use lots of transistors. My point still stands that the OP's question about electronic devices sounds like confusion between classic analog radios, etc. and computing devices. Yes, they perform in very different manners even if they're both black boxes pulling music out of the air. \$\endgroup\$
    – Phil Perry
    Commented Jul 8, 2014 at 13:27
4
\$\begingroup\$

As previously stated, SRAM requires 6 transistors per bit. As we enlarge our caches (for performance reasons), we require more and more transistors. Looking at a processor die photo, you may see that the cache is bigger than a single core of the processor and, if you look closer at the cores, you will see well-organized blocks inside them which are also cache (probably the L1 data and instruction caches). With 6 MB of cache, you need about 300 million transistors (plus the addressing logic).
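A back-of-the-envelope check of that figure (6-transistor cells, ignoring tags, decoders and other overhead):

    # 6 MB of cache data, 6-transistor SRAM cells (tags, decoders, etc. ignored)
    cache_bits = 6 * 1024 * 1024 * 8   # 50331648 bits
    print(cache_bits * 6)              # 301989888 -> about 300 million transistors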

But, also as previously stated, cache is not the only reason the transistor count keeps increasing. On a modern Core i7, you have more than 7 instructions executed per clock period per core (using the well-known Dhrystone test). This means one thing: state-of-the-art processors do a lot of parallel computing. Doing more operations at the same time requires more execution units and much cleverer logic to schedule the work. Cleverer logic requires much more complex logic equations, and so many more transistors to implement them.

\$\endgroup\$
1
  • \$\begingroup\$ SRAM has not required 6 transistors in quite a few years. In fact 6T SRAM is pretty wasteful when you can use 1T, 2T or 4T SRAMs as essentially drop-in replacements. \$\endgroup\$
    – cb88
    Commented Mar 24, 2016 at 17:44
2
\$\begingroup\$

Stepping away from the details a bit:

Computers are complex digital switching devices. They have layer upon layer upon layer of complexity. The simplest level is logic gates like the NAND gates discussed above. Then you get to adders, shift registers, latches, etc. Then you add clocked logic, instruction decoding, caches, arithmetic units, address decoding; it goes on and on. (Not to mention memory, which requires several transistors per bit of data stored.)

Every one of those levels is using lots of parts from the previous level of complexity, all of which are based on lots and lots of the basic logic gates.
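As a toy illustration of that layering, here is a 1-bit full adder built from nothing but a NAND primitive; a Python sketch purely to count gates, not a real circuit description:

    # Each call below stands for one NAND gate (4 transistors in CMOS)
    def NAND(a, b):
        return 1 - (a & b)

    def full_adder(a, b, cin):
        # classic 9-NAND full adder
        n1 = NAND(a, b)
        n2 = NAND(a, n1)
        n3 = NAND(b, n1)
        x1 = NAND(n2, n3)          # x1 = a XOR b
        n4 = NAND(x1, cin)
        n5 = NAND(x1, n4)
        n6 = NAND(cin, n4)
        sum_bit = NAND(n5, n6)     # sum = a XOR b XOR cin
        carry = NAND(n1, n4)       # carry out = a·b OR (a XOR b)·cin
        return sum_bit, carry

    # 9 NAND gates * 4 transistors = 36 transistors for ONE bit of addition;
    # a 64-bit adder built this way already needs over 2000 transistors.
    print(full_adder(1, 1, 0))     # (0, 1)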

Then you add concurrency. In order to get faster and faster performance, modern computers are designed to do lots of things at the same time. Within a single core, the address decoder, arithmetic unit, vector processor, cache manager, and various other subsystems all run at the same time, all with their own control systems and timing systems.

Modern computers also have larger and larger numbers of separate cores (multiple CPUs on a chip.)

Every time you go up a layer of abstraction, you have many orders of magnitude more complexity. Even the lowest level of complexity has thousands of transistors. Go up to high level subsystems like a CPU and you are talking at least millions of transistors.

Then there are GPUs (Graphics Processing Units). A GPU might have a THOUSAND separate floating-point processors that are optimized to do vector mathematics, and each sub-processor will have several million transistors in it.

\$\endgroup\$
1
\$\begingroup\$

Without attempting to discuss how many transistors are needed for specific items, CPUs use more transistors for increased capabilities, including:

  • More complex instruction sets
  • More on-chip cache so that fewer fetches from RAM are required
  • More registers
  • More processor cores
\$\endgroup\$
1
\$\begingroup\$

Aside from increasing the raw storage capacities of RAM, cache and registers, as well as adding more computing cores and wider bus widths (32 vs. 64 bit, etc.), it is because the CPU is increasingly complicated.

CPUs are computing units made up of other computing units. A CPU instruction goes through several stages. In the old days there was one stage, and the clock period had to be as long as the worst-case time for all the logic gates (made from transistors) to settle. Then we invented pipelining, where the CPU is broken up into stages: instruction fetch, decode, process and write result. That simple 4-stage CPU could then run at a clock speed of 4x the original. Each stage is separate from the others, which means not only can your clock speed increase to roughly 4x, but you can also have 4 instructions layered (or "pipelined") in the CPU at once, resulting in up to 4x the performance. However, "hazards" are now created, because an instruction coming in may depend on the previous instruction's result, which has not yet been written back by the time the new instruction reaches the process stage. Therefore, you need to add circuitry to forward that result to the instruction entering the process stage. The alternative is to stall the pipeline, which decreases performance.
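A toy sketch of the idea in Python (the instruction mnemonics and stage names are made up for illustration; real pipelines and their hazard/forwarding logic are far more involved):

    # Three instructions flowing through a 4-stage pipeline, one stage per cycle
    program = ["ADD r1, r2, r3",   # writes r1
               "SUB r4, r1, r5",   # reads r1 -> RAW hazard: needs forwarding or a stall
               "AND r6, r7, r8"]
    stages = ["FETCH", "DECODE", "PROCESS", "WRITE"]

    for cycle in range(len(program) + len(stages) - 1):
        in_flight = [f"{instr} [{stages[cycle - i]}]"
                     for i, instr in enumerate(program)
                     if 0 <= cycle - i < len(stages)]
        print(f"cycle {cycle}: " + "  |  ".join(in_flight))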

Each pipeline stage, and particularly the process part, can be sub-divided into more and more steps. As a result, you end up creating a vast amount of circuitry to handle all the inter-dependencies (hazards) in the pipeline.

Other circuits can be enhanced as well. The trivial digital adder called a "ripple carry" adder is the easiest and smallest, but also the slowest. The fastest adder is a "carry look-ahead" adder, and it takes a tremendous amount of circuitry that grows rapidly with operand width. In my computer engineering course I ran out of memory simulating a 32-bit carry look-ahead adder, so I cut it in half: two 16-bit CLA adders in a ripple-carry configuration. (Fast addition and subtraction take a lot of circuitry; multiplication and especially division take even more.)
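To get a rough feel for that trade-off, here is a crude Python model counting idealised gate delays and product terms (the 2-delays-per-bit figure and the flat, single-level look-ahead expansion are simplifying assumptions, not a real gate-level analysis):

    # Ripple-carry: the carry crosses ~2 gate delays per bit, one bit after another.
    # Flat carry look-ahead: every carry is computed directly from the expanded equations
    #   c_i = G(i-1) + P(i-1)G(i-2) + ... + P(i-1)...P(0)c_0
    # which is fast (few gate delays) but needs many wide AND terms.
    def ripple_delay(n_bits, delays_per_bit=2):
        return n_bits * delays_per_bit

    def flat_cla_terms(n_bits):
        # the carry into bit i has i+1 product terms in the fully expanded equation
        return sum(i + 1 for i in range(1, n_bits + 1))

    print(ripple_delay(32))     # 64 gate delays for the carry to ripple through
    print(flat_cla_terms(32))   # 560 product terms, some with 32+ inputs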

A side effect of all this is that as we shrink the size of the transistors and subdivide the stages, the clock frequencies can increase. This allows the processor to do more work, so it runs hotter. Also, as frequencies increase, propagation delays become more significant (the time it takes for a pipeline stage to complete and for the signal to be available on the other side). Due to impedance, the effective speed of propagation is about 1 ft per nanosecond (i.e. 1 ft per cycle at 1 GHz). As your clock speed increases, chip layout becomes increasingly important: at 4 GHz, a signal can only travel about 3 inches per clock period. So now you must start including additional buses and circuits to manage all the data moving around the chip.
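The rule of thumb from the paragraph above, worked out (taking the ~1 ft/ns figure at face value):

    # Signal travels roughly 1 foot (12 inches) per nanosecond, per the figure above
    speed_in_per_ns = 12.0
    clock_ghz = 4.0
    period_ns = 1.0 / clock_ghz          # 0.25 ns per clock cycle
    print(speed_in_per_ns * period_ns)   # 3.0 -> about 3 inches per clock period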

We also add instructions to chips all the time: SIMD (single instruction, multiple data), power saving, and so on. They all require circuitry.

Finally, we add more features to chips. In the old days, your CPU and your ALU (Arithmetic Logic Unit) were separate. We combined them. Then the FPU (floating-point unit) was separate; that got combined too. Nowadays we add USB 3.0 controllers, video acceleration, MPEG decoding, etc. We move more and more computation from software into hardware.

\$\endgroup\$
1
\$\begingroup\$

Majenko has a great answer on how the transistors are used. So let me instead take a different approach and deal with efficiency.

Is it efficient to use as few transistors as you can when designing something?

This basically boils down to what efficiency you're talking about. Perhaps you're a member of a religion that maintains it is necessary to use as few transistors as possible - in that case, the answer is pretty much given. Or perhaps you're a company building a product. Suddenly, a simple question about efficiency becomes a very complicated question about the cost - benefit ratio.

And here comes the kicker - transistors in integrated circuits are extremely cheap, and they're getting ever cheaper with time (SSDs are a great example of how the cost of transistors was pushed down). Labor, on the other hand, is extremely expensive.

In the times when ICs were just getting started, there was a certain push to keep the number of components required as low as possible. This was simply because they had a significant impact on the cost of the final product (in fact, they were often most of the cost of the product), and when you're building a finished, "boxed" product, the labor cost is spread out over all the pieces you make. The early IC-based computers (think video arcades) were driven to as low a per-piece cost as possible. However, the fixed costs (as opposed to the per-piece costs) are strongly impacted by the amount you are able to sell. If you were only going to sell a couple, it probably wasn't worth spending too much time on lowering the per-piece costs. If you were trying to build a whole huge market, on the other hand, driving the per-piece costs as low as possible had a pay-off.

Note an important part - it only makes sense to invest a lot of time in improving the "efficiency" when you're designing something for mass production. This is basically what "industry" is - with artisans, skilled labor is often the main cost of the finished product; in a factory, more of the cost comes from materials and (relatively) unskilled labor.

Let's fast-forward to the PC revolution. When IBM-style PCs came around, they were very stupid. Extremely stupid. They were general-purpose computers. For pretty much any given task you could design a device that could do it better, faster, cheaper. In other words, in the simplistic efficiency view, they were highly inefficient. Calculators were much cheaper, fit in your pocket and ran for a long time on a battery. Video game consoles had special hardware that made them very good at running games. The problem was, they couldn't do anything else. The PC could do everything - it had a much worse price/output ratio, but you weren't railroaded into building a calculator or a 2D sprite game console. Why did Wolfenstein and Doom (and, on Apple computers, Marathon) appear on general-purpose computers and not on game consoles? Because the consoles were very good at doing 2D sprite-based games (imagine the typical JRPG, or games like Contra), but when you wanted to stray away from the efficient hardware, you found out there wasn't enough processing power to do anything else!

So, the apparently less efficient approach gives you some very interesting options:

  • It gives you more freedom. Contrast old 2D consoles with old IBM PCs, and old 3D graphics accelerators with modern GPUs, which are slowly becoming pretty much general-purpose computers in their own right.
  • It enables mass-production efficiency increases even though the end product (software) is "artisan" in some ways. So companies like Intel can drive the cost of a unit of work down much more efficiently than all the individual developers around the world could.
  • It gives more space for more abstraction in development, thus allowing better reuse of ready-made solutions, which in turn allows lower development and testing costs for better output. This is basically the reason why every schoolboy can write a full-fledged GUI-based application with database access, internet connectivity and all the other stuff that would be extremely hard to develop if you always had to start from scratch.
  • In PCs, this used to mean that your applications basically got faster over time without any effort on your part. The free-lunch period is mostly over now, since it's getting harder and harder to improve the raw speed of computers, but it shaped most of the PC's lifetime.

All this comes at a "waste" of transistors, but it's not real waste, because the real total costs are lower than they would be if you pushed for the simple "as few transistors as possible" approach.

\$\endgroup\$
1
\$\begingroup\$

Another side of the "so many transistors" story is that these transistors are not individually designed-in by a human. A modern CPU core has on the order of 0.1 billion transistors, and no human designs every one of those transistors directly. It wouldn't be possible: a 75-year lifetime is only about 2.3 billion seconds.
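The arithmetic behind that comparison (rough figures, obviously):

    # A 75-year lifetime in seconds versus ~0.1 billion transistors per core
    seconds = 75 * 365.25 * 24 * 3600
    print(round(seconds))                  # about 2.37 billion seconds
    print(round(seconds / 100_000_000))    # ~24 seconds per transistor, doing nothing else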

So, to make such huge designs feasible, the humans are involved in defining the functionality of the device at a much higher level of abstraction than individual transistors. The transformation to the individual transistors is known as circuit synthesis, and is done by very expensive, proprietary tools that collectively cost on the order of a billion dollars to develop over the years, aggregating among the major CPU makers and foundries.

The circuit synthesis tools don't generate designs with the least number of transistors possible. This is done for a multitude of reasons.

First, let's cover the most basic case: any complex circuit can be simulated by a much simpler, perhaps serial, CPU with sufficient memory. You can certainly simulate an i7 chip, with perfect accuracy, if only you hook up enough serial RAM to an Arduino. Such a solution will have far fewer transistors than the real CPU, but will run abysmally slowly, with an effective clock rate of 1 kHz or less. We clearly don't intend the transistor-number reduction to go that far.

So we must limit ourselves to a certain class of design-to-transistors transformations: those that maintain the parallel capacity built into the original design.

Even then, the optimization for minimal number of transistors will likely produce designs that are not manufacturable using any existing semiconductor process. Why? Because chips that you can actually make are 2D structures, and require some circuit redundancy simply so that you can interconnect those transistors without requiring a kilogram of metal to do so. The fan-in and fan-out of the transistors, and resulting gates, does matter.

Finally, the tools aren't theoretically perfect: it'd usually require way too much CPU time and memory to generate solutions that are globally minimal in terms of transistor numbers, given a constraint of a manufacturable chip.

\$\endgroup\$
0
\$\begingroup\$

I think what the OP needs to know is that a 'simple switch' often needs several transistors. Why? Well, for many reasons. Sometimes extra transistors are needed so that power usage is low in either the 'on' or the 'off' state. Sometimes transistors are needed to deal with uncertainties in voltage inputs or component specifications. A lot of reasons. But I appreciate the point: look at the circuit diagram for an op-amp and you see a few dozen transistors! They wouldn't be there if they didn't serve some purpose in the circuit.

\$\endgroup\$
0
\$\begingroup\$

Basically, all a computer understands is 0s and 1s, which are decided by these switches. Yes, transistors do more than act as switches, but if a single switch decides whether an output is a 0 or a 1 (think of that as one single-bit operation), then the more bits you need, the more transistors you need. So it is no wonder we have to embed millions of transistors into a single microprocessor. :)

\$\endgroup\$
0
\$\begingroup\$

In the era of technology, we want smart devices (small, fast and efficient). These devices are made up of integrated circuits (ICs), which contain a number of transistors. We need more and more transistors to make ICs smarter and faster, because every circuit in an IC - adders, subtractors, multipliers, dividers, logic gates, registers, multiplexers, flip-flops, counters, shifters, memories, microprocessors and so on - is ultimately built from transistors (MOSFETs). With transistors we can implement any logic, so the more logic a device implements, the more transistors it needs.


\$\endgroup\$
