
So let's take your run-of-the-mill SRAM, such as the 23K256 from Microchip: http://ww1.microchip.com/downloads/en/DeviceDoc/22100D.pdf

Do they mostly just use a generic 6T cell setup such as: http://www.iis.ee.ethz.ch/~kgf/aries/FIG/fig5.4.gif

If not, what do they typically use?

Also, I'm not sure I completely understand the 6T cell setup. I get the general structure, and how it stores the bit using the inverters, but which line are we reading from: the bit line or the inverted bit line (BL#)?

Or is it that we drive the word line to activate the cells, then apply voltages to BIT/NOTBIT to overwrite the stored value with the new one? And how do we write a value of 0?

And to read, do we just activate the word line without driving BIT/NOTBIT?

Is that correct, or way off?

Also, a side question: in the initial state, neither inverter holds a value, so what would the value be when voltage is first applied, since both are technically putting out a 0? (Sorry if that's a confusing question; it's hard to word.) I face the same thing with D flip-flops: what is their initial value?

  • Honestly you should probably split your question in two. Asking for authorities/references on whether 6T is the dominant SRAM cell design is one question. Asking for a tutorial on how the 6T works is a different question. Commented Dec 30, 2014 at 10:44

2 Answers


I believe almost all basic SRAMs designed and manufactured today use this 6T design. Smaller cell designs exist, but they are usually more expensive.

To analyse this cell circuit, let's redraw it a bit:

[Schematic: the 6T cell redrawn (created using CircuitLab)]

The bit is held in the two cross-coupled inverters formed by the four MOSFETs in the middle. It looks confusing, but this positive-feedback loop latches, and that latching is what stores the bit.

The two MOSFETs on top are connected as constant current sources (or as diode-connected loads; I cannot tell from the schematic, but that does not really matter here).

When the word line goes high, the two MOSFETs M1 and M2 turn on, allowing the bit lines either to sense the voltages inside the latch (a read) or to force new voltages onto it and change its state (a write).
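
To make the read and write sequences concrete, here is a minimal behavioural sketch in C. It is only my illustration of the protocol described above, not a model of the actual transistors: the latch collapses to a single boolean, and all names are made up.

    #include <stdbool.h>
    #include <stdio.h>

    /* Behavioural stand-in for one 6T cell: 'q' is the node held by the
     * cross-coupled inverters; the complementary node is simply !q here. */
    typedef struct {
        bool q;   /* state of the latch; random/unknown at power-up */
    } Cell6T;

    /* Write: drive BL = data and BLB = !data strongly, then raise the word
     * line. The strong bit-line drivers overpower the small latch
     * transistors, so the latch flips to match the bit lines. */
    static void cell_write(Cell6T *c, bool word_line, bool bl_driven_value) {
        if (word_line) {
            c->q = bl_driven_value;   /* latch forced to the driven value */
        }
    }

    /* Read: precharge both bit lines high, stop driving them, then raise
     * the word line. The side of the latch storing a 0 pulls its bit line
     * low through M1 or M2; we just look at which line dropped. */
    static bool cell_read(const Cell6T *c, bool word_line) {
        bool bl  = true;              /* both lines precharged high */
        bool blb = true;
        if (word_line) {
            if (c->q)  blb = false;   /* q = 1: cell pulls BLB low */
            else       bl  = false;   /* q = 0: cell pulls BL low  */
        }
        return bl && !blb;            /* BL high and BLB low means a stored 1 */
    }

    int main(void) {
        Cell6T cell = { .q = false };   /* in reality: unknown at power-up */
        cell_write(&cell, true, true);  /* word line high, write a 1 */
        printf("read back: %d\n", cell_read(&cell, true));  /* prints 1 */
        return 0;
    }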

  • How do we know what the initial state is, though? Without power, both inverters technically have "0" volts on them, so will BL# read a 0 or a 1 first?
    – user3073
    Commented Dec 30, 2014 at 19:59
  • As the answer below says, the initial state is unknown at design time and is determined randomly by how each cell comes out of fabrication. Commented Dec 31, 2014 at 5:34
  • As noted, the two inverters in the middle are what stores the information. If the output of the lower inverter is low, the output of the upper one will be high, and we say the SRAM cell has stored a zero. Likewise, if the lower inverter's output is high, the upper one will output a low voltage and we say the cell has stored a one. Now, if we were to read the cell, we could (hypothetically) put a voltmeter on one of the inverter outputs and read the voltage there, or we could use a transmission gate instead of M1 and M2 to put the voltage onto one of the bit lines. ...continued...
    – mox
    Commented Jan 2, 2017 at 17:15
  • ...continued... However, we want to minimise area, and PMOS transistors are big. So instead we leave the PMOS out of the pass gate and connect both bit lines through NMOS only. NMOS are good at driving voltages low. So, if the bit lines are held high (or pulled high by a constant current, as in the picture) and we turn on M1 and M2, one of them will drive its bit line low, while the other would drive its bit line high (but does not, since an NMOS cannot drive a strong "1" and that bit line is already high). Now we check which bit line is low, and we know the state of the SRAM cell.
    – mox
    Commented Jan 2, 2017 at 17:24
  • For writing, you apply the data on the bitline and the inverted data on bitline_not. For example, to write a zero, you pull the bitline low, then turn on the pass transistors (M1 and M2). The pull-down path through M1, in series with the driver pulling the bitline low, has to be stronger than the pull-up transistor in the lower inverter, so that the input of the upper inverter is driven low enough to flip the SRAM cell. Now, when you turn off M1 and M2, the cell has stored the zero. (If the bitlines were driven low during a read, they might also flip the cell.)
    – mox
    Commented Jan 2, 2017 at 17:32

When voltage is first applied to any kind of sequential logic circuit, the initial value depends on how the circuit came out of the fab; each flop or RAM bit is different. To get the memory into a known state, there are a couple of options. The first is a reset signal, which is used for flops. This isn't done for RAMs, since it would make the circuit larger and RAMs are all about density. Instead, you can either clear the RAM by writing to each address, or write your software so that no memory location is read before being written. (The latter is always a good idea, of course.) A RAM clear may be done in software, but the hardware may also be able to do its own clear via a state machine; the hardware version will obviously be faster.
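
As a sketch of the software-clear option, something like the following could run once at startup. The base address and size are placeholders standing in for whatever your part's memory map or linker script actually says:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical numbers: take the real base address and size from your
     * part's memory map, not from anything in this thread. */
    #define RAM_BASE  ((volatile uint32_t *)0x20000000u)
    #define RAM_WORDS (8u * 1024u)          /* clear 32 KB as 32-bit words */

    /* Software clear: call once at startup, before anything relies on RAM
     * contents, so every location holds a known zero instead of whatever
     * random state the cells powered up in. */
    static void ram_clear(void)
    {
        for (size_t i = 0; i < RAM_WORDS; i++) {
            RAM_BASE[i] = 0u;
        }
    }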

Memory bits are usually pretty generic from a circuit design standpoint; the manufacturing process and physical layout are what ultimately determine capacity per dollar, speed, reliability, etc. The coupled inverters can be very weak, so it may be necessary to use sense amplifiers to measure their state quickly and correctly. These are basically differential amplifiers with a precharge/evaluate cycle similar to dynamic logic.
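
To illustrate the precharge/evaluate idea, here is a rough sketch with made-up numbers; the point is only that the sense amplifier decides from a small differential on the bit lines rather than waiting for a full swing:

    #include <stdbool.h>

    /* Illustrative numbers only; real voltages, capacitances and timing
     * depend on the process and the bit-cell design. */
    #define VDD        1.8
    #define CELL_DROOP 0.15   /* how far the weak cell discharges its bit
                                 line during the short evaluate window */

    /* One precharge/evaluate read, with plain doubles standing in for the
     * bit-line voltages. */
    static bool sense_read(bool stored_bit)
    {
        /* Precharge: both bit lines are driven to VDD, then left floating. */
        double bl = VDD, blb = VDD;

        /* Evaluate: the word line goes high and the cell's "0" side starts
           discharging its bit line. The amplifier doesn't wait for a full
           swing; a small differential is enough. */
        if (stored_bit)
            blb -= CELL_DROOP;
        else
            bl -= CELL_DROOP;

        /* The differential amplifier decides from the sign of (BL - BLB)
           and regenerates a full logic level. */
        return (bl - blb) > 0.0;
    }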

That's the extent of my personal knowledge from working in MCU development. This presentation from the University of Texas has some more information. It says that the complementary bitlines are both used at the same time. Probably this is both because it gets you differential reads for free, and because it makes overriding the feedback during a write faster.

In your schematic, I suspect that the two top PMOS transistors and the capacitors are there to represent the precharge circuitry and the capacitive loading of the bitlines, and would not be part of the actual bit cell layout.
