
Timeline for Why does RAM have to be volatile?

Current License: CC BY-SA 3.0

35 events
when toggle format what by license comment
Sep 27, 2013 at 10:02 audit First posts (ended Sep 27, 2013 at 10:03)
Sep 19, 2013 at 13:34 audit First posts (ended Sep 19, 2013 at 13:35)
Sep 4, 2013 at 9:21 comment added geezanansa Very interesting and well described answer. Shame it does not answer the question. No mention of NVRAM!
Sep 2, 2013 at 15:16 comment added pjc50 That's a source for chips only, not suitable for fitting in your PC.
Sep 2, 2013 at 15:11 comment added Gizmo finally some sources! Thank you @pjc50
Sep 2, 2013 at 13:34 comment added pjc50 @Gizmo note that SRAM capacity per unit area is much less: digikey.com/catalog/en/partgroup/ddr-ii-xtreme-sram/34225 , so to get 8Gb you'd need several square feet of chips, at which point the wiring latency kills the speed advantage.
Sep 1, 2013 at 20:38 comment added supercat ...could be bits on different rows; as computers started exploiting this, manufacturers increased their focus on improving the efficiency of larger transfers on a single row. Although the common usage mode shifted away from manipulating a single bit per row per operation (a 16-bit write would do one bit on each of 16 independent rows), it wouldn't make sense to stop referring to a chip as "random access" just because the speed of fully-random accesses didn't improve as much as the speed of same-page accesses.
Sep 1, 2013 at 20:33 comment added supercat In a typical 1980's computer, a DRAM read cycle would have each chip copy an entire row of bits to a buffer (erasing it from within the main store in the process), read a bit from the buffer, and then rewrite the whole row from the buffer. A write cycle would copy the entire row to the buffer, change a bit, and write it back. The fact that every read or write required the entire row to be read or written wasn't relevant to the circuitry that was using the chip. Later computers exploited the fact that multiple bits within a row could be read or written more efficiently than...
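supercat's description of the destructive row cycle can be sketched as a toy model. This is purely illustrative: the `ToyDram` class, its sizes, and its method names are invented for the sketch and don't correspond to any real chip's interface.

```python
# Toy model of an early DRAM chip's read/write cycle as described above:
# every access buffers a whole row (a destructive read), touches one bit,
# then rewrites the row. Dimensions are arbitrary.

class ToyDram:
    def __init__(self, rows=8, cols=8):
        self.store = [[0] * cols for _ in range(rows)]

    def read_bit(self, row, col):
        buffer = self.store[row]
        self.store[row] = [0] * len(buffer)  # reading erases the row in the store
        bit = buffer[col]
        self.store[row] = buffer             # rewrite the whole row from the buffer
        return bit

    def write_bit(self, row, col, value):
        buffer = self.store[row]             # copy the entire row to the buffer
        buffer[col] = value                  # change one bit
        self.store[row] = buffer             # write the whole row back

dram = ToyDram()
dram.write_bit(3, 5, 1)
print(dram.read_bit(3, 5))  # -> 1
```

The point of the model is that the row-wide traffic is invisible to the caller: `read_bit` and `write_bit` behave like single-bit operations even though every cycle moves an entire row.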
Aug 31, 2013 at 19:00 comment added Gizmo mhm well would be cool to have 8GB of SRAM instead of normal DDR RAM for memory
Aug 31, 2013 at 18:56 comment added psusi @Vector, flash takes high voltage to change its state. High enough that it damages it a little bit each time you do, so eventually it burns out.
Aug 31, 2013 at 18:55 comment added psusi @Gizmo, you don't... back in the 8086 days you could, and in the 386 days you could install it in special slots on some motherboards to be used as high speed cache for the slower and larger DRAM, but these days it's only found in the cache built into the CPU. DRAM densities are just so much higher (and prices per bit so much lower) that nobody uses discrete SRAM any more.
Aug 31, 2013 at 15:14 comment added Gizmo um.. and where would I buy SRAM to put into my desktop? Can't find anything
Aug 31, 2013 at 11:46 comment added Daniel R Hicks @Vector - Some non-volatile memory designs undergo an actual physical change when a location is written or erased -- effectively melting and refreezing a little "blob" of something in a different configuration. With each change the "blob" gets a little more disorganized, until it no longer can be reliably switched back and forth.
Aug 31, 2013 at 7:57 comment added Vector SSDs have no moving parts - why should they wear out more than "volatile" memory?
Aug 31, 2013 at 0:14 comment added psusi Not all non volatile memories have to wear out; see the ancient core memory, and IBM has been saying for a few years now that they are developing a modern version of that. The correct answer is that they don't have to be, just current cost-effective technology is because it is based on capacitors, which leak, but can be made very small/dense. Also DRAM is not read in blocks. You have to open a row before you can read or write it, but once opened, you can read or write a single byte then close it.
Aug 30, 2013 at 17:33 comment added Maja Piechotka @Val It depends on where the user sits. From the POV of a program, the file (disk) can be randomly accessed; that is not the same as the POV of the OS. On the other hand, from the POV of the memory controller, RAM is read in bursts, and usually multiple bursts are read for efficiency (I believe it has something to do with parallelism of addressing in DRAM, but I may be wrong). Given that DRAM can be used with an FPGA, the memory controller can be 'a user' there.
Aug 30, 2013 at 17:19 comment added Daniel R Hicks @pjc50 -- You are quite correct. "random access" is as opposed to sequential access such as tapes. See the RAMAC for an early (but not the earliest) reference.
Aug 30, 2013 at 16:12 comment added Synetech The part about HDDs not being RAM is interesting. On the one hand, what are you nuts‽ of course they are!, after all, CDs/DVDs and HDDs are clearly RAM because unlike tape, you don’t have to wait for it to go through everything in-between to get to the part you want. On the other, um, yes, you actually do have to go through everything in-between (albeit much faster) because as you said, the head/laser has to seek (unless the files are contiguous of course). So it’s amusing (and frustrating) that industry terms (including old, well–worn-in ones) can still be ambiguous and inconsistent.
Aug 30, 2013 at 15:53 history edited gronostaj CC BY-SA 3.0
Incorporated suggestions from comments, emphasized that DRAM's "RAMiness" is disputed
Aug 30, 2013 at 15:37 comment added Val @pjc50 Yes, user perspective is important. That is why saying that "RAM is the same as serial access" is not acceptable and makes no sense.
Aug 30, 2013 at 15:35 comment added pjc50 "RAM" was I believe (I can't find a good reference) derived in opposition to sequential memory (magnetic or paper tape; mercury delay lines) which could only be accessed in order. Meanwhile, I found a digression on terms for "RAM" in other languages: smo.uhi.ac.uk/~oduibhin/tearmai/etymology.htm which emphasise different aspects of the RAM/ROM difference.
Aug 30, 2013 at 15:35 comment added Ben Voigt @jlliagre: His definition of RAM "any kind of memory that can be read or written in any order" definitely does not include ROM. ROM cannot be written in arbitrary order. It can't be written at all (some varieties can be programmed, but that's very different from a memory write).
Aug 30, 2013 at 15:31 comment added Val Also, from the user's point of view, the disk is exposed as a block-access device whereas RAM is truly random access. It satisfies every definition of randomness: you address a specific byte, you do not care about the neighbors, and access is immediate - every clock cycle a new memory cell is accessed. This is nothing like a disk.
Aug 30, 2013 at 15:16 comment added Val You used "obviously" in the sense of "it is hard to explain why". That is a bad explanation. I have done some DRAM access and I do not remember it being serial. You may consider it parallel, if you consider the internal workings. But it is not the same as serial, which is what you claim anyway. When you make a strong statement, saying "it is the same" is not enough, especially because it is not the same at all.
Aug 30, 2013 at 15:13 comment added MSalters Common DRAM has been grouped in "blocks"/"pages" at least since Fast Page Mode DRAM, which dates back to 1992 or so. So he does have a real point. In modern memory the relative speeds differ even more, as most pages are powered down when not recently accessed.
Aug 30, 2013 at 14:50 comment added gronostaj @Val Maybe I haven't stated my point clearly, my thought process may be chaotic and English isn't my first language. I said that HDDs are an example of non-RAM and I've used them to explain why. Then I stated that the same reasoning applies to DRAM, no mixing between those two. Now, if wear is problematic with SSDs, then it's much more serious with RAM (if the P/E limit can be hit in basic usage, it will fail in more challenging conditions). I could stand slower write speeds, but not replacing memory every few months, so IMO it's more important.
Aug 30, 2013 at 14:45 comment added ratchet freak If you call random access any memory where accessing a random spot takes only O(1) time in terms of size, regardless of the current state, then DRAM is random access; an HDD has access in O(#tracks + rotation_time), which varies with size.
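The asymptotic distinction ratchet freak draws can be made concrete with a toy cost model. The functions and all constants below are invented for illustration (they are not measurements of any real device): DRAM access cost is flat no matter what was accessed before, while HDD access cost depends on how far the head must seek from its current track.

```python
# Illustrative access-cost model. Constants are made up for the sketch.

def dram_access_ns(addr, previous_addr):
    # O(1): cost does not depend on the address or on the previous access.
    return 10

def hdd_access_ms(track, current_track, tracks_per_ms=100, avg_rotation_ms=4.2):
    # Cost grows with seek distance from the head's current position,
    # plus an average rotational latency.
    seek = abs(track - current_track) / tracks_per_ms
    return seek + avg_rotation_ms

# A nearby access and a far-away access: DRAM costs the same, the HDD does not.
print(dram_access_ns(1, 0), dram_access_ns(1_000_000, 0))   # identical
print(hdd_access_ms(101, 100), hdd_access_ms(50_000, 100))  # very different
```

Under this definition the state-independence of the cost function, not literal "randomness", is what makes a memory random access.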
Aug 30, 2013 at 14:22 comment added Val This is not pedantry, this is nonsense. He explains that DRAM is not RAM because HDDs are not. It is not justifiable, it is nonsense. Also, by concentrating exclusively on SSD wear-out, he neglects the first important aspect of RAM: the fast write capability. SSDs write in blocks and very slowly. That is why they suck in the first place. Leave the wear-out alone. BTW, SSD really is block access. This answer puts everything upside down and in confusion for uncertain reasons. This answer is the opposite of pedantry, because pedantry = order.
Aug 30, 2013 at 13:59 comment added Marcks Thomas The introduction of this answer might be somewhat extreme, but justifiable. As the opposite of 'sequential access', RAM has always been a misnomer; there is no randomness involved. I'd argue all commonly accepted definitions are equally wrong, but using them interchangeably would only add to the confusion. This answer clearly announces which definition it subscribes to. I think this is necessary more than it is pedantic.
Aug 30, 2013 at 11:44 comment added gronostaj @DanielRHicks That's interesting. Maybe "RAMiness" isn't binary: DRAM is less random than SRAM, HDDs are less random than DRAM and so on.
Aug 30, 2013 at 11:33 comment added Daniel R Hicks But the original disk drives were referred to as "RAM" (since the other alternative was tape). If history determines precedence, DASD (what you young'ins refer to as HDD) is definitely RAM.
Aug 30, 2013 at 11:08 vote accept Chintan Trivedi
Aug 30, 2013 at 11:04 comment added jlliagre +1 for being among the 0.1% of people rightly stating ROM is also RAM! (stating DRAM is not RAM is a little extreme, though...)
Aug 30, 2013 at 10:58 history answered gronostaj CC BY-SA 3.0