
Which is better: high cache memory or low cache memory? And what exactly is the difference?

2 Answers


Always: the more, the better.

If you are talking about the CPU cache, Wikipedia says it best -

A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
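To see the effect the quote describes, here is a minimal sketch (in Python, using NumPy; both choices are mine, not the poster's) that sums the same matrix along memory-contiguous rows and then along strided columns. The strided walk defeats the CPU cache and is typically several times slower:

    import time
    import numpy as np

    n = 4000
    a = np.random.rand(n, n)  # C-ordered: each row is contiguous in memory

    def sum_by_rows(m):
        # Walks memory sequentially, so most accesses hit the CPU cache.
        total = 0.0
        for i in range(m.shape[0]):
            total += m[i, :].sum()
        return total

    def sum_by_cols(m):
        # Jumps a full row's width between consecutive elements, so many
        # accesses miss the cache and must wait on main memory.
        total = 0.0
        for j in range(m.shape[1]):
            total += m[:, j].sum()
        return total

    for fn in (sum_by_rows, sum_by_cols):
        start = time.perf_counter()
        fn(a)
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")

Same arithmetic, same data; only the access pattern changes, which is exactly the "average latency" difference the quote is about.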


The phrasing of your question, referring to high versus low cache, is uncommon and ambiguous.

Caching was originally intended to speed up DISK-to-RAM read interactions, so that CPU processes are not starved and left with idle cycles while waiting for the requested data.

Interpretations:

  • [1] location/type of cache, or
  • [2] amount of cache.

For location/type, you have

  • CPU on-board: on multi-core chips, some cache memory is shared by all cores and some is dedicated to each core. Unless you are doing supercomputing, it is my understanding that it is best to maximize the shared portion.

  • OS shared memory: there is un-reserved memory that can be allocated to any new process on demand; there is memory reserved by the OS for RAM-based caching (you may see references to segment swapping and page swapping); and there is logical memory assignment under the heading of virtual memory, where the full range of accessible memory is a combination of segments in physical RAM (high memory) and the remaining segments allocated to a usually-reserved space on one or more physical disk partitions. There is also memory allocated on a per-process basis and on a per-device basis. If the ratio of disk-backed to RAM-backed virtual memory is too high relative to the CPU's data throughput, the system will suffer performance-wise; hence the current trend toward RAM-disks, which provide near-RAM performance rather than the much slower speeds of hard disk drives. Using a RAM-disk does not eliminate the need to choose kernel tuning parameters carefully (a sketch for inspecting a few of these follows this list).

  • Disk on-board: caching here is mostly intended to address device read requests, but a write cache is also used to hold data being written in "disk-internal RAM", which the drive itself scans before going to the platters to fulfill requests. As a go-between from platter to computer interface, manufacturers add on-board memory to reduce repeated physical accesses to the same data, keeping block mirrors in a buffer and purging the oldest-accessed entries to make room for new mirrors. I am not aware of whether manufacturers offer tunable parameters for this on-board hard disk caching.
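On the kernel-parameter point above, here is a minimal sketch, assuming a Linux system, that prints a few of the VM-subsystem tunables exposed under /proc/sys/vm. The names are standard Linux sysctls, though availability varies by kernel build:

    from pathlib import Path

    # A few Linux VM tunables that affect RAM-based caching and swapping.
    # (Assumes a Linux system that exposes /proc/sys/vm.)
    TUNABLES = [
        "swappiness",              # eagerness to swap pages out vs. drop cache
        "vfs_cache_pressure",      # reclaim pressure on dentry/inode caches
        "dirty_ratio",             # % of RAM dirty before writers must block
        "dirty_background_ratio",  # % of RAM dirty before background writeback
    ]

    for name in TUNABLES:
        path = Path("/proc/sys/vm") / name
        try:
            print(f"vm.{name} = {path.read_text().strip()}")
        except OSError:
            print(f"vm.{name}: not available on this system")

Reading these values is harmless; writing them is where the careful review recommended below matters.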

For amount of cache,

  • if you have too much cache (similar to too much SWAP), the system (OS or hardware) takes longer to fulfill a read request, wasting time searching the cache before redirecting its efforts to the actual device for guaranteed fulfillment of the request (the toy model after this list shows how probe cost and hit rate combine).
  • there are a number of kernel parameters that are tunable for caching at the OS level; I recommend reviewing all of the caching-related parameters before deciding which ones to change and which values to assign.
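To make the first bullet's trade-off concrete, here is a toy back-of-the-envelope model (the latency numbers are assumptions for illustration, not measurements): every request pays the cache-probe cost, and a miss additionally pays the device cost, so the probe overhead is only worthwhile while the hit rate stays high:

    # Toy average-access-time model: illustrative numbers, not measurements.
    T_PROBE = 100e-9  # assumed cost of checking the cache (100 ns)
    T_DEVICE = 5e-3   # assumed cost of going to the actual disk (5 ms)

    for hit_rate in (0.99, 0.90, 0.50, 0.10):
        # Every request probes the cache; only misses touch the device.
        avg = T_PROBE + (1 - hit_rate) * T_DEVICE
        print(f"hit rate {hit_rate:.0%}: average access ~= {avg * 1e3:.3f} ms")

A bigger cache helps only insofar as it raises the hit rate; once it stops doing that, the extra lookup and management cost is pure overhead.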
