
This could sound like a subjective question, but what I am looking for are specific instances that you may have encountered.

  1. How does one make code cache-effective/cache-friendly (more cache hits, as few cache misses as possible)? From both perspectives, data cache and program cache (instruction cache), i.e., what things in one's code, related to data structures and code constructs, should one take care of to make it cache-effective?

  2. Are there any particular data structures one must use/avoid, or is there a particular way of accessing the members of that structure, etc., to make code cache-effective?

  3. Are there any program constructs (if, for, switch, break, goto, ...) or code-flow patterns (a for inside an if, an if inside a for, etc.) one should follow/avoid in this matter?

I am looking forward to hearing individual experiences related to making cache-efficient code in general. It can be any programming language (C, C++, Assembly, ...), any hardware target (ARM, Intel, PowerPC, ...), any OS (Windows, Linux, Symbian, ...), etc.

The variety will help to understand it more deeply.


15 Answers


The cache is there to reduce the number of times the CPU would stall waiting for a memory request to be fulfilled (avoiding the memory latency) and, as a second effect, possibly to reduce the overall amount of data that needs to be transferred (preserving memory bandwidth).

Techniques for avoiding memory-fetch latency are typically the first thing to consider, and sometimes help a long way. Limited memory bandwidth is also a limiting factor, particularly for multicore and multithreaded applications where many threads want to use the memory bus. A different set of techniques helps address the latter issue.

Improving spatial locality means that you ensure that each cache line is used in full once it has been mapped into the cache. When we have looked at various standard benchmarks, we have seen that a surprisingly large fraction of them fail to use 100% of the fetched cache lines before the lines are evicted.

Improving cache line utilization helps in three respects:

  • It tends to fit more useful data in the cache, essentially increasing the effective cache size.
  • It tends to fit more useful data into the same cache line, increasing the likelihood that requested data can be found in the cache.
  • It reduces the memory bandwidth requirements, as there will be fewer fetches.

Common techniques are:

  • Use smaller data types
  • Organize your data to avoid alignment holes (sorting your struct members by decreasing size is one way)
  • Beware of the standard dynamic memory allocator, which may introduce holes and spread your data around in memory as it warms up.
  • Make sure all adjacent data is actually used in the hot loops. Otherwise, consider breaking up data structures into hot and cold components, so that the hot loops use hot data.
  • Avoid algorithms and data structures that exhibit irregular access patterns, and favor linear data structures.

We should also note that there are other ways to hide memory latency than using caches.

Modern CPUs often have one or more hardware prefetchers. They train on misses in a cache and try to spot regularities. For instance, after a few misses to subsequent cache lines, the hardware prefetcher will start fetching cache lines into the cache, anticipating the application's needs. If you have a regular access pattern, the hardware prefetcher usually does a very good job. And if your program doesn't display regular access patterns, you may improve things by adding prefetch instructions yourself.

By regrouping instructions so that those that always miss in the cache occur close to each other, the CPU can sometimes overlap these fetches, so that the application sustains only one latency hit (memory-level parallelism).

To reduce the overall memory bus pressure, you have to start addressing what is called temporal locality. This means that you have to reuse data while it still hasn't been evicted from the cache.

Merging loops that touch the same data (loop fusion) and employing rewriting techniques known as tiling or blocking both strive to avoid those extra memory fetches.

While there are some rules of thumb for this rewrite exercise, you typically have to carefully consider loop carried data dependencies, to ensure that you don't affect the semantics of the program.

These things are what really pay off in the multicore world, where you typically won't see much throughput improvement after adding the second thread.

  • When we have looked at various standard benchmarks, we have seen that a surprisingly large fraction of those fail to use 100% of the fetched cache lines before the cache lines are evicted. May I ask what sort of profiling tools give you this kind of information, and how?
    – user4842163
    Commented May 3, 2015 at 19:25
  • "Organize your data to avoid alignment holes (sorting your struct members by decreasing size is one way)" - why doesn't the compiler optimize this itself? why can't the compiler always "sort members by decreasing size"? what is the advantage of keeping members unsorted? Commented Sep 2, 2016 at 7:49
  • I know not the origins, but for one, member order is crucial in let's say network communication, where you may want to send entire structures byte by byte over the web.
    – Kobrar
    Commented Nov 4, 2016 at 12:54
  • @javapowered The compiler may be able to do that depending on the language, though I'm not sure if any of them do. The reason you can't do it in C is that it's perfectly valid to address members by base address+offset rather than by name, which means reordering the members would completely break the program. Commented Feb 16, 2018 at 19:40

I can't believe there aren't more answers to this. Anyway, one classic example is to iterate a multidimensional array "inside out":

pseudocode
for (i = 0 to size)
  for (j = 0 to size)
    do something with ary[j][i]

The reason this is cache-inefficient is that modern CPUs load a whole cache line of "nearby" memory addresses when you access a single address. The inner loop here varies j, the row index, so consecutive iterations touch entries a whole row apart in memory; each access lands on a different cache line, and by the time the loop comes back to a line it may already have been evicted. If this is changed to the equivalent:

for (i = 0 to size)
  for (j = 0 to size)
    do something with ary[i][j]

It will run much faster.

  • back in college we had an assignment on matrix multiplication. It turned out that it was faster to take a transpose of the "columns" matrix first and multiply rows by rows rather than rows by cols for that precise reason. Commented Jun 1, 2009 at 17:34
  • actually, most modern compilers can figure this out by themselves (with optimizations turned on) Commented Apr 27, 2010 at 13:29
  • @ykaganovich That is also the example in Ulrich Drepper's article: lwn.net/Articles/255364 Commented Mar 7, 2013 at 13:19
  • I'm not sure this is always correct - if the entire array fits within the L1 cache (often 32k!) both orders will have the same number of cache hits and misses. Perhaps memory pre-fetching might have some impact I guess. Happy to be corrected of course. Commented Oct 4, 2016 at 9:54
  • who would ever choose the first version of this code if the order didn't matter? Commented Aug 8, 2018 at 7:57

The basic rules are actually fairly simple. Where it gets tricky is in how they apply to your code.

The cache works on two principles: Temporal locality and spatial locality. The former is the idea that if you recently used a certain chunk of data, you'll probably need it again soon. The latter means that if you recently used the data at address X, you'll probably soon need address X+1.

The cache tries to accommodate this by remembering the most recently used chunks of data. It operates on cache lines, typically 64 bytes or so, so even if you only need a single byte, the entire cache line that contains it gets pulled into the cache. So if you need the following byte afterwards, it'll already be in the cache.

And this means that you'll always want your own code to exploit these two forms of locality as much as possible. Don't jump all over memory. Do as much work as you can on one small area, and then move on to the next, and do as much work there as you can.

A simple example is the 2D array traversal that 1800's answer showed. If you traverse it a row at a time, you're reading the memory sequentially. If you do it column-wise, you'll read one entry, then jump to a completely different location (the start of the next row), read one entry, and jump again. And when you finally get back to the first row, it will no longer be in the cache.

The same applies to code. Jumps or branches mean less efficient cache usage (because you're not reading the instructions sequentially, but jumping to a different address). Of course, small if-statements probably won't change anything (you're only skipping a few bytes, so you'll still end up inside the cached region), but function calls typically imply that you're jumping to a completely different address that may not be cached. Unless it was called recently.

Instruction cache usage is usually far less of an issue though. What you usually need to worry about is the data cache.

In a struct or class, all members are laid out contiguously, which is good. In an array, all entries are laid out contiguously as well. In linked lists, each node is allocated at a completely different location, which is bad. Pointers in general tend to point to unrelated addresses, which will probably result in a cache miss when you dereference them.

And if you want to exploit multiple cores, it can get really interesting, as usually, only one CPU may have any given address in its L1 cache at a time. So if both cores constantly access the same address, it will result in constant cache misses, as they're fighting over the address.

  • +1, good and practical advice. One addition: temporal locality and spatial locality combined suggest that for matrix ops, for example, it might be advisable to split them into smaller matrices that completely fit into the cache, or whose rows/columns fit into cache lines. I remember doing that for visualization of multidim. data. It provided some serious kick in the pants. It is good to remember that the cache does hold more than one 'line' ;)
    – AndreasT
    Commented Aug 11, 2009 at 14:47
  • You say only one CPU can have a given address in its L1 cache at a time -- I assume you mean cache lines rather than addresses. Also, I've heard of false sharing problems when at least one of the CPUs is doing writes, but not if both are only doing reads. So by 'access' do you actually mean writes? Commented Apr 3, 2012 at 14:04
  • @JosephGarvin: yes, I meant writes. You are right, multiple cores can have the same cache lines in their L1 caches at the same time, but when one core writes to these addresses, it gets invalidated in all other L1 caches, and then they have to reload it before they can do anything with it. Sorry for the imprecise (wrong) wording. :) Commented Feb 19, 2013 at 23:34

I recommend reading the 9-part article What every programmer should know about memory by Ulrich Drepper if you're interested in how memory and software interact. It's also available as a 104-page PDF.

Sections especially relevant to this question might be Part 2 (CPU caches) and Part 5 (What programmers can do - cache optimization).


Apart from data access patterns, a major factor in cache-friendly code is data size. Less data means more of it fits into the cache.

This is mainly a factor with memory-aligned data structures. "Conventional" wisdom says data structures must be aligned at word boundaries because the CPU can only access entire words, and if a word contains more than one value, you have to do extra work (read-modify-write instead of a simple write). But caches can completely invalidate this argument: the few extra instructions for packed access are often far cheaper than the additional cache misses caused by the inflated, padded data.

Similarly, a Java boolean array uses an entire byte for each value in order to allow operating on individual values directly. You can reduce the data size by a factor of 8 if you use actual bits, but then access to individual values becomes much more complex, requiring bit shift and mask operations (the BitSet class does this for you). However, due to cache effects, this can still be considerably faster than using a boolean[] when the array is large. IIRC I once achieved a speedup by a factor of 2 or 3 this way.


One example I saw used in a game engine was to move data out of objects and into their own arrays. A game object that was subject to physics might have a lot of other data attached to it as well. But during the physics update loop all the engine cared about was data about position, speed, mass, bounding box, etc. So all of that was placed into its own arrays and optimized as much as possible for SSE.

So during the physics loop, the physics data was processed in array order using vector math. Each game object used its object ID as the index into the various arrays; IDs were used rather than pointers because pointers would become invalid if the arrays had to be relocated.

In many ways this violated object-oriented design patterns but it made the code a lot faster by placing data close together that needed to be operated on in the same loops.

This example is probably out of date because I expect most modern games use a prebuilt physics engine like Havok.

  • +1 Not at all out of date. This is the best way to organise data for game engines -- make data blocks contiguous, and perform all of a given type of operation (say AI) before moving on to the next (say physics) in order to leverage cache proximity / locality of reference.
    – Engineer
    Commented Nov 5, 2013 at 16:20
  • I saw this exact example in a video somewhere a couple of weeks ago, but have since lost the link to it / can't remember how to find it. Do you remember where you saw this example?
    – will
    Commented Aug 8, 2014 at 0:16
  • @will: No, I don't remember exactly where this was.
    – Zan Lynx
    Commented Aug 11, 2014 at 22:48
  • This is the very idea of an entity component system (ECS: en.wikipedia.org/wiki/Entity_component_system). Store data as struct-of-arrays rather than the more traditional array-of-structs that OOP practices encourage.
    – BuschnicK
    Commented Mar 11, 2019 at 13:55

The most effective data structure for a cache is an array. Caches work best if your data is laid out sequentially, as CPUs read entire cache lines (usually 32 bytes or more) at once from main memory.

Any algorithm which accesses memory in random order thrashes the caches because it always needs new cache lines to accommodate the randomly accessed memory. On the other hand, an algorithm which runs sequentially through an array is best because:

  1. It gives the CPU a chance to read ahead, i.e. speculatively load more memory into the cache that will be accessed soon. This read-ahead gives a huge performance boost.

  2. Running a tight loop over a large array also allows the CPU to cache the code executing in the loop and in most cases allows you to execute an algorithm entirely from cache memory without having to block for external memory access.

  • @Grover: About your point 2: so can one say that if, inside a tight loop, a function is called on each iteration, it will fetch new code altogether and cause a cache miss; whereas if you put the function's code inside the loop itself, with no function call, it would be faster due to fewer cache misses?
    – goldenmean
    Commented Apr 18, 2009 at 11:23
  • Yes and no. The new function will be loaded into the cache. If there's enough cache space, then on the second iteration it'll already have that function in the cache, so there's no reason to reload it again. So it is a hit on the first call. In C/C++ you can ask the compiler to place functions right next to each other using appropriate segments.
    – grover
    Commented Apr 18, 2009 at 13:01
  • One more note: If you call out of the loop and there's not enough cache space the new function will be loaded into cache regardless. It may even happen that the original loop will be thrown out of the cache. In this case the call will incur up to three penalties for each iteration: One to load the call target and another to reload the loop. And a third if the loop head is not in the same cache line as the call return address. In that case jumping to the loop head also needs a new memory access.
    – grover
    Commented Apr 18, 2009 at 13:04

A remark on the "classic example" by user 1800 INFORMATION (too long for a comment)

I wanted to check the time difference between the two iteration orders, so I made a simple experiment with a large 2D array:

measure::start();
for ( int y = 0; y < N; ++y )
    for ( int x = 0; x < N; ++x )
        sum += A[ x + y*N ];
measure::stop();

and the second case with the for loops swapped.

The slower version ("x first") took 0.88 sec and the faster one took 0.06 sec. That's the power of caching :)

I used gcc -O2, and still the loops were not optimized out. The comment by Ricardo that "most modern compilers can figure this out by themselves" does not hold.

  • Not sure I get this. In both examples you are still accessing each variable in the for loop. Why is one way faster than the other?
    – ed-
    Commented Oct 14, 2015 at 4:39
  • ultimately intuitive for me to understand how it affects :)
    – Laie
    Commented Nov 25, 2015 at 6:53
  • @EdwardCorlew It's because of the order in which they are accessed. The y-first order is faster because it accesses the data sequentially. When the first entry is requested the L1 cache loads an entire cache-line, which includes the int requested plus the next 15 (assuming a 64-byte cache-line), so there is no CPU stall waiting for the next 15. The x-first order is slower because the element accessed is not-sequential, and presumably N is large enough that the memory being accessed is always outside the L1 cache and so every operation stalls. Commented Oct 4, 2016 at 9:13

Only one post touched on it, but a big issue comes up when sharing data between processes. You want to avoid having multiple processes attempting to modify the same cache line simultaneously. Something to look out for here is "false" sharing, where two adjacent data structures share a cache line and modifications to one invalidates the cache line for the other. This can cause cache lines to unnecessarily move back and forth between processor caches sharing the data on a multiprocessor system. A way to avoid it is to align and pad data structures to put them on different lines.

I can answer (2) by saying that in the C++ world, linked lists can easily kill the CPU cache. Arrays are a better solution where possible. No experience on whether the same applies to other languages, but it's easy to imagine the same issues would arise.

  • @Andrew: How about structures? Are they cache-efficient? Do they have any size constraints to be cache-efficient?
    – goldenmean
    Commented Apr 18, 2009 at 11:30
  • A struct is a single block of memory, so as long as it doesn't exceed the size of your cache you won't see an impact. It's only when you have a collection of structs (or classes) that you'll see cache hits and it depends on the way you organise the collection. An array butts the objects up against each other (good) but a linked list can have objects all over your address space with links between them, which is obviously bad for cache performance.
    – Andrew
    Commented Apr 18, 2009 at 12:28
  • One way to use linked lists without killing the cache, most effective for small lists, is to create your own memory pool: allocate one large array, and then, instead of 'malloc'ing (or 'new'ing in C++) memory for each little linked-list member, which may be allocated in an entirely different place in memory and waste management space, give it memory from your memory pool. This greatly increases the odds that logically adjacent members of the list will be in the cache together. Commented Apr 18, 2009 at 20:08
  • Sure, but it's a lot of work getting std::list<> et al. to use your custom memory blocks. When I was a young whippersnapper I'd absolutely go that path, but these days... too many other things to tackle.
    – Andrew
    Commented Apr 18, 2009 at 22:06

The cache is arranged in "cache lines", and (real) memory is read from and written to in chunks of this size.

Data structures that are contained within a single cache-line are therefore more efficient.

Similarly, algorithms which access contiguous memory blocks will be more efficient than algorithms which jump through memory in a random order.

Unfortunately the cache line size varies dramatically between processors, so there's no way to guarantee that a data structure that's optimal on one processor will be efficient on any other.

  • Not necessarily; just be careful about false sharing. Sometimes you have to split data across different cache lines. How effective the cache is always depends on how you use it.
    – DAG
    Commented May 26, 2016 at 2:24

Asking how to make code cache-effective/cache-friendly, like most of the other questions here, usually amounts to asking how to optimize a program; the cache has such a huge impact on performance that any optimized program is one that is cache-effective/cache-friendly.

I suggest reading about optimization; there are some good answers on this site. In terms of books, I recommend Computer Systems: A Programmer's Perspective, which has some fine text about the proper usage of the cache.

(By the way, as bad as a cache miss can be, there is worse: a program that is paging from the hard drive...)


There have been a lot of answers with general advice on data structure selection, access patterns, etc. Here I would like to add another code design pattern, called software pipelining, that makes use of active cache management.

The idea is borrowed from other pipelining techniques, e.g. CPU instruction pipelining.

This type of pattern best applies to procedures that

  1. can be broken down into multiple reasonable sub-steps, S[1], S[2], S[3], ... whose execution time is roughly comparable to RAM access latency (~60-70 ns), and
  2. take a batch of inputs and perform the aforementioned steps on them to get results.

Let's take a simple case where there is only one sub-procedure. Normally the code would look like:

def proc(input):
    return sub_step(input)

To get better performance, you might want to pass multiple inputs to the function in a batch, so that you amortize the function call overhead and also increase code cache locality.

def batch_proc(inputs):
    results = []
    for i in inputs:
        # avoids code cache misses, but each input still suffers a data cache miss
        results.append(sub_step(i))
    return results

However, as said earlier, if the execution time of a step is roughly the same as the RAM access time, you can further improve the code to something like this:

def batch_pipelined_proc(inputs):
    results = []
    for i in range(len(inputs) - 1):
        prefetch(inputs[i + 1])
        # work on the current item while inputs[i+1] is flying back from RAM
        results.append(sub_step(inputs[i]))
    results.append(sub_step(inputs[-1]))
    return results

The execution flow would look like:

  1. prefetch(1) asks the CPU to prefetch inputs[1] into the cache; the prefetch instruction itself takes P cycles and returns, while in the background inputs[1] arrives in the cache after R cycles.
  2. works_on(0) takes a cold miss on inputs[0] and works on it, which takes M cycles.
  3. prefetch(2) issues another fetch.
  4. works_on(1): if P + R <= M, then inputs[1] should already be in the cache before this step, thus avoiding a data cache miss.
  5. works_on(2) ...

There could be more steps involved; you can then design a multi-stage pipeline, and as long as the timing of the steps matches the memory access latency, you will suffer few code/data cache misses. However, this process needs to be tuned with many experiments to find the right grouping of steps and the right prefetch distance. Because of the effort required, it sees more adoption in high-performance data/packet stream processing. A good production code example can be found in the DPDK QoS enqueue pipeline design: http://dpdk.org/doc/guides/prog_guide/qos_framework.html, Chapter 21.2.4.3, Enqueue Pipeline.

More information can be found here:

https://software.intel.com/en-us/articles/memory-management-for-optimal-performance-on-intel-xeon-phi-coprocessor-alignment-and

http://infolab.stanford.edu/~ullman/dragon/w06/lectures/cs243-lec13-wei.pdf

Besides aligning your structure and fields, if your structure is heap-allocated you may want to use allocators that support aligned allocations, such as _aligned_malloc(sizeof(DATA), SYSTEM_CACHE_LINE_SIZE); otherwise you may get random false sharing. Remember that in Windows, the default heap has 16-byte alignment.

Write your program to take a minimal size. That is why it is not always a good idea to use -O3 optimisations with GCC: it produces larger code. Often, -Os is just as good as -O2. It all depends on the processor used, though. YMMV.

Work with small chunks of data at a time. That is why less efficient sorting algorithms can run faster than quicksort if the data set is large. Find ways to break your larger data sets into smaller ones. Others have suggested this.

To help you better exploit instruction temporal/spatial locality, you may want to study how your code gets converted into assembly. For example:

for(i = 0; i < MAX; ++i)
for(i = MAX; i > 0; --i)

The two loops produce different code even though they are merely iterating through an array. In any case, your question is very architecture-specific, so your only way to tightly control cache use is to understand how the hardware works and optimise your code for it.

  • Interesting point. Do look-ahead caches make assumptions based on the direction of a loop/pass through memory?
    – Andrew
    Commented Apr 18, 2009 at 12:30
  • There are many ways to design speculative data caches. Stride based ones do measure the 'distance' and 'direction' of data accesses. Content based ones chase pointer chains. There are other ways to design them.
    – sybreon
    Commented Apr 18, 2009 at 14:08
  • Often, -Os is just as good as -O2 this isn't quite true. Most compilers don't enable vectorization at -Os but many do at -O2. Nowadays it's usually better to use -O3 to enable all autovectorization options. If done properly it'll be many times faster than scalar code
    – phuclv
    Commented Apr 17, 2021 at 0:48
