Basic Cache Optimization – Part 1

Decades of refinement have resulted in CPUs optimized for processing arrays of numerical data. A cache, then multiple caches, were introduced to ameliorate the growing disparity between CPU and RAM speeds. To achieve the maximum possible CPU performance for a given calculation, the data necessary for the calculation must be loaded into the cache as quickly as possible and as few times as possible.

If all of the data required for a calculation fits into cache simultaneously, no cache-specific optimization is necessary. When the data required for a calculation is too large to fit into the cache, it must be processed in blocks. These blocks should contain all necessary data to complete a subset of the calculation. If possible, they should also contain the data necessary to complete the next subset of the calculation. Failing to do this efficiently will result in unnecessary RAM accesses and longer execution times.

Modern CPUs have multiple cores and multiple caches. Commonly, each core has its own cache (the L1 cache) and shares one or more other caches (L2, L3, perhaps others) with the remaining cores. This article uses vector dot product and matrix-matrix multiplication to demonstrate optimizing for the L1 cache. Depending on the data characteristics and potential processing speed of a particular calculation, optimizing for other caches may be useful.

Basic Example: Vector Dot Product

Some calculations are inherently cache-optimized. The vector dot product is one. Take two nine-element vectors, X and Y, stored as one-dimensional arrays, and a CPU with a six-element cache, CA. To compute X • Y, corresponding elements of X and Y are multiplied together and the resulting products are summed. System memory initially looks like:

[Image: X and Y stored as sequential one-dimensional arrays in RAM; the six-element cache CA is empty]

The dot product code starts with the first elements of X and Y, X[0] and Y[0]. When the code requests these from RAM, the memory manager fetches X[0] and the values stored in the next several sequential memory locations. These values are copied into cache. It then performs the same process for Y[0].

Once the data has been retrieved, the code stores the product X[0] * Y[0], 1 * 11 in this example, in a temporary variable. (Writing to cache will be covered in a separate article.) If the memory manager fetches three elements at a time, system memory now looks like:

[Image: RAM unchanged; cache CA holds the first three elements of X and the first three elements of Y]

The code moves on to the second pair of elements. When it requests X[1] and Y[1], the memory manager retrieves each from cache. This replaces two slow RAM accesses with two fast cache accesses. The third element pair, X[2] and Y[2], is also retrieved from cache.

When the code requests the fourth element pair, the memory manager has to read them from RAM. It performs one read for X’s elements 4, 5, and 6 and another read for Y’s elements 4, 5, and 6. These six values are copied into the cache. System memory now looks like:

[Image: cache CA now holds elements 4 through 6 of X and elements 4 through 6 of Y]

Element pairs 4, 5, and 6 can be multiplied without accessing RAM. Element pairs 7, 8, and 9 require two more RAM accesses. The entire dot product calculation requires six reads from RAM: two reads each for the first, fourth, and seventh element pairs. Those six reads load the remaining element pairs as a side effect.

This calculation is inherently cache-optimized. The vectors are stored as one-dimensional arrays. The array elements are sequential in RAM. The array elements are accessed in sequence. The memory manager’s default actions are optimal for this calculation.
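
As a sketch, the straightforward implementation of this access pattern looks something like the following C function (the names are illustrative; nothing here is prescribed by the article):

    /* Sequential dot product: both arrays are walked front to back,
     * so every element the memory manager prefetches is used before
     * it is evicted. */
    double dot(const double *x, const double *y, int n)
    {
        double sum = 0.0;             /* the temporary variable */
        for (int i = 0; i < n; i++)
            sum += x[i] * y[i];
        return sum;
    }

With the nine-element X and Y above, dot(x, y, 9) produces exactly the six-RAM-read behavior just described.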

A programmer could short-circuit this inherent optimization. If X • Y were computed with a stride of three, as X[0] * Y[0] + X[3] * Y[3] + X[6] * Y[6], then X[1] * Y[1] + X[4] * Y[4] + X[7] * Y[7], and so forth, the calculation would take much longer than necessary. By the time the code circled back to X[1] and Y[1], those values would already have been evicted, so every element pair would require two reads from RAM. The calculation would go from six RAM reads to eighteen.
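
For comparison, here is a sketch of that strided anti-pattern, again with illustrative names. It computes the same sum, but in the cache-hostile order:

    /* Strided dot product: with n = 9 and stride = 3, pairs are
     * visited as 0, 3, 6, then 1, 4, 7, then 2, 5, 8. In the
     * six-element-cache model above, X[0]..X[2] have been evicted
     * by the time X[1] is requested, so every pair costs two RAM
     * reads. */
    double dot_strided(const double *x, const double *y,
                       int n, int stride)
    {
        double sum = 0.0;
        for (int start = 0; start < stride; start++)
            for (int i = start; i < n; i += stride)
                sum += x[i] * y[i];
        return sum;
    }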

Unfortunately, many data structures do not work well with how the cache is filled.

Complex Example: Matrix-Matrix Multiplication

While programmers and researchers often use n-dimensional data structures, RAM is a one-dimensional array. Mapping a two-, three-, or more-dimensional array into one-dimensional RAM often leads to inefficient cache use. Matrix-matrix multiplication is a perfect example. If needed, please see A Quick Introduction to Matrix-Matrix Multiplication for a brief refresher.

Basic Multiplication

Matrices are generally thought of as two-dimensional arrays. But what appears on paper as:

[Image: A and B written as two-dimensional grids]

looks like this in RAM:

[Image: A and B in RAM, each matrix stored row after row in one dimension]
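
In C, for example, two-dimensional arrays are stored in this row-major order, and the mapping from matrix coordinates to the flat one-dimensional layout is explicit. A minimal sketch:

    /* Row-major mapping: element (i, j) of a matrix with n_cols
     * columns lives at flat index i * n_cols + j. Stepping along a
     * row moves one element at a time in RAM; stepping down a
     * column jumps n_cols elements at a time. */
    double element(const double *m, int n_cols, int i, int j)
    {
        return m[i * n_cols + j];
    }

That column-stepping jump is the root of the problem that follows.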

The first element of A * B is the dot product of A’s first row and B’s first column. On paper, this is simple.

[Image: A's first row paired element by element with B's first column]

In RAM, it is not.

[Image: in RAM, A's first row is contiguous while B's first column is scattered, one element per row of B]

When the code begins calculating the dot product, it requests A[0,0] and B[0,0]. Using the same six-element cache and a memory manager that loads three sequential elements at a time, the cache looks like this:

[Image: cache CA holds the first three elements of A's first row and the first three elements of B's first row]

The first three elements of A’s first row are now in cache. The first three elements of B’s first row are also in cache. Unfortunately, we need the first three elements of B’s first column. Only the first element of B’s first row is useful.

Since a full row of A and a full column of B cannot fit into the cache at the same time, the best scenario for a single dot product calculation is four reads from RAM: two for the row of A and two for the column of B. But, as shown, the data structure doesn't allow this.

With this data structure, each dot product requires eight RAM accesses: one for every three elements of a row of A plus one for every element of a column of B. A * B requires 54 dot products, so the full multiplication takes 432 RAM accesses instead of the ideal 216. A modern CPU core can execute hundreds, if not thousands, of operations in the time one RAM access takes.
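
A minimal sketch of the multiplication just analyzed, assuming row-major storage and illustrative names and dimensions:

    /* Naive multiplication: C = A * B, where A is rows_a x inner,
     * B is inner x cols_b, and all three matrices are row-major.
     * The k loop walks a row of A sequentially, but walks a column
     * of B by jumping cols_b elements per step, so in the model
     * above each B access is a fresh RAM read. */
    void matmul_naive(const double *a, const double *b, double *c,
                      int rows_a, int inner, int cols_b)
    {
        for (int i = 0; i < rows_a; i++)
            for (int j = 0; j < cols_b; j++) {
                double sum = 0.0;
                for (int k = 0; k < inner; k++)
                    sum += a[i * inner + k] * b[k * cols_b + j];
                c[i * cols_b + j] = sum;
            }
    }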

First Optimization

If B's columns were stored as rows, their elements would be sequential in memory. This would result in better cache use: the memory manager would automatically load three useful elements of B into cache per RAM access. Transposing a matrix swaps its rows and columns; Bᵀ denotes the transpose of B. On paper, the new matrices look like:

[Image: A and Bᵀ as two-dimensional grids; Bᵀ's rows are B's columns]

They look like this in RAM:

[Image: A and Bᵀ in RAM; the elements of each former column of B are now contiguous]

This change turns the cache for the first portion of the first dot product into:

[Image: cache CA holds the first three elements of A's first row and the first three elements of Bᵀ's first row]

This reduces the RAM accesses per dot product to four. Since this is the calculated minimum number of accesses per dot product, the data now seems optimized for the cache.

But it isn't. Cache usage for calculating a single dot product is optimized, but the overall calculation multiplies two matrices. The second element of A * B is the dot product of the first row of A and the second column of B. Using Bᵀ, that is the dot product of the first row of A and the second row of Bᵀ.

[Image: A's first row paired element by element with Bᵀ's second row]

This will take an additional four RAM accesses. Two of these will be spent loading the first row of A into the cache for the second time, because computing the first dot product pushed it out. Optimizing the low-level calculation, in this case the dot product, improved performance. However, optimizing the calculation as a whole will cut the number of RAM accesses almost in half for this example.
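
A sketch of the transpose approach, under the same assumptions as the earlier snippets. Row j of bt is column j of B, so every dot product walks two sequential arrays, the pattern from the vector example:

    #include <stdlib.h>

    /* Transpose B, then multiply using rows of both operands. The
     * transpose costs extra RAM traffic up front, but each of the
     * many dot products then reads memory sequentially. */
    void matmul_transposed(const double *a, const double *b, double *c,
                           int rows_a, int inner, int cols_b)
    {
        double *bt = malloc(sizeof *bt * inner * cols_b);
        if (bt == NULL)
            return;                   /* allocation failed */

        /* bt[j][k] = b[k][j] */
        for (int k = 0; k < inner; k++)
            for (int j = 0; j < cols_b; j++)
                bt[j * inner + k] = b[k * cols_b + j];

        for (int i = 0; i < rows_a; i++)
            for (int j = 0; j < cols_b; j++) {
                double sum = 0.0;
                for (int k = 0; k < inner; k++)
                    sum += a[i * inner + k] * bt[j * inner + k];
                c[i * cols_b + j] = sum;
            }

        free(bt);
    }

Even this version reloads rows it has already seen, which is exactly the waste that whole-calculation optimization targets.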

To be continued in part two.
