Basic Cache Optimization – Part 1

Decades of refinement have resulted in CPUs optimized for processing arrays of numerical data. A cache, then multiple caches, were introduced to ameliorate the growing disparity between CPU and RAM speeds. To achieve the maximum possible CPU performance for a given calculation, the data necessary for the calculation must be loaded into the cache as quickly as possible and as few times as possible.

If all of the data required for a calculation fits into cache simultaneously, no cache-specific optimization is necessary. When the data required for a calculation is too large to fit into the cache, it must be processed in blocks. These blocks should contain all necessary data to complete a subset of the calculation. If possible, they should also contain the data necessary to complete the next subset of the calculation. Failing to do this efficiently will result in unnecessary RAM accesses and longer execution times.

Modern CPUs have multiple cores and multiple caches. Commonly, each core has its own cache (the L1 cache) and shares one or more other caches (L2, L3, perhaps others) with the remaining cores. This article uses vector dot product and matrix-matrix multiplication to demonstrate optimizing for the L1 cache. Depending on the data characteristics and potential processing speed of a particular calculation, optimizing for other caches may be useful.

Basic Example: Vector Dot Product

Some calculations may be inherently cache-optimized. Vector dot product is one. Take two nine-element vectors, X and Y, stored as one-dimensional arrays. The CPU has a six-element cache, CA. To compute X • Y, corresponding elements of X and Y are multiplied together and the resulting products are summed. System memory initially looks like:

[Figure: X and Y stored sequentially in RAM; the six-element cache CA is empty]

The dot product code starts with the first elements of X and Y, X[0] and Y[0]. When the code requests these from RAM, the memory manager fetches X[0] and the values stored in the next several sequential memory locations. These values are copied into cache. It then performs the same process for Y[0].

Once the data has been retrieved, the code stores 1 * 11 in a temporary variable. (Writing to cache will be covered in a separate article.) If the memory manager fetches three elements at a time, system memory now looks like:

[Figure: the first three elements of X and of Y copied into cache CA]

The code moves on to the second pair of elements. When it requests X[1] and Y[1], the memory manager retrieves each from cache. This replaces two slow RAM accesses with two fast cache accesses. The third element pair, X[2] and Y[2], is also retrieved from cache.

When the code requests the fourth element pair, the memory manager has to read them from RAM. It performs one read for X’s elements 4, 5, and 6 and another read for Y’s elements 4, 5, and 6. These six values are copied into the cache. System memory now looks like:

[Figure: elements 4, 5, and 6 of X and of Y now in cache CA]

Element pairs 4, 5, and 6 can be multiplied without accessing RAM. Element pairs 7, 8, and 9 require two more RAM accesses. This entire dot product calculation requires six reads from RAM: two reads each for the first, fourth, and seventh element pairs. These six reads load the remaining element pairs as a side-effect.

This calculation is inherently cache-optimized. The vectors are stored as one-dimensional arrays. The array elements are sequential in RAM. The array elements are accessed in sequence. The memory manager’s default actions are optimal for this calculation.

A programmer could short-circuit this inherent optimization. If X • Y were computed as X[0] * Y[0] + X[3] * Y[3] + X[6] * Y[6] and so forth, the calculation would take much longer than necessary. Every element pair would require two reads from RAM. The calculation would go from six RAM reads to eighteen.
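
For reference, here is a minimal C# sketch of the sequential version described above (the method and array names are illustrative, not from any particular library):

// Sequential dot product: elements are read in storage order, so every
// value the memory manager pre-loads into cache gets used.
static double DotProduct(double[] x, double[] y)
{
    double sum = 0.0;
    for (int i = 0; i < x.Length; ++i)
        sum += x[i] * y[i];
    return sum;
}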

Unfortunately, many data structures do not work well with how the cache is filled.

Complex Example: Matrix-Matrix Multiplication

While programmers and researchers often use n-dimensional data structures, RAM is a one-dimensional array. Mapping a two-, three-, or more-dimensional array into one-dimensional RAM often leads to inefficient cache use. Matrix-matrix multiplication is a perfect example. If needed, please see A Quick Introduction to Matrix-Matrix Multiplication for a brief refresher.

Basic Multiplication

Matrices are generally thought of as two-dimensional arrays. But what appears on paper as:

[Figure: a matrix written as a two-dimensional grid]

looks like this in RAM:

[Figure: the same matrix stored row after row in one-dimensional RAM]
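
This layout is known as row-major order: each row is stored end to end. A minimal C# sketch of the addressing, assuming the matrix is kept in one flat array (names are illustrative):

// Row-major addressing: M[row, column] lives at offset row * columns + column.
static double Element(double[] m, int columns, int row, int column)
{
    return m[row * columns + column];
}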

The first element of A * B is the dot product of A’s first row and B’s first column. On paper, this is simple.

[Figure: A’s first row and B’s first column highlighted in the two-dimensional view]

In RAM, it is not.

[Figure: A’s first row is sequential in RAM, but B’s first column is scattered across it]

When the code begins calculating the dot product, it requests A[0,0] and B[0,0]. Using the same six-element cache and memory manager that loads three sequential elements at a time, the cache looks like this:

[Figure: cache holds the first three elements of A’s first row and the first three elements of B’s first row]

The first three elements of A’s first row are now in cache. The first three elements of B’s first row are also in cache. Unfortunately, we need the first three elements of B’s first column. Only the first element of B’s first row is useful.

Since one row of A and one column of B cannot fit into cache at the same time, the best case for a single dot product is four reads from RAM: two for the row of A and two for the column of B. But, as shown, the data structure doesn’t allow this.

With this data structure, each dot product requires eight RAM accesses: one for every three elements of a row of A plus one for every element of a column of B. A * B requires 54 dot products, so the multiplication takes 432 RAM accesses instead of the ideal 216. A modern CPU core can do hundreds if not thousands of operations in the time one RAM access takes.
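
A minimal C# sketch of this naive multiply over row-major arrays (dimension and method names are illustrative): the inner loop walks a row of a sequentially but strides through b one full row at a time.

// a is rows x inner, b is inner x columns, both stored row-major.
static double[] MultiplyNaive(double[] a, double[] b, int rows, int inner, int columns)
{
    var c = new double[rows * columns];
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < columns; ++j)
        {
            double sum = 0.0;
            for (int k = 0; k < inner; ++k)
                sum += a[i * inner + k] * b[k * columns + j]; // b access is strided
            c[i * columns + j] = sum;
        }
    return c;
}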

First Optimization

If B’s columns were stored as rows, their elements would be sequential in memory. This would result in better cache use because the memory manager would automatically load three useful elements of B into cache per RAM access. Transposing a matrix swaps its rows and columns. BT represents the transposed B. On paper, the new matrices look like:

[Figure: A and BT in two-dimensional form]

They look like this in RAM:

[Figure: A and BT in RAM; BT’s rows are sequential]

This change turns the cache for the first portion of the first dot product into:

[Figure: cache holds the first three elements of A’s first row and of BT’s first row]

This reduces the RAM accesses per dot product to four. Since this is the calculated minimum number of accesses per dot product, the data now seems optimized for the cache.

But it isn’t. Cache usage for calculating a single dot product is optimized. But the calculation multiplies two matrices. The second element of A * B is the dot product of the first row of A and the second column of B. Using BT, the calculation is the dot product of the first row of A and the second row of BT.

[Figure: the first row of A and the second row of BT]

This will take an additional four RAM accesses. Two of these will be spent loading the first row of A into the cache for the second time. Optimizing the low-level calculation, in this case dot product, improved performance. However, optimizing the calculation as a whole will cut the number of RAM accesses almost in half for this example.
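
A minimal C# sketch of the transposed version (again with illustrative names): b is copied into a row-major bt once, after which every dot product reads two sequential arrays.

static double[] MultiplyTransposed(double[] a, double[] b, int rows, int inner, int columns)
{
    // bt[j, k] = b[k, j]; one pass over b builds the transposed copy.
    var bt = new double[columns * inner];
    for (int k = 0; k < inner; ++k)
        for (int j = 0; j < columns; ++j)
            bt[j * inner + k] = b[k * columns + j];

    var c = new double[rows * columns];
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < columns; ++j)
        {
            double sum = 0.0;
            for (int k = 0; k < inner; ++k)
                sum += a[i * inner + k] * bt[j * inner + k]; // both sequential
            c[i * columns + j] = sum;
        }
    return c;
}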

To be continued in part two.

A Quick Introduction to Matrix-Matrix Multiplication

Multiplying two matrices together is most easily explained with an example. Once we work through the example, we’ll express what we did in words.

Imagine we have two matrices, A and B. A has 5 rows and 3 columns. B has 3 rows and 4 columns. Multiplying A * B results in C, which has 5 rows and 4 columns.

[Figure: matrices A, B, and C]
The numbers down the left of each matrix are the row numbers. The ones across the top are the column numbers.

We refer to an element in a matrix by its row and column position: Matrix Name[row, column]. A[1,1] is 1. B[2,3] is 6. C[5,4] is… well, we don’t know what C[5,4] is.

[Figure: A[1,1], B[2,3], and C[5,4] shaded]
The shaded cells are the elements referred to.

To calculate C[1,1], we need the first row of A and the first column of B.

[Figure: the first row of A and the first column of B highlighted]

We need to take the dot product of the first row of A and the first column of B. The dot product is represented by a dot: x • y. Dot product means multiplying each element of A’s first row by the corresponding element of B’s first column, then adding up the result of the multiplications. Put another way:

A's Row 1 • B's Column 1 = A[1,1] * B[1,1] + A[1,2] * B[2,1] + A[1,3] * B[3,1] 

It’s easier to calculate if we write it down. First, we’ll take just the row and column we need.

[Figure: A’s first row and B’s first column]
The numbers down the left are the row numbers. The ones across the top are the column numbers.

Next, we’ll write B’s first column underneath A’s first row, lining up A’s columns with B’s rows.

[Figure: B’s first column written beneath A’s first row]
The numbers across the top represent column numbers for A and row numbers for B.

Then we’ll multiply the numbers in each column and write each result underneath. 1 * 12 is 12, 2 * 8 is 16, and 3 * 4 is 12.

[Figure: the three products written beneath their columns]

Finally, we add them together. 12 + 16 + 12 = 40.

[Figure: the products summed to 40]

And put the result in C[1,1].

[Figure: C with C[1,1] filled in as 40]
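
The same arithmetic as C# code, using the values from the figures:

double[] aRow1 = { 1, 2, 3 };     // A's first row
double[] bColumn1 = { 12, 8, 4 }; // B's first column
double c11 = aRow1[0] * bColumn1[0]
           + aRow1[1] * bColumn1[1]
           + aRow1[2] * bColumn1[2]; // 12 + 16 + 12 = 40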

The remaining elements of C are calculated in a similar manner. C[1,2] is the dot product of the first row of A and the second column of B. C[1, 3] is the dot product of the first row of A and the third column of B.

Even though A only has three columns, we’re taking the dot product of rows of A and columns of B. Since B has another column, we take the dot product of the first row of A and the fourth column of B to calculate C[1, 4]. If B had additional columns, we would continue calculating dot products using the first row of A until we ran out of columns.

The remaining rows of A follow the same pattern. C[2,1] is the dot product of the second row of A and the first column of B. C[3,2] is the dot product of the third row of A and the second column of B. And so forth.

The general rule is C[i, j] equals the dot product of the i-th row of A and the j-th column of B. We’ll do one more example. C[5, 3] is the dot product of the fifth row of A and the third column of B.

[Figure: the fifth row of A and the third column of B highlighted]

The dot product of A’s row 5 and B’s column 3 is 244.

[Figure: the dot product of A’s row 5 and B’s column 3 worked out to 244]

So C[5, 3] is 244. Let’s fill in the rest of C.

[Figure: C completely filled in]

Matrix Multiplication Formula

Taking what we learned from the above example:

If A is a matrix with M rows and P columns and B is a matrix with P rows and N columns, A * B results in a matrix C. C has M rows and N columns. Each element in C, C[i, j], is the dot product of the i-th row of A and the j-th column of B. Two matrices can be multiplied if and only if A has the same number of columns as B has rows.
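
In symbols: C[i, j] = A[i, 1] * B[1, j] + A[i, 2] * B[2, j] + … + A[i, P] * B[P, j].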

C#, Generics, and Optimizers

Generics are a handy feature of many languages. C# has had them since version 2.0. In some circumstances, their performance impact is minimal. In others, using them can cause significant slowdowns. Here is an example of the latter.

My matrix interface requires an indexer. Indexers can adversely affect performance, so my matrix classes have indexer versions of each of the multiplication methods for benchmarking purposes. This was done even though the code is shared, as I suspected there would be a performance price for using generics.

Having determined the performance cost of indexers, I decided to convert the methods to generics. I thought the additional cost of generics would be fairly low and it would be handy to easily multiply different matrix classes. The methods went from this format:

public MatrixAA Multiply(MatrixAA multiplier) { .... }

to this:

public static U Multiply<U, V>(U matrixA, V matrixB)
    where U : IMatrix<U>, new()
    where V : IMatrix<V> { .... }

The interface itself is generic because the required methods must return their own type. The new() constraint is required on U so the product matrix can be created.
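
For context, here is a minimal sketch of the kind of interface involved; the members other than the indexer are assumptions, not the article’s actual code:

public interface IMatrix<T> where T : IMatrix<T>
{
    int Rows { get; }
    int Columns { get; }
    double this[int row, int column] { get; set; } // the indexer in question
}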

The performance hit was incredible. The above generic is 82% slower than the non-generic. The parallel version is 65% slower. More sophisticated multiplication methods show similar degradation.

Ildasm’s disassemblies of the generic and non-generic versions of the method imply the issue is either the use of an interface or the use of a generic interface. The non-generic version compiles to late-bound calls to MatrixAA.get_Item(). The generic version compiles to late-bound calls to IMatrix<U>.get_Item() and IMatrix<V>.get_Item().

The speed difference could be the overhead of getting from IMatrix<U>.get_Item() to MatrixAA.get_Item() at run-time. Or the JIT optimizer has trouble optimizing the generic version of the call. Either way, the performance loss is too great for me to use the generic versions. The indexer methods need to show the cost of using an indexer, not the cost of using generics in this situation.

Implementing a High Performance Matrix-Matrix Block Multiply

A basic block matrix-matrix multiply divides the operands into logical sub-matrices, then multiplies and adds those sub-matrices to calculate the product. For A * B = C and a block size of S:

for (int blockRow = 0; blockRow < A.Rows; blockRow += S) {
    for (int blockColumn = 0; blockColumn < B.Columns; blockColumn += S) {
        for (int blockK = 0; blockK < A.Columns; blockK += S) {
            for (int row = blockRow; row < Math.Min(blockRow + S, A.Rows); ++row) {
                for (int column = blockColumn; column < Math.Min(blockColumn + S, B.Columns); ++column) {
                    for (int k = blockK; k < Math.Min(blockK + S, A.Columns); ++k) {
                        C[row, column] += A[row, k] * B[k, column];
                    }
                }
            }
        }
    }
}

While this code is easy to understand, it relies on the compiler to treat the matrices as blocks. Many if not most compilers do this poorly. Even with significant optimization, this code cannot overcome the limitations of most commonly used matrix data structures.

In order to implement a high-performance matrix-matrix block multiply, the data structure and implementation code must be carefully optimized. In this article, the block size is S, the matrices are A * B = C, and the matrix elements are double-precision (64-bit) floating point numbers.

The Data Structure

Instead of relying on the compiler to optimize the basic block multiply code, the matrix data will be stored directly in blocks. While a matrix M has X logical rows and Y logical columns of double-precision elements, the actual structure will be an array of arrays. M has CEILING(X / S) rows and CEILING(Y / S) columns. Each physical element of M is a one-dimensional array of doubles with S * S elements. While using an array of arrays of an existing matrix class may be tempting, overall performance is unlikely to be adequate.
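
As a sketch, the data structure might look like the following C# class (all names are assumptions; s is the block size S):

public sealed class BlockMatrix
{
    public readonly int BlockRowCount;     // CEILING(X / S)
    public readonly int BlockColumnCount;  // CEILING(Y / S)
    private readonly double[][] blocks;    // each entry is a flat S * S array

    public BlockMatrix(int rows, int columns, int s)
    {
        BlockRowCount = (rows + s - 1) / s;
        BlockColumnCount = (columns + s - 1) / s;
        blocks = new double[BlockRowCount * BlockColumnCount][];
        for (int i = 0; i < blocks.Length; ++i)
            blocks[i] = new double[s * s];
    }

    // Returns the flat array backing block [blockRow, blockColumn].
    public double[] this[int blockRow, int blockColumn] =>
        blocks[blockRow * BlockColumnCount + blockColumn];
}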

This design will result in a small amount of wasted memory. The worst case, where both the number of rows and number of columns are integer multiples of S plus one additional element, will waste X * (S - 1) * sizeof(double) + Y * (S - 1) * sizeof(double) bytes. For a 100,000 x 100,000 matrix, this adds ~47 MB to the matrix’s ~75 GB size.

The Multiplication Algorithm

Since a matrix of matrices is still a matrix, the standard matrix-matrix multiplication algorithm holds. Each element C[i, j] is the dot product of the i-th row of A and the j-th column of B. The dot product of two vectors of matrices is the sum of the products of the corresponding vector elements.

The Implementation Code

// C99/C++-style code. If using C#, substitute double[] for double*

for (int blockRow = 0; blockRow < A.blockRowCount; ++blockRow) {
    for (int blockColumn = 0; blockColumn < B.blockColumnCount; ++blockColumn) {
        double* subC = C[blockRow, blockColumn];
        for (int blockK = 0; blockK < A.blockColumnCount; ++blockK) {
            double* subA = A[blockRow, blockK];
            double* subB = B[blockK, blockColumn];
            for (int row = 0; row < S; ++row) {
                for (int column = 0; column < S; ++column) {
                    for (int k = 0; k < S; ++k) {
                        subC[row * S + column] += subA[row * S + k] * subB[k * S + column];
                    }
                }
            }
        }
    }
}

Average single-threaded performance should be near 98% of the theoretical non-SIMD maximum for matrices over 2000 x 2000. Average multi-threaded performance should be over 95% for matrices over 3000 x 3000. Performance for square matrices should increase as matrix size increases.

If these performance levels are not reached via compiler options, some hand-optimization may be necessary. Once the optimal block size is known, unrolling the k loop will provide a significant speed improvement. Changing the order of the blockRow, blockColumn, and blockK loops may improve performance by a few percentage points.
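
As an illustration, here is the k loop unrolled by four; this is a sketch that assumes S is a multiple of four and shows only the innermost loops:

// Accumulate one element of subC with four products per iteration.
double sum = subC[row * S + column];
for (int k = 0; k < S; k += 4)
{
    sum += subA[row * S + k]     * subB[k * S + column];
    sum += subA[row * S + k + 1] * subB[(k + 1) * S + column];
    sum += subA[row * S + k + 2] * subB[(k + 2) * S + column];
    sum += subA[row * S + k + 3] * subB[(k + 3) * S + column];
}
subC[row * S + column] = sum;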

Determining Block Size

Experimentation will determine the optimal block size. Sqrt(L1 Cache Size / (4 * sizeof(double))) is a good place to start.
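
For example, with a 32 KB L1 cache, Sqrt(32768 / (4 * 8)) = Sqrt(1024) = 32, so 32 is a reasonable first block size to try.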

Parallelization Note

When parallelizing the code, special attention must be paid to the loop design: it must guarantee that each result sub-matrix subC is updated by a single thread. If the design itself cannot guarantee this, explicit locking must be used. If this is not done, random errors may result. These errors may involve only a tiny number of elements out of millions or billions, so the test system must compare every element of the result to the result of a known-good matrix-matrix multiplication routine.
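
One way to satisfy this guarantee is to parallelize the outermost loop so that each thread owns entire rows of result blocks. A sketch using .NET’s Parallel.For and the BlockMatrix sketch above (S is the block size):

// requires: using System.Threading.Tasks;
// Each blockRow is handled by exactly one thread, so every subC
// is written by a single thread and no locking is needed.
Parallel.For(0, A.BlockRowCount, blockRow =>
{
    for (int blockColumn = 0; blockColumn < B.BlockColumnCount; ++blockColumn)
    {
        double[] subC = C[blockRow, blockColumn];
        for (int blockK = 0; blockK < A.BlockColumnCount; ++blockK)
        {
            double[] subA = A[blockRow, blockK];
            double[] subB = B[blockK, blockColumn];
            for (int row = 0; row < S; ++row)
                for (int column = 0; column < S; ++column)
                    for (int k = 0; k < S; ++k)
                        subC[row * S + column] += subA[row * S + k] * subB[k * S + column];
        }
    }
});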

Final Notes

This combination of data structure, algorithm, and implementation code provides a high-performance, non-SIMD matrix-matrix multiplication routine. Maintaining 90%+ performance using SIMD instructions may require optimizations that go beyond what is possible in straight C/C++/C#.