Matrix multiplication in Mojo
This notebook describes how to write a matrix multiplication (matmul) algorithm in Mojo. We will start with a pure Python implementation, transition to a naive implementation that is essentially a copy of the Python one, then add types, then continue the optimizations by vectorizing, tiling, and parallelizing the implementation.
First, let's define matrix multiplication. Given two dense matrices $A$ and $B$ of dimensions $M \times K$ and $K \times N$ respectively, we want to compute their dot product $C = A \cdot B$ (also known as matmul). The dot product is defined by
$$C_{m,n} = \sum_{k=0}^{K-1} A_{m,k} \, B_{k,n}$$
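As a small illustration of the definition (the numbers are chosen only for this example), for 2×2 matrices:
$$\begin{pmatrix}1 & 2\\3 & 4\end{pmatrix}\begin{pmatrix}5 & 6\\7 & 8\end{pmatrix}=\begin{pmatrix}1\cdot5+2\cdot7 & 1\cdot6+2\cdot8\\3\cdot5+4\cdot7 & 3\cdot6+4\cdot8\end{pmatrix}=\begin{pmatrix}19 & 22\\43 & 50\end{pmatrix}$$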
Please take a look at our blog post on matmul and why it is important for ML and DL workloads.
The format of this notebook is to start with an implementation that is identical to the Python one (effectively renaming the file extension), then look at how adding types to the implementation helps performance, before extending the implementation by leveraging the vectorization and parallelization capabilities available on modern hardware. Throughout, we report the achieved GFLOP/s.
Python Implementation
Let's first implement matmul in Python directly from the definition.
%%python
def matmul_python(C, A, B):
    for m in range(C.rows):
        for k in range(A.cols):
            for n in range(C.cols):
                C[m, n] += A[m, k] * B[k, n]
Let's benchmark our implementation using 128 by 128 square matrices and report the achieved GFLOP/s.
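For context on the GFLOP/s figure: each of the $M \times N$ output elements accumulates $K$ multiply-add pairs, so a single matmul performs $2MNK$ floating-point operations. For $M = N = K = 128$ that is $2 \cdot 128^3 \approx 4.2$ million FLOPs per call, and the benchmark divides this count by the measured time (and by $10^9$) to obtain GFLOP/s.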
Install NumPy if it isn't already installed:
%%python
from importlib.util import find_spec
import shutil
import subprocess

fix = """
-------------------------------------------------------------------------
fix following the steps here:
https://github.com/modularml/mojo/issues/1085#issuecomment-1771403719
-------------------------------------------------------------------------
"""

def install_if_missing(name: str):
    if find_spec(name):
        return
    print(f"{name} not found, installing...")
    try:
        if shutil.which('python3'): python = "python3"
        elif shutil.which('python'): python = "python"
        else: raise Exception("python not on path" + fix)
        subprocess.check_call([python, "-m", "pip", "install", name])
    except:
        raise ImportError(f"{name} not found" + fix)

install_if_missing("numpy")
%%python
from timeit import timeit
import numpy as np

class Matrix:
    def __init__(self, value, rows, cols):
        self.value = value
        self.rows = rows
        self.cols = cols

    def __getitem__(self, idxs):
        return self.value[idxs[0]][idxs[1]]

    def __setitem__(self, idxs, value):
        self.value[idxs[0]][idxs[1]] = value

def benchmark_matmul_python(M, N, K):
    A = Matrix(list(np.random.rand(M, K)), M, K)
    B = Matrix(list(np.random.rand(K, N)), K, N)
    C = Matrix(list(np.zeros((M, N))), M, N)
    secs = timeit(lambda: matmul_python(C, A, B), number=2) / 2
    gflops = ((2 * M * N * K) / secs) / 1e9
    print(gflops, "GFLOP/s")
    return gflops
python_gflops = benchmark_matmul_python(128, 128, 128).to_float64()
Importing the Python implementation to Mojo
Using Mojo is as simple as using Python. First, let's import the modules from the Mojo stdlib that we are going to use:
import benchmark
from memory import memset_zero
from random import rand, random_float64
Then, we can copy and paste our Python code. Mojo adopts the syntax of Python, so the same Python code will run as Mojo code:
# This is exactly the same as the Python implementation, but it is in fact Mojo code!
def matmul_untyped(C, A, B):
    for m in range(C.rows):
        for k in range(A.cols):
            for n in range(C.cols):
                C[m, n] += A[m, k] * B[k, n]
We can then benchmark the implementation. As before, we use 128 by 128 matrices.
fn matrix_getitem(self: object, i: object) raises -> object:
    return self.value[i]

fn matrix_setitem(self: object, i: object, value: object) raises -> object:
    self.value[i] = value
    return None

fn matrix_append(self: object, value: object) raises -> object:
    self.value.append(value)
    return None

fn matrix_init(rows: Int, cols: Int) raises -> object:
    var value = object([])
    return object(
        Attr("value", value), Attr("__getitem__", matrix_getitem), Attr("__setitem__", matrix_setitem),
        Attr("rows", rows), Attr("cols", cols), Attr("append", matrix_append),
    )

def benchmark_matmul_untyped(M: Int, N: Int, K: Int, python_gflops: Float64):
    C = matrix_init(M, N)
    A = matrix_init(M, K)
    B = matrix_init(K, N)
    for i in range(M):
        c_row = object([])
        b_row = object([])
        a_row = object([])
        for j in range(N):
            c_row.append(0.0)
            b_row.append(random_float64(-5, 5))
            a_row.append(random_float64(-5, 5))
        C.append(c_row)
        B.append(b_row)
        A.append(a_row)

    @parameter
    fn test_fn():
        try:
            _ = matmul_untyped(C, A, B)
        except:
            pass

    var secs = benchmark.run[test_fn](max_runtime_secs=0.5).mean()
    _ = (A, B, C)
    var gflops = ((2*M*N*K)/secs) / 1e9
    var speedup: Float64 = gflops / python_gflops
    print(gflops, "GFLOP/s, a", speedup, "x speedup over Python")
benchmark_matmul_untyped(128, 128, 128, python_gflops)
Note the huge speedup we get with essentially no effort.
Adding types to the Python implementation
The above program, while achieving better performance than Python, is still not the best we can get from Mojo. If we tell Mojo the types of the inputs, it can optimize much of the code away and reduce dispatching costs (unlike Python, which only uses types for type checking, Mojo exploits type info for performance optimizations as well).
To do that, let's first define a Matrix struct. The Matrix struct contains a data pointer along with size fields. While the Matrix struct can be parametrized on any data type, here we set the data type to be Float32 for conciseness.
from memory import memset_zero

alias type = DType.float32

struct Matrix[rows: Int, cols: Int]:
    var data: UnsafePointer[Scalar[type]]

    # Initialize zeroing all values
    fn __init__(out self):
        self.data = UnsafePointer[Scalar[type]].alloc(rows * cols)
        memset_zero(self.data, rows * cols)

    # Initialize taking a pointer, don't set any elements
    fn __init__(out self, data: UnsafePointer[Scalar[type]]):
        self.data = data

    # Initialize with random values
    @staticmethod
    fn rand() -> Self:
        var data = UnsafePointer[Scalar[type]].alloc(rows * cols)
        rand(data.address, rows * cols)
        return Self(data)

    fn __getitem__(self, y: Int, x: Int) -> Scalar[type]:
        return self.load[1](y, x)

    fn __setitem__(self, y: Int, x: Int, val: Scalar[type]):
        self.store(y, x, val)

    fn load[nelts: Int](self, y: Int, x: Int) -> SIMD[type, nelts]:
        return self.data.load[width=nelts](y * self.cols + x)

    fn store[nelts: Int, //](self, y: Int, x: Int, val: SIMD[type, nelts]):
        return self.data.store(y * self.cols + x, val)
Note that we implement __getitem__ and __setitem__ in terms of load and store. For the naive implementation of matmul it does not make a difference, but we will utilize this later in a more optimized, vectorized version of matmul.
With the above Matrix type, we can effectively copy and paste the Python implementation and just add type annotations:
# Note that C, A, and B have types.
fn matmul_naive(C: Matrix, A: Matrix, B: Matrix):
    for m in range(C.rows):
        for k in range(A.cols):
            for n in range(C.cols):
                C[m, n] += A[m, k] * B[k, n]
We are going to benchmark the implementations as we improve, so let's write a helper function that will do that for us:
alias M = 1024
alias N = 1024
alias K = 1024

@always_inline
fn bench[
    func: fn (Matrix, Matrix, Matrix) -> None](base_gflops: Float64) raises:
    var C = Matrix[M, N]()
    var A = Matrix[M, K].rand()
    var B = Matrix[K, N].rand()

    @always_inline
    @parameter
    fn test_fn():
        _ = func(C, A, B)

    var secs = benchmark.run[test_fn](max_runtime_secs=1).mean()

    A.data.free()
    B.data.free()
    C.data.free()

    var gflops = ((2 * M * N * K) / secs) / 1e9
    var speedup: Float64 = gflops / base_gflops
    print(gflops, "GFLOP/s, a", speedup, "x speedup over Python")
Benchmarking shows significant speedups. We increase the size of the matrices to 1024 by 1024, since Mojo is much faster than Python.
bench[matmul_naive](python_gflops)
Adding type annotations gives a huge improvement compared to the original untyped version.
Vectorizing the innermost loop
We can do better than the above implementation by utilizing vector instructions. Rather than assuming a vector width, we query the SIMD width of the specified dtype using simdwidthof. This makes our code portable as we transition to other hardware. Leveraging SIMD instructions is as easy as:
from sys import simdwidthof

# simdwidthof = number of float32 elements that fit into a single SIMD register
# using a 2x multiplier allows some SIMD operations to run in the same cycle
alias nelts = simdwidthof[DType.float32]() * 2

fn matmul_vectorized_0(C: Matrix, A: Matrix, B: Matrix):
    for m in range(C.rows):
        for k in range(A.cols):
            for nv in range(0, C.cols - nelts + 1, nelts):
                C.store(m, nv, C.load[nelts](m, nv) + A[m, k] * B.load[nelts](k, nv))

            # Handle remaining elements with scalars.
            for n in range(nelts * (C.cols // nelts), C.cols):
                C[m, n] += A[m, k] * B[k, n]
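The value of nelts depends on your hardware, so treat the following numbers as an assumption rather than a guarantee: on a machine with 256-bit vector registers, simdwidthof[DType.float32]() returns 8 and nelts is therefore 16, meaning the inner loop updates 16 columns of C at a time. You can check the values on your own machine:

print("SIMD width:", simdwidthof[DType.float32]())
print("nelts:", nelts)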
We can benchmark the above implementation. Note that many compilers can detect naive loops and perform optimizations on them. Mojo, however, allows you to be explicit and precisely control what optimizations are applied.
bench[matmul_vectorized_0](python_gflops)
Vectorization is a common optimization, and Mojo provides a higher-order function that performs vectorization for you. The vectorize function takes a vector width and a function which is parametric on the vector width and is going to be evaluated in a vectorized manner.
# Simplify the code by using the builtin vectorize function
from algorithm import vectorize

fn matmul_vectorized_1(C: Matrix, A: Matrix, B: Matrix):
    for m in range(C.rows):
        for k in range(A.cols):
            @parameter
            fn dot[nelts: Int](n: Int):
                C.store(m, n, C.load[nelts](m, n) + A[m, k] * B.load[nelts](k, n))
            vectorize[dot, nelts, size = C.cols]()
There is only a slight difference in terms of performance between the two implementations:
bench[matmul_vectorized_1](python_gflops)
Parallelizing Matmul
With Mojo we can easily run code in parallel with the parallelize function.
Let's modify our matmul implementation and make it multi-threaded (for simplicity, we only parallelize on the M dimension).
In parallelize below, we're overpartitioning by distributing the work more evenly among processors. This ensures they all have something to work on even if some tasks finish before others, or if some processors are stragglers. Intel and Apple now have separate performance and efficiency cores, and this mitigates the problems that can cause. The two arguments passed to parallelize are the number of work items and the number of workers to distribute them over.
# Parallelize the code by using the builtin parallelize function
from algorithm import parallelize

fn matmul_parallelized(C: Matrix, A: Matrix, B: Matrix):
    @parameter
    fn calc_row(m: Int):
        for k in range(A.cols):
            @parameter
            fn dot[nelts: Int](n: Int):
                C.store(m, n, C.load[nelts](m, n) + A[m, k] * B.load[nelts](k, n))
            vectorize[dot, nelts, size = C.cols]()
    parallelize[calc_row](C.rows, C.rows)
We can benchmark the parallel matmul implementation.
bench[matmul_parallelized](python_gflops)
Tiling Matmul
Tiling is an optimization performed for matmul to increase cache locality. The idea is to keep sub-matrices resident in the cache and increase the reuse. The tile function itself can be written in Mojo as:
from algorithm import Static2DTileUnitFunc as Tile2DFunc

# Perform 2D tiling on the iteration space defined by end_x and end_y.
fn tile[tiled_fn: Tile2DFunc, tile_x: Int, tile_y: Int](end_x: Int, end_y: Int):
    # Note: this assumes that ends are multiples of the tiles.
    for y in range(0, end_y, tile_y):
        for x in range(0, end_x, tile_x):
            tiled_fn[tile_x, tile_y](x, y)
The above will perform 2-dimensional tiling over the 2D iteration space $[0, \text{end\_x}) \times [0, \text{end\_y})$. For example, tile[f, 2, 2](4, 4) invokes f[2, 2] at (x, y) = (0, 0), (2, 0), (0, 2), and (2, 2). Once we define it above, we can use it within our matmul kernel. For simplicity we choose 4 as the tile height, and since we also want to vectorize, we use 4 * nelts as the tile width (since we vectorize on the columns).
# Use the above tile function to perform tiled matmul.
fn matmul_tiled_parallelized(C: Matrix, A: Matrix, B: Matrix):
    @parameter
    fn calc_row(m: Int):
        @parameter
        fn calc_tile[tile_x: Int, tile_y: Int](x: Int, y: Int):
            for k in range(y, y + tile_y):
                @parameter
                fn dot[nelts: Int](n: Int):
                    C.store(m, n + x, C.load[nelts](m, n + x) + A[m, k] * B.load[nelts](k, n + x))
                vectorize[dot, nelts, size = tile_x]()

        # We hardcode the tile factor to be 4.
        alias tile_size = 4
        tile[calc_tile, nelts * tile_size, tile_size](A.cols, C.cols)
    parallelize[calc_row](C.rows, C.rows)
Again, we can benchmark the tiled parallel matmul implementation:
bench[matmul_tiled_parallelized](python_gflops)
One source of overhead in the above implementation is the fact that we are not unrolling the loops introduced by vectorize of the dot function. We can do that via the unroll_factor parameter of the vectorize function in Mojo:
# Unroll the vectorized loop by a constant factor.
fn matmul_tiled_unrolled_parallelized(C: Matrix, A: Matrix, B: Matrix):
    @parameter
    fn calc_row(m: Int):
        @parameter
        fn calc_tile[tile_x: Int, tile_y: Int](x: Int, y: Int):
            for k in range(y, y + tile_y):
                @parameter
                fn dot[nelts: Int](n: Int):
                    C.store(m, n + x, C.load[nelts](m, n + x) + A[m, k] * B.load[nelts](k, n + x))

                # Vectorize by nelts and unroll by tile_x/nelts
                # Here unroll factor is 4
                alias unroll_factor = tile_x // nelts
                vectorize[dot, nelts, size=tile_x, unroll_factor=unroll_factor]()

        alias tile_size = 4
        tile[calc_tile, nelts * tile_size, tile_size](A.cols, C.cols)
    parallelize[calc_row](C.rows, C.rows)
Again, we can benchmark the new tiled parallel matmul implementation with an unrolled and vectorized inner loop:
bench[matmul_tiled_unrolled_parallelized](python_gflops)