MAT Manual

Table of Contents

[in package MGL-MAT]

1 The mgl-mat ASDF System

2 Links

Here is the official repository and the HTML documentation for the latest version.

3 Introduction

3.1 What's MGL-MAT?

MGL-MAT is a library for working with multi-dimensional arrays that supports efficient interfacing to foreign and CUDA code, with automatic translation between CUDA, foreign, and Lisp storage. BLAS and CUBLAS bindings are available.

3.2 What kind of matrices are supported?

Currently only row-major single and double float matrices are supported, but it would be easy to add single and double precision complex types too. Other numeric types, such as byte and native integer, can be added too, but they are not supported by CUBLAS. There are no restrictions on the number of dimensions, and reshaping is possible. All functions operate on the visible portion of the matrix (which is subject to displacement and shaping); invisible elements are not affected.

3.3 Where to Get it?

All dependencies are in Quicklisp except for CL-CUDA, which needs to be fetched from GitHub. Just clone CL-CUDA and MGL-MAT into quicklisp/local-projects/ and you are set. MGL-MAT itself lives on GitHub, too.

Prettier-than-markdown HTML documentation cross-linked with other libraries is available as part of PAX World.

4 Tutorial

We are going to see how to create matrices and access their contents.

Creating matrices is just like creating lisp arrays:

(make-mat 6)
==> #<MAT 6 A #(0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0)>

(make-mat '(2 3) :ctype :float :initial-contents '((1 2 3) (4 5 6)))
==> #<MAT 2x3 AB #2A((1.0 2.0 3.0) (4.0 5.0 6.0))>

(make-mat '(2 3 4) :initial-element 1)
==> #<MAT 2x3x4 A #3A(((1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0))
-->                   ((1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0)
-->                    (1.0d0 1.0d0 1.0d0 1.0d0)))>

The most prominent difference from lisp arrays is that mats are always numeric and their element type (called ctype here) defaults to :double.

Individual elements can be accessed or set with mref:

(let ((m (make-mat '(2 3))))
  (setf (mref m 0 0) 1)
  (setf (mref m 0 1) (* 2 (mref m 0 0)))
  (incf (mref m 0 2) 4)
  m)
==> #<MAT 2x3 AB #2A((1.0d0 2.0d0 4.0d0) (0.0d0 0.0d0 0.0d0))>

Compared to aref, mref is a very expensive operation, and it's best used sparingly. Instead, typical code relies much more on matrix-level operations:

(princ (scal! 2 (fill! 3 (make-mat 4))))
.. #<MAT 4 BF #(6.0d0 6.0d0 6.0d0 6.0d0)>
==> #<MAT 4 ABF #(6.0d0 6.0d0 6.0d0 6.0d0)>

The content of a matrix can be accessed in various representations called facets. MGL-MAT takes care of synchronizing these facets by copying data around lazily, but doing its best to share storage for facets that allow it.

Notice the abf in the printed results. It illustrates that behind the scenes fill! worked on the backing-array facet (hence the b) that's basically a 1d lisp array. scal! on the other hand made a foreign call to the BLAS dscal function for which it needed the foreign-array facet (f). Finally, the a stands for the array facet that was created when the array was printed. All facets are up-to-date (else some of the characters would be lowercase). This is possible because these three facets actually share storage which is never the case for the cuda-array facet. Now if we have a CUDA-capable GPU, CUDA can be enabled with with-cuda*:

(with-cuda* ()
  (princ (scal! 2 (fill! 3 (make-mat 4)))))
.. #<MAT 4 C #(6.0d0 6.0d0 6.0d0 6.0d0)>
==> #<MAT 4 A #(6.0d0 6.0d0 6.0d0 6.0d0)>

Note the lonely c showing that only the cuda-array facet was used for both fill! and scal!. When with-cuda* exits and destroys the CUDA context, it destroys all CUDA facets, moving their data to the array facet, so the returned mat only has that facet.

When there is no high-level operation that does what we want, we may need to add new operations. This is usually best accomplished by accessing one of the facets directly, as in the following example:

(defun logdet (mat)
  "Logarithm of the determinant of MAT. Return -1, 1 or 0 (or
  equivalent) to correct for the sign, as the second value."
  (with-facets ((array (mat 'array :direction :input)))
    (lla:logdet array)))

Notice that logdet doesn't know about CUDA at all. with-facets gives it the content of the matrix as a normal multidimensional lisp array, copying the data from the GPU or elsewhere if necessary. This allows new representations (facets) to be added easily and it also avoids copying if the facet is already up-to-date. Of course, adding CUDA support to logdet could make it more efficient.

Adding support for matrices that, for instance, live on a remote machine is thus possible with a new facet type and existing code would continue to work (albeit possibly slowly). Then one could optimize the bottleneck operations by sending commands over the network instead of copying data.

It is a bad idea to conflate resource management policy and algorithms. MGL-MAT does its best to keep them separate.

5 Basics

6 Element types

7 Printing

8 Shaping

We are going to discuss various ways to change the visible portion and dimensions of matrices. Conceptually a matrix has an underlying non-displaced storage vector. For (make-mat 10 :displacement 7 :max-size 21) this underlying vector looks like this:

displacement | visible elements  | slack
. . . . . . . 0 0 0 0 0 0 0 0 0 0 . . . .

Whenever a matrix is reshaped (or displaced to in lisp terminology), its displacement and dimensions change but the underlying vector does not.

The rules for accessing displaced matrices are the same as always: multiple readers can run in parallel, but an attempt to write results in an error if there are either readers or writers on any of the matrices that share the same underlying vector.
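The picture above can be made concrete with a short sketch. It is hedged: it assumes the destructive shaping operation reshape-and-displace! (see the Destructive Shaping section below), which moves the window of visible elements without touching the underlying vector.

```lisp
;; Create the matrix from the text: 21 underlying elements, 10 of
;; which are visible starting at index 7.
(let ((m (make-mat 10 :displacement 7 :max-size 21)))
  (fill! 1 m)
  ;; Shrink and move the visible window within the same underlying
  ;; storage; the elements themselves are not copied or changed.
  (reshape-and-displace! m 5 9)
  m)
```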

8.1 Comparison to Lisp Arrays

One way to reshape and displace mat objects is with make-mat and its displaced-to argument, whose semantics are similar to those of make-array in that the displacement is relative to the displacement of displaced-to.

(let* ((base (make-mat 10 :initial-element 5 :displacement 1))
       (mat (make-mat 6 :displaced-to base :displacement 2)))
  (fill! 1 mat)
  (values base mat))
==> #<MAT 1+10+0 A #(5.0d0 5.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 5.0d0
-->                  5.0d0)>
==> #<MAT 3+6+2 AB #(1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0)>

There are important semantic differences compared to lisp arrays, all of which follow from the fact that displacement operates on the underlying conceptual non-displaced vector.

8.2 Functional Shaping

The following functions are collectively called the functional shaping operations, since they don't alter their arguments in any way. Still, since storage is aliased, modifications to the returned matrix will affect the original.

8.3 Destructive Shaping

The following destructive operations don't alter the contents of the matrix, but they do change what is visible. adjust! is the odd one out: it may create a new mat.
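As a hedged illustration, reshape! is assumed here to be one of these destructive operations: it changes the dimensions of the visible portion in place while leaving the elements untouched.

```lisp
;; A 2x3 matrix reinterpreted as 3x2. The row-major element order
;; (1 2 3 4 5 6) is unchanged; only the dimensions differ.
(let ((m (make-mat '(2 3) :ctype :float
                   :initial-contents '((1 2 3) (4 5 6)))))
  (reshape! m '(3 2))
  m)
```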

9 Assembling

The functions here assemble a single mat from a number of mats.

10 Caching

Allocating and initializing a mat object and its necessary facets can be expensive. The following macros remember the previous value of a binding in the same thread and /place/. Only weak references are constructed so the cached objects can be garbage collected.

While the cache is global, thread safety is guaranteed by having separate subcaches per thread. Each subcache is keyed by a /place/ object that's either explicitly specified or else is unique to each invocation of the caching macro, so different occurrences of caching macros in the source never share data. Still, recursion could lead to data sharing between different invocations of the same function. To prevent this, the cached object is removed from the cache while it is used so other invocations will create a fresh one which isn't particularly efficient but at least it's safe.
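A hedged sketch of how such a caching macro might be used. with-thread-cached-mat and the exact keyword arguments shown are assumptions; the point is that the temporary is fetched from the thread's subcache instead of being freshly allocated on every call.

```lisp
;; Sum of squares of a vector, reusing a thread-cached temporary.
(defun sum-of-squares (v)
  (with-thread-cached-mat (temp (mat-size v) :ctype (mat-ctype v))
    (copy! v temp)
    (.square! temp)
    ;; ASUM sums absolute values; squares are non-negative, so this
    ;; is the sum of squares.
    (asum temp)))
```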

11 BLAS Operations

Only some BLAS functions are implemented, but it should be easy to add more as needed. All of them default to using CUDA, if it is initialized and enabled (see use-cuda-p).

Level 1 BLAS operations

Level 3 BLAS operations
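As a hedged example of a Level 3 operation, gemm! is assumed to compute C := alpha*A*B + beta*C, dispatching to CUBLAS under with-cuda* and to BLAS otherwise:

```lisp
;; Multiply A by a column-swap matrix B into C.
(let ((a (make-mat '(2 2) :initial-contents '((1 2) (3 4))))
      (b (make-mat '(2 2) :initial-contents '((0 1) (1 0))))
      (c (make-mat '(2 2))))
  (gemm! 1 a b 0 c)
  ;; C now holds ((2 1) (4 3)).
  c)
```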

12 Destructive API

Finally, some neural network operations.

13 Non-destructive API

14 Mappings

15 Random numbers

Unless noted these work efficiently with CUDA.
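A hedged sketch: uniform-random! is assumed to be one of these operations, filling its argument in place (on the GPU when CUDA is in use).

```lisp
;; Fill a 3x3 matrix with draws uniformly distributed in [0, 1).
(let ((m (make-mat '(3 3))))
  (uniform-random! m)
  m)
```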

16 I/O

17 Debugging

The largest class of bugs has to do with the synchronization of facets being broken. This is almost always caused by an operation that misspecifies the direction argument of with-facet. For example, the matrix argument of scal! should be accessed with direction :io. But if it's :input instead, then subsequent access to the array facet will not see the changes made by scal!, and if it's :output, then any changes made to the array facet since the last update of the cuda-array facet will not be copied, and from the wrong input scal! will compute the wrong result.

Using the SLIME inspector or trying to access the cuda-array facet from threads other than the one in which the corresponding CUDA context was initialized will fail. For now, the easy way out is to debug the code with CUDA disabled (see *cuda-enabled*).

Another thing that tends to come up is figuring out where memory is used.
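For that, mat-room is assumed here to print statistics about facet allocations (lisp arrays, foreign memory, device memory), analogously to cl:room; treat the name and output format as assumptions that may vary by version.

```lisp
;; Print a summary of memory used by mat facets.
(mat-room)
```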

18 Facet API

18.1 Facets

A mat is a cube (see Cube Manual) whose facets are different representations of numeric arrays. These facets can be accessed with with-facets with one of the following facet-name locatives:

Facets bound by with-facets are to be treated as dynamic extent: it is not allowed to keep a reference to them beyond the dynamic scope of with-facets.

For example, to implement the fill! operation using only the backing-array, one could do this:

(let ((displacement (mat-displacement x))
      (size (mat-size x)))
  (with-facets ((x* (x 'backing-array :direction :output)))
    (fill x* 1 :start displacement :end (+ displacement size))))

direction is :output because we clobber all values in x. Armed with this knowledge about the direction, with-facets will not copy data from another facet if the backing array is not up-to-date.

To transpose a 2d matrix with the array facet:

(destructuring-bind (n-rows n-columns) (mat-dimensions x)
  ;; In-place transposition only makes sense for square matrices.
  (assert (= n-rows n-columns))
  (with-facets ((x* (x 'array :direction :io)))
    (dotimes (row n-rows)
      ;; Swap each below-diagonal element with its mirror image.
      (dotimes (column row)
        (rotatef (aref x* row column) (aref x* column row))))))

Note that direction is :io, because we need the data in this facet to be up-to-date (that's the input part) and we are invalidating all other facets by changing values (that's the output part).

To sum the values of a matrix using the foreign-array facet:

(let ((sum 0))
  (with-facets ((x* (x 'foreign-array :direction :input)))
    (let ((pointer (offset-pointer x*)))
      (loop for index below (mat-size x)
            do (incf sum (cffi:mem-aref pointer (mat-ctype x) index)))))
  sum)

See direction for a complete description of :input, :output and :io. For mat objects, that needs to be refined: if a mat is reshaped and/or displaced in a way that not all elements are visible, then those invisible elements are always kept intact and copied around. This is accomplished by turning :output into :io automatically on such mats.

We have finished our introduction to the various facets. It must be said, though, that one can do anything without ever accessing a facet directly or even being aware of them, as most operations on mats take care of choosing the most appropriate facet behind the scenes. In particular, most operations automatically use CUDA if it is available and initialized. See with-cuda* for details.

18.2 Foreign arrays

One facet of mat objects is foreign-array which is backed by a memory area that can be a pinned lisp array or is allocated in foreign memory depending on *foreign-array-strategy*.

18.3 CUDA

18.3.1 CUDA Memory Management

The GPU (called device in CUDA terminology) has its own memory and it can only perform computation on data in this device memory so there is some copying involved to and from main memory. Efficient algorithms often allocate device memory up front and minimize the amount of copying that has to be done by computing as much as possible on the GPU.

MGL-MAT reduces the cost of device memory allocations by maintaining a cache of currently unused allocations from which it first tries to satisfy allocation requests. The total size of all allocated device memory regions (whether in use or currently unused but cached) never exceeds n-pool-bytes as specified in with-cuda*. A nil n-pool-bytes means no limit.

That's it about reducing the cost of allocations. The other important performance consideration, minimizing the amount of copying done, is very hard to achieve if the data doesn't fit in device memory, which is often a very limited resource. In this case the next best thing is to do the copying concurrently with computation.

Also note that often the easiest thing to do is to prevent the use of CUDA (and consequently the creation of cuda-array facets and allocations). This can be done either by binding *cuda-enabled* to nil or by setting cuda-enabled to nil on specific matrices.
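Both ways are sketched side by side below. *cuda-enabled* and the cuda-enabled accessor are taken from the text above; treat the exact calling conventions as assumptions.

```lisp
;; 1. Globally, for the dynamic extent of the binding: no CUDA
;;    facets are created for any mat touched inside.
(let ((*cuda-enabled* nil))
  (scal! 2 (make-mat 3 :initial-element 1)))

;; 2. Per matrix: only this mat stays off the GPU.
(let ((m (make-mat 3 :initial-element 1)))
  (setf (cuda-enabled m) nil)
  (scal! 2 m))
```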

19 Writing Extensions

New operations are usually implemented in lisp, CUDA, or by calling a foreign function in, for instance, BLAS, CUBLAS, CURAND.

19.1 Lisp Extensions

19.2 CUDA Extensions

19.2.1 CUBLAS

Within with-cuda*, BLAS Operations will automatically use CUBLAS. There is no need to use these directly.

19.2.2 CURAND

This is the low-level CURAND API. You probably want Random numbers instead.