Racogorad: Library Usage & History


Why yes, dear friend, I’m glad you asked. There absolutely is. But first, a little history.

Lisp, which stands for List Processing, was created by John McCarthy, the same John McCarthy who popularized the term “Artificial Intelligence.” Lisp was designed for symbolic computation, which is crucial for AI, with lists as its main data structure. Throughout the 1960s and 70s, Lisp was the dominant language for AI research, and features like recursion, dynamic typing, and garbage collection were innovative at the time.

Over the years, Lisp has fallen out of favor, with languages like Python and C++ taking the spotlight, though it is still used in academic research on AI, compilers, and metaprogramming. So why a deep learning library in Lisp? For all the reasons mentioned above, and to loosen Python's near-monopoly on AI libraries. Python is widely used and quite capable, but Lisp offers advantages that make it worth considering.

Introducing Racogorad

This brings us to our library Racogorad, a deep learning framework written in Racket, a powerful dialect of Scheme (itself a dialect of Lisp). Racogorad leverages Racket's strengths, such as macros, dynamic typing, and expressive syntax, to provide a flexible and efficient deep learning framework. The core functionality includes tensor operations, forward and backward propagation for neural networks, and various utility functions. For a full overview, check it out on GitHub.
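
Macros deserve a special mention, since they let a library grow new syntax rather than new boilerplate. As a taste of the idea, here is a minimal, hypothetical sketch (not Racogorad's API; define-forward, affine, and relu are invented for illustration) of a macro that turns a declarative list of layers into an ordinary composed forward function:

#lang racket

;; Hypothetical sketch, not Racogorad API: a macro that chains layer
;; functions into a single forward-pass function.
(define-syntax-rule (define-forward name layer ...)
  (define (name x) ((compose layer ...) x)))

;; Toy layers operating on plain lists of numbers
(define (relu v) (map (lambda (x) (max 0 x)) v))
(define ((affine w b) v) (map (lambda (x) (+ (* w x) b)) v))

;; relu is applied after the affine layer (compose works right to left)
(define-forward tiny-net relu (affine 2 1))

(tiny-net '(1 -2 3))  ; => '(3 0 7)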

Current Features

- Tensor creation and core operations: addition, matrix multiplication, scalar multiplication, and transposition
- Logistic regression and CNN training on the MNIST dataset
- Device-aware tensors that run on the CPU, the GPU (via OpenCL), or MLX (Apple Silicon)

Library Usage

Logistic Regression on MNIST

#lang racket
(require "mnist.rkt")

;; The mnist.rkt module will automatically load and train a logistic regression model
;; on the MNIST dataset when required

Training a CNN on MNIST

#lang racket
(require "CNN.rkt")

;; To train a CNN on the default device (MLX if available):
(train-cnn)

;; To specify the device:
(train-cnn 'cpu)  ; Use CPU
(train-cnn 'mlx)  ; Use MLX (Apple Silicon)
(train-cnn 'gpu)  ; Use GPU (via OpenCL)

;; To specify training parameters:
(train-cnn 'cpu 10 64)  ; 10 epochs, batch size 64

Tensor Operations

#lang racket
(require "tensor.rkt")

;; Create a tensor
(define t (t:create '(2 3) #(1 2 3 4 5 6)))

;; Define two more tensors for the binary operations below
(define t1 (t:create '(2 2) #(1 2 3 4)))
(define t2 (t:create '(2 2) #(5 6 7 8)))

;; Basic operations
(t:add t1 t2)      ; Add two tensors
(t:mul t1 t2)      ; Matrix multiplication
(t:scale t 2.0)    ; Scalar multiplication
(t:transpose t)    ; Transpose tensor

;; Device-aware tensors
(require "tensor_device.rkt")
(require "device.rkt")

;; Create a device tensor on CPU
(define dt (dt:create '(2 3) #(1 2 3 4 5 6) (cpu)))

;; Move to GPU if available
(dt:to dt (gpu))

;; Operations automatically use the appropriate device
(define dt1 (dt:create '(2 2) #(1 2 3 4) (cpu)))
(define dt2 (dt:create '(2 2) #(5 6 7 8) (cpu)))
(dt:add dt1 dt2)
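
Putting the pieces together, here is a sketch of a single linear-layer forward pass, y = xW + b, built only from the operations shown above. It assumes shapes are given as '(rows cols) with row-major data and that t:mul accepts a 1x3 by 3x2 product; adjust if the library's conventions differ:

#lang racket
(require "tensor.rkt")

;; Assumed conventions: shapes are '(rows cols), data is row-major
(define x (t:create '(1 3) #(1 2 3)))        ; 1x3 input row vector
(define W (t:create '(3 2) #(1 0 0 1 1 1)))  ; 3x2 weight matrix
(define b (t:create '(1 2) #(0.5 0.5)))      ; 1x2 bias
(define y (t:add (t:mul x W) b))             ; matmul, then add the bias

;; With these values, x·W is (4 5), so y holds (4.5 5.5)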