Malt - A minimalist deep learning toolkit

Anurag Mendhekar and Daniel P. Friedman

Malt is a minimalist deep learning toolkit designed to support the book The Little Learner, by Daniel P. Friedman and Anurag Mendhekar.

The framework provides tensors, automatic differentiation, gradient descent, commonly used loss functions, layer functions, and tools for constructing neural networks.
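
As a taste of how these pieces fit together, here is a minimal sketch. It assumes Malt is installed (raco pkg install malt) and relies on the tensor constructor, the extended operators * and sum, and the gradient operator ∇ documented in the sections below; sum-of-squares is a hypothetical helper introduced only for this example.

    #lang racket
    (require malt)

    ;; A rank-1 tensor with three scalar elements.
    (define t (tensor 1.0 2.0 3.0))
    (shape t)  ; => '(3)

    ;; sum-of-squares is a hypothetical helper: the extended *
    ;; squares the elements pointwise, and sum adds them up.
    (define (sum-of-squares θ)
      (sum (* θ θ)))

    ;; ∇ differentiates a scalar-valued function with respect to
    ;; its tensor argument; the gradient of sum(θ²) is 2θ.
    (∇ sum-of-squares t)  ; => (tensor 2.0 4.0 6.0)

The gradient descent, loss, and layer functions documented below compose in the same functional style.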

While it started off as a pedagogical tool, it is designed with the future in mind, and we are seeking fellow enthusiasts interested in making it production-worthy.

    1 Overview
      1.1 Tensors
      1.2 Automatic Differentiation
      1.3 Operator Extension
      1.4 Deep Learning Functions
      1.5 Summary of Types
    2 Entry points
    3 List functions
    4 Tensor functions
    5 Extended Functions
      5.1 Unary function extension rules
      5.2 Binary function extension rules
      5.3 Primitives in learner and nested-tensors
      5.4 Primitives in flat-tensors
      5.5 Extension in flat-tensors and nested-tensors
    6 Automatic Differentiation
    7 Differentiable extended numerical functions
      7.1 Extended operators
    8 Non-differentiable extended numerical functions
    9 Base-rank (non-extended) differentiable functions
    10 Boolean comparison functions
    11 Tensorized comparison functions
    12 Hyperparameters
    13 Gradient Descent Functions and Hyperparameters
    14 Layer functions
      14.1 Single layer functions
      14.2 Deep layer functions
    15 Loss Functions
    16 Building blocks for neural networks
    17 He Initialization
    18 Random number functions
    19 Models and Accuracy
    20 Logging
    21 Utilities
    22 Setting tensor implementations