 boost | |
  serialization | |
 mic | |
  application | |
  applications | |
   MNISTPatchReconstructionApplication | Class implementing simple MNIST patch reconstruction with a multi-layer neural net |
   MNISTPatchSoftmaxApplication | Class implementing simple MNIST patch softmax classification with a multi-layer neural net - imports the previously loaded auto-encoder net and adds a softmax layer "at the top" |
  mlnn | |
   activation_function | |
    ELU | Class implementing a layer with the Exponential Linear Unit (ELU) activation. http://arxiv.org/pdf/1511.07289v5.pdf |
    ReLU | Class implementing a layer with the Rectified Linear Unit (ReLU) activation |
    Sigmoid | Class implementing a layer with the sigmoid activation function |
   convolution | |
    Convolution | Class representing a convolution layer with "valid padding" and variable stride (forward pass sketched below this listing) |
    Cropping | Class implementing the cropping operation - reduces the size of the image (matrix) by a margin of n pixels on every side, for every channel |
    MaxPooling | Layer performing max pooling |
    Padding | Class implementing the padding operation - expands the size of the image (matrix) by a margin of n pixels on every side |
   cost_function | |
    Softmax | Softmax activation function |
   experimental | |
     ConvHebbian | Class implementing a convolutional Hebbian layer |
   fully_connected | |
    BinaryCorrelator | Class implementing a linear, fully connected layer trained with the binary correlator learning rule |
    HebbianLinear | Class implementing a linear, fully connected layer trained with Hebbian learning |
    SparseLinear | Class implementing a linear, fully connected layer with sparsity regulation |
    Linear | Class implementing a linear, fully connected layer |
   regularisation | |
    Dropout | Dropout layer - used for regularization of a neural network by randomly dropping neurons during training (sketched below this listing) |
   BackpropagationNeuralNetwork | Class representing a multi-layer neural network based on backpropagation/gradient descent |
   HebbianNeuralNetwork | Class representing a multi-layer neural network based on Hebbian learning |
   MultiLayerNeuralNetwork | Class representing a multi-layer neural network |
   Layer | Base class for all layers of a multi-layer neural network |
  neural_nets | |
   learning | |
    BinaryCorrelatorLearningRule | Updates weights according to the classical Hebbian rule (w_ij += eta * x_j * y_i) |
    HebbianRule | Updates weights according to the classical Hebbian rule (w_ij += eta * x_j * y_i) - see the Hebbian update sketch below this listing |
    NormalizedHebbianRule | Updates weights according to the classical Hebbian rule (w_ij += eta * x_j * y_i) with additional normalization |
    NormalizedZerosumHebbianRule | Updates weights according to a modified Hebbian rule (w_ij += eta * f(x, y)) with additional normalization and zero-summing for optimal edge detection |
   loss | |
    CrossEntropyLoss | Class representing a cross-entropy loss function (classification) |
    LogLikelihoodLoss | Class representing a log-likelihood cost (to be used with softmax logistic regression) |
    Loss | Abstract class representing a loss function; defines the interface |
    SquaredErrorLoss | Class representing a squared error loss function (regression): L = 1/2 * sum (t - p)^2 - see the loss sketch below this listing |
   optimization | |
    artificial_landscapes | |
      DifferentiableFunction | Abstract class representing the interface to a differentiable function |
      SphereFunction | Sphere function - the square function generalized to n dimensions (see the landscape sketch below this listing) |
     Beale2DFunction | 2D Beale's function |
     Rosenbrock2DFunction | 2D Rosenbrock function |
     AdaDelta | Update using AdaDelta - adaptive gradient descent with running averages E[g^2] and E[d^2] |
     AdaGrad | Update using AdaGrad - adaptive gradient descent |
     Adam | Adam - adaptive moment estimation (see the optimizer sketch below this listing) |
    AdamID | AdamID - ADAM with integral and derivative coefficients |
    GradientDescent | Update in the direction of gradient descent |
    GradPID | GradPID - adaptive gradient descent with proportional, integral and derivative coefficients |
    AdaGradPID | AdaGradPID - adaptive gradient descent with proportional, integral and derivative coefficients |
    Momentum | Update in the direction of gradient descent - with momentum |
     OptimizationArray | A dynamic container of optimization functions (implemented as a hash table) |
     OptimizationFunction | Abstract class representing the interface to an optimization function |
    RMSProp | Update using RMSProp - adaptive gradient descent with running average E[g^2] |
   unit_tests | |
    Conv2x2x2Filter2x1x1s1Double | Test Fixture - layer of input size 2x2x2 and with filter bank of 2 filters of size 1x1 with stride 1, double. Math example taken from my own calculations;) |
    Conv3x3x2Filter3x2x2s1Float | Test Fixture - layer of input size 3x3x2 and with filter bank of 3 filters of size 2x2 with stride 1, floats. Math example taken from my whiteboard;) |
     Conv4x4x1Filter1x2x2s2Float | Test Fixture - layer of input size 4x4x1 and with filter bank of 1 filter of size 2x2 with stride 2, floats. Math example taken from my own calculations |
    Conv4x4x1Filter3x1x1s3Double | Test Fixture - layer of input size 4x4x1 and with filter bank of 3 filters of size 1x1 with stride 3, double. Math example taken from my own calculations;) |
    Conv5x5x1Filter1x3x3s1Float | Test Fixture - layer of input size 5x5x1 and with filter bank of 1 filter of size 3x3 with stride 1 (floats). Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/ |
    Conv5x5x1Filter1x2x2s3Float | Test Fixture - layer of input size 5x5x1 and with filter bank of 1 filter of size 2x2 with stride 3 (float). Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/ |
     Conv7x7x3Filter3x3x3s2Float | Test Fixture - layer of input size 7x7x3 and with filter bank of 2 filters of size 3x3 with stride 2 (floats). Math example taken from: http://cs231n.github.io/convolutional-networks/ |
    Conv5x6x1Filter1x4x4s1Float | Test Fixture - layer of input size 5x6x1 and with filter bank of 1 filter of size 4x4 with stride 1, floats. Math example taken from: http://soumith.ch/ex/pages/2014/08/07/why-rotate-weights-convolution-gradient/ |
    Conv28x28x1Filter2x28x28s1Double | Test Fixture - layer of input size 28x28x1 and with filter bank of 2 filters of size 28x28 with stride 1, double |
    Conv8x8x1Filter2x4x4s4Double | Test Fixture - layer of input size 8x8x1 and with filter bank of 2 filters of size 4x4 with stride 4, double |
    Linear1x1Float | Test Fixture - layer of size 1x1, floats, sets W[0] = 1.0 and b[0] = 1.0 |
    Linear5x2Float | Test Fixture - layer of size 5x2, floats |
    Linear2x3Float | Test Fixture - layer of size 2x3, floats, sets all internal and external values |
    Linear2x3Double | Test Fixture - layer of size 2x3, doubles, sets all internal and external values |
    Linear50x100Double | Test Fixture - layer of size 50x100, doubles, randomly sets all internal and external values required for numerical gradient verification |
     Simple2LayerRegressionNN | Test Fixture - simple feed-forward net with 2 layers |
    Tutorial2LayerNN | Test Fixture - feed-forward net with 2 layers. A "formalized" example from a step-by-step tutorial: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/ |
 Beale2DLandscape | Test Fixture - artificial landscape - Beale's function 2D |
 Rosenbrock2DLandscape | Test Fixture - artificial landscape - Rosenbrock function 2D |
 Softmax4x1Float | Test Fixture - 4x1 softmax layer |
 Sphere1DLandscape | Test Fixture - artificial landscape - sphere function 1D (square function) |
 Sphere20DLandscape | Test Fixture - artificial landscape - sphere function 20D (square function) |
 Vectors3x2Float | Test Fixture - two vectors of size 3x2, floats |
 Vectors4x1Float | Test Fixture - two vectors of size 4x1, floats |
 Vectors4x1Float2 | Test Fixture - two predictions of size 4x1, floats |
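
A few of the entries above carry formulas that are easier to follow with a small worked example. The sketches below are stand-alone illustrative C++ snippets, not the library's classes or API; every function name, parameter choice and default coefficient in them is an assumption made for the example.

The Convolution layer uses "valid padding" and a variable stride, so for an H x W input, a kH x kW filter and stride s the output is floor((H - kH)/s) + 1 by floor((W - kW)/s) + 1. A minimal single-channel forward pass; the sample input and filter follow the example linked from the Conv5x5x1Filter1x3x3s1Float fixture:

```cpp
#include <cstdio>
#include <vector>

// Single-channel "valid" convolution: the filter never leaves the input, so for an
// H x W input, a kH x kW filter and stride s the output size is
// ((H - kH) / s + 1) x ((W - kW) / s + 1).
std::vector<std::vector<double>> conv2d_valid(const std::vector<std::vector<double>>& in,
                                              const std::vector<std::vector<double>>& k,
                                              int stride) {
    int H = in.size(), W = in[0].size();
    int kH = k.size(), kW = k[0].size();
    int oH = (H - kH) / stride + 1, oW = (W - kW) / stride + 1;
    std::vector<std::vector<double>> out(oH, std::vector<double>(oW, 0.0));
    for (int oy = 0; oy < oH; ++oy)
        for (int ox = 0; ox < oW; ++ox)
            for (int ky = 0; ky < kH; ++ky)
                for (int kx = 0; kx < kW; ++kx)
                    out[oy][ox] += in[oy * stride + ky][ox * stride + kx] * k[ky][kx];
    return out;
}

int main() {
    // 5x5 input, one 3x3 filter, stride 1 -> 3x3 output,
    // matching the setup of the Conv5x5x1Filter1x3x3s1Float fixture.
    std::vector<std::vector<double>> in = {
        {1, 1, 1, 0, 0}, {0, 1, 1, 1, 0}, {0, 0, 1, 1, 1},
        {0, 0, 1, 1, 0}, {0, 1, 1, 0, 0}};
    std::vector<std::vector<double>> k = {{1, 0, 1}, {0, 1, 0}, {1, 0, 1}};
    auto out = conv2d_valid(in, k, 1);
    for (const auto& row : out) {
        for (double v : row) std::printf("%4.0f", v);
        std::printf("\n");
    }
    return 0;
}
```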
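The Dropout layer's mechanism: during training each activation is zeroed with some probability, and in the common "inverted dropout" convention (an assumption here, not necessarily what this library does) the survivors are rescaled so that nothing changes at test time.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Inverted dropout: zero each activation with probability (1 - keep_prob) and scale
// the survivors by 1/keep_prob, so the expected activation is unchanged and the
// layer becomes the identity at test time.
std::vector<double> dropout_forward(const std::vector<double>& x, double keep_prob,
                                    bool training, std::mt19937& rng) {
    if (!training) return x;  // at test time the layer passes activations through
    std::bernoulli_distribution keep(keep_prob);
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = keep(rng) ? x[i] / keep_prob : 0.0;
    return y;
}

int main() {
    std::mt19937 rng(42);
    std::vector<double> x = {1.0, 2.0, 3.0, 4.0};
    auto y = dropout_forward(x, /*keep_prob=*/0.5, /*training=*/true, rng);
    for (double v : y) std::printf("%.1f ", v);
    std::printf("\n");
    return 0;
}
```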
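The Hebbian learning rules share the core update w_ij += eta * x_j * y_i; the normalized variants additionally rescale the weights after each step. A minimal sketch of the plain and row-normalized updates, using plain std::vector instead of the library's matrix types:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Plain Hebbian update for a weight matrix W (rows = outputs, cols = inputs):
// w_ij += eta * y_i * x_j, with eta the learning rate, x the input activations
// and y the output activations.
void hebbian_update(std::vector<std::vector<double>>& W,
                    const std::vector<double>& x,
                    const std::vector<double>& y,
                    double eta) {
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < W[i].size(); ++j)
            W[i][j] += eta * y[i] * x[j];
}

// Normalized variant: after the plain update, rescale every row to unit L2 norm,
// which keeps the weights from growing without bound.
void normalized_hebbian_update(std::vector<std::vector<double>>& W,
                               const std::vector<double>& x,
                               const std::vector<double>& y,
                               double eta) {
    hebbian_update(W, x, y, eta);
    for (auto& row : W) {
        double norm = 0.0;
        for (double w : row) norm += w * w;
        norm = std::sqrt(norm);
        if (norm > 0.0)
            for (double& w : row) w /= norm;
    }
}

int main() {
    std::vector<std::vector<double>> W = {{0.1, 0.2}, {0.3, 0.4}};
    std::vector<double> x = {1.0, 0.5};  // input activations
    std::vector<double> y = {0.2, 0.8};  // output activations
    normalized_hebbian_update(W, x, y, /*eta=*/0.01);
    std::printf("W[0][0] = %f\n", W[0][0]);
    return 0;
}
```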
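For the loss classes the formulas are direct: squared error is L = 1/2 * sum (t - p)^2, and the log-likelihood loss applied to softmax outputs is L = -log p_c for the true class c. A sketch with free functions (the library's class-based Loss interface is not reproduced here):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Squared error loss (regression): L = 1/2 * sum_k (t_k - p_k)^2.
double squared_error(const std::vector<double>& target, const std::vector<double>& pred) {
    double L = 0.0;
    for (std::size_t k = 0; k < target.size(); ++k) {
        double d = target[k] - pred[k];
        L += 0.5 * d * d;
    }
    return L;
}

// Softmax: p_k = exp(z_k - max(z)) / sum_j exp(z_j - max(z)); shifting by max(z) avoids overflow.
std::vector<double> softmax(const std::vector<double>& z) {
    double zmax = z[0];
    for (double v : z) zmax = std::max(zmax, v);
    std::vector<double> p(z.size());
    double sum = 0.0;
    for (std::size_t k = 0; k < z.size(); ++k) { p[k] = std::exp(z[k] - zmax); sum += p[k]; }
    for (double& v : p) v /= sum;
    return p;
}

// Log-likelihood loss for a one-hot target: L = -log p_c, where c is the true class.
double log_likelihood(const std::vector<double>& p, std::size_t true_class) {
    return -std::log(p[true_class]);
}

int main() {
    std::vector<double> logits = {2.0, 1.0, 0.1};
    std::vector<double> p = softmax(logits);
    std::printf("p = [%.3f, %.3f, %.3f], log-likelihood loss = %.3f\n",
                p[0], p[1], p[2], log_likelihood(p, 0));
    std::printf("squared error = %.3f\n", squared_error({1.0, 0.0}, {0.8, 0.2}));
    return 0;
}
```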
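The artificial landscapes used as optimizer test fixtures are standard benchmark functions; their closed forms, with the usual constants assumed (sphere minimum at the origin, Beale at (3, 0.5), Rosenbrock at (1, 1)):

```cpp
#include <cstdio>
#include <vector>

// Sphere function in n dimensions: f(x) = sum_i x_i^2, minimum 0 at the origin.
double sphere(const std::vector<double>& x) {
    double f = 0.0;
    for (double xi : x) f += xi * xi;
    return f;
}

// 2D Beale function: minimum 0 at (3, 0.5).
double beale(double x, double y) {
    double a = 1.5   - x + x * y;
    double b = 2.25  - x + x * y * y;
    double c = 2.625 - x + x * y * y * y;
    return a * a + b * b + c * c;
}

// 2D Rosenbrock function (a = 1, b = 100): minimum 0 at (1, 1).
double rosenbrock(double x, double y) {
    return (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
}

int main() {
    std::printf("sphere({1,2,3})  = %.3f\n", sphere({1.0, 2.0, 3.0}));
    std::printf("beale(3, 0.5)    = %.3f\n", beale(3.0, 0.5));
    std::printf("rosenbrock(1, 1) = %.3f\n", rosenbrock(1.0, 1.0));
    return 0;
}
```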
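The optimization classes differ only in how they turn a gradient into a parameter step. A compact sketch of plain gradient descent, RMSProp (running average E[g^2]) and Adam (running averages of the gradient and its square, with bias correction); the learning rates, decay factors and epsilon values are common defaults assumed for illustration:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Plain gradient descent: theta -= eta * g.
void sgd_step(std::vector<double>& theta, const std::vector<double>& g, double eta) {
    for (std::size_t i = 0; i < theta.size(); ++i) theta[i] -= eta * g[i];
}

// RMSProp: keep a running average E[g^2] and divide the step by its square root.
void rmsprop_step(std::vector<double>& theta, const std::vector<double>& g,
                  std::vector<double>& Eg2, double eta = 0.001,
                  double decay = 0.9, double eps = 1e-8) {
    for (std::size_t i = 0; i < theta.size(); ++i) {
        Eg2[i] = decay * Eg2[i] + (1.0 - decay) * g[i] * g[i];
        theta[i] -= eta * g[i] / (std::sqrt(Eg2[i]) + eps);
    }
}

// Adam: running averages of the gradient (m) and its square (v), with bias correction.
void adam_step(std::vector<double>& theta, const std::vector<double>& g,
               std::vector<double>& m, std::vector<double>& v, int t,
               double eta = 0.001, double beta1 = 0.9, double beta2 = 0.999,
               double eps = 1e-8) {
    for (std::size_t i = 0; i < theta.size(); ++i) {
        m[i] = beta1 * m[i] + (1.0 - beta1) * g[i];
        v[i] = beta2 * v[i] + (1.0 - beta2) * g[i] * g[i];
        double m_hat = m[i] / (1.0 - std::pow(beta1, t));
        double v_hat = v[i] / (1.0 - std::pow(beta2, t));
        theta[i] -= eta * m_hat / (std::sqrt(v_hat) + eps);
    }
}

int main() {
    // Minimize f(x) = x^2 (the 1D sphere landscape, gradient 2x) with Adam.
    std::vector<double> theta = {5.0}, m = {0.0}, v = {0.0};
    for (int t = 1; t <= 200; ++t) {
        std::vector<double> g = {2.0 * theta[0]};
        adam_step(theta, g, m, v, t, /*eta=*/0.1);
    }
    std::printf("theta after 200 Adam steps: %f\n", theta[0]);
    return 0;
}
```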