  mic::neural_nets::optimization::artificial_landscapes::DifferentiableFunction< eT > | Abstract class representing the interface to a differentiable function (a rough sketch follows this hierarchy)  | 
   mic::neural_nets::optimization::artificial_landscapes::Beale2DFunction< eT > | 2D Beale's function  | 
   mic::neural_nets::optimization::artificial_landscapes::Rosenbrock2DFunction< eT > | 2D Rosenbrock function  | 
   mic::neural_nets::optimization::artificial_landscapes::SphereFunction< eT > | A sphere function - square function generalized to n dimensions  | 
  mic::neural_nets::optimization::artificial_landscapes::DifferentiableFunction< double > |  | 
   mic::neural_nets::optimization::artificial_landscapes::Beale2DFunction< double > |  | 
   mic::neural_nets::optimization::artificial_landscapes::Rosenbrock2DFunction< double > |  | 
   mic::neural_nets::optimization::artificial_landscapes::SphereFunction< double > |  | 
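
Each concrete landscape above supplies an analytic value and gradient through the DifferentiableFunction< eT > interface. As a rough, hypothetical sketch of that contract (the names and signatures here are illustrative assumptions, not the library's actual API), the 2D Rosenbrock function might be wired up like this:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical minimal interface, loosely mirroring DifferentiableFunction<eT>.
template <typename eT>
struct DifferentiableFn {
    virtual ~DifferentiableFn() = default;
    virtual eT value(const std::vector<eT>& x) const = 0;
    virtual std::vector<eT> gradient(const std::vector<eT>& x) const = 0;
};

// 2D Rosenbrock: f(x,y) = (1-x)^2 + 100*(y - x^2)^2, global minimum at (1,1).
template <typename eT>
struct Rosenbrock2D : DifferentiableFn<eT> {
    eT value(const std::vector<eT>& v) const override {
        const eT x = v[0], y = v[1];
        return (1 - x) * (1 - x) + 100 * (y - x * x) * (y - x * x);
    }
    std::vector<eT> gradient(const std::vector<eT>& v) const override {
        const eT x = v[0], y = v[1];
        return { -2 * (1 - x) - 400 * x * (y - x * x),  // df/dx
                 200 * (y - x * x) };                   // df/dy
    }
};

int main() {
    Rosenbrock2D<double> f;
    std::vector<double> x = {0.0, 0.0};
    std::printf("f(0,0) = %f\n", f.value(x));  // prints 1.0
}
```

The gradient vanishes at (1, 1), which is the kind of property optimizer tests against these artificial landscapes typically verify.
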
  mic::mlnn::Layer< eT > |  | 
   mic::mlnn::activation_function::ELU< eT > | Class implementing a layer with the Exponential Linear Unit (ELU) activation (a rough forward/backward sketch follows this hierarchy). http://arxiv.org/pdf/1511.07289v5.pdf  | 
   mic::mlnn::activation_function::ReLU< eT > |  | 
   mic::mlnn::activation_function::Sigmoid< eT > |  | 
   mic::mlnn::convolution::Convolution< eT > | Class representing a convolution layer, with "valid padding" and variable stride  | 
   mic::mlnn::convolution::Cropping< eT > | Class implementing a cropping operation - shrinks the image (matrix) by a margin of n pixels on every side, in each channel  | 
   mic::mlnn::convolution::MaxPooling< eT > | Layer performing max pooling  | 
   mic::mlnn::convolution::Padding< eT > | Class implementing a padding operation - expands the image (matrix) by a margin of n pixels on every side  | 
   mic::mlnn::cost_function::Softmax< eT > | Softmax activation function  | 
   mic::mlnn::experimental::ConvHebbian< eT > | Class implementing a convolutional Hebbian layer  | 
   mic::mlnn::fully_connected::BinaryCorrelator< eT > | Class implementing a fully connected binary correlator layer  | 
   mic::mlnn::fully_connected::HebbianLinear< eT > | Class implementing a linear, fully connected layer trained with Hebbian learning  | 
   mic::mlnn::fully_connected::Linear< eT > | Class implementing a linear, fully connected layer  | 
    mic::mlnn::fully_connected::SparseLinear< eT > | Class implementing a linear, fully connected layer with sparsity regularization  | 
   mic::mlnn::regularisation::Dropout< eT > | Dropout layer - regularizes a neural network by randomly dropping neurons during training  | 
  mic::mlnn::Layer< double > |  | 
   mic::mlnn::convolution::Convolution< double > |  | 
   mic::mlnn::fully_connected::Linear< double > |  | 
  mic::mlnn::Layer< float > |  | 
   mic::mlnn::convolution::Convolution< float > |  | 
   mic::mlnn::fully_connected::Linear< float > |  | 
   mic::mlnn::cost_function::Softmax< float > |  | 
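
The activation layers above all follow the same forward/backward contract of Layer< eT >. A minimal sketch of the ELU case, assuming the element-wise definition from the paper linked above with alpha = 1 (the class and method names here are hypothetical, not the library's API):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical element-wise ELU layer; the real Layer<eT> API will differ.
template <typename eT>
struct ELULayer {
    eT alpha = 1;
    std::vector<eT> y;  // cached activations for the backward pass

    std::vector<eT> forward(const std::vector<eT>& x) {
        y.resize(x.size());
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = x[i] > 0 ? x[i] : alpha * (std::exp(x[i]) - 1);
        return y;
    }
    // dL/dx = dL/dy * f'(x); for ELU, f'(x) = 1 if x > 0, else f(x) + alpha.
    // Since ELU is monotonic with f(x) > 0 iff x > 0, the cached y suffices.
    std::vector<eT> backward(const std::vector<eT>& dy) const {
        std::vector<eT> dx(dy.size());
        for (std::size_t i = 0; i < dy.size(); ++i)
            dx[i] = dy[i] * (y[i] > 0 ? eT(1) : y[i] + alpha);
        return dx;
    }
};

int main() {
    ELULayer<float> elu;
    auto out = elu.forward({-1.0f, 2.0f});
    std::printf("%f %f\n", out[0], out[1]);  // ~ -0.632, 2.0
}
```
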
  mic::neural_nets::loss::Loss< dtype > | Abstract class representing a loss function; defines its interfaces  | 
   mic::neural_nets::loss::CrossEntropyLoss< dtype > | Class representing a cross-entropy loss function (classification)  | 
   mic::neural_nets::loss::LogLikelihoodLoss< dtype > | Class representing a log-likelihood cost (to be used with softmax logistic regression)  | 
   mic::neural_nets::loss::SquaredErrorLoss< dtype > | Class representing a squared error loss function (regression). L = 1/2 sum (t - p)^2 (see the sketch after this hierarchy)  | 
  mic::neural_nets::loss::Loss< double > |  | 
   mic::neural_nets::loss::SquaredErrorLoss< double > |  | 
  mic::neural_nets::loss::Loss< float > |  | 
   mic::neural_nets::loss::SquaredErrorLoss< float > |  | 
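
The squared error loss above has a conveniently simple gradient: differentiating L = 1/2 sum_i (t_i - p_i)^2 with respect to p_i gives p_i - t_i. A small self-contained sketch (the free functions here are illustrative; the library wraps this behavior in SquaredErrorLoss< dtype >):

```cpp
#include <cstdio>
#include <vector>

// L = 1/2 * sum_i (t_i - p_i)^2, for targets t and predictions p.
template <typename dtype>
dtype squared_error(const std::vector<dtype>& t, const std::vector<dtype>& p) {
    dtype loss = 0;
    for (std::size_t i = 0; i < t.size(); ++i)
        loss += (t[i] - p[i]) * (t[i] - p[i]);
    return loss / 2;
}

// Gradient w.r.t. predictions: dL/dp_i = p_i - t_i.
template <typename dtype>
std::vector<dtype> squared_error_grad(const std::vector<dtype>& t,
                                      const std::vector<dtype>& p) {
    std::vector<dtype> g(t.size());
    for (std::size_t i = 0; i < t.size(); ++i)
        g[i] = p[i] - t[i];
    return g;
}

int main() {
    std::vector<double> t = {1.0, 0.0}, p = {0.8, 0.3};
    std::printf("L = %f\n", squared_error(t, p));  // 0.5*(0.04+0.09) = 0.065
}
```
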
  mic::mlnn::MultiLayerNeuralNetwork< eT > | Class representing a multi-layer neural network  | 
   mic::mlnn::BackpropagationNeuralNetwork< eT > | Class representing a multi-layer neural network based on backpropagation/gradient descent (a minimal layer-stack sketch follows this hierarchy)  | 
   mic::mlnn::HebbianNeuralNetwork< eT > | Class representing a multi-layer neural network based on Hebbian learning  | 
  mic::mlnn::MultiLayerNeuralNetwork< double > |  | 
   mic::mlnn::BackpropagationNeuralNetwork< double > |  | 
  mic::mlnn::MultiLayerNeuralNetwork< float > |  | 
   mic::mlnn::BackpropagationNeuralNetwork< float > |  | 
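
Backpropagation over such a network is structurally simple: run the layers forward in order, propagate the loss gradient through them in reverse, then apply parameter updates. The sketch below is a toy illustration of that control flow under assumed interfaces (ILayer, Scale, and train_step are all hypothetical names, not the library's MultiLayerNeuralNetwork API):

```cpp
#include <memory>
#include <vector>

// Hypothetical layer interface with forward/backward/update hooks.
template <typename eT>
struct ILayer {
    virtual ~ILayer() = default;
    virtual std::vector<eT> forward(const std::vector<eT>& x) = 0;
    virtual std::vector<eT> backward(const std::vector<eT>& dy) = 0;
    virtual void update(eT /*lr*/) {}  // no-op for parameter-free layers
};

// Toy parameterized layer: y_i = w * x_i.
template <typename eT>
struct Scale : ILayer<eT> {
    eT w = eT(0.5), dw = 0;
    std::vector<eT> x_;  // cached input for the backward pass
    std::vector<eT> forward(const std::vector<eT>& x) override {
        x_ = x;
        std::vector<eT> y(x.size());
        for (std::size_t i = 0; i < x.size(); ++i) y[i] = w * x[i];
        return y;
    }
    std::vector<eT> backward(const std::vector<eT>& dy) override {
        dw = 0;
        std::vector<eT> dx(dy.size());
        for (std::size_t i = 0; i < dy.size(); ++i) {
            dw += dy[i] * x_[i];  // accumulate dL/dw
            dx[i] = dy[i] * w;    // pass dL/dx to the previous layer
        }
        return dx;
    }
    void update(eT lr) override { w -= lr * dw; }
};

// One backprop/gradient-descent step; grad_seed is dL/dy from the loss.
template <typename eT>
void train_step(std::vector<std::unique_ptr<ILayer<eT>>>& net,
                std::vector<eT> x, const std::vector<eT>& grad_seed, eT lr) {
    for (auto& l : net) x = l->forward(x);               // forward pass
    std::vector<eT> g = grad_seed;
    for (auto it = net.rbegin(); it != net.rend(); ++it)
        g = (*it)->backward(g);                          // backward pass
    for (auto& l : net) l->update(lr);                   // gradient descent
}

int main() {
    std::vector<std::unique_ptr<ILayer<double>>> net;
    net.emplace_back(new Scale<double>());
    train_step(net, {1.0}, {0.1}, 0.01);
}
```
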
  OpenGLContinuousLearningApplication |  | 
   mic::applications::MNISTPatchReconstructionApplication | Class implementing simple MNIST patch reconstruction with a multi-layer neural net  | 
   mic::applications::MNISTPatchSoftmaxApplication | Class implementing simple MNIST patch softmax classification with a multi-layer neural net - the net is imported from a previously loaded auto-encoder net and a softmax layer is added "at the top"  | 
  mic::neural_nets::optimization::OptimizationArray< T > | A dynamic array of optimization functions (a hash-table; see the sketch after this group)  | 
  mic::neural_nets::optimization::OptimizationArray< double > |  | 
  mic::neural_nets::optimization::OptimizationArray< eT > |  | 
  mic::neural_nets::optimization::OptimizationArray< float > |  | 
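
One plausible reading of "a dynamic array of optimization functions (a hash-table)" is a per-parameter map from names to optimizer instances. The sketch below is an assumption-laden illustration (OptFn, SGD, and OptimizerTable are hypothetical names), not the OptimizationArray implementation:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical optimizer interface: updates a parameter array in place.
template <typename eT>
struct OptFn {
    virtual ~OptFn() = default;
    virtual void update(eT* param, const eT* grad, std::size_t n, eT lr) = 0;
};

template <typename eT>
struct SGD : OptFn<eT> {
    void update(eT* p, const eT* g, std::size_t n, eT lr) override {
        for (std::size_t i = 0; i < n; ++i) p[i] -= lr * g[i];
    }
};

// Hash-table of optimization functions, keyed by parameter name.
template <typename eT>
using OptimizerTable =
    std::unordered_map<std::string, std::unique_ptr<OptFn<eT>>>;

int main() {
    OptimizerTable<float> opts;
    opts["W"] = std::make_unique<SGD<float>>();  // one optimizer per parameter
    opts["b"] = std::make_unique<SGD<float>>();
    float W[2] = {1.f, 2.f}, gW[2] = {0.1f, 0.1f};
    opts["W"]->update(W, gW, 2, 0.5f);  // W becomes {0.95, 1.95}
}
```
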
  mic::neural_nets::optimization::OptimizationFunction< eT > | Abstract class representing the interface to an optimization function  | 
   mic::neural_nets::learning::BinaryCorrelatorLearningRule< eT > | Updates according to the classical Hebbian rule (w_ij += eta * x_i * y_j)  | 
   mic::neural_nets::learning::HebbianRule< eT > | Updates according to the classical Hebbian rule (w_ij += eta * x_i * y_j)  | 
   mic::neural_nets::learning::NormalizedHebbianRule< eT > | Updates according to the classical Hebbian rule (w_ij += eta * x_i * y_j) with additional normalization  | 
   mic::neural_nets::learning::NormalizedZerosumHebbianRule< eT > | Updates according to a modified Hebbian rule (w_ij += eta * f(x, y)) with additional normalization and zero-summing for optimal edge detection  | 
   mic::neural_nets::optimization::AdaDelta< eT > | Update using AdaDelta - adaptive gradient descent with running averages E[g^2] and E[d^2]  | 
   mic::neural_nets::optimization::AdaGrad< eT > | Update using AdaGrad - adaptive gradient descent  | 
   mic::neural_nets::optimization::AdaGradPID< eT > | AdaGradPID - adaptive gradient descent with proportional, integral and derivative coefficients  | 
   mic::neural_nets::optimization::Adam< eT > | Adam - adaptive moment estimation (sketched after this hierarchy)  | 
   mic::neural_nets::optimization::AdamID< eT > | AdamID - ADAM with integral and derivative coefficients  | 
   mic::neural_nets::optimization::GradientDescent< eT > | Update in the direction of gradient descent  | 
   mic::neural_nets::optimization::GradPID< eT > | GradPID - adaptive gradient descent with proportional, integral and derivative coefficients  | 
   mic::neural_nets::optimization::Momentum< eT > | Update in the direction of gradient descent - with momentum  | 
   mic::neural_nets::optimization::RMSProp< eT > | Update using RMSProp - adaptive gradient descent with running average E[g^2]  | 
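
The optimizers above are all per-element update rules over a parameter/gradient pair (the Hebbian rules differ in deriving the update from pre- and post-synaptic activity instead of a loss gradient). As one concrete example, here is a sketch of the Adam update following Kingma & Ba (2014); this is illustrative code under assumed names, not the library's Adam< eT >:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Adam: running estimates of the first and second gradient moments,
// bias-corrected, then a scaled gradient-descent step.
template <typename eT>
struct AdamSketch {
    eT b1 = eT(0.9), b2 = eT(0.999), eps = eT(1e-8);
    std::vector<eT> m, v;  // first/second moment running averages
    long t = 0;            // step counter for bias correction

    void update(std::vector<eT>& p, const std::vector<eT>& g, eT lr) {
        if (m.empty()) { m.assign(p.size(), 0); v.assign(p.size(), 0); }
        ++t;
        for (std::size_t i = 0; i < p.size(); ++i) {
            m[i] = b1 * m[i] + (1 - b1) * g[i];
            v[i] = b2 * v[i] + (1 - b2) * g[i] * g[i];
            const eT mhat = m[i] / (1 - std::pow(b1, t));  // bias-corrected
            const eT vhat = v[i] / (1 - std::pow(b2, t));
            p[i] -= lr * mhat / (std::sqrt(vhat) + eps);
        }
    }
};

int main() {
    AdamSketch<double> adam;
    std::vector<double> p = {1.0}, g = {2.0};
    adam.update(p, g, 0.1);
    std::printf("p = %f\n", p[0]);  // first step moves p by ~lr: ~0.9
}
```
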
  Test |  | 
   Beale2DLandscape | Test fixture - artificial landscape - Beale's function 2D  | 
   mic::neural_nets::unit_tests::Conv28x28x1Filter2x28x28s1Double | Test Fixture - layer with input of size 28x28x1 and a filter bank of 2 filters of size 28x28, stride 1 (double)  | 
   mic::neural_nets::unit_tests::Conv2x2x2Filter2x1x1s1Double | Test Fixture - layer with input of size 2x2x2 and a filter bank of 2 filters of size 1x1, stride 1 (double). Math example taken from my own calculations ;)  | 
   mic::neural_nets::unit_tests::Conv3x3x2Filter3x2x2s1Float | Test Fixture - layer with input of size 3x3x2 and a filter bank of 3 filters of size 2x2, stride 1 (floats). Math example taken from my whiteboard ;)  | 
   mic::neural_nets::unit_tests::Conv4x4x1Filter1x2x2s2Float | Test Fixture - layer with input of size 4x4x1 and a filter bank of 1 filter of size 2x2, stride 2 (floats). Math example taken from yet another of my own calculations! ech!  | 
   mic::neural_nets::unit_tests::Conv4x4x1Filter3x1x1s3Double | Test Fixture - layer with input of size 4x4x1 and a filter bank of 3 filters of size 1x1, stride 3 (double). Math example taken from my own calculations ;)  | 
   mic::neural_nets::unit_tests::Conv5x5x1Filter1x2x2s3Float | Test Fixture - layer with input of size 5x5x1 and a filter bank of 1 filter of size 2x2, stride 3 (float). Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/  | 
   mic::neural_nets::unit_tests::Conv5x5x1Filter1x3x3s1Float | Test Fixture - layer with input of size 5x5x1 and a filter bank of 1 filter of size 3x3, stride 1 (floats). Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/  | 
   mic::neural_nets::unit_tests::Conv5x6x1Filter1x4x4s1Float | Test Fixture - layer with input of size 5x6x1 and a filter bank of 1 filter of size 4x4, stride 1 (floats). Math example taken from: http://soumith.ch/ex/pages/2014/08/07/why-rotate-weights-convolution-gradient/  | 
   mic::neural_nets::unit_tests::Conv7x7x3Filter3x3x3s2Float | Test Fixture - layer with input of size 7x7x3 and a filter bank of 2 filters of size 3x3, stride 2 (floats). Math example taken from: http://cs231n.github.io/convolutional-networks/  | 
   mic::neural_nets::unit_tests::Conv8x8x1Filter2x4x4s4Double | Test Fixture - layer with input of size 8x8x1 and a filter bank of 2 filters of size 4x4, stride 4 (double)  | 
   mic::neural_nets::unit_tests::Linear1x1Float | Test Fixture - layer of size 1x1, floats, sets W[0] = 1.0 and b[0] = 1.0  | 
   mic::neural_nets::unit_tests::Linear2x3Double | Test Fixture - layer of size 2x3, doubles, sets all internal and external values  | 
   mic::neural_nets::unit_tests::Linear2x3Float | Test Fixture - layer of size 2x3, floats, sets all internal and external values  | 
   mic::neural_nets::unit_tests::Linear50x100Double | Test Fixture - layer of size 50x100, doubles, randomly sets all internal and external values required for numerical gradient verification (see the finite-difference sketch after this list)  | 
   mic::neural_nets::unit_tests::Linear5x2Float | Test Fixture - layer of size 5x2, floats  | 
   mic::neural_nets::unit_tests::Simple2LayerRegressionNN | Test Fixture - simple feed-forward net with 2 layers  | 
   mic::neural_nets::unit_tests::Tutorial2LayerNN | Test Fixture - feed-forward net with 2 layers. A "formalized" example from a step-by-step tutorial: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/  | 
   Rosenbrock2DLandscape | Test fixture - artificial landscape - Rosenbrock function 2D  | 
   Softmax4x1Float | Test Fixture - 4x1 softmax layer  | 
   Sphere1DLandscape | Test fixture - artificial landscape - sphere function 1D (square function)  | 
   Sphere20DLandscape | Test fixture - artificial landscape - sphere function 20D (square function)  | 
   Vectors3x2Float | Test Fixture - two vectors of size 3x2, floats  | 
   Vectors4x1Float | Test Fixture - two vectors of size 4x1, floats  | 
   Vectors4x1Float2 | Test Fixture - two predictions of size 4x1, floats  |
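
Several of the fixtures above support numerical gradient verification: the analytic backward pass is compared against a central finite difference of the forward pass. A minimal sketch of that check (the helper below is hypothetical, and the step size is a typical choice, not the library's):

```cpp
#include <cstdio>
#include <vector>

// Central-difference estimate of df/dx_i: (f(x+h*e_i) - f(x-h*e_i)) / (2h).
template <typename eT, typename F>
std::vector<eT> numerical_gradient(F f, std::vector<eT> x, eT h = eT(1e-5)) {
    std::vector<eT> g(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        const eT xi = x[i];
        x[i] = xi + h; const eT fp = f(x);
        x[i] = xi - h; const eT fm = f(x);
        x[i] = xi;  // restore the perturbed coordinate
        g[i] = (fp - fm) / (2 * h);
    }
    return g;
}

int main() {
    // Sphere function f(x) = sum x_i^2; its analytic gradient is 2x.
    auto sphere = [](const std::vector<double>& v) {
        double s = 0; for (double x : v) s += x * x; return s;
    };
    std::vector<double> x = {0.5, -1.0};
    auto g = numerical_gradient(sphere, x);
    std::printf("%f %f\n", g[0], g[1]);  // expect ~1.0, ~-2.0
}
```

In a gradient-check test, each element of the analytic gradient is asserted to match this estimate within a small tolerance.
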