MachineIntelligenceCore:NeuralNets
Class Hierarchy


This inheritance list is sorted roughly, but not completely, alphabetically:
mic::neural_nets::optimization::artificial_landscapes::DifferentiableFunction< eT > - Abstract class representing the interface to a differentiable function
    mic::neural_nets::optimization::artificial_landscapes::Beale2DFunction< eT > - 2D Beale's function
    mic::neural_nets::optimization::artificial_landscapes::Rosenbrock2DFunction< eT > - 2D Rosenbrock function
    mic::neural_nets::optimization::artificial_landscapes::SphereFunction< eT > - Sphere function: the square function generalized to n dimensions
mic::neural_nets::optimization::artificial_landscapes::DifferentiableFunction< double >
    mic::neural_nets::optimization::artificial_landscapes::Beale2DFunction< double >
    mic::neural_nets::optimization::artificial_landscapes::Rosenbrock2DFunction< double >
    mic::neural_nets::optimization::artificial_landscapes::SphereFunction< double >
mic::mlnn::Layer< eT >
    mic::mlnn::activation_function::ELU< eT > - Layer implementing the Exponential Linear Unit (ELU) activation (http://arxiv.org/pdf/1511.07289v5.pdf)
    mic::mlnn::activation_function::ReLU< eT >
    mic::mlnn::activation_function::Sigmoid< eT >
    mic::mlnn::convolution::Convolution< eT > - Convolution layer with "valid" padding and variable stride
    mic::mlnn::convolution::Cropping< eT > - Cropping operation: shrinks the image (matrix) by a margin of n pixels on every side/channel
    mic::mlnn::convolution::MaxPooling< eT > - Layer performing max pooling
    mic::mlnn::convolution::Padding< eT > - Padding operation: expands the image (matrix) by a margin of n pixels on every side
    mic::mlnn::cost_function::Softmax< eT > - Softmax activation function
    mic::mlnn::experimental::ConvHebbian< eT > - Convolutional Hebbian layer
    mic::mlnn::fully_connected::BinaryCorrelator< eT > - Linear, fully connected layer
    mic::mlnn::fully_connected::HebbianLinear< eT > - Linear, fully connected layer
    mic::mlnn::fully_connected::Linear< eT > - Linear, fully connected layer
        mic::mlnn::fully_connected::SparseLinear< eT > - Linear, fully connected layer with sparsity regularization
    mic::mlnn::regularisation::Dropout< eT > - Dropout layer: regularizes the network by randomly dropping neurons during training
mic::mlnn::Layer< double >
    mic::mlnn::convolution::Convolution< double >
    mic::mlnn::fully_connected::Linear< double >
mic::mlnn::Layer< float >
    mic::mlnn::convolution::Convolution< float >
    mic::mlnn::fully_connected::Linear< float >
    mic::mlnn::cost_function::Softmax< float >
mic::neural_nets::loss::Loss< dtype > - Abstract class representing a loss function; defines the interface
    mic::neural_nets::loss::CrossEntropyLoss< dtype > - Cross-entropy loss function (classification)
    mic::neural_nets::loss::LogLikelihoodLoss< dtype > - Log-likelihood cost (to be used with softmax logistic regression)
    mic::neural_nets::loss::SquaredErrorLoss< dtype > - Squared error loss function (regression): L = 1/2 sum (t - p)^2
mic::neural_nets::loss::Loss< double >
    mic::neural_nets::loss::SquaredErrorLoss< double >
mic::neural_nets::loss::Loss< float >
    mic::neural_nets::loss::SquaredErrorLoss< float >
mic::mlnn::MultiLayerNeuralNetwork< eT > - Multi-layer neural network
    mic::mlnn::BackpropagationNeuralNetwork< eT > - Multi-layer neural network trained with backpropagation/gradient descent
    mic::mlnn::HebbianNeuralNetwork< eT > - Multi-layer neural network trained with Hebbian learning
mic::mlnn::MultiLayerNeuralNetwork< double >
    mic::mlnn::BackpropagationNeuralNetwork< double >
mic::mlnn::MultiLayerNeuralNetwork< float >
    mic::mlnn::BackpropagationNeuralNetwork< float >
OpenGLContinuousLearningApplication
    mic::applications::MNISTPatchReconstructionApplication - Simple MNIST patch reconstruction with a multi-layer neural net
    mic::applications::MNISTPatchSoftmaxApplication - Simple MNIST patch softmax classification with a multi-layer neural net: imports a previously loaded auto-encoder net and adds a softmax layer "at the top"
mic::neural_nets::optimization::OptimizationArray< T > - Dynamic array of optimization functions (a hash table)
mic::neural_nets::optimization::OptimizationArray< double >
mic::neural_nets::optimization::OptimizationArray< eT >
mic::neural_nets::optimization::OptimizationArray< float >
mic::neural_nets::optimization::OptimizationFunction< eT > - Abstract class representing the interface to an optimization function
    mic::neural_nets::learning::BinaryCorrelatorLearningRule< eT > - Updates according to the classical Hebbian rule (w_ij += ni * x * y)
    mic::neural_nets::learning::HebbianRule< eT > - Updates according to the classical Hebbian rule (w_ij += ni * x * y)
    mic::neural_nets::learning::NormalizedHebbianRule< eT > - Classical Hebbian rule (w_ij += ni * x * y) with additional normalization
    mic::neural_nets::learning::NormalizedZerosumHebbianRule< eT > - Modified Hebbian rule (w_ij += ni * f(x, y)) with additional normalization and zero-summing for optimal edge detection
    mic::neural_nets::optimization::AdaDelta< eT > - AdaDelta: adaptive gradient descent with running averages E[g^2] and E[d^2]
    mic::neural_nets::optimization::AdaGrad< eT > - AdaGrad: adaptive gradient descent
    mic::neural_nets::optimization::AdaGradPID< eT > - AdaGradPID: adaptive gradient descent with proportional, integral and derivative coefficients
    mic::neural_nets::optimization::Adam< eT > - Adam: adaptive moment estimation
    mic::neural_nets::optimization::AdamID< eT > - AdamID: Adam with integral and derivative coefficients
    mic::neural_nets::optimization::GradientDescent< eT > - Update in the direction of gradient descent
    mic::neural_nets::optimization::GradPID< eT > - GradPID: adaptive gradient descent with proportional, integral and derivative coefficients
    mic::neural_nets::optimization::Momentum< eT > - Update in the direction of gradient descent, with momentum
    mic::neural_nets::optimization::RMSProp< eT > - RMSProp: adaptive gradient descent with running average E[g^2]
Test
    Beale2DLandscape - Test fixture: artificial landscape, 2D Beale's function
    mic::neural_nets::unit_tests::Conv28x28x1Filter2x28x28s1Double - Test fixture: layer with input of size 28x28x1 and a filter bank of 2 filters of size 28x28, stride 1, doubles
    mic::neural_nets::unit_tests::Conv2x2x2Filter2x1x1s1Double - Test fixture: layer with input of size 2x2x2 and a filter bank of 2 filters of size 1x1, stride 1, doubles. Math example based on my own calculations
    mic::neural_nets::unit_tests::Conv3x3x2Filter3x2x2s1Float - Test fixture: layer with input of size 3x3x2 and a filter bank of 3 filters of size 2x2, stride 1, floats. Math example based on my own calculations
    mic::neural_nets::unit_tests::Conv4x4x1Filter1x2x2s2Float - Test fixture: layer with input of size 4x4x1 and a filter bank of 1 filter of size 2x2, stride 2, floats. Math example based on my own calculations
    mic::neural_nets::unit_tests::Conv4x4x1Filter3x1x1s3Double - Test fixture: layer with input of size 4x4x1 and a filter bank of 3 filters of size 1x1, stride 3, doubles. Math example based on my own calculations
    mic::neural_nets::unit_tests::Conv5x5x1Filter1x2x2s3Float - Test fixture: layer with input of size 5x5x1 and a filter bank of 1 filter of size 2x2, stride 3, floats. Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
    mic::neural_nets::unit_tests::Conv5x5x1Filter1x3x3s1Float - Test fixture: layer with input of size 5x5x1 and a filter bank of 1 filter of size 3x3, stride 1, floats. Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
    mic::neural_nets::unit_tests::Conv5x6x1Filter1x4x4s1Float - Test fixture: layer with input of size 5x6x1 and a filter bank of 1 filter of size 4x4, stride 1, floats. Math example taken from: http://soumith.ch/ex/pages/2014/08/07/why-rotate-weights-convolution-gradient/
    mic::neural_nets::unit_tests::Conv7x7x3Filter3x3x3s2Float - Test fixture: layer with input of size 7x7x3 and a filter bank of 2 filters of size 3x3, stride 2, floats. Math example taken from: http://cs231n.github.io/convolutional-networks/
    mic::neural_nets::unit_tests::Conv8x8x1Filter2x4x4s4Double - Test fixture: layer with input of size 8x8x1 and a filter bank of 2 filters of size 4x4, stride 4, doubles
    mic::neural_nets::unit_tests::Linear1x1Float - Test fixture: layer of size 1x1, floats; sets W[0] = 1.0 and b[0] = 1.0
    mic::neural_nets::unit_tests::Linear2x3Double - Test fixture: layer of size 2x3, doubles; sets all internal and external values
    mic::neural_nets::unit_tests::Linear2x3Float - Test fixture: layer of size 2x3, floats; sets all internal and external values
    mic::neural_nets::unit_tests::Linear50x100Double - Test fixture: layer of size 50x100, doubles; randomly sets all internal and external values required for numerical gradient verification
    mic::neural_nets::unit_tests::Linear5x2Float - Test fixture: layer of size 5x2, floats
    mic::neural_nets::unit_tests::Simple2LayerRegressionNN - Test fixture: simple feed-forward net with 2 layers
    mic::neural_nets::unit_tests::Tutorial2LayerNN - Test fixture: feed-forward net with 2 layers; a "formalized" example from a step-by-step tutorial: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
    Rosenbrock2DLandscape - Test fixture: artificial landscape, 2D Rosenbrock function
    Softmax4x1Float - Test fixture: 4x1 softmax layer
    Sphere1DLandscape - Test fixture: artificial landscape, 1D sphere function (square function)
    Sphere20DLandscape - Test fixture: artificial landscape, 20D sphere function (square function)
    Vectors3x2Float - Test fixture: two vectors of size 3x2, floats
    Vectors4x1Float - Test fixture: two vectors of size 4x1, floats
    Vectors4x1Float2 - Test fixture: two predictions of size 4x1, floats