MachineIntelligenceCore:NeuralNets
Class List
Here are the classes, structs, unions and interfaces with brief descriptions:
boost
  serialization
mic
  application
  applications
    MNISTPatchReconstructionApplication - Class implementing a simple MNIST patch reconstruction with a multi-layer neural net
    MNISTPatchSoftmaxApplication - Class implementing a simple MNIST patch softmax classification with a multi-layer neural net, imported from a previously loaded auto-encoder net with a softmax layer added "at the top"
  mlnn
    activation_function
      ELU - Class implementing a layer with the Exponential Linear Unit (ELU) activation (see the sketch below): http://arxiv.org/pdf/1511.07289v5.pdf
      ReLU
      Sigmoid
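
The ELU definition below comes from the paper cited above; ReLU and Sigmoid are the standard ones. This is a minimal standalone sketch of the element-wise maps, not the mic::mlnn layer API (whose method signatures are not shown on this page):

    #include <cmath>

    // Element-wise activations behind the three layer classes above.
    // Standalone sketches; the actual layer classes apply these to
    // whole matrices (assumed, not shown on this page).
    double elu(double x, double alpha = 1.0) {
        // ELU: identity for x > 0, alpha * (exp(x) - 1) otherwise.
        return x > 0.0 ? x : alpha * (std::exp(x) - 1.0);
    }

    double relu(double x) { return x > 0.0 ? x : 0.0; }

    double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }
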
    convolution
      Convolution - Class representing a convolutional layer with "valid" padding and variable stride (output-size arithmetic sketched below)
      Cropping - Class implementing the cropping operation: shrinks the image (matrix) by a margin of n pixels on every image side, per channel
      MaxPooling - Layer performing max pooling
      Padding - Class implementing the padding operation: expands the image (matrix) by a margin of n pixels on every image side
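
With "valid" padding, the output size follows the usual convolution arithmetic, out = (in - filter) / stride + 1, per spatial dimension. This is general arithmetic rather than a function of this library; the fixture names in unit_tests further down encode exactly these quantities:

    #include <cassert>

    // "Valid"-padding output size per spatial dimension.
    int conv_output_size(int in, int filter, int stride) {
        return (in - filter) / stride + 1;
    }

    int main() {
        // Conv5x5x1Filter1x3x3s1Float: 5x5 input, 3x3 filter, stride 1 -> 3x3.
        assert(conv_output_size(5, 3, 1) == 3);
        // Conv7x7x3Filter3x3x3s2Float: 7x7 input, 3x3 filter, stride 2 -> 3x3.
        assert(conv_output_size(7, 3, 2) == 3);
        return 0;
    }
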
    cost_function
      Softmax - Softmax activation function (see the sketch below)
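
The softmax itself is standard: p_i = exp(z_i) / sum_j exp(z_j). A minimal sketch, not the mic::mlnn::cost_function::Softmax API; subtracting the maximum first is the usual guard against overflow:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // p_i = exp(z_i) / sum_j exp(z_j), computed stably.
    std::vector<double> softmax(const std::vector<double>& z) {
        const double m = *std::max_element(z.begin(), z.end());
        std::vector<double> p(z.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < z.size(); ++i) {
            p[i] = std::exp(z[i] - m);  // shift by max for stability
            sum += p[i];
        }
        for (double& v : p) v /= sum;   // normalize to a distribution
        return p;
    }
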
    experimental
      ConvHebbian - Class implementing a convolutional Hebbian layer
    fully_connected
      BinaryCorrelator - Class implementing a linear, fully connected layer
      HebbianLinear - Class implementing a linear, fully connected layer
      SparseLinear - Class implementing a linear, fully connected layer with sparsity regularization
      Linear - Class implementing a linear, fully connected layer (forward pass sketched below)
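
All four classes above describe a linear, fully connected layer, whose forward pass is the affine map y = W * x + b (presumably they differ in how the weights are learned). A minimal sketch under that assumption; the library's actual matrix types and method names are not shown on this page:

    #include <cstddef>
    #include <vector>

    // y = W * x + b for a fully connected layer.
    std::vector<double> linear_forward(
            const std::vector<std::vector<double>>& W,  // outputs x inputs
            const std::vector<double>& b,               // one bias per output
            const std::vector<double>& x) {             // inputs
        std::vector<double> y(b);                       // start from the bias
        for (std::size_t i = 0; i < W.size(); ++i)
            for (std::size_t j = 0; j < x.size(); ++j)
                y[i] += W[i][j] * x[j];
        return y;
    }
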
    regularisation
      Dropout - Dropout layer: regularizes a neural network by randomly dropping neurons during training (see the sketch below)
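
A minimal sketch of the dropout idea named above. Whether this library uses the "inverted" convention (scaling kept activations by 1/keep_prob at training time, as below) is an assumption, not something this page states:

    #include <random>
    #include <vector>

    // Inverted dropout (assumed convention): keep each activation with
    // probability keep_prob and scale it by 1/keep_prob, so the test-time
    // forward pass needs no rescaling.
    void dropout_inplace(std::vector<double>& a, double keep_prob,
                         std::mt19937& rng) {
        std::bernoulli_distribution keep(keep_prob);
        for (double& v : a)
            v = keep(rng) ? v / keep_prob : 0.0;
    }
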
    BackpropagationNeuralNetwork - Class representing a multi-layer neural network trained with backpropagation/gradient descent
    HebbianNeuralNetwork - Class representing a multi-layer neural network based on Hebbian learning
    MultiLayerNeuralNetwork - Class representing a multi-layer neural network
    Layer
  neural_nets
    learning
      BinaryCorrelatorLearningRule - Updates according to the classical Hebbian rule (w_ij += eta * x * y)
      HebbianRule - Updates according to the classical Hebbian rule (w_ij += eta * x * y; see the sketch below)
      NormalizedHebbianRule - Updates according to the classical Hebbian rule (w_ij += eta * x * y) with additional normalization
      NormalizedZerosumHebbianRule - Updates according to a modified Hebbian rule (w_ij += eta * f(x, y)) with additional normalization and zero-summing for optimal edge detection
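
The rule quoted in the descriptions above translates directly, with eta the learning rate: each weight grows in proportion to the correlation of its input and output. A standalone sketch, not the mic::neural_nets::learning API:

    #include <cstddef>
    #include <vector>

    // Classical Hebbian update: w_ij += eta * x_j * y_i.
    void hebbian_update(std::vector<std::vector<double>>& W,
                        const std::vector<double>& x,   // inputs
                        const std::vector<double>& y,   // outputs
                        double eta) {                   // learning rate
        for (std::size_t i = 0; i < y.size(); ++i)
            for (std::size_t j = 0; j < x.size(); ++j)
                W[i][j] += eta * x[j] * y[i];
    }

The normalized variants presumably rescale the weights after this step (e.g. each row to unit norm), which keeps the otherwise unbounded Hebbian growth in check.
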
    loss
      CrossEntropyLoss - Class representing a cross-entropy loss function (classification)
      LogLikelihoodLoss - Class representing a log-likelihood cost (to be used with softmax logistic regression)
      Loss - Abstract class representing a loss function; defines the interfaces
      SquaredErrorLoss - Class representing a squared error loss function (regression): L = 1/2 * sum (t - p)^2 (see the sketch below)
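
The squared error formula above translates directly; the factor 1/2 makes the gradient come out as dL/dp_i = p_i - t_i. A standalone sketch, not the mic::neural_nets::loss interface:

    #include <cstddef>
    #include <vector>

    // L = 1/2 * sum_i (t_i - p_i)^2 for targets t and predictions p.
    double squared_error_loss(const std::vector<double>& t,
                              const std::vector<double>& p) {
        double L = 0.0;
        for (std::size_t i = 0; i < t.size(); ++i) {
            const double d = t[i] - p[i];
            L += 0.5 * d * d;
        }
        return L;
    }
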
    optimization
      artificial_landscapes (standard definitions below)
        DifferentiableFunction - Abstract class representing the interface to a differentiable function
        SphereFunction - A sphere function: the square function generalized to n dimensions
        Beale2DFunction - 2D Beale's function
        Rosenbrock2DFunction - 2D Rosenbrock function
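
These are standard optimization benchmarks; their usual definitions, which these classes presumably implement, are:

    f_sphere(x)          = sum_i x_i^2                       minimum 0 at x = 0
    f_Beale(x, y)        = (1.5 - x + x*y)^2 + (2.25 - x + x*y^2)^2
                           + (2.625 - x + x*y^3)^2           minimum 0 at (3, 0.5)
    f_Rosenbrock(x, y)   = (1 - x)^2 + 100*(y - x^2)^2       minimum 0 at (1, 1)
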
      AdaDelta - Update using AdaDelta: adaptive gradient descent with running averages E[g^2] and E[d^2]
      AdaGrad - Update using AdaGrad: adaptive gradient descent (see the sketch below)
      Adam - Adam: adaptive moment estimation
      AdamID - AdamID: ADAM with integral and derivative coefficients
      GradientDescent - Update in the direction of the negative gradient (plain gradient descent)
      GradPID - GradPID: adaptive gradient descent with proportional, integral and derivative coefficients
      AdaGradPID - AdaGradPID: adaptive gradient descent with proportional, integral and derivative coefficients
      Momentum - Gradient descent update with momentum
      OptimizationArray - A dynamic array of optimization functions (a hash table)
      OptimizationFunction - Abstract class representing the interface to an optimization function
      RMSProp - Update using RMSProp: adaptive gradient descent with running average E[g^2]
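
Scalar sketches of two of the rules above, using the conventional hyper-parameter names (the library applies the same updates element-wise to whole parameter matrices; the defaults here are illustrative, not the library's):

    #include <cmath>

    struct AdaGradState { double G = 0.0; };  // running sum of g^2

    // AdaGrad: divide the step by the root of the accumulated squared
    // gradients, so frequently-updated parameters take smaller steps.
    double adagrad_step(AdaGradState& s, double w, double g,
                        double lr = 0.01, double eps = 1e-8) {
        s.G += g * g;
        return w - lr * g / (std::sqrt(s.G) + eps);
    }

    struct RMSPropState { double Eg2 = 0.0; };  // running average E[g^2]

    // RMSProp: same idea, but a decaying average replaces AdaGrad's
    // ever-growing sum, so the effective step size does not vanish.
    double rmsprop_step(RMSPropState& s, double w, double g,
                        double lr = 0.001, double rho = 0.9,
                        double eps = 1e-8) {
        s.Eg2 = rho * s.Eg2 + (1.0 - rho) * g * g;
        return w - lr * g / (std::sqrt(s.Eg2) + eps);
    }
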
    unit_tests
      Conv2x2x2Filter2x1x1s1Double - Test fixture: layer with input of size 2x2x2 and a filter bank of 2 filters of size 1x1 with stride 1, doubles. Math example taken from my own calculations ;)
      Conv3x3x2Filter3x2x2s1Float - Test fixture: layer with input of size 3x3x2 and a filter bank of 3 filters of size 2x2 with stride 1, floats. Math example taken from my whiteboard ;)
      Conv4x4x1Filter1x2x2s2Float - Test fixture: layer with input of size 4x4x1 and a filter bank of 1 filter of size 2x2 with stride 2, floats. Yet another math example taken from my own calculations! ech!
      Conv4x4x1Filter3x1x1s3Double - Test fixture: layer with input of size 4x4x1 and a filter bank of 3 filters of size 1x1 with stride 3, doubles. Math example taken from my own calculations ;)
      Conv5x5x1Filter1x3x3s1Float - Test fixture: layer with input of size 5x5x1 and a filter bank of 1 filter of size 3x3 with stride 1, floats. Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
      Conv5x5x1Filter1x2x2s3Float - Test fixture: layer with input of size 5x5x1 and a filter bank of 1 filter of size 2x2 with stride 3, floats. Math example taken from: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
      Conv7x7x3Filter3x3x3s2Float - Test fixture: layer with input of size 7x7x3 and a filter bank of 2 filters of size 3x3 with stride 2, floats. Math example taken from: http://cs231n.github.io/convolutional-networks/
      Conv5x6x1Filter1x4x4s1Float - Test fixture: layer with input of size 5x6x1 and a filter bank of 1 filter of size 4x4 with stride 1, floats. Math example taken from: http://soumith.ch/ex/pages/2014/08/07/why-rotate-weights-convolution-gradient/
      Conv28x28x1Filter2x28x28s1Double - Test fixture: layer with input of size 28x28x1 and a filter bank of 2 filters of size 28x28 with stride 1, doubles
      Conv8x8x1Filter2x4x4s4Double - Test fixture: layer with input of size 8x8x1 and a filter bank of 2 filters of size 4x4 with stride 4, doubles
      Linear1x1Float - Test fixture: layer of size 1x1, floats; sets W[0] = 1.0 and b[0] = 1.0
      Linear5x2Float - Test fixture: layer of size 5x2, floats
      Linear2x3Float - Test fixture: layer of size 2x3, floats; sets all internal and external values
      Linear2x3Double - Test fixture: layer of size 2x3, doubles; sets all internal and external values
      Linear50x100Double - Test fixture: layer of size 50x100, doubles; randomly sets all internal and external values required for numerical gradient verification (see the sketch below)
      Simple2LayerRegressionNN - Test fixture: simple feed-forward net with 2 layers
      Tutorial2LayerNN - Test fixture: feed-forward net with 2 layers. A "formalized" example from a step-by-step tutorial: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
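
The "numerical gradient verification" mentioned for Linear50x100Double is the standard finite-difference check: perturb one parameter, re-evaluate the loss, and compare against the analytical gradient. A sketch of the idea; the helper name and tolerance below are illustrative, not the fixture's actual code:

    #include <cmath>
    #include <functional>

    // Central difference: df/dw ~= (f(w + h) - f(w - h)) / (2h).
    bool gradient_matches(const std::function<double(double)>& f, double w,
                          double analytical, double h = 1e-5,
                          double tol = 1e-6) {
        const double numerical = (f(w + h) - f(w - h)) / (2.0 * h);
        return std::fabs(numerical - analytical) < tol;
    }
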
Beale2DLandscape - Test fixture: artificial landscape, 2D Beale's function
Rosenbrock2DLandscape - Test fixture: artificial landscape, 2D Rosenbrock function
Softmax4x1Float - Test fixture: 4x1 softmax layer
Sphere1DLandscape - Test fixture: artificial landscape, 1D sphere function (square function)
Sphere20DLandscape - Test fixture: artificial landscape, 20D sphere function (square function)
Vectors3x2Float - Test fixture: two vectors of size 3x2, floats
Vectors4x1Float - Test fixture: two vectors of size 4x1, floats
Vectors4x1Float2 - Test fixture: two predictions of size 4x1, floats