MachineIntelligenceCore:NeuralNets
mic::neural_nets::optimization::AdaGradPID< eT > Class Template Reference

AdaGradPID - adaptive gradient descent with proportional, integral and derivative coefficients. More...

#include <GradPID.hpp>

Inheritance diagram for mic::neural_nets::optimization::AdaGradPID< eT >:
Collaboration diagram for mic::neural_nets::optimization::AdaGradPID< eT >:

Public Member Functions

 AdaGradPID (size_t rows_, size_t cols_, eT decay_=0.9, eT eps_=1e-8)
 
mic::types::MatrixPtr< eT > calculateUpdate (mic::types::MatrixPtr< eT > x_, mic::types::MatrixPtr< eT > dx_, eT learning_rate_=0.001)
 
- Public Member Functions inherited from mic::neural_nets::optimization::OptimizationFunction< eT >
 OptimizationFunction ()
 
virtual ~OptimizationFunction ()
 Virtual destructor - empty. More...
 
virtual void update (mic::types::MatrixPtr< eT > p_, mic::types::MatrixPtr< eT > dp_, eT learning_rate_, eT decay_=0.0)
 
virtual void update (mic::types::MatrixPtr< eT > p_, mic::types::MatrixPtr< eT > x_, mic::types::MatrixPtr< eT > y_, eT learning_rate_=0.001)
 

Protected Attributes

eT decay
 Decay ratio, similar to momentum. More...
 
eT eps
 Smoothing term that avoids division by zero. More...
 
mic::types::MatrixPtr< eT > p_rate
 Adaptive proportional factor (learning rate). More...
 
mic::types::MatrixPtr< eT > i_rate
 Adaptive integral factor (learning rate). More...
 
mic::types::MatrixPtr< eT > d_rate
 Adaptive derivative factor (learning rate). More...
 
mic::types::MatrixPtr< eT > Edx
 Decaying average of gradients up to time t - E[g]. More...
 
mic::types::MatrixPtr< eT > dx_prev
 Previous value of gradients. More...
 
mic::types::MatrixPtr< eT > deltaP
 Proportional update. More...
 
mic::types::MatrixPtr< eT > deltaI
 Integral update. More...
 
mic::types::MatrixPtr< eT > deltaD
 Derivative update. More...
 
mic::types::MatrixPtr< eT > delta
 Calculated update. More...
 

Detailed Description

template<typename eT = float>
class mic::neural_nets::optimization::AdaGradPID< eT >

AdaGradPID - adaptive gradient descent with proportional, integral and derivative coefficients.

Author
tkornuta

Definition at line 180 of file GradPID.hpp.
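
The "PID" in the name mirrors control theory: a proportional term (current gradient), an integral term (accumulated gradient) and a derivative term (change of gradient). The formula below is a sketch of that general idea, not a transcription of the exact rule in GradPID.hpp; K_p, K_i, K_d denote the classical scalar gains:

\Delta_t = K_p \, g_t + K_i \sum_{\tau \le t} g_\tau + K_d \, (g_t - g_{t-1})

In the adaptive variant implemented here the scalar gains are presumably replaced by the per-element rates p_rate, i_rate and d_rate, with the three contributions stored in the protected members deltaP, deltaI and deltaD, and dx_prev supplying the previous gradient for the derivative term.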

Constructor & Destructor Documentation

template<typename eT = float>
mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID (size_t rows_, size_t cols_, eT decay_ = 0.9, eT eps_ = 1e-8)  [inline]
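
The constructor takes the dimensions of the parameter matrix the optimizer will serve, plus the decay ratio and the epsilon smoothing term. A minimal construction sketch (the 784x10 dimensions are illustrative only):

#include <GradPID.hpp>

// One optimizer instance per parameter matrix, sized to match it;
// decay_ and eps_ keep their documented defaults here.
mic::neural_nets::optimization::AdaGradPID<float> opt(784, 10, /*decay_=*/0.9f, /*eps_=*/1e-8f);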

Member Function Documentation

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::calculateUpdate (mic::types::MatrixPtr< eT > x_, mic::types::MatrixPtr< eT > dx_, eT learning_rate_ = 0.001)  [inline, virtual]

Calculates the update according to the AdaGradPID update rule.

Parameters
    x_	Pointer to the current matrix.
    dx_	Pointer to the current gradient of that matrix.
    learning_rate_	Learning rate (default = 0.001).

Implements mic::neural_nets::optimization::OptimizationFunction< eT >.

Definition at line 227 of file GradPID.hpp.

References mic::neural_nets::optimization::AdaGradPID< eT >::delta, and mic::neural_nets::optimization::AdaGradPID< eT >::Edx.
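
A hedged sketch of one optimization step using only the signature documented above. It assumes that params_ and grads_ are mic::types::MatrixPtr<float> objects of matching dimensions; applyUpdate() is a hypothetical helper, since this page does not specify how the caller applies the returned delta (the inherited OptimizationFunction::update() overloads can be used instead for in-place updating):

#include <GradPID.hpp>

// One training step: compute the PID-style update for the current
// parameters and gradient, then (optionally) apply it.
void stepSketch(mic::neural_nets::optimization::AdaGradPID<float>& opt,
                mic::types::MatrixPtr<float> params_,
                mic::types::MatrixPtr<float> grads_) {
	// Returns the computed update as a matrix of the same dimensions.
	mic::types::MatrixPtr<float> delta = opt.calculateUpdate(params_, grads_, /*learning_rate_=*/0.001f);

	// applyUpdate(params_, delta);  // hypothetical application step, not defined by this class
}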

Member Data Documentation

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::d_rate
protected

Adaptive derivative factor (learning rate).

Definition at line 350 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().

template<typename eT = float>
eT mic::neural_nets::optimization::AdaGradPID< eT >::decay
protected

Decay ratio, similar to momentum.

Definition at line 338 of file GradPID.hpp.

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::delta
protected

Calculated update.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::calculateUpdate().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::deltaD
protected

Derivative update.

Definition at line 368 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::deltaI
protected

Integral update.

Definition at line 365 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::deltaP
protected

Proportional update.

Definition at line 362 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::dx_prev
protected

Previous value of gradients.

Definition at line 359 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::Edx
protected

Decaying average of gradients up to time t - E[g].

Definition at line 356 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID(), and mic::neural_nets::optimization::AdaGradPID< eT >::calculateUpdate().
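
Given the decay member documented above, the usual exponential-moving-average form of such a quantity is shown below; this is the standard formula for a decaying average, offered as a sketch rather than code extracted from GradPID.hpp:

E[g]_t = \mathrm{decay} \cdot E[g]_{t-1} + (1 - \mathrm{decay}) \cdot g_t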

template<typename eT = float>
eT mic::neural_nets::optimization::AdaGradPID< eT >::eps
protected

Smoothing term that avoids division by zero.

Definition at line 341 of file GradPID.hpp.

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::i_rate
protected

Adaptive integral factor (learning rate).

Definition at line 347 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::AdaGradPID< eT >::p_rate
protected

Adaptive proportional factor (learning rate).

Definition at line 344 of file GradPID.hpp.

Referenced by mic::neural_nets::optimization::AdaGradPID< eT >::AdaGradPID().


The documentation for this class was generated from the following file:
GradPID.hpp