MachineIntelligenceCore:NeuralNets
mic::neural_nets::optimization::Adam< eT > Class Template Reference

Adam - adaptive moment estimation. More...

#include <Adam.hpp>

Inheritance and collaboration diagrams omitted; Adam< eT > derives from mic::neural_nets::optimization::OptimizationFunction< eT >.

Public Member Functions

 Adam (size_t rows_, size_t cols_, eT beta1_=0.9, eT beta2_=0.999, eT eps_=1e-8)
 
mic::types::MatrixPtr< eT > calculateUpdate (mic::types::MatrixPtr< eT > x_, mic::types::MatrixPtr< eT > dx_, eT learning_rate_=0.001)
 
- Public Member Functions inherited from mic::neural_nets::optimization::OptimizationFunction< eT >
 OptimizationFunction ()
 
virtual ~OptimizationFunction ()
 Virtual destructor - empty. More...
 
virtual void update (mic::types::MatrixPtr< eT > p_, mic::types::MatrixPtr< eT > dp_, eT learning_rate_, eT decay_=0.0)
 
virtual void update (mic::types::MatrixPtr< eT > p_, mic::types::MatrixPtr< eT > x_, mic::types::MatrixPtr< eT > y_, eT learning_rate_=0.001)
 

Protected Attributes

mic::types::MatrixPtr< eT > m
 Exponentially decaying average of past gradients. More...
 
mic::types::MatrixPtr< eT > v
 Exponentially decaying average of past squared gradients. More...
 
mic::types::MatrixPtr< eT > delta
 Calculated update. More...
 
eT beta1
 Decay rate 1 (momentum for past gradients). More...
 
eT beta2
 Decay rate 2 (momentum for past squared gradients). More...
 
eT eps
 Smoothing term that avoids division by zero. More...
 
eT beta1_powt
 Decay rate 1 to the power of t. More...
 
eT beta2_powt
 Decay rate 2 to the power of t. More...
 

Detailed Description

template<typename eT = float>
class mic::neural_nets::optimization::Adam< eT >

Adam - adaptive moment estimation.

Author
tkornuta

Definition at line 39 of file Adam.hpp.
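
A minimal usage sketch follows. It is not taken from the library's examples: the helper function, variable names and the way the MatrixPtr arguments are obtained are assumptions; only the constructor and calculateUpdate() signatures documented on this page are relied upon.

#include <Adam.hpp>

// Optimizer sized for a 10 x 20 parameter matrix, with default beta1/beta2/eps.
mic::neural_nets::optimization::Adam<float> adam(10, 20);

// One training step: p_ holds the parameters, dp_ their gradient. How the MatrixPtr
// instances are created depends on the mic::types API and is not shown here.
void adam_step(mic::types::MatrixPtr<float> p_, mic::types::MatrixPtr<float> dp_) {
    // Obtain the raw update matrix and apply it to p_ yourself...
    mic::types::MatrixPtr<float> delta = adam.calculateUpdate(p_, dp_, 0.001f);
    // ...or call the inherited OptimizationFunction::update(p_, dp_, 0.001),
    // which applies the computed update to p_ in place.
    (void)delta;
}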

Constructor & Destructor Documentation

template<typename eT = float>
mic::neural_nets::optimization::Adam< eT >::Adam (size_t rows_, size_t cols_, eT beta1_ = 0.9, eT beta2_ = 0.999, eT eps_ = 1e-8)
inline

Constructor. Sets dimensions, momentum rates (beta1 = 0.9 and beta2 = 0.999) and eps (default = 1e-8).

Parameters
rows_  Number of rows of the updated matrix/its gradient.
cols_  Number of columns of the updated matrix/its gradient.

Definition at line 47 of file Adam.hpp.

References mic::neural_nets::optimization::Adam< eT >::beta1, mic::neural_nets::optimization::Adam< eT >::beta1_powt, mic::neural_nets::optimization::Adam< eT >::beta2, mic::neural_nets::optimization::Adam< eT >::beta2_powt, mic::neural_nets::optimization::Adam< eT >::delta, mic::neural_nets::optimization::Adam< eT >::m, and mic::neural_nets::optimization::Adam< eT >::v.

Member Function Documentation

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::Adam< eT >::calculateUpdate (mic::types::MatrixPtr< eT > x_, mic::types::MatrixPtr< eT > dx_, eT learning_rate_ = 0.001)
inline virtual
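
Computes and returns the Adam update for the parameters x_ given their gradient dx_. This page does not reproduce the body of Adam.hpp; the sketch below is the standard Adam rule (Kingma & Ba, 2015) written in scalar form with the same names as the protected members above, and the actual implementation may differ in minor details (e.g. how the power terms are initialized).

#include <cmath>

// Standard Adam update for a single element; the class presumably applies the same
// formula element-wise to x_ using its stored m, v, beta1_powt and beta2_powt members.
float adam_scalar_update(float dx, float& m, float& v, float& beta1_powt, float& beta2_powt,
                         float beta1 = 0.9f, float beta2 = 0.999f, float eps = 1e-8f,
                         float learning_rate = 0.001f) {
    m = beta1 * m + (1.0f - beta1) * dx;        // decaying average of past gradients
    v = beta2 * v + (1.0f - beta2) * dx * dx;   // decaying average of past squared gradients
    beta1_powt *= beta1;                        // beta1^t
    beta2_powt *= beta2;                        // beta2^t
    float m_hat = m / (1.0f - beta1_powt);      // bias-corrected first moment
    float v_hat = v / (1.0f - beta2_powt);      // bias-corrected second moment
    return learning_rate * m_hat / (std::sqrt(v_hat) + eps);  // the returned update ("delta")
}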

Member Data Documentation

template<typename eT = float>
eT mic::neural_nets::optimization::Adam< eT >::beta1
protected

Decay rate 1 (momentum for past gradients).

Definition at line 105 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::Adam(), and mic::neural_nets::optimization::Adam< eT >::calculateUpdate().

template<typename eT = float>
eT mic::neural_nets::optimization::Adam< eT >::beta1_powt
protected

Decay rate 1 to the power of t.

Definition at line 114 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::Adam(), and mic::neural_nets::optimization::Adam< eT >::calculateUpdate().

template<typename eT = float>
eT mic::neural_nets::optimization::Adam< eT >::beta2
protected

Decay rate 2 (momentum for past squared gradients).

Definition at line 108 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::Adam(), and mic::neural_nets::optimization::Adam< eT >::calculateUpdate().

template<typename eT = float>
eT mic::neural_nets::optimization::Adam< eT >::beta2_powt
protected

Decay rate 2 to the power of t.

Definition at line 117 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::Adam(), and mic::neural_nets::optimization::Adam< eT >::calculateUpdate().
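
The two power terms feed the standard Adam bias correction; in the usual notation (assumed, not copied from Adam.hpp):

\hat{m}_t = \frac{m_t}{1 - \beta_1^{t}}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^{t}}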

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::Adam< eT >::delta
protected

Calculated update.

template<typename eT = float>
eT mic::neural_nets::optimization::Adam< eT >::eps
protected

Smoothing term that avoids division by zero.

Definition at line 111 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::calculateUpdate().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::Adam< eT >::m
protected

Exponentially decaying average of past gradients.

Definition at line 96 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::Adam(), and mic::neural_nets::optimization::Adam< eT >::calculateUpdate().

template<typename eT = float>
mic::types::MatrixPtr<eT> mic::neural_nets::optimization::Adam< eT >::v
protected

Exponentially decaying average of past squared gradients.

Definition at line 99 of file Adam.hpp.

Referenced by mic::neural_nets::optimization::Adam< eT >::Adam(), and mic::neural_nets::optimization::Adam< eT >::calculateUpdate().


The documentation for this class was generated from the following file:

Adam.hpp