nn_module#
- class nn_module#
Bases: Module
This class serves as a wrapper for all the mltoolbox models.
To extend the mltoolbox with an additional model:
1. Extend the nn_module class, implementing the forward method.
2. Register the new class to DNNFactory using the @DNNFactory.register decorator.
Example
import torch.nn as nn
from pyhelayers.mltoolbox.model.nn_module import nn_module
from pyhelayers.mltoolbox.model.DNN_factory import DNNFactory

@DNNFactory.register('new_model')
class newModel(nn_module):
    def __init__(self, **kwargs):
        super().__init__()
        # define your torch.nn model
        self.cnn = ...

    def forward(self, x):
        super().forward(x)
        x = self.cnn(x)
        return x
Add an import of the new class in your main module, so that the new class gets registered at startup.
Example
import newModel
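Once registered, the model can be created through the factory by name. The sketch below assumes a get_model-style retrieval method on DNNFactory; check the factory's actual API before relying on it:

    from pyhelayers.mltoolbox.model.DNN_factory import DNNFactory
    import newModel  # triggers @DNNFactory.register('new_model')

    # 'get_model' is an assumed method name, shown for illustration only
    model = DNNFactory.get_model('new_model')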
- __init__()#
Initializes an instance of the nn_module class.
Methods
- __init__() – Initializes an instance of the nn_module class.
- addBatchNormAfterActivation(bn_info) – Adds batch normalization after specific layers, specified by the user.
- add_batch_norm_after_conv() – Adds batch normalization after each convolutional layer.
- add_batch_norm_before_conv() – Adds batch normalization before each convolutional layer, except for the first one.
- add_module(name, module) – Adds a child module to the current module.
- apply(fn) – Applies fn recursively to every submodule (as returned by .children()) as well as self.
- assertSize(x) – Asserts the required image size for the model.
- bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]) – Returns an iterator over module buffers.
- children() – Returns an iterator over immediate children modules.
- clear_range_awareness_loss() – Clears the accumulated range awareness loss in all range-aware activations of the model.
- cpu() – Moves all model parameters and buffers to the CPU.
- cuda([device]) – Moves all model parameters and buffers to the GPU.
- double() – Casts all floating point parameters and buffers to double datatype.
- eval() – Sets the module in evaluation mode.
- extra_repr() – Sets the extra representation of the module.
- float() – Casts all floating point parameters and buffers to float datatype.
- forward(x) – The forward method of the wrapper class.
- get_buffer(target) – Returns the buffer given by target if it exists, otherwise throws an error.
- get_extra_state() – Returns any extra state to include in the module's state_dict.
- get_input_size() – Returns the input size expected by the model.
- get_parameter(target) – Returns the parameter given by target if it exists, otherwise throws an error.
- get_pooling_by_type(type) – Returns a string naming the requested torch pooling class (average or max pooling).
- get_submodule(target) – Returns the submodule given by target if it exists, otherwise throws an error.
- half() – Casts all floating point parameters and buffers to half datatype.
- init_actual_data_range() – Zeroizes the actual data range of all range-aware activations.
- ipu([device]) – Moves all model parameters and buffers to the IPU.
- load_state_dict(state_dict[, strict]) – Copies parameters and buffers from state_dict into this module and its descendants.
- make_fhe_friendly(add_bn[, pooling_type, ...]) – Applies changes to the given model towards making it FHE-Friendly.
- modules() – Returns an iterator over all modules in the network.
- named_buffers([prefix, recurse]) – Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children() – Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]) – Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse]) – Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]) – Returns an iterator over module parameters.
- post_process_activations() – Post-processes an FHE-friendly model before conversion to ONNX: replaces WeightedRelu activations by the actual activation and removes range awareness.
- post_process_weighted_relu_act() – Replaces each WeightedRelu activation in the given model by its inner WeightedRelu.activation.
- register_backward_hook(hook) – Registers a backward hook on the module.
- register_buffer(name, tensor[, persistent]) – Adds a buffer to the module.
- register_forward_hook(hook) – Registers a forward hook on the module.
- register_forward_pre_hook(hook) – Registers a forward pre-hook on the module.
- register_full_backward_hook(hook) – Registers a full backward hook on the module.
- register_load_state_dict_post_hook(hook) – Registers a post hook to be run after module's load_state_dict is called.
- register_module(name, module) – Alias for add_module().
- register_parameter(name, param) – Adds a parameter to the module.
- remove_range_awareness() – Unsets range awareness for all CNN and PolyReLU activations.
- requires_grad_([requires_grad]) – Changes whether autograd should record operations on parameters in this module.
- set_extra_state(state) – Called from load_state_dict() to handle any extra state found within the state_dict.
- set_max_pooling_to_avg() – Replaces each max-pooling by an average-pooling.
- share_memory() – See torch.Tensor.share_memory_().
- state_dict(*args[, destination, prefix, ...]) – Returns a dictionary containing references to the whole state of the module.
- to(*args, **kwargs) – Moves and/or casts the parameters and buffers.
- to_empty(*, device) – Moves the parameters and buffers to the specified device without copying storage.
- train([mode]) – Sets the module in training mode.
- type(dst_type) – Casts all parameters and buffers to dst_type.
- xpu([device]) – Moves all model parameters and buffers to the XPU.
- zero_grad([set_to_none]) – Sets gradients of all model parameters to zero.
Attributes
- T_destination
- dump_patches
- addBatchNormAfterActivation(bn_info)#
Adds batch normalization after specific layers, specified by the user
- Parameters:
bn_info (list[bn_info]) – list of layers after which batch normalization should be added
- add_batch_norm_after_conv()#
Adds batch normalization after each convolutional layer
- add_batch_norm_before_conv()#
Adds batch normalization before each convolutional layer, except for the first one
- assertSize(x: tensor)#
Asserts that the input tensor matches the image size required by the model.
- Parameters:
x (tensor) – data tensor
- Raises:
AssertionError – if the input does not match the required size (resize the images to the matching size)
- class bn_info(after_layer_name: str, channels: int)#
Bases:
object
Defines a layer data element for adding batch normalization after a layer
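For illustration, a minimal sketch of adding batch normalization after one named layer. The layer name and channel count are hypothetical, and accessing bn_info as a nested class of nn_module is an assumption based on its placement in these docs:

    from pyhelayers.mltoolbox.model.nn_module import nn_module

    model = newModel()  # an nn_module subclass, as in the class example
    # 'cnn.3' and 64 are placeholders; use your model's actual layer name
    # and that layer's output channel count.
    bn_layers = [nn_module.bn_info(after_layer_name='cnn.3', channels=64)]
    model.addBatchNormAfterActivation(bn_layers)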
- clear_range_awareness_loss()#
Clears the accumulated range awareness loss in all range-aware activations of the model.
- forward(x: Tensor) → Tensor#
The forward method of the wrapper class. It should be called by any subclass (via super().forward(x)), as shown in the class documentation example. It asserts the size of the input image and clears any range_awareness_loss accumulated by previous forward calls (this happens for every batch calculation).
- get_input_size()#
Returns the input size expected by the model
- Returns:
the input size expected by the model: (channels, height, width)
- Return type:
tuple
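For example, the returned size can be used to build a matching dummy input (batch size 1 here is arbitrary):

    import torch

    # model is an nn_module subclass instance
    channels, height, width = model.get_input_size()
    dummy_input = torch.randn(1, channels, height, width)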
- static get_pooling_by_type(type)#
Returns a string naming the requested torch pooling class (average or max pooling).
- init_actual_data_range()#
Zeroizes the actual data range of all range-aware activations.
- make_fhe_friendly(add_bn: bool, pooling_type: str = 'max', bn_list=None)#
Applies changes to the given model towards making it FHE-Friendly. This is the first step. The model may still contain Relu activations after the call to this function.
- Parameters:
add_bn (bool) – If True, batch normalization will be added.
pooling_type (str, optional) – Required pooling type: 'avg' or 'max'. Average pooling is considered FHE-Friendly; max pooling is not. Defaults to 'max'.
bn_list (list[bn_info], optional) – If not None, batch normalization will be added after the specified layers; if None, it will be added after each convolutional layer. Defaults to None.
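A minimal usage sketch, assuming a model instance as in the class example above; average pooling is requested because it is FHE-Friendly:

    # Add batch normalization after every convolutional layer and
    # replace max-pooling with average-pooling.
    model.make_fhe_friendly(add_bn=True, pooling_type='avg')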
- post_process_activations()#
Post-processes an FHE-friendly model before converting it into an ONNX file: it replaces WeightedRelu activations by the actual activation and removes range awareness.
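A hedged sketch of the export flow; the file name and batch size are placeholders, and torch.onnx.export is the standard PyTorch exporter rather than an mltoolbox API:

    import torch

    model.post_process_activations()  # unwrap WeightedRelu, drop range awareness
    dummy = torch.randn(1, *model.get_input_size())
    torch.onnx.export(model, dummy, 'model.onnx')  # 'model.onnx' is a placeholder path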
- post_process_weighted_relu_act()#
Replaces each WeightedRelu activation in the given model by its inner WeightedRelu.activation.
Example
WeightedRelu(
    ratio=1.0
    (activation): TrainablePolyReLU(coefs=[0.09679805487394333, 0.13760216534137726])
)
will be replaced by the plain
TrainablePolyReLU(coefs=[0.09679805487394333, 0.13760216534137726])
- remove_range_awareness()#
Unsets range awareness for all CNN and PolyReLU activations.
- set_max_pooling_to_avg()#
Replaces each max-pooling layer by an average-pooling layer.