nn_module#

class nn_module#

Bases: Module

This class serves as a wrapper for all the mltoolbox models.

The steps to extend the mltoolbox with an additional model:

  1. Extend the nn_module class, implementing the forward method.

  2. Register the new class with DNNFactory using the @DNNFactory.register decorator.

Example

import torch.nn as nn
from pyhelayers.mltoolbox.model.nn_module import nn_module
from pyhelayers.mltoolbox.model.DNN_factory import DNNFactory

@DNNFactory.register('new_model')
class newModel(nn_module):

    def __init__(self, **kwargs):
        super().__init__()
        # define your torch.nn model
        self.cnn = ...

    def forward(self, x):
        super().forward(x)
        x = self.cnn(x)
        return x

  3. Add an import of the new class in your main (so that the new class gets registered on start)

Example

import newModel
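
Once that import has run, the factory can construct the model by its registered name. A minimal sketch; the lookup method name DNNFactory.get_model is an assumption, not confirmed by this page, so check it against the actual DNNFactory API:

import newModel  # triggers @DNNFactory.register('new_model') at import time
from pyhelayers.mltoolbox.model.DNN_factory import DNNFactory

# Hypothetical lookup by registered name; verify against DNNFactory.
model = DNNFactory.get_model('new_model')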

__init__()#

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Methods

__init__()

Initializes internal Module state, shared by both nn.Module and ScriptModule.

addBatchNormAfterActivation(bn_info)

Adds batch normalization after the layers specified by the user

add_batch_norm_after_conv()

Adds batch normalization after each convolutional layer

add_batch_norm_before_conv()

Adds batch normalization before each convolutional layer, except for the first one

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

assertSize(x)

Asserts that the input matches the image size required by the model

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

clear_range_awareness_loss()

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Sets the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(x)

Defines the computation performed at every call.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_input_size()

Returns the input size expected by the model

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_pooling_by_type(type)

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

init_actual_data_range()

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

make_fhe_friendly(add_bn[, pooling_type, ...])

Applies changes to the given model towards making it FHE-Friendly.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

post_process_activations()

post_process_weighted_relu_act()

Replaces each WeightedRelu activation in the given model with its inner WeightedRelu.activation

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

remove_range_awareness()

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

set_max_pooling_to_avg()

Replaces each max-pooling layer with an average-pooling layer

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

Attributes

T_destination

dump_patches

addBatchNormAfterActivation(bn_info)#

Adds batch normalization after the layers specified by the user

Parameters:

bn_info (list[bn_info]) – list of bn_info entries describing the layers after which batch normalization should be added (see the usage sketch after bn_info below)

add_batch_norm_after_conv()#

Adds batch normalization after each convolutional layer

add_batch_norm_before_conv()#

Adds batch normalization before each convolutional layer, except for the first one
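
Both helpers take no arguments; a minimal sketch, where model is an nn_module instance:

# Add batch normalization after every convolutional layer:
model.add_batch_norm_after_conv()

# Or, alternatively, before every convolutional layer except the first:
model.add_batch_norm_before_conv()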

assertSize(x: tensor)#

Asserts that the input matches the image size required by the model

Parameters:

x (tensor) – data tensor

Raises:

AssertionError – Please resize the images to the matching size
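
As an illustration, assume model is an nn_module that expects (3, 224, 224) inputs (the sizes below are illustrative):

import torch

x = torch.randn(1, 3, 224, 224)  # one correctly sized image
model.assertSize(x)              # passes

bad = torch.randn(1, 3, 32, 32)  # wrong spatial size
model.assertSize(bad)            # raises AssertionError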

class bn_info(after_layer_name: str, channels: int)#

Bases: object

Defines a data element describing a layer after which batch normalization should be added: the layer's name and its number of channels
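
A usage sketch combining bn_info with addBatchNormAfterActivation above; the layer name 'cnn' and channel count 16 are illustrative, and bn_info is assumed to be reachable as an attribute of nn_module:

from pyhelayers.mltoolbox.model.nn_module import nn_module

# One entry per layer after which batch normalization should be added.
bn_entries = [nn_module.bn_info(after_layer_name='cnn', channels=16)]
model.addBatchNormAfterActivation(bn_entries)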

get_input_size()#

Returns the input size expected by the model

Returns:

the input size expected by the model: (channels, height, width)

Return type:

tuple
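
For example, to build a correctly sized dummy batch for the model (the batch size of 1 is arbitrary):

import torch

channels, height, width = model.get_input_size()
dummy = torch.randn(1, channels, height, width)
out = model(dummy)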

make_fhe_friendly(add_bn: bool, pooling_type: str = 'max', bn_list=None)#

Applies changes to the given model towards making it FHE-Friendly. This is only the first step; the model may still contain ReLU activations after the call to this function.

Parameters:
  • add_bn (bool) – If True, batch normalization will be added

  • pooling_type (str, optional) – Required pooling type: 'avg' or 'max'. Average pooling is considered FHE-Friendly; max pooling is not. Defaults to 'max'.

  • bn_list (list[bn_info], optional) – If not None, batch normalization will be added after the specified layers; if None, it will be added after each convolutional layer. Defaults to None.
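
A typical call, as a sketch: add batch normalization after every convolutional layer and switch to the FHE-Friendly average pooling. Removing the remaining ReLU activations is a separate, later step:

# add_bn=True with bn_list=None adds BN after each convolutional layer;
# pooling_type='avg' replaces max pooling with average pooling.
model.make_fhe_friendly(add_bn=True, pooling_type='avg')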

post_process_weighted_relu_act()#

Replaces each WeightedRelu activation in the given model with its inner WeightedRelu.activation:

Example

WeightedRelu(
  ratio=1.0
  (activation): TrainablePolyReLU(coefs=[0.09679805487394333, 0.13760216534137726])
)

will be replaced by the plain

TrainablePolyReLU(coefs=[0.09679805487394333, 0.13760216534137726])

set_max_pooling_to_avg()#

Replaces each max-pooling layer with an average-pooling layer
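
For example, where model is an nn_module instance:

model.set_max_pooling_to_avg()  # every max-pooling layer becomes average pooling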