NeuralNetPlain#
- class NeuralNetPlain#
A plaintext neural network that supports inference, consisting of various plaintext layers.
- get_neural_net_config(self: pyhelayers.NeuralNetPlain) → pyhelayers.NeuralNetConfig#
Returns a non-const reference to the NN configuration used by the NN.
-
class NeuralNetPlain : public helayers::PlainModel#
A plaintext neural network that supports inference, consisting of various plaintext layers.
For detailed documentation about loading a NN from external formats, see NeuralNetOnnxParser.h and NeuralNetJsonParser.h.
Public Functions
-
NeuralNetPlain()#
Construct an empty object.
-
~NeuralNetPlain()#
Destructor.
-
NeuralNetPlain(const NeuralNetPlain &src) = delete#
Deleted copy constructor.
-
NeuralNetPlain &operator=(const NeuralNetPlain &src) = delete#
Deleted operator=.
-
void initFromArch(const PlainModelHyperParams &hyperParams, const TensorCircuit &tc)#
Initializes the NN from hyperparameters and a given NN architecture.
- Parameters:
hyperParams – The hyperparameters object.
tc – The NN architecture.
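A minimal sketch of constructing and initializing a plain network (assuming a `PlainModelHyperParams` and `TensorCircuit` have already been prepared elsewhere, e.g. by one of the parsers mentioned above; the umbrella header name is an assumption, adjust it to your installation):

```cpp
#include "helayers/hebase/hebase.h" // assumed umbrella header; adjust to your install

// Sketch only: hyperParams and tc are assumed to have been prepared
// elsewhere (e.g. via NeuralNetOnnxParser.h or NeuralNetJsonParser.h).
// NeuralNetPlain's copy constructor is deleted, so the object is
// constructed empty and initialized in place.
void buildPlainNet(const helayers::PlainModelHyperParams& hyperParams,
                   const helayers::TensorCircuit& tc)
{
  helayers::NeuralNetPlain nn;      // construct an empty object
  nn.initFromArch(hyperParams, tc); // initialize from hyperparameters + architecture
}
```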
-
virtual std::optional<DimInt> getInputsBatchDim() const override#
Returns the batch dimension of the inputs, if one exists.
-
virtual std::vector<PlainTensorMetadata> getInputsPlainTensorMetadata() const override#
Returns a vector of PlainTensorMetadata objects.
The i-th element of this vector contains metadata relating to the i-th input of this PlainModel (such as shape and batch dimension). If this PlainModel is initialized for prediction, the returned vector describes inputs for the predict() method. If this PlainModel is initialized for fitting, the returned vector describes the inputs for the fit() method.
-
inline const TensorCircuit &getTensorCircuit() const#
Returns the NN architecture used to initialize the NN.
-
inline const NeuralNetContext &getNeuralNetContext() const#
Returns the const NN context used by the NN.
-
inline NeuralNetContext &getNeuralNetContext()#
Returns the non-const NN context used by the NN.
-
virtual std::shared_ptr<HeModel> getEmptyHeModel(const HeContext &he) const override#
Returns an empty HE NeuralNet object.
- Parameters:
he – The context.
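As a hedged sketch, obtaining the (empty) HE counterpart of a plain network might look as follows; the encoding and compilation steps that would normally follow are omitted here, and only the method documented above is used:

```cpp
#include <memory>

// Sketch only: returns an empty HE NeuralNet object bound to the given
// context. Further setup of the returned HeModel (encoding weights,
// configuring requirements, etc.) is out of scope for this example.
std::shared_ptr<helayers::HeModel>
toHeModel(const helayers::NeuralNetPlain& plainNet, const helayers::HeContext& he)
{
  return plainNet.getEmptyHeModel(he);
}
```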
-
inline virtual std::string getClassName() const override#
Returns the name of this class.
-
virtual void compareWeights(const PlainModel &other, bool verbose = false, double eps = 1e-6) const override#
Compares the internal weights of this model with the weights of the other model.
- Parameters:
other – The other plain model.
verbose – Whether to report comparison results in verbose way.
eps – Comparison tolerance.
-
void setSaveLayersOutputs(bool val) const#
For debugging.
If set to true, the outputs of the layers will be saved during the forward pass of this neural network, as well as the gradients of the outputs during the backward pass if in fit mode.
- Parameters:
val – The value of the flag.
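A hedged sketch of the debugging flow: enable layer-output capture, run a forward pass, then inspect the saved per-layer outputs. The predict step is elided because its exact signature is not shown on this page; only setSaveLayersOutputs and getForwardPassLayersOutputs, documented here, are assumed:

```cpp
#include <vector>

// Sketch only: setSaveLayersOutputs is declared const, so it can be
// called even on a const reference to the network.
void debugLayerOutputs(const helayers::NeuralNetPlain& nn)
{
  nn.setSaveLayersOutputs(true); // capture per-layer outputs on the next pass

  // ... run inference here (predict() signature not shown on this page) ...

  const std::vector<helayers::DoubleTensor>& perLayer =
      nn.getForwardPassLayersOutputs();
  // perLayer[i] holds the output tensor of the i-th layer
  // from the most recent forward pass.
}
```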
-
inline const std::vector<DoubleTensor> &getForwardPassLayersOutputs() const#
For debugging.
Returns the outputs of the layers in the forward pass of this neural network. Relevant only if setSaveLayersOutputs was set to true.
-
inline const std::vector<std::vector<DoubleTensor>> &getBackwardPassLayersInGradients() const#
For debugging.
Returns the gradients of the inputs of the layers' forward pass (computed during the backward pass). Relevant only if setSaveLayersOutputs was set to true and the model is in fit mode.
-
inline const std::vector<std::vector<DoubleTensor>> &getBackwardPassLayersWeightsGradients() const#
For debugging.
Returns the gradients of the weights of the layers (computed during the backward pass). Relevant only if setSaveLayersOutputs was set to true and the model is in fit mode.
-
int getNumTransformerAttentionHeads() const#
Deduces from the architecture how many attention heads there are in a transformer network.
This method is applicable only if the model is a transformer. Otherwise, it either throws an exception or returns arbitrary values.