ModelIoProcessor#
- class ModelIoProcessor#
A class to handle the IO of an HE model: mainly, encoding and encrypting the inputs used by predict and fit, and decrypting and decoding the outputs of predict.
- adjust_generic_packing_inputs_for_fit(self: pyhelayers.ModelIoProcessor, res: pyhelayers.EncryptedData, inputs: List[pyhelayers.CTileTensor]) -> None#
Adjusts generically-packed inputs to be used by the model for fit.
- Parameters:
res – An output parameter to store the adjusted inputs in.
inputs – The given generically-packed inputs.
- adjust_generic_packing_inputs_for_predict(self: pyhelayers.ModelIoProcessor, res: pyhelayers.EncryptedData, inputs: List[pyhelayers.CTileTensor]) -> None#
Adjusts generically-packed inputs to be used by the model for predict.
- Parameters:
res – An output parameter to store the adjusted inputs in.
inputs – The given generically-packed inputs.
- decrypt_decode_output(self: pyhelayers.ModelIoProcessor, output: pyhelayers.EncryptedData) -> numpy.ndarray[numpy.float64]#
Decrypts and decodes the output of predict, assuming a single output.
- Parameters:
output – The single encrypted output of predict.
- decrypt_decode_outputs(*args, **kwargs)#
Overloaded function.
decrypt_decode_outputs(self: pyhelayers.ModelIoProcessor, outputs: pyhelayers.EncryptedData) -> List[numpy.ndarray[numpy.float64]]
Decrypts and decodes the outputs of predict.
- param outputs:
The encrypted outputs of predict.
decrypt_decode_outputs(self: pyhelayers.ModelIoProcessor, outputs: pyhelayers.EncryptedBatch) -> List[numpy.ndarray[numpy.float64]]
Decrypts and decodes the outputs of predict over a single batch.
- param outputs:
The encrypted outputs of predict over a single batch.
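The decoded results returned by decrypt_decode_outputs are ordinary numpy arrays, so standard post-processing applies. A minimal sketch, assuming a classifier whose decoded output holds per-class scores (the `scores` array below is a plain stand-in for one decoded output, not data produced by the library):

```python
import numpy as np

# Stand-in for one array from the list returned by
# decrypt_decode_outputs; rows are samples, columns are class scores.
scores = np.array([[0.1, 0.9], [0.8, 0.2]])

# Recover class labels with argmax over the last axis.
labels = scores.argmax(axis=-1)
print(labels.tolist())  # [1, 0]
```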
- encode_encrypt_inputs_for_fit(self: pyhelayers.ModelIoProcessor, res: pyhelayers.EncryptedData, inputs: List[numpy.ndarray[numpy.float64]]) -> None#
Encodes and encrypts the inputs for fit.
- Parameters:
res – An output parameter to store the encrypted inputs in.
inputs – The plaintext inputs for fit.
- encode_encrypt_inputs_for_predict(self: pyhelayers.ModelIoProcessor, res: pyhelayers.EncryptedData, inputs: List[numpy.ndarray[numpy.float64]]) -> None#
Encodes and encrypts the inputs for predict.
- Parameters:
res – An output parameter to store the encrypted inputs in.
inputs – The plaintext inputs for predict.
- get_data_packing(self: pyhelayers.ModelIoProcessor, num_elements: int | None = None) -> pyhelayers.DataPacking#
Returns the data packing required by the HE model for its inputs.
- Parameters:
num_elements – The number of elements along the batch dimension, if applicable.
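Note that the encode/encrypt and adjust methods above follow an output-parameter convention: the caller constructs `res` first and the method fills it, rather than returning a value. A minimal sketch of that calling convention, using a hypothetical stand-in class (in real code, `res` would be a pyhelayers.EncryptedData and the processor a ModelIoProcessor obtained from a loaded HE model):

```python
import numpy as np

class FakeIoProcessor:
    # Hypothetical stand-in mirroring the output-parameter convention
    # of pyhelayers.ModelIoProcessor; no encoding or encryption happens.
    def encode_encrypt_inputs_for_predict(self, res, inputs):
        # A real processor would fill `res` with encrypted tensors;
        # here we just record the input shapes to show the pattern.
        res.extend(x.shape for x in inputs)

res = []                      # stands in for pyhelayers.EncryptedData
inputs = [np.zeros((16, 4))]  # one plaintext input
FakeIoProcessor().encode_encrypt_inputs_for_predict(res, inputs)
print(res)  # [(16, 4)]
```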
-
class ModelIoProcessor : public helayers::Saveable#
A class to handle the IO of an HE model: mainly, encoding and encrypting the inputs used by predict and fit, and decrypting and decoding the outputs of predict.
Subclassed by helayers::ArimaIoProcessor, helayers::KMeansIoProcessor, helayers::LogisticRegressionIoProcessor, helayers::NeuralNetIoProcessor, helayers::XGBoostIoProcessor
Public Functions
-
ModelIoProcessor(const HeContext &he)#
Construct an empty IO processor object to be loaded using load().
- Parameters:
he – The HE context.
-
ModelIoProcessor(const HeContext &he, bool fitMode, const std::vector<TTShape> &inputShapesForPredict, const std::vector<int> &inputChainIndexesForPredict, const std::vector<TTShape> &inputShapesForFit, const std::vector<int> &inputChainIndexesForFit, const std::vector<std::vector<DimInt>> &plainInputShapes, std::optional<DimInt> batchSize = std::nullopt, std::optional<DimInt> inputsBatchDim = std::nullopt, std::optional<DimInt> outputsBatchDim = std::nullopt)#
Construct an IO processor object.
- Parameters:
he – The HE context.
fitMode – Whether to support fit mode (true) or predict mode (false).
inputShapesForPredict – The desired shapes for the predict inputs.
inputChainIndexesForPredict – The desired chain indexes for the predict inputs.
inputShapesForFit – The desired shapes for the fit inputs.
inputChainIndexesForFit – The desired chain indexes for the fit inputs.
plainInputShapes – The shapes of the plain inputs for predict or fit.
batchSize – The number of elements along the batch dimension to include in each batch. nullopt means no batch dimension exists, or that no division into batches should be made.
inputsBatchDim – The inputs batch dimension. nullopt means no batch dimension exists.
outputsBatchDim – The outputs batch dimension. nullopt means no batch dimension exists. -1 means the last dimension is the batch dimension.
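The batching behaviour described by batchSize and the batch-dimension parameters can be sketched in plain numpy (no HE involved): a plaintext input is split into consecutive batches of batchSize elements along the batch dimension, with -1 denoting the last dimension as documented for outputsBatchDim. This helper is purely illustrative and not part of the library:

```python
import numpy as np

def split_into_batches(x: np.ndarray, batch_size, batch_dim):
    # Illustrative sketch of the documented batching semantics.
    if batch_size is None:        # no division into batches
        return [x]
    if batch_dim == -1:           # -1 means the last dimension
        batch_dim = x.ndim - 1
    n = x.shape[batch_dim]
    return [np.take(x, range(i, min(i + batch_size, n)), axis=batch_dim)
            for i in range(0, n, batch_size)]

batches = split_into_batches(np.zeros((10, 3)), batch_size=4, batch_dim=0)
print([b.shape for b in batches])  # [(4, 3), (4, 3), (2, 3)]
```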
-
virtual ~ModelIoProcessor() = default#
Destroy the model IO processor object.
-
DataPacking getDataPacking(std::optional<DimInt> numElements = std::nullopt) const#
Returns the data packing required by the HE model for its inputs.
- Parameters:
numElements – The number of elements along the batch dimension, if applicable.
-
const std::vector<TTShape> &getInputShapes() const#
Returns the desired shapes for the inputs for predict or fit.
-
const std::vector<int> &getInputChainIndexes() const#
Returns the desired chain indexes for the inputs for predict or fit.
-
void encodeEncryptInputsForPredict(EncryptedData &res, const std::vector<DoubleTensorCPtr> &inputs) const#
Encodes and encrypts the inputs for predict.
- Parameters:
res – An output parameter to store the encrypted inputs in.
inputs – The plaintext inputs for predict.
-
void adjustGenericPackingInputsForPredict(EncryptedData &res, const std::vector<CTileTensorCPtr> &inputs) const#
Adjusts generically-packed inputs to be used by the model for predict.
- Parameters:
res – An output parameter to store the adjusted inputs in.
inputs – The given generically-packed inputs.
-
void encodeEncryptInputsForFit(EncryptedData &res, const std::vector<DoubleTensorCPtr> &inputs) const#
Encodes and encrypts the inputs for fit.
- Parameters:
res – An output parameter to store the encrypted inputs in.
inputs – The plaintext inputs for fit.
-
void adjustGenericPackingInputsForFit(EncryptedData &res, const std::vector<CTileTensorCPtr> &inputs) const#
Adjusts generically-packed inputs to be used by the model for fit.
- Parameters:
res – An output parameter to store the adjusted inputs in.
inputs – The given generically-packed inputs.
-
std::vector<DoubleTensorCPtr> decryptDecodeOutputs(const EncryptedData &outputs) const#
Decrypts and decodes the outputs of predict.
- Parameters:
outputs – The encrypted outputs of predict.
-
std::vector<DoubleTensorCPtr> decryptDecodeOutputs(const EncryptedBatch &outputs) const#
Decrypts and decodes the outputs of predict over a single batch.
- Parameters:
outputs – The encrypted outputs of predict over a single batch.
-
DoubleTensorCPtr decryptDecodeOutput(const EncryptedData &output) const#
Decrypts and decodes the output of predict, assuming a single output.
- Parameters:
output – The single encrypted output of predict.
-
void encodeEncryptRandomInputs(EncryptedData &res, size_t numBatches = 1) const#
Encodes and encrypts random inputs for fit or predict.
May be useful for running simulations in which the actual content of the tensors is not important.
- Parameters:
res – An output parameter to store the encrypted inputs in.
numBatches – The desired number of batches to encrypt.
Public Static Functions
-
static DimInt getNumBatchElements(DimInt batchDim, std::optional<DimInt> batchSize, const TTShape &tileLayout)#
Returns the number of elements along the batch dimension to include in the single batch returned by encodeEncryptRandomInputs().
- Parameters:
batchDim – The inputs batch dimension.
batchSize – The number of elements along the batch dimension to include in each batch. nullopt means no batch dimension exists, or that no division into batches should be made; in this case, the number of elements will be the tile size along the batch dimension.
tileLayout – The tile layout.
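The documented rule can be mirrored in a few lines of plain Python (a hypothetical helper, not the library implementation; the tile layout is represented here simply as a sequence of per-dimension tile sizes):

```python
from typing import Optional, Sequence

def get_num_batch_elements(batch_dim: int,
                           batch_size: Optional[int],
                           tile_sizes: Sequence[int]) -> int:
    # When batch_size is None (nullopt), fall back to the tile size
    # along the batch dimension; otherwise use the requested size.
    if batch_size is None:
        return tile_sizes[batch_dim]
    return batch_size

print(get_num_batch_elements(0, None, [8, 4]))  # 8
print(get_num_batch_elements(0, 32, [8, 4]))    # 32
```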
-
static void validateNumInputs(bool forFit, int actual, int expected)#
Validates that the number of inputs for fit or predict matches the expected number.
- Parameters:
forFit – Whether the inputs are for fit or predict.
actual – The number of given inputs.
expected – The number of expected inputs.
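A Python analogue of this check (hypothetical helper; the library's error type and message are not specified here, so a plain ValueError is assumed):

```python
def validate_num_inputs(for_fit: bool, actual: int, expected: int) -> None:
    # Raise a descriptive error when the input count does not match.
    if actual != expected:
        op = "fit" if for_fit else "predict"
        raise ValueError(f"expected {expected} inputs for {op}, got {actual}")

validate_num_inputs(False, 1, 1)  # matching count: no exception
```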
Public Static Attributes
-
static const size_t maxNumElements = 10#