HeRunRequirements#
- class HeRunRequirements#
A class to describe all the user requirements for running a model with respect to the library, packaging considerations and computational performance.
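As a quick orientation, the methods below are typically combined along these lines (a minimal sketch, assuming pyhelayers is installed and that the default HE context type is acceptable; the subsequent model compilation step is elided):

```python
import pyhelayers

# Describe the run requirements before compiling a model.
he_run_req = pyhelayers.HeRunRequirements()
# Candidate HE context types the optimizer may choose from.
he_run_req.set_he_context_options([pyhelayers.DefaultContext()])
# Batch size to optimize for; may be adjusted if too big or not a power of two.
he_run_req.optimize_for_batch_size(8)
# Keep the defaults explicit: encrypted model weights, 128-bit security.
he_run_req.set_model_encrypted(True)
he_run_req.set_security_level(128)
```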
- get_optimized_device(self: pyhelayers.HeRunRequirements) pyhelayers.DeviceType #
Return the device type to optimize for.
- optimize_for_batch_size(self: pyhelayers.HeRunRequirements, batch_size: int) None #
Sets the requirement for the batch size to optimize for (default: 1). Cannot be called when “no fixed batch size” mode was set. The effective batch size may differ from the provided value if the provided value is too big or not a power of two.
NOTE: This method cannot be called with respect to a model that has fit batch size set as its hyperparameter. In such a case, the batch size to optimize for will be the fit batch size.
- Parameters:
batch_size – The batch size.
- set_aes_key_size(self: pyhelayers.HeRunRequirements, key_size: int) None #
When using AES inputs, sets the size of the AES secret key, in bits (default: 256).
- Parameters:
key_size – The AES secret key size, in bits.
- set_aes_number_config(self: pyhelayers.HeRunRequirements, number_config: pyhelayers.NumberConfig) None #
When using AES inputs, sets the requirement for a number configuration, describing the characteristics of the numbers inside the inputs. Must be set when using AES inputs (no default).
- Parameters:
number_config – The number configuration.
- set_circuit_optimization(self: pyhelayers.HeRunRequirements, circuit_optimization: bool, scheduling_strategy: str = '', num_threads: int = -1, memory_limit: int = -1, gpu_workers: int = 0, gpu_memory_limit: int = -1) None #
Sets the requirement for whether to use circuit optimization when running the model.
- Parameters:
circuit_optimization – Whether to use circuit optimization when running the model.
scheduling_strategy – (optional) The scheduling strategy. Default is SingleThreadRecordedScheduler.
num_threads – (optional) The number of threads to be used by the native worker. The default behaviour is to use all available threads.
memory_limit – (optional) The RAM memory limit of the native worker. Default is no memory limit.
gpu_workers – (optional) The number of GPU workers to use when running the circuit (requires GPU availability). Default is 0.
gpu_memory_limit – (optional) The GPU memory limit of each of the GPU workers. Default is no memory limit.
- set_compress_aes_key(self: pyhelayers.HeRunRequirements, compress_aes_key: bool) None #
When using AES inputs, sets the requirement for whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode (default: true).
- Parameters:
compress_aes_key – Whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode.
- set_exhaustive_search(self: pyhelayers.HeRunRequirements, exhaustive_search: bool) None #
Sets the requirement for whether or not to perform exhaustive search (default: false).
- Parameters:
exhaustive_search – Whether or not to perform exhaustive search.
- set_explicit_he_config_requirement(self: pyhelayers.HeRunRequirements, he_config_requirement: pyhelayers.HeConfigRequirement) None #
Sets the requirement for an explicit HE config requirement. Useful when an already-initialized HE context exists. The HE config requirement is validated to be feasible by the HE context options.
- Parameters:
he_config_requirement – The HE config requirement to set.
- set_explicit_he_config_requirement_for_generic_packing(self: pyhelayers.HeRunRequirements, he_config_requirement: pyhelayers.HeConfigRequirement, generic_packing_config: pyhelayers.GenericPackingConfig = GenericPackingConfig()) None #
Sets the requirement for using generically-packed inputs for the computation, where the HE context used to generically pack data is a custom HE context corresponding to the given HE config requirement.
- Parameters:
he_config_requirement – An HE config requirement corresponding to the HE context used to generically pack data.
generic_packing_config – An optional generic packing config. The same config that was used to generically pack data shall be provided.
- set_fixed_num_slots(self: pyhelayers.HeRunRequirements, fixed_num_slots: int) None #
Sets the requirement for a fixed number of slots in a ciphertext (default: no fixed number of slots, recommended).
- Parameters:
fixed_num_slots – A fixed number of slots in a ciphertext.
- set_fixed_tile_layout(self: pyhelayers.HeRunRequirements, fixed_tile_layout: pyhelayers.TTShape) None #
Sets the requirement for a fixed tile layout (default: no fixed tile layout). It is usually best to keep this option unset, allowing the optimizer to pick the best layout suitable for the model. This option is useful when the tile layout is known in advance for some reason, e.g., for conducting some specific tests.
- Parameters:
fixed_tile_layout – A fixed tile layout.
- set_fractional_part_precision(self: pyhelayers.HeRunRequirements, fractional_part_precision: int, use_max_feasible: bool = True) None #
Sets the requirement for the fractional part precision (default: 36).
- Parameters:
fractional_part_precision – The fractional part precision.
use_max_feasible – If a higher precision is feasible by the HE library while still fulfilling all the other requirements, the maximal feasible precision value will be used.
- set_handle_overflow(self: pyhelayers.HeRunRequirements, handle_overflow: bool) None #
Sets the requirement for whether or not to apply overflow handling aimed at preventing overflows during the computation (default: false).
- Parameters:
handle_overflow – Whether or not to apply overflow handling.
- set_he_context_options(*args, **kwargs)#
Overloaded function.
set_he_context_options(self: pyhelayers.HeRunRequirements, he_context_options: List[pyhelayers.HeContext]) -> None
Sets the requirement for HeContext options. This value specifies possible HeContext types that may be used when searching for a profile that satisfies the user’s requirements. The HeContext objects may or may not be initialized, and any attribute other than their type will be ignored.
- param he_context_options:
The HeContext options to set.
- type he_context_options:
list of HeContexts
set_he_context_options(self: pyhelayers.HeRunRequirements, arg0: List[str]) -> None
- set_integer_part_precision(self: pyhelayers.HeRunRequirements, integer_part_precision: int) None #
Sets the requirement for the integer part precision (default: 10).
- Parameters:
integer_part_precision – The integer part precision.
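One way to read the two precision settings together (an informal fixed-point interpretation, not part of the API): the integer part precision bounds the magnitude of representable values, while the fractional part precision sets their granularity.

```python
integer_bits = 10     # default integer part precision
fractional_bits = 36  # default fractional part precision

# Roughly: magnitudes up to 2**integer_bits are representable,
# with a smallest step of about 2**-fractional_bits.
max_magnitude = 2 ** integer_bits     # ~1024
resolution = 2.0 ** -fractional_bits  # ~1.5e-11
```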
- set_lazy_mode(self: pyhelayers.HeRunRequirements, lazy_mode: pyhelayers.LazyMode, lazy_load_dir: str = '') None #
Sets the requirement for the lazy mode regarding the model’s weights. Options are NONE, LAZY_ENCODING or LAZY_LOADING (default: NONE). When lazy behaviour is applied, a required weight will be encoded/loaded at runtime and freed/saved after being used. Lazy encoding can only be applied to a non-encrypted model.
- Parameters:
lazy_mode – The lazy mode regarding the model’s weights.
lazy_load_dir – Directory in which to save content on disk when using lazy loading (optional).
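For example, to request lazy encoding (a configuration sketch; recall that lazy encoding requires a non-encrypted model):

```python
import pyhelayers

he_run_req = pyhelayers.HeRunRequirements()
# Lazy encoding can only be applied to a non-encrypted model.
he_run_req.set_model_encrypted(False)
# Encode each required weight at runtime and free it after use.
he_run_req.set_lazy_mode(pyhelayers.LazyMode.LAZY_ENCODING)
```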
- set_max_batch_memory(self: pyhelayers.HeRunRequirements, max_batch_memory: int) None #
Sets the requirement for the maximal batch memory. This value includes the sum of the input memory and the output memory.
- Parameters:
max_batch_memory – The maximal batch memory (bytes).
- set_max_client_inference_cpu_time(self: pyhelayers.HeRunRequirements, max_client_inference_cpu_time: int) None #
Sets the requirement for the maximal client inference CPU time. This value includes the sum of the encrypt input CPU time and the decrypt output CPU time.
- Parameters:
max_client_inference_cpu_time – The maximal client inference CPU time (micro-seconds).
- set_max_client_inference_memory(self: pyhelayers.HeRunRequirements, max_client_inference_memory: int) None #
Sets the requirement for the maximal client inference memory. This value includes the sum of the input memory, the output memory and the context memory.
- Parameters:
max_client_inference_memory – The maximal client inference memory (bytes).
- set_max_context_memory(self: pyhelayers.HeRunRequirements, max_context_memory: int) None #
Sets the requirement for the maximal context memory.
- Parameters:
max_context_memory – The maximal context memory (bytes).
- set_max_decrypt_output_cpu_time(self: pyhelayers.HeRunRequirements, max_decrypt_output_cpu_time: int) None #
Sets the requirement for the maximal decrypt output CPU time.
- Parameters:
max_decrypt_output_cpu_time – The maximal decrypt output CPU time (micro-seconds).
- set_max_encrypt_input_cpu_time(self: pyhelayers.HeRunRequirements, max_encrypt_input_cpu_time: int) None #
Sets the requirement for the maximal encrypt input CPU time.
- Parameters:
max_encrypt_input_cpu_time – The maximal encrypt input CPU time (micro-seconds).
- set_max_fit_cpu_time(self: pyhelayers.HeRunRequirements, max_fit_cpu_time: int) None #
Sets the requirement for the maximal fit CPU time.
- Parameters:
max_fit_cpu_time – The maximal fit CPU time (micro-seconds).
- set_max_inference_cpu_time(self: pyhelayers.HeRunRequirements, max_inference_cpu_time: int) None #
Sets the requirement for the maximal inference CPU time. This value includes the sum of the encrypt input CPU time, the predict CPU time and the decrypt output CPU time.
- Parameters:
max_inference_cpu_time – The maximal inference CPU time (micro-seconds).
- set_max_inference_memory(self: pyhelayers.HeRunRequirements, max_inference_memory: int) None #
Sets the requirement for the maximal inference memory. This value includes the sum of the input memory, the output memory, the context memory and the model memory.
- Parameters:
max_inference_memory – The maximal inference memory (bytes).
- set_max_init_model_cpu_time(self: pyhelayers.HeRunRequirements, max_init_model_cpu_time: int) None #
Sets the requirement for the maximal init model CPU time.
- Parameters:
max_init_model_cpu_time – The maximal init model CPU time (micro-seconds).
- set_max_input_memory(self: pyhelayers.HeRunRequirements, max_input_memory: int) None #
Sets the requirement for the maximal input memory.
- Parameters:
max_input_memory – The maximal input memory (bytes).
- set_max_model_memory(self: pyhelayers.HeRunRequirements, max_model_memory: int) None #
Sets the requirement for the maximal model memory.
- Parameters:
max_model_memory – The maximal model memory (bytes).
- set_max_output_memory(self: pyhelayers.HeRunRequirements, max_output_memory: int) None #
Sets the requirement for the maximal output memory.
- Parameters:
max_output_memory – The maximal output memory (bytes).
- set_max_predict_cpu_time(self: pyhelayers.HeRunRequirements, max_predict_cpu_time: int) None #
Sets the requirement for the maximal predict CPU time.
- Parameters:
max_predict_cpu_time – The maximal predict CPU time (micro-seconds).
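All of the memory limits above are given in bytes and all of the CPU-time limits in microseconds, so values quoted in friendlier units need converting first. A sketch with illustrative numbers (`he_run_req` is assumed to be an existing HeRunRequirements object, so the setter calls are shown commented out):

```python
GIB = 2 ** 30           # bytes per GiB
US_PER_SEC = 1_000_000  # microseconds per second

max_inference_memory = 8 * GIB           # an 8 GiB cap, in bytes
max_predict_cpu_time = 30 * US_PER_SEC   # a 30-second cap, in microseconds

# he_run_req.set_max_inference_memory(max_inference_memory)
# he_run_req.set_max_predict_cpu_time(max_predict_cpu_time)
```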
- set_model_encrypted(self: pyhelayers.HeRunRequirements, model_encrypted: bool) None #
Sets the requirement for whether or not to encrypt the model’s weights (default: true).
- Parameters:
model_encrypted – Whether or not to encrypt the model’s weights.
- set_no_fixed_batch_size(self: pyhelayers.HeRunRequirements) None #
Sets the requirement for no fixed batch size (default: fixed batch size). Having an unfixed batch size will result in optimizing for throughput considering multiple possible batch sizes and will usually end up with a large batch size. Cannot be called if an explicit batch size was set.
NOTE: This method cannot be called with respect to a model that has fit batch size set as its hyperparameter. In such a case, the batch size to optimize for will be the fit batch size.
- set_not_secure(self: pyhelayers.HeRunRequirements) None #
Sets the security level to 0. This will result in ciphertexts that can be easily broken. It is sometimes useful for quick experiments, as everything will work faster and consume less memory.
A warning will be issued when this method is called.
- set_optimization_target(self: pyhelayers.HeRunRequirements, optimization: pyhelayers.OptimizationTarget) None #
Sets the requirement for the optimization target (default: SERVER_SIDE_CPU_PER_SAMPLE).
- Parameters:
optimization – The optimization target.
SERVER_SIDE_CPU_PER_SAMPLE will optimize for low CPU time per sample in server-side operations (fit/predict). It is the only supported optimization target for a model in fit mode.
CLIENT_SIDE_CPU_PER_SAMPLE will optimize for low CPU time per sample in client-side operations (encrypt input, decrypt output).
END_TO_END_LATENCY_PER_SAMPLE will optimize for low end-to-end latency per sample, including client-side operations, server-side operations and communication time, and excluding the time it takes to initialize the model and HE context. Requires setting the system specification by calling set_system_spec.
- set_optimized_device(self: pyhelayers.HeRunRequirements, optimized_device: pyhelayers.DeviceType, hybrid_utilization: bool = False) None #
Sets the requirement for the device type to optimize for (default: DEVICE_CPU). Optimizing for DEVICE_GPU is only available for some HE libraries.
- Parameters:
optimized_device – The device type to optimize for.
hybrid_utilization – Whether a policy of hybrid utilization of devices
of different types shall be applied (default: false).
- set_security_level(self: pyhelayers.HeRunRequirements, security_level: int) None #
Sets the requirement for the security level (default: 128).
- Parameters:
security_level – The security level.
- set_simple_generic_packing(self: pyhelayers.HeRunRequirements, light: bool = False, generic_packing_config: pyhelayers.GenericPackingConfig = GenericPackingConfig()) None #
Sets the requirement for using generically-packed inputs for the computation, where the HE context used to generically pack data is the default generic-packing HE context.
- Parameters:
light – Whether the default generic-packing HE context was initialized for light mode or not (default: false).
generic_packing_config – An optional generic packing config. The same config that was used to generically pack data shall be provided.
- set_system_spec(self: pyhelayers.HeRunRequirements, client_parallelization_speedup: float, server_parallelization_speedup: float, client_upload_speed: int, server_upload_speed: int) None #
Sets the system specification, to be used to estimate the expected end-to-end latency when the optimization target is END_TO_END_LATENCY_PER_SAMPLE. Required when optimizing for this target; not supported when optimizing for any other target.
- Parameters:
client_parallelization_speedup – The speedup factor in client-side operations achieved by running them in the client-side environment compared to a single-thread environment. In the absence of a better estimate, consider using the number of threads available on the client side.
server_parallelization_speedup – The speedup factor in server-side operations achieved by running them in the server-side environment compared to a single-thread environment. In the absence of a better estimate, consider using the number of threads available on the server side.
client_upload_speed – The client-side upload speed, in bytes/sec.
server_upload_speed – The server-side upload speed, in bytes/sec.
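The upload speeds are given in bytes/sec, so network link rates quoted in bits per second must be divided by 8. A sketch with illustrative numbers (the thread counts stand in for the parallelization speedups, as suggested above; the pyhelayers calls are shown commented out since `he_run_req` is assumed to already exist):

```python
BITS_PER_BYTE = 8
MBIT = 1_000_000  # bits per megabit

client_parallelization_speedup = 8.0   # e.g., client-side thread count
server_parallelization_speedup = 32.0  # e.g., server-side thread count
client_upload_speed = 100 * MBIT // BITS_PER_BYTE   # 100 Mbit/s link, in bytes/sec
server_upload_speed = 1000 * MBIT // BITS_PER_BYTE  # 1 Gbit/s link, in bytes/sec

# he_run_req.set_optimization_target(
#     pyhelayers.OptimizationTarget.END_TO_END_LATENCY_PER_SAMPLE)
# he_run_req.set_system_spec(client_parallelization_speedup,
#                            server_parallelization_speedup,
#                            client_upload_speed, server_upload_speed)
```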
- set_use_aes_inputs(self: pyhelayers.HeRunRequirements, use_aes_inputs: bool) None #
Sets the requirement for whether or not to use AES inputs for the computation (default: false).
- Parameters:
use_aes_inputs – Whether or not to use AES inputs.
-
class HeRunRequirements#
A class to describe all the user requirements for running a model with respect to the library, packaging considerations and computational performance.
Public Functions
-
HeRunRequirements()#
Constructor.
-
~HeRunRequirements() = default#
Destructor.
-
void setOptimizationTarget(OPTIMIZATION_TARGET optimization)#
Sets the requirement for the optimization target (default: SERVER_SIDE_CPU_PER_SAMPLE).
- Parameters:
optimization – the optimization target.
SERVER_SIDE_CPU_PER_SAMPLE will optimize for low CPU time per sample in server side operations (fit/predict). It is the only supported optimization target for a model in fit mode.
CLIENT_SIDE_CPU_PER_SAMPLE will optimize for low CPU time per sample in client side operations (encrypt input, decrypt output).
END_TO_END_LATENCY_PER_SAMPLE will optimize for low end-to-end latency per sample, including client-side operations, server-side operations and communication time, and excluding the time it takes to initialize the model and HE context. Requires setting the system specification by calling “setSystemSpec”.
-
void optimizeForBatchSize(DimInt batchSize)#
Sets the requirement for the batch size to optimize for (default: 1).
Cannot be called when “no fixed batch size” mode was set. The effective batch size may differ from the provided value if the provided value is too big or not a power of two.
NOTE: This method cannot be called with respect to a model that has fit batch size set as its hyperparameter. In such a case, the batch size to optimize for will be the fit batch size.
- Parameters:
batchSize – the batch size
-
void setNoFixedBatchSize()#
Sets the requirement for no fixed batch size (default: fixed batch size).
Having an unfixed batch size will result in optimizing for throughput considering multiple possible batch sizes and will usually end up with a large batch size. Cannot be called if an explicit batch size was set.
NOTE: This method cannot be called with respect to a model that has fit batch size set as its hyperparameter. In such a case, the batch size to optimize for will be the fit batch size.
-
void setSecurityLevel(int securityLevel)#
Sets the requirement for the security level (default: 128)
- Parameters:
securityLevel – the security level
-
void setNotSecure()#
Set the security level to 0.
This will result in ciphertexts that can be easily broken. It is sometimes useful for quick experiments, as everything will work faster and consume less memory.
A warning will be issued when this method is called.
-
void setIntegerPartPrecision(int integerPartPrecision)#
Sets the requirement for the integer part precision (default: 10)
- Parameters:
integerPartPrecision – the integer part precision
-
void setFractionalPartPrecision(int fractionalPartPrecision, bool useMaxFeasible = true)#
Sets the requirement for the fractional part precision (default: 36)
- Parameters:
fractionalPartPrecision – the fractional part precision
useMaxFeasible – if a higher precision is feasible by the HE library while still fulfilling all the other requirements, the maximal feasible precision value will be used.
-
void setExhaustiveSearch(bool exhaustiveSearch)#
Sets the requirement for whether or not to perform exhaustive search (default: false)
- Parameters:
exhaustiveSearch – whether or not to perform exhaustive search
-
void setParallelSearch(bool parallelSearch)#
Sets the requirement for whether or not to perform parallel search (default: true)
- Parameters:
parallelSearch – whether or not to perform parallel search
-
void setModelEncrypted(bool modelEncrypted)#
Sets the requirement for whether or not to encrypt the model’s weights (default: true)
- Parameters:
modelEncrypted – whether or not to encrypt the model’s weights
-
bool getModelEncrypted() const#
Returns whether or not the model’s weights will be encrypted.
-
void setCircuitOptimization(bool circuitOptimization, const std::string &schedulingStrategy = "", int numThreads = -1, int memoryLimit = -1, int gpuWorkers = 0, int gpuMemoryLimit = -1)#
Sets the requirement for whether to use circuit optimization when running the model.
- Parameters:
circuitOptimization – whether to use circuit optimization when running the model
schedulingStrategy – (optional) set the scheduling strategy. Default is SingleThreadRecordedScheduler.
numThreads – (optional) set the number of threads to be used by the native worker. Default behaviour is to use all available threads.
memoryLimit – (optional) set the RAM memory limit of native worker. Default is no memory limit.
gpuWorkers – (optional) the number of GPU workers to use when running the circuit (requires GPU availability). Default is 0.
gpuMemoryLimit – (optional) set the GPU memory limit of each of the GPU workers. Default is no memory limit.
-
void setLazyMode(LazyMode lazyMode, const std::string &lazyLoadDir = "")#
Sets the requirement for the lazy mode regarding the model’s weights.
Options are NONE, LAZY_ENCODING or LAZY_LOADING (default: NONE). When lazy behaviour is applied, a required weight will be encoded/loaded at runtime and freed/saved after being used. Lazy encoding can only be applied to a non-encrypted model.
- Parameters:
lazyMode – The lazy mode regarding the model’s weights.
lazyLoadDir – Directory to save content in disk when using lazy loading (optional).
-
void setHandleOverflow(bool handleOverflow)#
This method is deprecated.
The handle overflow feature is no longer supported.
-
DeviceType getOptimizedDevice() const#
Returns the device type to optimize for.
-
void setOptimizedDevice(DeviceType optimizedDevice, bool hybridUtilization = false)#
Sets the requirement for the device type to optimize for (default: DEVICE_CPU).
Optimizing for DEVICE_GPU is only available for some HE libraries.
- Parameters:
optimizedDevice – the device type to optimize for
hybridUtilization – Whether a policy of hybrid utilization of devices of different types shall be applied (default: false).
-
void setFixedNumSlots(DimInt fixedNumSlots)#
Sets the requirement for a fixed number of slots in a ciphertext (default: no fixed number of slots, recommended)
- Parameters:
fixedNumSlots – a fixed number of slots in a ciphertext
-
void setFixedTileLayout(const TTShape &fixedTileLayout)#
Sets the requirement for a fixed tile layout (default: no fixed tile layout).
It is usually best to keep this option unset, allowing the optimizer to pick the best layout suitable for the model. This option is useful when the tile layout is known in advance for some reason, e.g., for conducting some specific tests.
- Parameters:
fixedTileLayout – a fixed tile layout
-
void setSystemSpec(double clientParallelizationSpeedup, double serverParallelizationSpeedup, int64_t clientUploadSpeed, int64_t serverUploadSpeed)#
Sets the system specification, to be used to estimate the expected end-to-end latency when the optimization target is END_TO_END_LATENCY_PER_SAMPLE.
Required when optimizing for this target; not supported when optimizing for any other target.
- Parameters:
clientParallelizationSpeedup – the speedup factor in client-side operations achieved by running them in the client-side environment compared to a single-thread environment. In the absence of a better estimate, consider using the number of threads available on the client side.
serverParallelizationSpeedup – the speedup factor in server-side operations achieved by running them in the server-side environment compared to a single-thread environment. In the absence of a better estimate, consider using the number of threads available on the server side.
clientUploadSpeed – the client-side upload speed, in bytes/sec
serverUploadSpeed – the server-side upload speed, in bytes/sec
-
void setMaxModelMemory(int64_t maxModelMemory)#
Sets the requirement for the maximal model memory.
- Parameters:
maxModelMemory – the maximal model memory (bytes)
-
void setMaxInputMemory(int64_t maxInputMemory)#
Sets the requirement for the maximal input memory.
- Parameters:
maxInputMemory – the maximal input memory (bytes)
-
void setMaxOutputMemory(int64_t maxOutputMemory)#
Sets the requirement for the maximal output memory.
- Parameters:
maxOutputMemory – the maximal output memory (bytes)
-
void setMaxContextMemory(int64_t maxContextMemory)#
Sets the requirement for the maximal context memory.
- Parameters:
maxContextMemory – the maximal context memory (bytes)
-
void setMaxBatchMemory(int64_t maxBatchMemory)#
Sets the requirement for the maximal batch memory.
This value includes the sum of the input memory and the output memory.
- Parameters:
maxBatchMemory – the maximal batch memory (bytes)
-
void setMaxClientInferenceMemory(int64_t maxClientInferenceMemory)#
Sets the requirement for the maximal client inference memory.
This value includes the sum of the input memory, the output memory and the context memory.
- Parameters:
maxClientInferenceMemory – the maximal client inference memory (bytes)
-
void setMaxInferenceMemory(int64_t maxInferenceMemory)#
Sets the requirement for the maximal inference memory.
This value includes the sum of the input memory, the output memory, the context memory and the model memory.
- Parameters:
maxInferenceMemory – the maximal inference memory (bytes)
-
void setMaxPredictCpuTime(int64_t maxPredictCpuTime)#
Sets the requirement for the maximal predict CPU time.
- Parameters:
maxPredictCpuTime – the maximal predict CPU time (micro-seconds)
-
void setMaxFitCpuTime(int64_t maxFitCpuTime)#
Sets the requirement for the maximal fit CPU time.
- Parameters:
maxFitCpuTime – the maximal fit CPU time (micro-seconds)
-
void setMaxInitModelCpuTime(int64_t maxInitModelCpuTime)#
Sets the requirement for the maximal init model CPU time.
- Parameters:
maxInitModelCpuTime – the maximal init model CPU time (micro-seconds)
-
void setMaxEncryptInputCpuTime(int64_t maxEncryptInputCpuTime)#
Sets the requirement for the maximal encrypt input CPU time.
- Parameters:
maxEncryptInputCpuTime – the maximal encrypt input CPU time (micro-seconds)
-
void setMaxDecryptOutputCpuTime(int64_t maxDecryptOutputCpuTime)#
Sets the requirement for the maximal decrypt output CPU time.
- Parameters:
maxDecryptOutputCpuTime – the maximal decrypt output CPU time (micro-seconds)
-
void setMaxClientInferenceCpuTime(int64_t maxClientInferenceCpuTime)#
Sets the requirement for the maximal client inference CPU time.
This value includes the sum of the encrypt input CPU time and the decrypt output CPU time.
- Parameters:
maxClientInferenceCpuTime – the maximal client inference CPU time (micro-seconds)
-
void setMaxInferenceCpuTime(int64_t maxInferenceCpuTime)#
Sets the requirement for the maximal inference CPU time.
This value includes the sum of the encrypt input CPU time, the predict CPU time and the decrypt output CPU time.
- Parameters:
maxInferenceCpuTime – the maximal inference CPU time (micro-seconds)
Sets the requirement for HeContext options.
This value specifies possible HeContext types that may be used when searching for a profile that satisfies the user’s requirements. The HeContext objects may or may not be initialized, and any attribute other than their type will be ignored.
- Parameters:
heContextOptions – The HeContext options to set.
-
void setExplicitHeConfigRequirement(const HeConfigRequirement &heConfReq)#
Sets the requirement for an explicit HE config requirement.
Useful when an already-initialized HE context exists. The HE config requirement is validated to be feasible by the HE context options.
- Parameters:
heConfReq – The HE config requirement to set.
-
std::optional<HeConfigRequirement> getExplicitHeConfigRequirement() const#
Returns the explicit HE config requirement if set, or nullopt otherwise.
-
void setSimpleGenericPacking(bool light = false, const GenericPackingConfig &gpConfig = GenericPackingConfig())#
Sets the requirement for using generically-packed inputs for the computation, where the HE context used to generically pack data is the default generic-packing HE context.
- Parameters:
light – Whether the default generic-packing HE context was initialized for light mode or not (default: false).
gpConfig – An optional generic packing config. The same config that was used to generically pack data shall be provided.
-
void setExplicitHeConfigRequirementForGenericPacking(const HeConfigRequirement &heConfReq, const GenericPackingConfig &gpConfig = GenericPackingConfig())#
Sets the requirement for using generically-packed inputs for the computation, where the HE context used to generically pack data is a custom HE context corresponding to the given HE config requirement.
- Parameters:
heConfReq – An HE config requirement corresponding to the HE context used to generically pack data.
gpConfig – An optional generic packing config. The same config that was used to generically pack data shall be provided.
-
bool getUseGenericPackingInputs() const#
Returns an indication of whether or not to use generically-packed inputs.
-
const GenericPackingConfig &getGenericPackingConfig() const#
Returns the generic packing configuration in case of generically-packed inputs.
-
void setUseAesInputs(bool useAesInputs)#
Sets the requirement for whether or not to use AES inputs for the computation (default: false)
- Parameters:
useAesInputs – Whether or not to use AES inputs.
-
void setAesNumberConfig(const NumberConfig &numberConfig)#
When using AES inputs, sets the requirement for a number configuration, describing the characteristics of the numbers inside the inputs.
Must be set when using AES inputs (no default).
- Parameters:
numberConfig – The number configuration.
-
void setAesKeySize(size_t keySize)#
When using AES inputs, sets the size of the AES secret key, in bits (default: 256).
- Parameters:
keySize – The AES secret key size, in bits.
-
void setCompressAesKey(bool compressAesKey)#
When using AES inputs, sets the requirement for whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode (default: true).
- Parameters:
compressAesKey – Whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode.
-
bool getUseAesInputs() const#
Returns an indication of whether or not to use AES inputs.
-
const NumberConfig &getAesNumberConfig() const#
Returns the number configuration in case of AES inputs.
-
size_t getAesKeySize() const#
Returns the AES secret key size, in bits, in case of AES inputs.
-
bool getCompressAesKey() const#
Returns an indication of whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode, in case of AES inputs.
-
void debugPrint(std::ostream &out = std::cout) const#
Prints the effective set of requirements.
- Parameters:
out – output stream to print to