HeRunRequirements#

class HeRunRequirements#

A class describing all the user requirements for running a model, with respect to the HE library, packaging considerations, and computational performance.
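
A minimal usage sketch is shown below. Only the HeRunRequirements calls are taken from this page; the DefaultContext option and the model compilation step are assumptions based on typical pyhelayers tutorials and may differ in your version.

    import pyhelayers

    # Describe the requirements for running the model under HE.
    he_run_req = pyhelayers.HeRunRequirements()

    # Allow the optimizer to choose among these HE context types
    # (DefaultContext is assumed; any HeContext subclass can be listed).
    he_run_req.set_he_context_options([pyhelayers.DefaultContext()])

    # Optimize for batches of 16 samples, keeping the model weights encrypted.
    he_run_req.optimize_for_batch_size(16)
    he_run_req.set_model_encrypted(True)

    # The requirements are then used when compiling a plain model into an HE model,
    # e.g. profile = pyhelayers.HeModel.compile(plain_model, he_run_req)  # assumed workflow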

optimize_for_batch_size(self: pyhelayers.HeRunRequirements, batch_size: int) None#

Sets the requirement for the batch size to optimize for (default: 1). Cannot be called when "no fixed batch size" mode has been set. The effective batch size may differ from the provided value if it is too large or not a power of two.

NOTE: This method cannot be called for a model that has a fit batch size set as a hyperparameter. In such a case, the batch size to optimize for will be the fit batch size.

Parameters:

batch_size – The batch size.

set_aes_key_size(self: pyhelayers.HeRunRequirements, key_size: int) None#

When using AES inputs, sets the size of the AES secret key, in bits (default: 256).

Parameters:

key_size – The AES secret key size, in bits.

set_aes_number_config(self: pyhelayers.HeRunRequirements, number_config: pyhelayers.NumberConfig) None#

When using AES inputs, sets the requirement for a number configuration, describing the characteristics of the numbers inside the inputs. Must be set when using AES inputs (no default).

Parameters:

number_config – The number configuration.

set_compress_aes_key(self: pyhelayers.HeRunRequirements, compress_aes_key: bool) None#

When using AES inputs, sets the requirement for whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode (default: true).

Parameters:

compress_aes_key – Whether or not the AES key encrypted under FHE will be saved and loaded in a compressed mode.
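
A hedged sketch combining the AES-related requirements on this page (set_use_aes_inputs is documented further below; the NumberConfig construction is an assumption and may need to be adapted to your inputs):

    he_run_req = pyhelayers.HeRunRequirements()

    # Request AES-encrypted inputs for the computation (documented further below).
    he_run_req.set_use_aes_inputs(True)

    # 256-bit AES secret key (the default).
    he_run_req.set_aes_key_size(256)

    # A number configuration must be set when AES inputs are used.
    # NumberConfig construction is assumed here; adjust it to describe your inputs.
    number_config = pyhelayers.NumberConfig()
    he_run_req.set_aes_number_config(number_config)

    # Save and load the FHE-encrypted AES key in compressed mode (the default).
    he_run_req.set_compress_aes_key(True)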

set_exhaustive_search(self: pyhelayers.HeRunRequirements, exhaustive_search: bool) None#

Sets the requirement for whether or not to perform an exhaustive search (default: false).

Parameters:

exhaustive_search – Whether or not to perform exhaustive search.

set_explicit_he_config_requirement(self: pyhelayers.HeRunRequirements, he_config_requirement: pyhelayers.HeConfigRequirement) None#

Sets an explicit HE config requirement. Useful when an already-initialized HE context exists. The HE config requirement is validated to be feasible with the HE context options.

Parameters:

he_config_requirement – The HE config requirement to set.

set_explicit_he_config_requirement_for_generic_packing(self: pyhelayers.HeRunRequirements, he_config_requirement: pyhelayers.HeConfigRequirement, generic_packing_config: pyhelayers.GenericPackingConfig = GenericPackingConfig()) None#

Sets the requirement for using generically-packed inputs for the computation, where the HE context used to generically pack the data is a custom HE context corresponding to the given HE config requirement.

Parameters:
  • he_config_requirement – An HE config requirement corresponding to the HE context used to generically pack the data.

  • generic_packing_config – An optional generic packing config. The same config that was used to generically pack the data must be provided.

set_fixed_num_slots(self: pyhelayers.HeRunRequirements, fixed_num_slots: int) None#

Sets the requirement for a fixed number of slots in a ciphertext (default: no fixed number of slots, which is the recommended setting).

Parameters:

fixed_num_slots – A fixed number of slots in a ciphertext.

set_fixed_tile_layout(self: pyhelayers.HeRunRequirements, fixed_tile_layout: pyhelayers.TTShape) None#

Sets the requirement for a fixed tile layout (default: no fixed tile layout). It is usually best to keep this option unset, allowing the optimizer to pick the best layout suitable for the model. This option is useful when the tile layout is known in advance for some reason, e.g., for conducting some specific tests.

Parameters:

fixed_tile_layout – A fixed tile layout.

set_fractional_part_precision(self: pyhelayers.HeRunRequirements, fractional_part_precision: int, use_max_feasible: bool = True) None#

Sets the requirement for the fractional part precision (default: 35).

Parameters:
  • fractional_part_precision – The fractional part precision.

  • use_max_feasible – If a higher precision is feasible by the HE library while still fulfilling all the other requirements, the maximal feasible precision value will be used.

set_handle_overflow(self: pyhelayers.HeRunRequirements, handle_overflow: bool) None#

Sets the requirement for whether or not to apply overflow handling, aimed at preventing overflows during the computation (default: false).

Parameters:

handle_overflow – Whether or not to apply overflow handling.

set_he_context_options(*args, **kwargs)#

Overloaded function.

  1. set_he_context_options(self: pyhelayers.HeRunRequirements, he_context_options: List[pyhelayers.HeContext]) -> None

    Sets the requirement for HeContext options. This value specifies the possible HeContext types that may be used when searching for a profile that satisfies the user’s requirements. The HeContext objects may or may not be initialized, and any attribute other than their type will be ignored.

    Parameters:

    he_context_options – The HeContext options to set (a list of HeContext objects).

  2. set_he_context_options(self: pyhelayers.HeRunRequirements, arg0: List[str]) -> None
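
For example (DefaultContext is assumed to be available; the valid strings for the second overload are not listed on this page, so only a placeholder is shown):

    # First overload: a list of HeContext objects; only their types matter.
    he_run_req.set_he_context_options([pyhelayers.DefaultContext()])

    # Second overload: a list of context type names as strings.
    # he_run_req.set_he_context_options(["<context-type-name>"])  # placeholder name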

set_integer_part_precision(self: pyhelayers.HeRunRequirements, integer_part_precision: int) None#

Sets the requirement for the integer part precision (default: 10).

Parameters:

integer_part_precision – The integer part precision.
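
A short sketch of the precision requirements (the values are illustrative):

    # Request 40 bits of fractional precision, letting the library use more if feasible.
    he_run_req.set_fractional_part_precision(40, use_max_feasible=True)

    # Reserve 12 bits for the integer part of the numbers.
    he_run_req.set_integer_part_precision(12)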

set_lazy_encoding(self: pyhelayers.HeRunRequirements, lazy_encoding: bool) None#

Sets the requirement for whether or not to apply lazy encoding for the model’s weights, such that each required weight is encoded at runtime and freed after being used (default: false). Lazy encoding can only be applied to a non-encrypted model.

Parameters:

lazy_encoding – Whether or not to apply lazy encoding for the model’s weights.
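
Since lazy encoding applies only to non-encrypted models, it is typically paired with set_model_encrypted(False), documented below:

    # Lazy encoding applies only to non-encrypted models, so keep the weights unencrypted.
    he_run_req.set_model_encrypted(False)

    # Encode each weight at runtime and free it after use.
    he_run_req.set_lazy_encoding(True)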

set_max_batch_memory(self: pyhelayers.HeRunRequirements, max_batch_memory: int) None#

Sets the requirement for the maximal batch memory. This value includes the sum of the input memory and the output memory.

Parameters:

max_batch_memory – The maximal batch memory (bytes).

set_max_client_inference_cpu_time(self: pyhelayers.HeRunRequirements, max_client_inference_cpu_time: int) None#

Sets the requirement for the maximal client inference CPU time. This value includes the sum of the encrypt input CPU time and the decrypt output CPU time.

Parameters:

max_client_inference_cpu_time – The maximal client inference CPU time (micro-seconds).

set_max_client_inference_memory(self: pyhelayers.HeRunRequirements, max_client_inference_memory: int) None#

Sets the requirement for the maximal client inference memory. This value includes the sum of the input memory, the output memory and the context memory.

Parameters:

max_client_inference_memory – The maximal client inference memory (bytes).

set_max_context_memory(self: pyhelayers.HeRunRequirements, max_context_memory: int) None#

Sets the requirement for the maximal context memory.

Parameters:

max_context_memory – The maximal context memory (bytes).

set_max_decrypt_output_cpu_time(self: pyhelayers.HeRunRequirements, max_decrypt_output_cpu_time: int) None#

Sets the requirement for the maximal decrypt output CPU time.

Parameters:

max_decrypt_output_cpu_time – The maximal decrypt output CPU time (micro-seconds).

set_max_encrypt_input_cpu_time(self: pyhelayers.HeRunRequirements, max_encrypt_input_cpu_time: int) None#

Sets the requirement for the maximal encrypt input CPU time.

Parameters:

max_encrypt_input_cpu_time – The maximal encrypt input CPU time (micro-seconds).

set_max_fit_cpu_time(self: pyhelayers.HeRunRequirements, max_fit_cpu_time: int) None#

Sets the requirement for the maximal fit CPU time.

Parameters:

max_fit_cpu_time – The maximal fit CPU time (micro-seconds).

set_max_inference_cpu_time(self: pyhelayers.HeRunRequirements, max_inference_cpu_time: int) None#

Sets the requirement for the maximal inference CPU time. This value includes the sum of the encrypt input CPU time, the predict CPU time and the decrypt output CPU time.

Parameters:

max_inference_cpu_time – The maximal inference CPU time (micro-seconds).

set_max_inference_memory(self: pyhelayers.HeRunRequirements, max_inference_memory: int) None#

Sets the requirement for the maximal inference memory. This value includes the sum of the input memory, the output memory, the context memory and the model memory.

Parameters:

max_inference_memory – The maximal inference memory (bytes).

set_max_init_model_cpu_time(self: pyhelayers.HeRunRequirements, max_init_model_cpu_time: int) None#

Sets the requirement for the maximal init model CPU time.

Parameters:

max_init_model_cpu_time – The maximal init model CPU time (micro-seconds).

set_max_input_memory(self: pyhelayers.HeRunRequirements, max_input_memory: int) None#

Sets the requirement for the maximal input memory.

Parameters:

max_input_memory – The maximal input memory (bytes).

set_max_model_memory(self: pyhelayers.HeRunRequirements, max_model_memory: int) None#

Sets the requirement for the maximal model memory.

Parameters:

max_model_memory – The maximal model memory (bytes).

set_max_output_memory(self: pyhelayers.HeRunRequirements, max_output_memory: int) None#

Sets the requirement for the maximal output memory.

Parameters:

max_output_memory – The maximal output memory (bytes).

set_max_predict_cpu_time(self: pyhelayers.HeRunRequirements, max_predict_cpu_time: int) None#

Sets the requirement for the maximal predict CPU time.

Parameters:

max_predict_cpu_time – The maximal predict CPU time (micro-seconds).
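
The set_max_* requirements can be combined to bound the profile search; a sketch with illustrative values:

    # Bound the overall inference memory (inputs + outputs + context + model) to 4 GB.
    he_run_req.set_max_inference_memory(4 * 1024**3)

    # Bound the server-side predict time to 60 seconds (all times are in micro-seconds).
    he_run_req.set_max_predict_cpu_time(60 * 10**6)

    # Bound the client-side encrypt and decrypt times to 5 seconds each.
    he_run_req.set_max_encrypt_input_cpu_time(5 * 10**6)
    he_run_req.set_max_decrypt_output_cpu_time(5 * 10**6)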

set_model_encrypted(self: pyhelayers.HeRunRequirements, model_encrypted: bool) None#

Sets the requirement for whether or not to encrypt the model’s weights (default: true).

Parameters:

model_encrypted – Whether or not to encrypt the model’s weights.

set_no_fixed_batch_size(self: pyhelayers.HeRunRequirements) None#

Sets the requirement for no fixed batch size (default: fixed batch size). Having an unfixed batch size results in optimizing for throughput over multiple possible batch sizes, and usually ends up with a large batch size. Cannot be called if an explicit batch size has been set.

NOTE: This method cannot be called for a model that has a fit batch size set as a hyperparameter. In such a case, the batch size to optimize for will be the fit batch size.
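
The two batch-size modes are mutually exclusive; a sketch:

    # Either optimize for a specific batch size...
    he_run_req.optimize_for_batch_size(64)

    # ...or, on a separate requirements object, let the optimizer pick the batch size;
    # the two modes cannot be combined on the same object.
    he_run_req_throughput = pyhelayers.HeRunRequirements()
    he_run_req_throughput.set_no_fixed_batch_size()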

set_not_secure(self: pyhelayers.HeRunRequirements) None#

Sets the security level to 0. This results in ciphertexts that can be easily broken. It is sometimes useful for quick experiments, as everything runs faster and consumes less memory.

A warning will be issued when this method is called.

set_optimization_target(self: pyhelayers.HeRunRequirements, optimization: pyhelayers.OptimizationTarget) None#

Sets the requirement for the optimization target (default: SERVER_SIDE_CPU_PER_SAMPLE).

Parameters:

optimization – The optimization target. SERVER_SIDE_CPU_PER_SAMPLE will optimize for low CPU time per sample in server side operations (fit/predict). It is the only supported optimization target for a model in fit mode. CLIENT_SIDE_CPU_PER_SAMPLE will optimize for low CPU time per sample in client side operations (encrypt input, decrypt output). END_TO_END_LATENCY_PER_SAMPLE will optimize for low end-to-end latency per sample, including client-side operations, server-side operations and communication time, and excluding the time it takes to initialize the model and HE context. Requires setting the system specification by calling set_system_spec.

set_security_level(self: pyhelayers.HeRunRequirements, security_level: int) None#

Sets the requirement for the security level (default: 128).

Parameters:

security_level – The security level.

set_simple_generic_packing(self: pyhelayers.HeRunRequirements, light: bool = False, generic_packing_config: pyhelayers.GenericPackingConfig = GenericPackingConfig()) None#

Sets the requirement for using generically-packed inputs for the computation, where the HE context used to generically pack data is the default generic-packing HE context.

Parameters:
  • light – Whether or not the default generic-packing HE context was initialized in light mode (default: false).

  • generic_packing_config – An optional generic packing config. The same config that was used to generically pack the data must be provided.
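
For example, to require generically-packed inputs with the default generic-packing HE context initialized in light mode (passing True positionally for the light argument):

    # Inputs are expected to be generically packed with the default light-mode context.
    he_run_req.set_simple_generic_packing(True)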

set_system_spec(self: pyhelayers.HeRunRequirements, client_parallelization_speedup: float, server_parallelization_speedup: float, client_upload_speed: int, server_upload_speed: int) None#

Sets the system specification, to be used to estimate the expected end-to-end latency when the optimization target is END_TO_END_LATENCY_PER_SAMPLE. Required when optimizing for this target, and not supported when optimizing for any other target.

Parameters:
  • client_parallelization_speedup – The speedup factor for client-side operations achieved by running them in the client-side environment compared to a single-threaded environment. In the absence of a better estimate, consider using the number of threads available on the client side.

  • server_parallelization_speedup – The speedup factor for server-side operations achieved by running them in the server-side environment compared to a single-threaded environment. In the absence of a better estimate, consider using the number of threads available on the server side.

  • client_upload_speed – The client-side upload speed, in bytes/sec.

  • server_upload_speed – The server-side upload speed, in bytes/sec.
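
A sketch of optimizing for end-to-end latency, which requires the system specification. The enum member access follows the names documented above, keyword arguments are assumed to be accepted, and the values are illustrative:

    # Optimize for end-to-end latency per sample rather than server-side CPU time.
    he_run_req.set_optimization_target(
        pyhelayers.OptimizationTarget.END_TO_END_LATENCY_PER_SAMPLE)

    # Describe the system: parallelization speedups and upload speeds (bytes/sec).
    he_run_req.set_system_spec(
        client_parallelization_speedup=8.0,
        server_parallelization_speedup=32.0,
        client_upload_speed=10 * 1024**2,    # ~10 MB/s
        server_upload_speed=100 * 1024**2)   # ~100 MB/s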

set_use_aes_inputs(self: pyhelayers.HeRunRequirements, use_aes_inputs: bool) None#

Sets the requirement for whether or not to use AES inputs for the computation (default: false).

Parameters:

use_aes_inputs – Whether or not to use AES inputs.