TTFunctionEvaluator#
- class TTFunctionEvaluator#
This class is used to evaluate different available functions on CTileTensor objects, such as polynomial evaluation and sigmoid.
- static bootstrap_ahead_of_uncomposed_computation_of_given_depth(x: pyhelayers.CTileTensor, depth: int) None #
Bootstrap ahead of uncomposed computation to avoid multiple bootstraps inside the computation.
- Parameters:
x – CTileTensor. The input for the computation that may be bootstrapped now.
depth – The depth of the expected uncomposed computation.
- compare(self: pyhelayers.TTFunctionEvaluator, a: pyhelayers.CTileTensor, b: pyhelayers.TileTensor, g_rep: int, f_rep: int, max_diff: float) pyhelayers.CTileTensor #
The same as FunctionEvaluator::compare, but works on tile tensors elementwise. The first argument must be a CTileTensor and the second a TileTensor (encrypted or plain). The usual broadcasting rules of binary operations apply.
For more details on the comparison operator, see documentation of FunctionEvaluator::compare
- Parameters:
a (CTileTensor) – First CTileTensor to compare.
b (TileTensor) – Second TileTensor to compare.
g_rep (int) – Controls the accuracy of the result. A higher g_rep value increases accuracy at the expense of slower and deeper computation.
f_rep (int) – Controls the accuracy of the result. A higher f_rep value increases accuracy at the expense of slower and deeper computation.
max_diff – An upper bound on |a-b|.
- type max_diff:
float
- Raises:
ValueError – If “a” and “b” don’t have the same shape.
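A hedged usage sketch for compare follows. The context and encoder setup (DefaultContext, HeConfigRequirement, TTEncoder, TTShape, encode_encrypt, decrypt_decode_double) is assumed to follow the pyhelayers basics tutorials and may need adjusting for your version; only the compare call itself mirrors the signature documented above, with illustrative g_rep/f_rep values.

```python
import numpy as np
import pyhelayers

# Assumed setup, following the pyhelayers basics tutorials (encoder/shape helpers
# and the chosen depth are assumptions; adjust to your pyhelayers version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension over all slots
fe = pyhelayers.TTFunctionEvaluator(he)

a = enc.encode_encrypt(shape, np.array([0.2, 0.9, 0.5]))
b = enc.encode_encrypt(shape, np.array([0.7, 0.1, 0.5]))

# |a-b| <= 1 here, so max_diff=1.0. Expect roughly [0, 1, 0.5].
res = fe.compare(a, b, 3, 2, 1.0)  # g_rep=3, f_rep=2 are illustrative accuracy settings
print(enc.decrypt_decode_double(res))
```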
- compute_Lagrange_Basis(self: pyhelayers.TTFunctionEvaluator, res: pyhelayers.CTileTensor, src: pyhelayers.CTileTensor, data: std::vector<double, std::allocator<double> >, index: int) None #
Computes the Lagrange basis polynomial over a set of points, at a specific point.
- Parameters:
res – Output CTileTensor.
src – Point to compute the interpolation at.
data – List of the points to interpolate over (the basis is 0 at all the points except for one).
index – The index at which the interpolation should be 1.
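For reference (an editorial restatement of the standard Lagrange basis the description refers to, not additional API behavior): with data = [x_0, ..., x_{m-1}], the value stored in res is, elementwise over src,
res = prod over j != index of (src - x_j) / (x_index - x_j),
which equals 1 when src = x_index and 0 when src equals any other point in data.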
- gelu_by_sigmoid(self: pyhelayers.TTFunctionEvaluator, x: pyhelayers.CTileTensor, scale: float, composition: str) None #
Calculates (an approximation of) GELU(src) in place.
- Parameters:
x – CTileTensor. The values to calculate GELU on. The output will be stored in place
scale – double. Scale to divide x by in order to compute sigmoid from sign.
composition – string. The composed functions that form the sign function.
- goldschmidt_inverse_sqrt(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, iterations: int, max_abs_val: float = 1, input_scale: float = 1, reuired_output_scale: float = 1) None #
Calculates (an approximation of) required_output_scale/sqrt(src/input_scale), elementwise and in place. The values of “src” must be positive values not larger than max_abs_val.
- Parameters:
src – CTileTensor. The values to calculate their inverse. The output will be stored in place.
iterations – Controls the accuracy of the result.
max_abs_val – double. A positive value specifying the maximum absolute value of src/input_scale.
input_scale – double. The scale of the input “src”.
reuired_output_scale – double. The output of the function is the actual 1/sqrt result multiplied by this “required_output_scale” value.
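A hedged sketch of goldschmidt_inverse_sqrt with the default scales; the setup helpers are assumed as in the compare sketch above, and the iteration count is illustrative.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

src = enc.encode_encrypt(shape, np.array([0.25, 0.5, 1.0]))  # positive, <= max_abs_val=1
fe.goldschmidt_inverse_sqrt(src, 8)  # 8 iterations; default scales and max_abs_val=1
print(enc.decrypt_decode_double(src))  # roughly [2.0, 1.41, 1.0]
```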
- inverse_positive(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, lower_bound: float, upper_bound: float, bit_resolution: int = 5) pyhelayers.CTileTensor #
The same as FunctionEvaluator::inversePositive, but works on a CTileTensor object as source, CTile-wise. For more details, see documentation of FunctionEvaluator::inversePositive.
- Parameters:
src (CTileTensor) – CTileTensor to calculate its inverse.
lower_bound (double) – a lower bound on the tiles of src. The tighter this bound is, the more accurate the result will be. This lower bound must be non-negative.
upper_bound (double) – An upper bound on the tiles of src. The tighter this bound is, the more accurate the result will be.
bit_resolution (int) – Controls the accuracy of the result. A higher bit_resolution value will increase the accuracy of the result at the expense of consuming more multiplication depth. Defaults to 5.
- Return type:
CTileTensor
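A hedged sketch of inverse_positive; the setup helpers are assumed as in the compare sketch above, and the bounds/bit_resolution values are illustrative.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

src = enc.encode_encrypt(shape, np.array([0.5, 2.0, 4.0]))
# All values lie in [0.5, 4.0]; tighter bounds yield a more accurate approximation.
inv = fe.inverse_positive(src, 0.5, 4.0, 7)  # bit_resolution=7 (illustrative)
print(enc.decrypt_decode_double(inv))  # roughly [2.0, 0.5, 0.25]
```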
- multiply_many(self: pyhelayers.TTFunctionEvaluator, output: pyhelayers.CTileTensor, ctts: std::vector<helayers::CTileTensor, std::allocator<helayers::CTileTensor> >) None #
Multiplies all CTileTensors in “ctts” and saves the resulting product in “output”. The multiplications are performed in a way that aims to minimize the number of bootstrapping operations and maximize the chain index of the result.
- Parameters:
output – The resulting product will be stored here.
ctts – The CTileTensors to multiply.
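A hedged sketch of multiply_many; the setup helpers are assumed as in the compare sketch above, and the empty-CTileTensor constructor used for the output is an assumption.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

ctts = [enc.encode_encrypt(shape, np.array([2.0, 3.0])),
        enc.encode_encrypt(shape, np.array([0.5, 2.0])),
        enc.encode_encrypt(shape, np.array([4.0, 1.0]))]
out = pyhelayers.CTileTensor(he)  # assumed: an empty CTileTensor bound to the context
fe.multiply_many(out, ctts)
print(enc.decrypt_decode_double(out))  # roughly [4.0, 6.0]
```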
- partial_sums_indicators_get_layer(self: pyhelayers.TTFunctionEvaluator, res: pyhelayers.CTileTensor, layer_one: pyhelayers.CTileTensor, prev_layer: pyhelayers.CTileTensor, len: int) None #
Given a one dimensional interleaved CTileTensor indicators of shape [1~ / slotCount, n] that contains binary values (either 0 or 1) and an integer len>=2, calculates a CTileTensor of shape [n~ / slotCount, n] s.t. res[i,j]=1 iff indicators[i:i+len] contains exactly len 1’s, and 0 otherwise. To run this function with len=n you must first run it with len=n-1 and pass the result of that previous run (prev_layer) to this function. You also need to pass the result of partial_sums_indicators_get_layer_one, which returns the result for len=1.
- param res:
an empty CTileTensor
- param layer_one:
the result of partialSumsIndicatorsGetLayerOne
- param prev_layer:
the result of running this function with len-1
- param len:
number of elements to include in the partial sum calculation. len >= 2
- partial_sums_indicators_get_layer_one(self: pyhelayers.TTFunctionEvaluator, res: pyhelayers.CTileTensor, indicators: pyhelayers.CTileTensor) None #
Given a one dimensional interleaved CTileTensor indicators of shape [1~ / slotCount, n] that contains binary values (either 0 or 1), calculates a CTileTensor of shape [n~ / slotCount, n] s.t. res[i,j]=1 iff indicators[i:i+1] contains exactly one 1, and 0 otherwise.
- param res:
an empty CTileTensor
- param indicators:
one dimensional interleaved CTileTensor of shape[1~ / slotCount, n] that contains binary values (either 0 or 1)
- partial_sums_indicators_get_layer_zero(self: pyhelayers.TTFunctionEvaluator, res: pyhelayers.CTileTensor, indicators: pyhelayers.CTileTensor) None #
Given a one dimensional interleaved CTileTensor indicators of shape [1~ / slotCount, n] that contains binary values (either 0 or 1), calculates a CTileTensor of shape [n~ / slotCount, n] s.t. res[i,j]=1 iff indicators[i:i] contains 1, and 0 otherwise.
- param res:
an empty CTileTensor
- param indicators:
one dimensional interleaved CTileTensor of shape[1~ / slotCount, n] that contains binary values (either 0 or 1)
- poly_eval_in_place(*args, **kwargs)#
Overloaded function.
poly_eval_in_place(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, coefs: numpy.ndarray[numpy.float64], type: pyhelayers.EvalType = <EvalType.MIN_DEPTH: 2>) -> None
Polynomial evaluation, in place. Evaluates the given plain polynomial on input “src”, and stores the result in “src”.
- param src:
The input of the polynomial. This will contain the result of the evaluation at the end of the function execution.
- type src:
CTileTensor
- param coefs:
The coefficients of the polynomial. coefs[0] is the free coefficient.
- type coefs:
numpy array of floats
- param type:
The type of the evaluation algorithm. See also the documentation of “EvalType”. Defaults to MIN_DEPTH.
- type type:
EvalType
poly_eval_in_place(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, coefs: pyhelayers.CTileVector, normalized: bool = False) -> None
Polynomial evaluation, in place. Receives a polynomial with encrypted coefficients, evaluates it on the input “src” and stores the result in “src”.
- param src:
The input of the polynomial. This will contain the result of the evaluation at the end of the function execution.
- type src:
CTileTensor
- param coefs:
The (encrypted) coefficients of the polynomial. coefs[0] is the free coefficient.
- type coefs:
CTileVector
- param normalized:
If False, the polynomial to evaluate will be composed of the coefficients in the “coefs” vector only. Otherwise, an extra term, whose coefficient is 1 and whose power is len(coefs), will be added. Defaults to False.
- type normalized:
bool
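A hedged sketch of the first poly_eval_in_place overload (plain coefficients); the setup helpers are assumed as in the compare sketch above.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

src = enc.encode_encrypt(shape, np.array([0.0, 1.0, 2.0]))
# Evaluate p(x) = 1 + 2x + 3x^2 in place; coefs[0] is the free coefficient.
fe.poly_eval_in_place(src, np.array([1.0, 2.0, 3.0]), pyhelayers.EvalType.MIN_DEPTH)
print(enc.decrypt_decode_double(src))  # roughly [1.0, 6.0, 17.0]
```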
- pow_in_place(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, degree: int) None #
Computes src^degree, in place.
- Parameters:
src (CTileTensor) – A CTileTensor to calculate its power.
degree (int) – The required exponent.
- power_norm(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, dim: int, power: int, num_of_iterations: int, epsilon: float, max_denominator: float) None #
Computes the power norm over a given vector, where every value Vi is replaced in place with Vi^p/sum_j(Vj^p).
- param src:
The CTileTensor over which the power norm is to be computed in place.
- param dim:
The dimension along which to normalize.
- param power:
The power to which all the values are raised as part of the power norm.
- param num_of_iterations:
Higher values will increase the accuracy of the result at the expense of consuming more multiplication depth.
- param epsilon:
This small positive value will be added to the positive denominator prior to division, in order to ensure a non-zero denominator and to distance the denominator from the area of 0 which is hard to approximate.
- param max_denominator:
Maximal expected value of the sum of powers serving as the denominator. The minimal expected value of the power sum is 0.
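A hedged sketch of power_norm; the setup helpers are assumed as in the compare sketch above, and the iteration count, epsilon and max_denominator values are illustrative.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

src = enc.encode_encrypt(shape, np.array([1.0, 2.0, 3.0]))
# Replace each Vi with Vi^2 / sum_j(Vj^2); the true sum of squares is 14, so
# max_denominator=16 is a safe bound and epsilon keeps the denominator away from 0.
fe.power_norm(src, 0, 2, 8, 0.01, 16.0)  # dim=0, power=2, num_of_iterations=8
print(enc.decrypt_decode_double(src))  # roughly [0.07, 0.29, 0.64]
```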
- relu(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, min_val: float, max_val: float) None #
Calculates (an approximation of) ReLU(src) in place assuming that src is in [min_val,max_val].
- Parameters:
src – CTileTensor. The values to calculate ReLU on. The output will be stored in place.
min_val – double. Minimal expected value in src
max_val – double. Maximal expected value in src
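A hedged sketch of relu; the setup helpers are assumed as in the compare sketch above.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

src = enc.encode_encrypt(shape, np.array([-3.0, -0.5, 0.0, 2.5]))
fe.relu(src, -8.0, 8.0)  # all inputs are assumed to lie in [min_val, max_val]
print(enc.decrypt_decode_double(src))  # roughly [0.0, 0.0, 0.0, 2.5]
```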
- reshape_vector_horizontal_to_vertical(self: pyhelayers.TTFunctionEvaluator, res: pyhelayers.CTileTensor, src: pyhelayers.CTileTensor) None #
Given a one dimensional (possibly interleaved) CTileTensor src of shape [n/t, 1] (or [n~/t, 1]), return a CTileTensor res of shape [1/t, n] (or [1~/t, n]) s.t. res[0,i] = src[i,0].
- Parameters:
res – CTileTensor result.
src – CTileTensor source.
- sigmoid_by_sign_scaled(self: pyhelayers.TTFunctionEvaluator, x: pyhelayers.CTileTensor, scale: float, composition: str) None #
An approximation of sigmoid via an approximation of sign and supporting a pre-scaling of the input.
- Parameters:
x – CTileTensor. Cipher to calculate its sigmoid. This will contain the sigmoid result at the end of the function execution.
scale – double. x will be divided by scale before computing its sign.
composition – string. The composition of functions that form the custom op.
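A hedged sketch of sigmoid_by_sign_scaled; the setup helpers are assumed as in the compare sketch above. The scale value 8.0 is illustrative only; the C++ API defaults to FunctionEvaluator::SIGMOID_FROM_SIGN_SCALE and to the composition "g3_g1_f3".

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

x = enc.encode_encrypt(shape, np.array([-6.0, 0.0, 6.0]))
# "g3_g1_f3" means the composition f3(g1(g3(x))); scale=8.0 is illustrative.
fe.sigmoid_by_sign_scaled(x, 8.0, "g3_g1_f3")
print(enc.decrypt_decode_double(x))  # roughly [0.0, 0.5, 1.0]
```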
- sigmoid_in_place(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, type: pyhelayers.SigmoidType) None #
An approximation of sigmoid, in place.
- Parameters:
src (CTileTensor) – A CTileTensor to calculate its sigmoid.
type (SigmoidType) – Specifies the degree of the approximating polynomial. See also FunctionEvaluator::SigmoidType documentation.
- sign_by_giant_baby_composition_in_place(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, composition: str, binary_res: bool) None #
Computes the sign of “src” using the specified composition of “giant step” and “baby step” functions.
- Parameters:
src – The CTileTensor to compute its sign.
composition – The composition of functions that form the sign.
binary_res – If true, the result will be close to 0 when src < 0 and close to 1 when src > 0. If false, the result will be close to -1 when src < 0 and close to 1 when src > 0. Defaults to false.
- sign_in_place(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, g_rep: int, f_rep: int, max_abs_val: float = 1, binary_res: bool = False) None #
Computes the (approximated) sign of src, in place. The sign is computed as g(g(…g(f(f(…f(x)))))), where g(x) and f(x) are two degree 7 polynomials. The number of times g(x) and f(x) appear in the composition is controlled by the g_rep and f_rep arguments, respectively. All values in src must be in the range [-max_abs_val, max_abs_val].
- Parameters:
src (CTileTensor) – A CTileTensor to calculate its sign. All of its values must be in the range [-max_abs_val, max_abs_val].
g_rep (int) – How many repetitions of g(x).
f_rep (int) – How many repetitions of f(x).
max_abs_val (float) – An upper bound on the absolute value of src. Defaults to 1.
binary_res (bool) – If True, the result will be close to 0 when src < 0 and close to 1 when src > 0. If False, the result will be close to -1 when src < 0 and close to 1 when src > 0. Defaults to False.
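A hedged sketch of sign_in_place; the setup helpers are assumed as in the compare sketch above, and the g_rep/f_rep values are illustrative.

```python
import numpy as np
import pyhelayers

# Assumed setup as in the pyhelayers basics tutorials (adjust to your version).
he = pyhelayers.DefaultContext()
he.init(pyhelayers.HeConfigRequirement(num_slots=8192, multiplication_depth=20,
                                       fractional_part_precision=40, integer_part_precision=10))
enc = pyhelayers.TTEncoder(he)
shape = pyhelayers.TTShape([8192])  # assumed packing: one flat dimension
fe = pyhelayers.TTFunctionEvaluator(he)

src = enc.encode_encrypt(shape, np.array([-0.4, 0.2, 0.9]))  # values in [-1, 1]
fe.sign_in_place(src, 3, 2)  # g_rep=3, f_rep=2; defaults max_abs_val=1, binary_res=False
print(enc.decrypt_decode_double(src))  # roughly [-1.0, 1.0, 1.0]
```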
- sort_along_dim(self: pyhelayers.TTFunctionEvaluator, res: pyhelayers.CTileTensor, sort_network: helayers::SortNetwork, dim: int, g_rep: int, f_rep: int, max_possible_abs_of_diff: float = 1.0) None #
Sorts the given CTileTensor along the specified dimension in decreasing order.
- Parameters:
res – The CTileTensor to sort. The result will be stored in place.
sort_network – An object specifying the network to sort with. This network consists of a set of binary comparison gates.
dim – The dimension to sort along.
g_rep – Controls the accuracy of the comparisons. Higher g_rep increases accuracy at the expense of higher computation depth.
f_rep – Controls the accuracy of the comparisons. Higher f_rep increases accuracy at the expense of higher computation depth.
max_possible_abs_of_diff – The maximum absolute difference between any two values in “res” that have the same index in “dim”.
- sqrt(self: pyhelayers.TTFunctionEvaluator, src: pyhelayers.CTileTensor, bit_resolution: int) None #
The same as FunctionEvaluator::sqrt, but works on a CTileTensor object as source, CTile-wise. For more details, see documentation of FunctionEvaluator::sqrt.
- param src:
CTileTensor
- param bit_resolution:
int.
-
class TTFunctionEvaluator#
This class is used to evaluate different available functions on CTileTensor objects, such as polynomial evaluation and sigmoid.
Public Functions
-
TTFunctionEvaluator(const HeContext &he)#
Construct a new TT function evaluator object.
- Parameters:
he – the context
-
~TTFunctionEvaluator()#
Destroy the TT function evaluator object.
-
TTFunctionEvaluator(const TTFunctionEvaluator &src) = delete#
Deleted copy constructor.
- Parameters:
src – Source object
-
TTFunctionEvaluator &operator=(const TTFunctionEvaluator &src) = delete#
Deleted operator=.
- Parameters:
src – Source object
-
void polyEvalInPlace(CTileTensor &src, const std::vector<double> &coefs, EvalType type = DEFAULT) const#
The same as FunctionEvaluator::polyEvalInPlace, but works on a CTileTensor object as source, CTile-wise.
For more details, see documentation of FunctionEvaluator::polyEvalInPlace.
-
void polyEvalInPlace(CTileTensor &src, const std::vector<CTile> &coefs, bool normalized = false) const#
The same as FunctionEvaluator::polyEvalInPlace, but works on a CTileTensor object as source, CTile-wise.
For more details, see documentation of FunctionEvaluator::polyEvalInPlace.
-
void powInPlace(CTileTensor &src, int degree) const#
The same as FunctionEvaluator::powInPlace, but works on a CTileTensor object as source, CTile-wise.
For more details, see documentation of FunctionEvaluator::powInPlace.
-
void sigmoidInPlace(CTileTensor &src, SigmoidType type) const#
A polynomial approximation of sigmoid, in place.
- Parameters:
src – Ciphertext to calculate its sigmoid. The result will be stored here.
type – Specifies the degree of the approximating polynomial. See also SigmoidType documentation.
-
void signInPlace(CTileTensor &src, int gRep, int fRep, double maxAbsVal = 1, bool binaryRes = false) const#
The same as FunctionEvaluator::signInPlace, but works on a CTileTensor object as source, CTile-wise.
For more details, see documentation of FunctionEvaluator::signInPlace.
-
void signByGiantBabyCompositionInPlace(CTileTensor &src, const std::string &composition, bool binaryRes = false) const#
Computes the sign of “src” using the specified composition of “giant step” and “baby step” functions.
Giant step functions bring any input in the range [-1, 1] to become close to the sign output (e.g. -1 or 1), while baby step functions bring values that are already close to one of the sign outputs (e.g. -1 or 1) to become even closer. The specific giant step and baby step functions that compose the sign should be specified in the “composition” argument. Currently we support the giant steps “g1” and “g3”, of degrees 3 and 7 respectively, and the baby step “f3” of degree 3. The “composition” argument should contain the composed functions in the order of invocation, separated by “_”. So for example, the composition “g3_g1_f3” defines the composition f3(g1(g3(x))). It is recommended to start the composition with giant steps and then finish it with baby steps.
- Parameters:
src – The CTileTensor to compute its sign.
composition – The composition of functions that form the sign.
binaryRes – If true, the result will be close to 0 when src < 0 and close to 1 when src > 0. If false, the result will be close to -1 when src < 0 and close to 1 when src > 0. Defaults to false.
-
CTileTensor compare(const CTileTensor &a, const TileTensor &b, int gRep, int fRep, double maxPossibleAbsOfDiff) const#
Comparison of “a” and “b”, elementwise.
This method compares a CTileTensor a and a TileTensor b, and returns a CTileTensor having the same shape as a. The returned CTileTensor will approximately contain 1 in places where a>b, 0.5 in places where a=b and 0 in places where a<b.
- Parameters:
a – First CTileTensor to compare.
b – Second TileTensor to compare.
gRep – Controls the accuracy of the result. Increasing gRep will increase accuracy at the expense of higher multiplication depth.
fRep – Controls the accuracy of the result. Increasing fRep will increase accuracy at the expense of higher multiplication depth.
maxPossibleAbsOfDiff – An upper bound on |a-b|.
-
void inverseWithoutScaling(CTileTensor &src, int bitResolution = 5) const#
Calculates (an approximation of) 1/src.
src must be in the range (-sqrt(2), sqrt(2)), and the approximation becomes less accurate near the boundaries.
- Parameters:
src – ciphertext to calculate its inverse.
bitResolution – Controls the accuracy of the result. A higher bitResolution value will increase the accuracy of the result at the expense of consuming more multiplication depth.
-
void inverse(CTileTensor &src, double absUpperBound, int bitResolution = 5) const#
Calculates (an approximation of) 1/src.
The absolute value of src must be smaller than absUpperBound.
- Parameters:
src – ciphertext to calculate its inverse.
absUpperBound – An upper bound on the absolute value of src. The tighter this bound is, the more accurate the result will be. This bound must be positive.
bitResolution – Controls the accuracy of the result. A higher bitResolution value will increase the accuracy of the result at the expense of consuming more multiplication depth.
- Throws:
invalid_argument – If absUpperBound is not positive.
-
void inversePositive(CTileTensor &src, double lowerBound, double upperBound, int bitResolution = 5) const#
The same as FunctionEvaluator::inversePositive, but works on a CTileTensor object as source, CTile-wise.
For more details, see documentation of FunctionEvaluator::inversePositive.
-
void goldschmidtInverseSqrt(CTileTensor &src, DimInt iterations, double maxAbsVal = 1, double inputScale = 1, double reuiredOutputScale = 1) const#
Calculates (an approximation of) requiredOutputScale/sqrt(src/inputScale), elementwise and in place.
The values of “src” must be positive values not larger than maxAbsVal.
- Parameters:
src – The values to calculate their inverse. The output will be stored in place.
iterations – Controls the accuracy of the result. A higher number of iterations increases the accuracy at the expense of slower and deeper computations.
maxAbsVal – A positive value specifying the maximum absolute value of src/inputScale.
inputScale – The scale of the input “src”. The actual values represented by “src” are the encrypted values divided by this scale.
requiredOutputScale – The output of the function is the actual 1/sqrt result multiplied by this “requiredOutputScale” value.
- Throws:
invalid_argument – If “maxAbsVal” is not positive.
-
void relu(CTileTensor &src, double minVal = -8.0, double maxVal = 8.0) const#
Calculates (an approximation of) ReLU(src) in place assuming that src is in [minVal,maxVal].
- Parameters:
src – The values to calculate ReLU on. The output will be stored in place.
minVal – minimal expected value in src
maxVal – maximal expected value in src
-
void sigmoidBySignScaled(CTileTensor &x, double scale = FunctionEvaluator::SIGMOID_FROM_SIGN_SCALE, const std::string &composition = "g3_g1_f3") const#
An approximation of sigmoid via an approximation of sign and supporting a pre-scaling of the input.
- Parameters:
x – Cipher to calculate its sigmoid. This will contain the sigmoid result at the end of the function execution.
scale – x will be divided by scale before computing its sign. This is because sigmoid(x) ~= 0.5*sign(x/scale) + 0.5, where the scale can be optimized to produce the best approximation for different input ranges. The default SIGMOID_FROM_SIGN_SCALE is optimized to give the best mean accuracy of Sigmoid(x) from Sign(x) for x in [-30,30].
composition – The composition of functions that form the custom op - in order of invocation. Currently only the functions g1, g3, f3 are supported. For example “g3_g1_f3” defines the composition f3(g1(g3(x))).
-
void geluBySigmoid(CTileTensor &x, double scale = FunctionEvaluator::SIGMOID_FROM_SIGN_SCALE, const std::string &composition = "g3_g1_f3") const#
Calculates (an approximation of) GELU(src) in place.
- Parameters:
x – The values to calculate GELU on. The output will be stored in place
scale – scale to divide x by in order to compute sigmoid from sign. See documentation of sigmoidBySignScaled for further details.
composition – The composed functions that form the sign function, in order of invocation. The default is optimized to give the best approximation for x in [-30,30]. For example, “g3_g1_f3” defines the composition f3(g1(g3(x))).
-
void sqrt(CTileTensor &src, int bitResolution) const#
The same as FunctionEvaluator::sqrt, but works on a CTileTensor object as source, CTile-wise.
For more details, see documentation of FunctionEvaluator::sqrt.
-
void reshapeVectorHorizontalToVertical(CTileTensor &res, const CTileTensor &src) const#
Given a one dimensional (possibly interleaved) CTileTensor src of shape [n/t, 1], return a CTileTensor res of shape [1/t, n] s.t.
res[0,i] = src[i,0].
- Parameters:
res – an empty CTileTensor
src – one dimensional binary CTileTensor.
-
void partialSumsIndicatorsGetLayerZero(CTileTensor &res, const CTileTensor &indicators) const#
see partialSumsIndicatorsGetLayer
- Parameters:
res – an empty CTileTensor
indicators – one dimensional interleaved CTileTensor of shape [1~ / slotCount, n] that contains binary values (either 0 or 1)
-
void partialSumsIndicatorsGetLayerOne(CTileTensor &res, const CTileTensor &indicators) const#
see partialSumsIndicatorsGetLayer
- Parameters:
res – an empty CTileTensor
indicators – one dimensional interleaved CTileTensor of shape [1~ / slotCount, n] that contains binary values (either 0 or 1)
-
void partialSumsIndicatorsGetLayer(CTileTensor &res, const CTileTensor &layerOne, const CTileTensor &prevLayer, int len) const#
Given a one dimensional interleaved CTileTensor indicators of shape [1~ / slotCount, n] that contains binary values (either 0 or 1) and an integer len>=2, calculates a CTileTensor of shape [n~ / slotCount, n] s.t.
res[i,j]=1 iff indicators[i:i+len] contains exactly len 1’s, and 0 otherwise. To run this function with len=n you must run it before with len=n-1, and transfer to this function the result of the previous run (denoted by prevLayer). Furthermore, you need to transfer the result of partialSumsIndicatorsGetLayerOne which returns the result for len=1.
- Parameters:
res – an empty CTileTensor
layerOne – the result of partialSumsIndicatorsGetLayerOne
prevLayer – the result of running this function with len-1
len – number of elements to include in the partial sum calculation. len >= 2
-
void computeLagrangeBasis(CTileTensor &res, const CTileTensor &src, const std::vector<double> &data, const int index) const#
Computes Lagrange basis polynomial over a set of points, in a specific point.
- Parameters:
res – Output CTileTensor
src – point to compute the interpolation at
data – List of the points to interpolate over (the basis is 0 at all the points except for one).
index – the index in which the interpolation should be 1
-
void sortAlongDim(CTileTensor &res, const SortNetwork &sortNetwork, DimInt dim, int gRep, int fRep, double maxPossibleAbsOfDiff = 1.0) const#
Sorts the given CTileTensor along the specified dimension in decreasing order.
- Parameters:
res – The CTileTensor to sort. The result will be stored in place.
sortNetwork – An object specifying the network to sort with. This network consists of a set of binary comparison gates.
dim – The dimension to sort along.
gRep – Controls the accuracy of the comparisons. Higher gRep increases accuracy at the expense of higher computation depth.
fRep – Controls the accuracy of the comparisons. Higher fRep increases accuracy at the expense of higher computation depth.
maxPossibleAbsOfDiff – The maximum absolute difference between any two values in “res” that have the same index in “dim”.
-
void multiplyMany(CTileTensor &output, const std::vector<CTileTensor> &ctts) const#
Multiplies all CTileTensors in “ctts” and saves the resulting product in “output”.
The multiplications are performed in a way that aims to minimize the number of bootstrapping operations and maximize the chain index of the result.
- Parameters:
output – The resulting product will be stored here.
ctts – The CTileTensors to multiply.
-
void powerNorm(CTileTensor &src, DimInt dim, int power, int numOfIterations, double epsilon, double maxDenominator) const#
Computes the power norm over a given vector, where every value Vi is replaced in place with Vi^p/sum_j(Vj^p).
- Parameters:
src – The CTileTensor over which the power norm is to be computed in place.
dim – The dimension along which to normalize.
power – the power to which all the values are raised as part of the power norm.
numOfIterations – Higher values will increase the accuracy of the result at the expense of consuming more multiplication depth.
epsilon – This small positive value will be added to the positive denominator prior to division, in order to ensure a non-zero denominator and to distance the denominator from the area of 0 which is hard to approximate.
maxDenominator – Maximal expected value of the sum of powers serving as the denominator. The minimal expected value of the power sum is 0.
Public Static Functions
-
static void bootstrapAheadOfUncomposedComputationOfGivenDepth(CTileTensor &x, int depth)#
Bootstrap ahead of uncomposed computation to avoid multiple bootstraps inside the computation.
- Parameters:
x – The input for the computation that may be bootstrapped now.
depth – The depth of the expected uncomposed computation.
-
static std::vector<TTFunctionEvaluator::ComponentFunction> getCompositionList(const std::string &composition)#
Compiles the input composition string into a vector of component functions that form the custom op.
- Parameters:
composition – The composition of functions that form the custom op - in order of invocation. Currently only the functions g1, g3, f3 are supported. For example “g3_g1_f3” defines the composition f3(g1(g3(x))).
- Returns:
a vector of component functions
-
struct ComponentFunction#