Tile tensors#

The tile tensor is a simple and powerful tool for handling tensors under encryption. It takes care of the packing details (how a tensor is flattened and how its data is laid out within ciphertexts) while offering the usual API of a tensor, with operations like add, multiply, sum-over-dim, etc.

The packing method is controlled by the tile tensor shape, a simple notation that defines both the shape of the tensor and how it is packed.

The options expressible with tile tensor shapes cover many packing schemes known in the literature (see examples below), unify them in a generic framework, generalize them, and add novel possibilities.

Tile tensor shapes#

A tile tensor is a tensor divided into tiles, where each tile contains a part of the original tensor and is stored flattened in a ciphertext.

The tile tensor shape defines the shape of the tensor, and how it is divided into tiles. For example, the tile tensor shape [\frac{5}{2},\frac{6}{4}] specifies the following:

  • The numbers above the fraction line indicate the shape of the tensor, [5,6]. This means the tile tensor contains a 5 by 6 matrix.

  • The numbers below the line indicate the shape of the tiles, [2,4]. This means we divided our matrix into contiguous blocks of shape [2,4] and stored each such block in a separate ciphertext. This also implies each ciphertext has 8 slots.
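The packing described above can be sketched in plaintext with NumPy. This is a hypothetical illustration, not the library's implementation: a 5 by 6 matrix with tile tensor shape [5/2, 6/4] is padded, split into contiguous 2 by 4 blocks, and each block is flattened into the 8 slots one ciphertext would hold.

```python
import numpy as np

# Plaintext sketch of packing a 5x6 matrix with tile tensor shape [5/2, 6/4].
X = np.arange(30).reshape(5, 6)
t0, t1 = 2, 4                      # tile shape; t0 * t1 = 8 slots per ciphertext

# Pad each dimension up to a multiple of the tile size (unused slots hold zeros).
pad0 = -X.shape[0] % t0            # 1 extra row
pad1 = -X.shape[1] % t1            # 2 extra columns
Xp = np.pad(X, ((0, pad0), (0, pad1)))

# Split into a 3x2 grid of tiles and flatten each tile into its slot vector.
tiles = [Xp[i:i + t0, j:j + t1].flatten()
         for i in range(0, Xp.shape[0], t0)
         for j in range(0, Xp.shape[1], t1)]

print(len(tiles), tiles[0].size)   # 6 tiles, 8 slots each
```

Each entry of `tiles` stands in for one ciphertext; under encryption the padding slots would simply carry zeros.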

The notation supports further options, including duplication (`*`), unknown values (`?`), and interleaved packing (`~`). Try the following tile tensor shapes:
[5/2,6/4]
[5/2,1/4]
[5/2,*/4]
[5/2,1?/4]
[5/2,6?/4]
[5/2,6~/4]
[5/2,6~?/4]


Logical representation#

A tile tensor T contains some tensor X, packed and divided among several tiles or ciphertexts. T logically represents X, meaning you can work with T as if it were X, ignoring the packing details.

For example: executing T_2 = T.sum_over_dim(0) will return a tile tensor T_2 that logically represents a tensor equal to X.sum_over_dim(0).

To simplify working with tile tensors, they provide the same set of operators ordinarily supported by tensors: elementwise operators, matrix-multiplication, sum_over_dim, convolution, rotate (roll), slice, etc.

Here too, the way these operators are implemented covers and generalizes many techniques known in the literature. See the examples below.
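A minimal plaintext sketch can show why an operator like sum-over-dim works tile-locally. Assuming NumPy and a tile tensor shape of [4/2, 6/4] (names and shapes here are illustrative, not the library's API), summing over dimension 0 needs only SIMD adds between tiles plus a fold inside each tile, never the assembled tensor:

```python
import numpy as np

# Plaintext sketch: sum over dim 0 of a tensor packed with shape [4/2, 6/4].
X = np.arange(24).reshape(4, 6)
t0, t1 = 2, 4
Xp = np.pad(X, ((0, 0), (0, 2)))              # pad columns to a multiple of 4

# Tile grid: grid[i][j] holds block (i, j) as a 2x4 tile.
grid = [[Xp[i:i + t0, j:j + t1] for j in range(0, 8, t1)]
        for i in range(0, 4, t0)]

# Step 1: elementwise-add tiles down the first grid axis (a SIMD add per slot).
col_sums = [grid[0][j] + grid[1][j] for j in range(2)]

# Step 2: fold the t0 rows inside each tile (done with rotations under encryption).
result = [t.sum(axis=0) for t in col_sums]    # logically X.sum(axis=0)

assert np.array_equal(np.concatenate(result)[:6], X.sum(axis=0))
```

The resulting tiles logically represent X.sum(axis=0), with the trailing padded slots holding zeros.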

Examples from the literature#

To implement a matrix-vector multiplication of a matrix M with a vector V as in [1], simply do the following:

  • Pack the matrix M of shape [a,b] in a tile tensor T_m with shape [\frac{a}{t_0},\frac{b}{t_1}]

  • Pack the vector V of shape [b] in a tile tensor T_v with shape [\frac{*}{t_0},\frac{b}{t_1}]

  • Compute the elementwise product T_m * T_v then sum over the second dimension.

Here, t_0 and t_1 can take any values such that t_0 t_1 equals the number of slots in a ciphertext. They can be tuned to optimize various goals, e.g., latency, throughput, memory, or network bandwidth.
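The three steps above can be sketched in plaintext with NumPy. This shows only the logical semantics of the recipe (duplication of V via broadcasting, then an elementwise product and a sum over the second dimension); the actual division into t_0 by t_1 tiles is transparent at this level:

```python
import numpy as np

# Plaintext sketch of the matrix-vector recipe: M packed as [a/t0, b/t1],
# V duplicated along dim 0 as [*/t0, b/t1].
a, b = 4, 8
M = np.arange(a * b, dtype=float).reshape(a, b)
V = np.arange(b, dtype=float)

# Duplication (*) replicates V across the first dimension to fill each tile.
V_dup = np.broadcast_to(V, (a, b))

prod = M * V_dup                      # elementwise product, computed tile by tile
result = prod.sum(axis=1)             # sum over the second dimension

assert np.allclose(result, M @ V)
```

The result logically represents M @ V, packed with shape [a/t_0, 1?/t_1] (one useful column, the rest unknown values left over from the rotations).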

A generalization for matrix-matrix multiplication as in [2] is as follows.

  • Pack matrix M_1 of shape [a,b] in a tile tensor T_{M_1} with shape [\frac{a}{t_0},\frac{b}{t_1},\frac{*}{t_2}]

  • Pack matrix M_2 of shape [b,c] in a tile tensor T_{M_2} with shape [\frac{*}{t_0},\frac{b}{t_1},\frac{c}{t_2}]

  • Compute the elementwise product T_{M_1} * T_{M_2} then sum over the second dimension.
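The same kind of plaintext sketch works for the matrix-matrix recipe. Assuming NumPy, the two duplicated dimensions become broadcasts along a new third and first axis, and the sum over the shared middle dimension yields the product:

```python
import numpy as np

# Plaintext sketch of the matrix-matrix recipe: M1 packed as [a, b, *],
# M2 packed as [*, b, c]; duplication (*) fills the missing dimension.
a, b, c = 3, 4, 5
M1 = np.arange(a * b, dtype=float).reshape(a, b)
M2 = np.arange(b * c, dtype=float).reshape(b, c)

T1 = np.broadcast_to(M1[:, :, None], (a, b, c))   # duplicate along dim 2
T2 = np.broadcast_to(M2[None, :, :], (a, b, c))   # duplicate along dim 0

result = (T1 * T2).sum(axis=1)        # sum over the second dimension

assert np.allclose(result, M1 @ M2)
```

Each slot of the result accumulates sum_j M1[i,j] * M2[j,k], which is exactly the (i,k) entry of M1 @ M2.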

The authors of HyPHEN [3] describe several methods for convolutional networks that can also be written using the tile tensor notation. For example, the CAConv with SISO method is implemented as follows.

  • The input image [c_i,w_i,h_i] is packed as [\frac{c_i}{2},\frac{w_i}{32},\frac{h_i}{32}].

  • The kernels [c_o,c_i,f_w,f_h] are packed as [f_w,f_h,\frac{c_i}{2},c_o,\frac{*}{32},\frac{*}{32}].

  • The computation is the same as the tile tensor’s convolution operator.

  • The result is packed as [\frac{*}{2},c_o,\frac{w_i}{32},\frac{h_i}{32}].

Tile tensors generalize the above, using interleaved packing to support arbitrary image sizes and arbitrary tile sizes.

References#

[1] Crockett, E.: A low-depth homomorphic circuit for logistic regression model training. Cryptology ePrint Archive (2020). Link

[2] Rizomiliotis, P., Triakosia, A.: On matrix multiplication with homomorphic encryption. In: Proceedings of the 2022 on Cloud Computing Security Workshop, CCSW’22, p. 53–61. Association for Computing Machinery, New York, NY, USA (2022). Link

[3] Kim, D., Park, J., Kim, J., Kim, S., Ahn, J.H.: HyPHEN: A hybrid packing method and its optimizations for homomorphic encryption-based neural networks. IEEE Access (2023). Link

[4] Krastev, A., Samardzic, N., Langowski, S., Devadas, S., Sanchez, D.: A tensor compiler with automatic data packing for simple and efficient fully homomorphic encryption. Proceedings of the ACM on Programming Languages 8(PLDI), 126–150 (2024). Link