TensorFlow Serving is an open-source, high-performance serving system that handles the inference aspect of machine learning.
On IBM® z16™ and later (running Linux on IBM Z or IBM® z/OS® Container Extensions (IBM zCX)), TensorFlow core Graph Execution leverages new inference acceleration capabilities that transparently target the IBM Integrated Accelerator for AI through the IBM z Deep Neural Network (zDNN) library. The IBM zDNN library contains a set of primitives supporting deep neural networks, and these primitives transparently target the IBM Integrated Accelerator for AI on IBM z16™ and later. No changes to the original model are needed to take advantage of the new inference acceleration capabilities.
Note: When using IBM Z Accelerated Serving for TensorFlow on either an IBM z14™ or an IBM z15™, TensorFlow will transparently target the CPU with no changes to the model. See IBM Z Accelerated Serving for TensorFlow for more information.

This image is built by IBM to run on the IBM Z architecture and is not affiliated with any other community that provides a version of this image.

Version | Pull String | Security (IBM Cloud) | Created
---|---|---|---
1.4.2 | `docker pull icr.io/ibmz/ibmz-accelerated-serving-for-tensorflow@sha256:e1e9ce9a251b9bb3302d2287dbc3fae0b8834a20d73c6e37316bc45b850c2550` | Vulnerability Report | 09-04-2025
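Once the container is pulled and running, models are typically queried over TensorFlow Serving's REST API, which by default listens on port 8501 and accepts `POST` requests to `/v1/models/{model}:predict` with a JSON body of the form `{"instances": [...]}`. The sketch below builds such a request with only the Python standard library; the model name, host, and input values are placeholders for illustration, not values defined by this image.

```python
import json

def build_predict_request(model_name, instances, host="localhost", port=8501):
    """Build the URL and JSON body for a TensorFlow Serving REST :predict call.

    model_name, host, and port are assumptions to fill in for your deployment;
    8501 is TensorFlow Serving's default REST port.
    """
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# Hypothetical model name and input batch, shown only to illustrate the shape
# of the request; send `body` with urllib.request or curl against a live server.
url, body = build_predict_request("my_model", [[1.0, 2.0, 3.0]])
print(url)
print(body)
```

Sending this body to a running container (for example with `curl -d "$body" "$url"`) returns a JSON response containing a `"predictions"` field.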
For documentation and samples for the IBM Z Accelerated Serving for TensorFlow container image, please visit the GitHub repository.