TensorFlow Serving is an open-source, high-performance serving system that handles the inference aspect of machine learning.
On IBM® z16™ and later (running Linux on IBM Z or IBM® z/OS® Container Extensions (IBM zCX)), TensorFlow core Graph Execution will leverage new inference acceleration capabilities that transparently target the IBM Integrated Accelerator for AI through the IBM z Deep Neural Network (zDNN) library. The IBM zDNN library contains a set of primitives that support Deep Neural Networks. These primitives transparently target the IBM Integrated Accelerator for AI on IBM z16™ and later. No changes to the original model are needed to take advantage of the new inference acceleration capabilities.
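Once a model is served from this image, clients can query it over TensorFlow Serving's standard REST API. The sketch below builds a predict request for that API; the endpoint shape (`/v1/models/<name>:predict`) is part of TensorFlow Serving, while the host, port, and model name are illustrative assumptions, not values taken from this page.

```python
import json

def build_predict_request(model_name, instances, host="localhost", port=8501):
    """Build the URL and JSON body for a TensorFlow Serving REST predict call.

    The /v1/models/<name>:predict path and {"instances": [...]} body are the
    standard TensorFlow Serving REST conventions; host, port, and the model
    name used below are assumptions for illustration.
    """
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request("my_model", [[1.0, 2.0, 3.0]])
print(url)  # http://localhost:8501/v1/models/my_model:predict
# To actually send the request (requires a running server):
# import urllib.request
# req = urllib.request.Request(url, body.encode(),
#                              {"Content-Type": "application/json"})
# response = urllib.request.urlopen(req)
```

The default REST port (8501) matches the upstream TensorFlow Serving convention; consult the GitHub repository linked below for the exact run options of this IBM image.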
Note: When using IBM Z Accelerated Serving for TensorFlow on either an IBM z14™ or an IBM z15™, TensorFlow will transparently target the CPU with no changes to the model. See IBM Z Accelerated Serving for TensorFlow for more information.

This image is built by IBM to run on the IBM Z architecture and is not affiliated with any other community that provides a version of this image.

Version | Pull String | Security (IBM Cloud) | Created |
---|---|---|---|
1.3.0 | docker pull icr.io/ibmz/ibmz-accelerated-serving-for-tensorflow@sha256:8ed0daa4779beb67ada2a40185738676e58cb860b521062d692d270da0c28e60 | Vulnerability Report | 11-15-2024 |
For documentation and samples for the IBM Z Accelerated Serving for TensorFlow container image, please visit the GitHub Repository here.