TensorFlow Serving is an open-source, high-performance serving system that handles the inference aspect of machine learning.
On IBM® z16™ and later (running Linux on IBM Z or IBM® z/OS® Container Extensions (IBM zCX)), TensorFlow core Graph Execution leverages new inference acceleration capabilities that transparently target the IBM Integrated Accelerator for AI through the IBM z Deep Neural Network (zDNN) library. The IBM zDNN library contains a set of primitives that support Deep Neural Networks and transparently target the IBM Integrated Accelerator for AI on IBM z16™ and later. No changes to the original model are needed to take advantage of these inference acceleration capabilities.
Note: When using IBM Z Accelerated Serving for TensorFlow on either an IBM z14™ or an IBM z15™, TensorFlow will transparently target the CPU with no changes to the model. See IBM Z Accelerated Serving for TensorFlow for more information.

This image is built by IBM to run on the IBM Z architecture and is not affiliated with any other community that provides a version of this image.

Version | Pull String | Security (IBM Cloud) | Created
---|---|---|---
1.1.0 | docker pull icr.io/ibmz/ibmz-accelerated-serving-for-tensorflow@sha256:a38b2c4a78fcae6bb9d9061fdcaaf96b70e50f6d2a9120f1ca03066e3b86b2ca | Vulnerability Report | 11-08-2023
For documentation and samples for the IBM Z Accelerated Serving for TensorFlow container image, please visit the GitHub Repository here.
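Once the image has been pulled and a container started with a model mounted (for example via `docker run` with the container's serving port published), clients can send inference requests to it. As a hedged sketch: TensorFlow Serving conventionally exposes a REST API on port 8501, and the model name `my_model`, the `localhost` host, and the input values below are placeholders, not details taken from this image's documentation.

```python
import json

# Build a predict request payload in TensorFlow Serving's REST format.
# The input shape here is illustrative; match it to your model's signature.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0, 4.0]]})

# Assumed endpoint, based on TensorFlow Serving's standard REST layout
# (port 8501, model name "my_model"); adjust both for your deployment.
url = "http://localhost:8501/v1/models/my_model:predict"

# With a running container, the request could be sent like this
# (commented out so the sketch runs without a live server):
# from urllib.request import Request, urlopen
# resp = urlopen(Request(url, data=payload.encode(),
#                        headers={"Content-Type": "application/json"}))
# print(json.loads(resp.read())["predictions"])

print(payload)
```

The serving container itself applies the zDNN-backed acceleration transparently, so the client request is identical to one sent to any other TensorFlow Serving instance.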