IBM Z Accelerated for Triton Inference Server is a fast, scalable, open-source AI inference server that standardizes model deployment and execution, streamlined and optimized for high performance. The Triton Inference Server can deploy AI models such as deep learning (DL) and machine learning (ML) models.
Triton Inference Server specific usage examples are available at IBM Z Accelerated for Triton Inference Server.
See IBM Z Accelerated for Triton Inference Server for more information. This image is built by IBM to run on the IBM Z architecture and is not affiliated with any other community that provides a version of this image.

Version | Pull String | Security (IBM Cloud) | Created
---|---|---|---
1.3.0 | `docker pull icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:2cedd535805c316fec7dff6cac8129d873da39348459f645240eec005172b641` | Vulnerability Report | 11-12-2024
1.2.0 | `docker pull icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:d44d3fbe67ba61be60527196ab949406faa9e3fd3deffa39765c5efa69514550` | Vulnerability Report | 06-20-2024
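Pulling a release by its digest and starting the server can be sketched as follows. The pull string comes from the table above; the container ports 8000/8001/8002 are Triton's defaults for HTTP, gRPC, and metrics, while the host model-repository path and the `tritonserver` startup invocation are illustrative assumptions — the image's actual entrypoint and supported arguments are documented in the GitHub repository.

```shell
# Pull the 1.3.0 image by digest (pull string taken from the table above)
docker pull icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:2cedd535805c316fec7dff6cac8129d873da39348459f645240eec005172b641

# Start the server, mounting a local model repository.
# NOTE: /path/to/model_repository is a hypothetical example path, and the
# tritonserver command line is a sketch of Triton's standard invocation;
# consult the image documentation for the exact supported command.
docker run --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:2cedd535805c316fec7dff6cac8129d873da39348459f645240eec005172b641 \
  tritonserver --model-repository=/models
```

Pinning by digest rather than by tag guarantees the exact image build listed in the table is the one deployed.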
For documentation and samples for the IBM Z Accelerated for Triton Inference Server container image, please visit the GitHub repository.