
ibmz-accelerated-for-nvidia-triton-inference-server

IBM Z Accelerated for NVIDIA Triton Inference Server is a fast, scalable, open-source AI inference server that standardizes model deployment and execution, streamlined and optimized for high-performance inference. The Triton Inference Server can deploy AI models such as deep learning (DL) and machine learning (ML) models.

Triton Inference Server usage examples and additional documentation are available at IBM Z Accelerated for Triton Inference Server.

This image is built by IBM to run on the IBM Z architecture and is not affiliated with any other community that provides a version of this image.


License

View license information here

As with all Docker images, this image likely also contains other software which may be under other licenses (such as Bash, etc., from the base distribution, along with any direct or indirect dependencies of the primary software being contained).

As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.


Versions

Use the pull string below for the version of this image you require.
Version: 1.1.0
Pull string: docker pull icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:d9524b7bd587cd3456d84d3748eec39f99ce87c631ef83612aaa1e893626c929
Security (IBM Cloud): Vulnerability Report
Created: 11-08-2023
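As a minimal sketch of using the pull string above, the following pulls the image by digest and starts the server. The model repository path (/workspace/models) and the tritonserver launch arguments are assumptions for illustration; adjust them to your environment. Ports 8000 (HTTP), 8001 (gRPC), and 8002 (metrics) are Triton's defaults.

```shell
# Image reference by digest, taken from the pull string above.
IMAGE="icr.io/ibmz/ibmz-accelerated-for-nvidia-triton-inference-server@sha256:d9524b7bd587cd3456d84d3748eec39f99ce87c631ef83612aaa1e893626c929"

# Guard so the script is safe to run on hosts without Docker installed.
if command -v docker >/dev/null 2>&1; then
  docker pull "$IMAGE"

  # /workspace/models is an assumed host path containing a Triton model repository.
  docker run --rm -d \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    -v /workspace/models:/models \
    "$IMAGE" \
    tritonserver --model-repository=/models
fi
```

Pulling by digest rather than by tag pins the exact image build, which is useful for reproducible deployments.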

Usage Notes

For documentation and samples for the IBM Z Accelerated for Triton Inference Server container image, please visit the GitHub Repository here.
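As a quick sanity check after starting a container from this image, Triton exposes a standard v2 readiness endpoint over HTTP. This sketch assumes the server's HTTP port was mapped to localhost:8000; adjust the host and port to your deployment.

```shell
# Assumed address of a locally running Triton server (default HTTP port 8000).
TRITON_URL="http://localhost:8000"

# Guard so the script is safe to run on hosts without curl installed.
if command -v curl >/dev/null 2>&1; then
  # /v2/health/ready returns HTTP 200 once the server can serve inference requests.
  curl -s -o /dev/null -w "%{http_code}\n" "$TRITON_URL/v2/health/ready" || true
fi
```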