OpenShift AI

Solution report card
Runs on IBM i?
On-prem
IBM Cloud
AI capabilities: Machine Learning, AutoAI, Deep Learning, Large Language Models, many more…
Commercial support
Free to try?
Requirements

Red Hat OpenShift AI (RHOAI) is an MLOps platform built on OpenShift, Red Hat's enterprise Kubernetes distribution. It provides an end-to-end environment for the full AI/ML lifecycle (data exploration, model training, experiment tracking, model serving, and monitoring) within a governed, enterprise-grade infrastructure.

RHOAI is available as a managed cloud service (on AWS, Azure, or IBM Cloud via OpenShift Dedicated) and as a self-managed deployment on any OpenShift cluster, including on IBM Power.

RHOAI provides Jupyter-based workbenches with pre-built notebook images containing popular AI/ML libraries (PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers). Data scientists can connect to Db2 for i via JDBC or Mapepire from within these notebooks.
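
As a minimal sketch of such a connection from a workbench notebook, assuming the jaydebeapi package and the IBM Toolbox for Java driver (jt400.jar) are available in the notebook image; the host name, credentials, schema, and jar path below are placeholders:

```python
# Query Db2 for i over JDBC from an RHOAI workbench notebook.
# Assumes jaydebeapi is installed and jt400.jar is present in the image.
import jaydebeapi
import pandas as pd

conn = jaydebeapi.connect(
    "com.ibm.as400.access.AS400JDBCDriver",    # IBM Toolbox for Java JDBC driver
    "jdbc:as400://MYIBMI;prompt=false",        # placeholder IBM i host name
    {"user": "MYUSER", "password": "MYPASS"},  # use a secret / data connection in practice
    "/opt/app-root/lib/jt400.jar",             # placeholder path to the driver jar
)

# Pull a result set straight into a DataFrame for exploration or training.
df = pd.read_sql("SELECT * FROM SALES.ORDERS FETCH FIRST 1000 ROWS ONLY", conn)
conn.close()
```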

RHOAI integrates Kubeflow Pipelines for orchestrating multi-step ML workflows — data ingestion, preprocessing, training, evaluation, and deployment — as reproducible, versioned pipelines. IBM i Db2 data can serve as input via pipeline components that query Db2 for i.
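
A sketch of what such a pipeline component might look like with the Kubeflow Pipelines v2 SDK; the base image, ODBC driver availability, credentials, and query are placeholder assumptions:

```python
from kfp import dsl
from kfp.dsl import Dataset, Output

@dsl.component(base_image="python:3.11",
               packages_to_install=["pyodbc", "pandas", "pyarrow"])
def extract_db2i(host: str, query: str, raw_data: Output[Dataset]):
    """Pull training data from Db2 for i into a pipeline artifact."""
    import pyodbc
    import pandas as pd
    # Assumes the IBM i Access ODBC driver is installed in the component image.
    conn = pyodbc.connect(
        f"DRIVER=IBM i Access ODBC Driver;SYSTEM={host};UID=MYUSER;PWD=MYPASS"
    )
    pd.read_sql(query, conn).to_parquet(raw_data.path)

@dsl.pipeline(name="db2i-training-pipeline")
def training_pipeline(host: str = "MYIBMI"):
    extract = extract_db2i(host=host, query="SELECT * FROM SALES.ORDERS")
    # Downstream steps (preprocess, train, evaluate, deploy) would consume
    # extract.outputs["raw_data"] as their input artifact.
```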

Trained models deploy to KServe-based model serving endpoints with auto-scaling and monitoring. These endpoints expose a REST API callable from IBM i applications.
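
For illustration, a call to such an endpoint from Python running in PASE on IBM i, assuming the model is exposed over the KServe v1 REST protocol; the endpoint URL, model name, token, and feature values are placeholders:

```python
# Call an RHOAI model serving endpoint from IBM i (Python in PASE).
import requests

url = "https://my-model-myproject.apps.cluster.example.com/v1/models/churn:predict"
payload = {"instances": [[42.0, 3, 187.5, 1]]}  # placeholder feature vector

resp = requests.post(
    url,
    json=payload,
    timeout=10,
    headers={"Authorization": "Bearer MYTOKEN"},  # only if the route is protected
)
resp.raise_for_status()
print(resp.json()["predictions"])
```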

For large models requiring more compute than a single server, RHOAI supports distributed training across multiple nodes using PyTorch DDP or Ray.
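
A minimal sketch of the DDP pattern such a job runs on each worker; the model, data, and the rendezvous environment variables injected by the launcher (RANK, WORLD_SIZE, MASTER_ADDR) are assumed:

```python
# Skeleton of a PyTorch DDP worker, launched once per node/GPU by torchrun
# or a training operator that sets the rendezvous environment variables.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
    rank = dist.get_rank()
    device = (torch.device(f"cuda:{rank % torch.cuda.device_count()}")
              if torch.cuda.is_available() else torch.device("cpu"))

    model = torch.nn.Linear(16, 1).to(device)  # placeholder model
    ddp_model = DDP(model, device_ids=[device.index] if device.type == "cuda" else None)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):  # placeholder training loop; gradients sync across workers
        x = torch.randn(32, 16, device=device)
        loss = ddp_model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```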

RHOAI runs on OpenShift, which supports IBM Power (ppc64le). This means the entire MLOps platform can run on-premises on IBM Power hardware — the same infrastructure family as IBM i — keeping data within your network and taking advantage of Power’s hardware capabilities.

See also: Red Hat AI Inference Server for standalone LLM serving without the full MLOps platform.