Watson Pipelines
With the Watson™ Pipelines service, you can create a pipeline that automates the end-to-end flow of assets, such as trained models and script-based assets, from creation through deployment. Automating this flow with a pipeline makes it simpler to build, run, and evaluate models, which speeds up development and reduces the overall time investment.
You use the Pipelines editor canvas to assemble and configure a pipeline that creates, trains, deploys, and updates machine learning models and Python scripts. To design a pipeline, you drag nodes onto the canvas, specify objects and parameters, then run and monitor the pipeline. Note that you must have the required service for any asset you include in your pipeline. For example, if you are cleaning data with DataStage, the DataStage service must be installed or provisioned in your Cloud Pak for Data deployment.
Your team can collaborate across roles in the Pipelines editor. For example, a data scientist can create a flow to train a model in the editor, and a ModelOps engineer can then add steps to the flow that automate training, deploying, and evaluating the model in a production environment.
After you assemble the pipeline, you can rapidly update and test modifications with the Pipelines editor canvas, which provides tools to visualize the pipeline, customize it at run time with pipeline parameter variables, and then run it as a trial job or on a schedule.
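For example, a script-based node can pick up a pipeline parameter value at run time. The snippet below is a minimal sketch, assuming the parameter is surfaced to the script as an environment variable; the variable name `MODEL_NAME` and the environment-variable delivery mechanism are assumptions for illustration, not the service's documented behavior:

```python
import os

# Hypothetical sketch: read a pipeline parameter that the pipeline run
# surfaces to this script as an environment variable. The name
# "MODEL_NAME" and the env-var mechanism are assumptions.
model_name = os.environ.get("MODEL_NAME", "default-model")

print(f"Training run configured for model: {model_name}")
```

Reading the parameter with a default fallback lets the same script run both inside a pipeline and standalone during testing.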
These tools are available with the Watson Pipelines service:
- Create a flow to collect data, run scripts, train models, store results, and more.
- Create a custom pipeline component that runs a user-written function (see the sketch after this list).
- Schedule jobs to run flows and enhance automation by adding node conditions.
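A user-written function for a custom component can be an ordinary Python function with typed inputs and outputs. The sketch below is illustrative only: the function, its `score` column, and the idea that you register it with the service afterward are assumptions for this example, not the service's documented API.

```python
import csv

# Minimal sketch of a user-written function that a custom pipeline
# component could run. The function body is plain Python; how it gets
# registered with the Pipelines service is not shown here.

def filter_rows(csv_path: str, min_value: float) -> int:
    """Drop rows whose 'score' column falls below min_value, in place.

    Returns the number of rows kept. The column name 'score' is a
    hypothetical example.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        kept = [row for row in reader if float(row["score"]) >= min_value]

    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(kept)

    return len(kept)
```

Keeping the function self-contained, with explicit typed parameters and a simple return value, makes it straightforward to wire into a pipeline node and to test outside the pipeline.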
For more information, see the [official documentation](https://www.ibm.com/docs/en/cloud-paks/cp-data/4.7.x?topic=assets-watson-pipelines).