
Demo

The following videos provide an overview of using ado to benchmark fine-tuning performance across a range of workload configurations.

List actuators and experiments

We begin by listing the experiments offered by the SFTTrainer actuator, which provides fine-tuning benchmarking capabilities. We then use ado to get the details of one of these experiments, finetune_full_benchmark-v1.0.0, to see what inputs it requires and what it measures.
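A rough sketch of this step on the command line is shown below, assuming ado's kubectl-style interface; the exact subcommand and flag names are assumptions and may differ from the version used in the video.

```shell
# Illustrative sketch only; subcommand names are assumptions, not verbatim commands from the clip.
ado get actuators                                         # list available actuators, e.g. SFTTrainer
ado get experiments                                       # list the experiments the actuators provide
ado describe experiment finetune_full_benchmark-v1.0.0    # show the required inputs and measured properties
```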

Create a discoveryspace to explore fine-tuning performance

Next, we create a discoveryspace that represents a fine-tuning benchmarking campaign. To get started quickly, we use ado's template functionality to create a default configuration space for LoRA and full fine-tuning benchmark experiments.
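A minimal sketch of how this might look follows; the template invocation, file name, and flags are illustrative assumptions rather than the documented interface.

```shell
# Illustrative sketch; the template subcommand and file name are assumptions.
ado template space > space.yaml      # generate a default discoveryspace configuration to edit
ado create space -f space.yaml       # create the discoveryspace describing the benchmarking campaign
ado describe space <space-id>        # inspect the configurations (entities) the space contains
```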

Explore the discoveryspace with a RandomWalk

This clip demonstrates how to view the available operators and then create a RandomWalk operation to explore the discoveryspace created above. The operation is configured to sample all 40 of the configurations, also known as entities, in the discoveryspace. After the operation finishes, we look at a summary of the operation and retrieve the results as a CSV file.
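A hedged sketch of that flow is shown below; the operation YAML file, the operation identifier, and the CSV export flag are assumptions for illustration, not verbatim commands from the clip.

```shell
# Illustrative sketch; the operation YAML schema and output flags are assumptions.
ado get operators                           # list the available operators, including RandomWalk
ado create operation -f random_walk.yaml    # launch a RandomWalk configured to sample all 40 entities
ado describe operation <operation-id>       # summarise the finished operation
ado get operation <operation-id> --output csv > results.csv   # hypothetical flag for exporting results as CSV
```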

Examine spaces collaborators have created

ado enables multiple distributed users to collaborate on the same project. Here, another user queries the discoveryspaces created by their colleagues, including the one created earlier. Resources, like discoveryspaces, can be annotated with custom metadata. For example, in this clip the user requests a summary of all spaces tagged with exp=ft. They then apply a custom export operator to the data, which in this case integrates the new data with an external store in a rigorous and repeatable way.
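The collaboration step might look roughly like the following; the label-selector flag and the export operation configuration file are assumptions made for illustration.

```shell
# Illustrative sketch; the label selector syntax and export operator configuration are assumptions.
ado get spaces -l exp=ft              # list all discoveryspaces tagged with exp=ft
ado describe space <space-id>         # inspect a colleague's space and its custom metadata
ado create operation -f export.yaml   # apply a custom export operator that pushes the data to an external store
```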