The parameters for the forecast request.

interface DeploymentTSForecastParameters {
    inference_batch_size?: number;
    prediction_length?: number;
}

Properties

inference_batch_size?: number

The batch size used during inference. When the inference data payload contains multiple time series, inference is conducted in batches of this size. If not specified, the model's default batch size is used.
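For illustration, a minimal sketch of overriding the batch size; the surrounding request that would consume this object is an assumption, not part of this interface:

const params: DeploymentTSForecastParameters = {
    // Batch 32 series per inference pass instead of the model's default.
    inference_batch_size: 32,
};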

prediction_length?: number

The number of periods to forecast. The service returns this many periods beyond the last timestamp in the inference data payload. If specified, prediction_length must be an integer between 1 and the model's default prediction length, inclusive. When omitted, the model's default prediction_length is used.
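
For illustration, a minimal sketch that enforces the documented prediction_length constraint before building the parameters object; modelDefaultPredictionLength is an assumed placeholder, since the real default is model-specific:

// Assumed placeholder; the actual default prediction length depends on the model.
const modelDefaultPredictionLength = 96;

function buildForecastParameters(predictionLength?: number): DeploymentTSForecastParameters {
    if (predictionLength !== undefined &&
        (!Number.isInteger(predictionLength) ||
         predictionLength < 1 ||
         predictionLength > modelDefaultPredictionLength)) {
        // prediction_length must be an integer >= 1 and at most the model default.
        throw new RangeError(
            `prediction_length must be an integer between 1 and ${modelDefaultPredictionLength}`);
    }
    return { prediction_length: predictionLength };
}

// Request 24 periods beyond the last timestamp in the inference data payload.
const forecastParams = buildForecastParameters(24);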