
7. Predict and Evaluate

After running Ideation, you have trained models ready to evaluate. In this tutorial, we will:

  • Score a model on holdout observation tables to compute predictions and evaluate performance.
  • Compare models using the Leaderboard on both validation and holdout sets.
  • Visualize forecasts using Forecast Comparison plots on FORECAST_SERIES observation tables.

Step 1: Score the Model on Validation and Holdout Sets

  1. Navigate to the Model catalog from the 'Experiment' section of the menu and select the best-performing model.

    Select Model


  2. In the model's Predict tab, click the Predict button to compute predictions.

  3. Select the Holdout_eval observation table as the input.

    Select Holdout


  4. Submit the prediction task and wait for it to complete.

    Prediction Running

  5. Repeat the same process for the second model produced by Ideation, so that both models are scored on the holdout set.

Validation predictions

Predictions on the Validation_eval observation table are already produced by Ideation during model training. You only need to score on the Holdout_eval table to get an unbiased estimate of final model performance on data that was never used during training or model selection.
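Conceptually, scoring on the holdout set just means computing error metrics on rows the model never saw during training or selection. A minimal sketch of the two metrics most commonly reported for forecasts, MAE and RMSE, using made-up values (the platform computes these for you automatically):

```python
# Sketch: the error metrics behind holdout evaluation.
# The actual/prediction values below are illustrative, not from the tutorial data.
import math

def mae(actuals, preds):
    # Mean absolute error: average size of the miss, in target units.
    return sum(abs(a - p) for a, p in zip(actuals, preds)) / len(actuals)

def rmse(actuals, preds):
    # Root mean squared error: penalizes large misses more heavily.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actuals, preds)) / len(actuals))

actuals = [12.0, 0.0, 7.0, 3.0]
preds = [10.0, 1.0, 9.0, 3.0]
print(mae(actuals, preds))   # 1.25
print(rmse(actuals, preds))  # 1.5
```

Because RMSE weights large errors more, a model can rank differently under the two metrics on the same holdout table.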


Step 2: Review the Leaderboard

Once predictions are computed on an observation table that includes target values, the model's metrics are automatically added to the Leaderboard.

  1. Navigate to the Leaderboard from the model's page or from the Use Case.

    Leaderboard Navigation


  2. Select the Validation_eval observation table and the Validation leaderboard type. All models scored on this table are ranked by their metrics.

    Validation Leaderboard


  3. Switch to the Holdout_eval observation table and the Holdout leaderboard type to confirm the model generalizes well.

    Holdout Leaderboard

Leaderboard

The Leaderboard automatically ranks all models scored on the same observation table, making it easy to compare alternatives. Use the validation leaderboard for model selection and the holdout leaderboard for final performance reporting.
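The selection logic the Leaderboard applies can be sketched in a few lines. The metric values below are hypothetical; the point is the separation of roles: validation metrics pick the model, holdout metrics report it.

```python
# Sketch: how a leaderboard separates model selection from final reporting.
# Model names and metric values are made up for illustration.
models = [
    {"model": "model_a", "rmse_validation": 2.41, "rmse_holdout": 2.58},
    {"model": "model_b", "rmse_validation": 2.37, "rmse_holdout": 2.73},
]

# Selection: rank by validation error (lower is better).
by_validation = sorted(models, key=lambda m: m["rmse_validation"])
best = by_validation[0]

# Reporting: quote the holdout error of the selected model only,
# so the reported number was never used to choose between models.
print(best["model"], best["rmse_holdout"])  # model_b 2.73
```

Note that the model with the best validation score is not guaranteed to have the best holdout score; that gap is exactly what the holdout leaderboard lets you check.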


Step 3: Create a FORECAST_SERIES Observation Table

To visualize predictions as continuous time series, we first need to create an observation table in FORECAST_SERIES mode.

  1. Navigate to the Observation Table catalog from the 'Formulate' section and select the Use Case.

  2. Click the create button and select the 'Forecast Automation' tab.

  3. Use the same settings as before:

    • Prediction Schedule: Weekly, every Monday at 3:30 AM (30 3 * * 1)
    • Prediction Schedule Timezone: America/Los_Angeles
    • Forecast Start Offset: 0
    • Forecast Horizon: 28
  4. Define a single period covering the full evaluation range:

    • Name: Forecast_series
    • Start: 2016-01-01
    • End: 2016-05-23
    • Target Observation Count: 50,000
    • Purpose: Other
    • Mode: FORECAST_SERIES

    Create Forecast Series


  5. Submit and wait for it to complete.

    Create Forecast Series
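The schedule used above, 30 3 * * 1 in America/Los_Angeles, reads as "every Monday at 3:30 AM local time." A minimal stdlib-only sketch of what that expression resolves to (no cron parser needed for this one fixed expression):

```python
# Sketch: resolving the "30 3 * * 1" weekly schedule in America/Los_Angeles.
# This is illustrative; the platform evaluates the cron expression itself.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_weekly_run(after: datetime) -> datetime:
    """Next Monday 03:30 America/Los_Angeles strictly after `after`."""
    tz = ZoneInfo("America/Los_Angeles")
    local = after.astimezone(tz)
    candidate = local.replace(hour=3, minute=30, second=0, microsecond=0)
    # Advance day by day until we land on a Monday strictly after `after`.
    while candidate.weekday() != 0 or candidate <= local:
        candidate += timedelta(days=1)
    return candidate

run = next_weekly_run(datetime(2016, 1, 1, tzinfo=ZoneInfo("America/Los_Angeles")))
print(run.isoformat())  # 2016-01-04T03:30:00-08:00
```

2016-01-01 falls on a Friday, so the first scheduled run of the evaluation range is Monday 2016-01-04 at 03:30 PST.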

Why FORECAST_SERIES?

Unlike ONE_ROW_PER_ENTITY_FORECAST_POINT tables (used for training and evaluation), FORECAST_SERIES tables contain complete forecast series — for each Point In Time, all forecast points within the 28-day horizon are included. This produces the continuous prediction lines needed for Forecast Comparison visualizations.
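The expansion a FORECAST_SERIES table performs can be sketched as a cross of Points In Time with the horizon. The dates, entity id, and the assumption that the first forecast point falls one day after the Point In Time are illustrative; the exact start convention follows the table's Forecast Start Offset.

```python
# Sketch: one row per (Point In Time, forecast point) pair across a 28-day
# horizon. Entity id and dates are illustrative, not platform output.
from datetime import date, timedelta

horizon_days = 28
points_in_time = [date(2016, 1, 4), date(2016, 1, 11)]  # two weekly Mondays

rows = [
    {"entity": "CA_1", "point_in_time": pit, "forecast_point": pit + timedelta(days=k)}
    for pit in points_in_time
    for k in range(1, horizon_days + 1)  # assumes the series starts the day after pit
]
print(len(rows))  # 56 = 2 Points In Time x 28 forecast points
```

This is why FORECAST_SERIES tables are much larger than their ONE_ROW_PER_ENTITY_FORECAST_POINT counterparts: each Point In Time contributes a full horizon of rows rather than one.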


Step 4: Score the Model on the FORECAST_SERIES Table

  1. Go back to the Model catalog from the 'Experiment' section and select your model.

  2. In the model's Predict tab, click the Predict button.

  3. Select the Forecast_series observation table.

    Select Forecast Series


  4. Submit and wait for the prediction to complete.

    Select Forecast Series


Step 5: Visualize Forecast Comparisons

Once predictions are computed on a FORECAST_SERIES observation table, you can generate interactive Forecast Comparison plots.
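What the plot overlays can be sketched numerically: one prediction line per Point In Time against a single series of actuals, where lines made later (with more observed history) should track the actuals more closely on the overlap. All values below are made up.

```python
# Sketch of the data behind a Forecast Comparison plot: one prediction line
# per Point In Time, overlaid on the actuals. Values are illustrative.
actuals = {1: 10.0, 2: 11.0, 3: 9.0, 4: 12.0}  # day -> observed target

forecasts = {  # Point In Time -> its forecast line (day -> predicted value)
    0: {1: 9.5, 2: 10.5, 3: 10.0, 4: 11.0},    # made at day 0, full horizon ahead
    2: {3: 9.2, 4: 11.8},                       # made at day 2, shorter line
}

def mean_abs_err(line):
    # Error of one prediction line against the actuals it overlaps.
    return sum(abs(v - actuals[d]) for d, v in line.items()) / len(line)

print(mean_abs_err(forecasts[0]))  # error of the earliest line
print(mean_abs_err(forecasts[2]))  # the later line, closer to the actuals
```

The convergence of later lines toward the actuals (or their failure to converge) is exactly what the interactive plot makes visible per entity.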

  1. From the model page, go to the Forecast Comparison tab and select the prediction table generated from Forecast_series.

    Select Prediction Table


  2. Click Extract Entities and select an entity to visualize, for example by filtering on store_id = CA_1. Then click Generate Comparison.

    Select Entity


  3. The system generates an interactive plot showing:

    • Prediction lines (colored) — one for each Point In Time, showing the full 28-day forecast series.
    • Actual values (grey) — the target values that actually occurred.

    Forecast Comparison Plot


  4. Use the interactive controls to:

    • Hover over data points for exact values.
    • Filter by Point In Time range to focus on specific prediction dates.
    • Compare how predictions made at different times converge or diverge from actuals.

    Forecast Comparison Interactive


  5. Try other stores to get a comprehensive view of model behavior across locations. For example, compare TX_1 (a relatively smooth series) with WI_2 (a more volatile series) to see how the model handles different levels of variability.

    Forecast Comparison Other Stores

    Forecast Comparison Other Stores


Next Steps

To learn how to refine ideation, deploy features, and manage the feature life cycle, refer to the Credit Default UI tutorials: