I am training time series forecasting models using ML.NET. Training itself works fine, but I recently started wondering what actually determines the performance of SsaForecastingEstimator.Fit. I noticed that it almost never gets anywhere near 100% CPU utilization, but rather sits around 60-70% (which other users have also reported: How can I use the full potential of my CPU for the training process in ML.NET?).

I recently had the opportunity to swap my CPU from an i5-12400 to an i5-14600KF. I did not expect a huge impact on performance, but to my surprise it made no difference at all; training time is nearly identical to the second. Is some kind of fixed training time being applied inside the Fit method, or what is going on here?
SsaForecastingEstimator pipeline = Context.Forecasting.ForecastBySsa(
    windowSize: 300,
    seriesLength: 3600,
    trainSize: 3600,
    horizon: 30,
    outputColumnName: @"col1",
    inputColumnName: @"col1",
    confidenceLowerBoundColumn: @"col1_LB",
    confidenceUpperBoundColumn: @"col1_UB");
SsaForecastingTransformer model = pipeline.Fit(trainData);
using (TimeSeriesPredictionEngine<TimeSeriesModelInput, TimeSeriesModelOutput> forecastingEngine = model.CreateTimeSeriesEngine<TimeSeriesModelInput, TimeSeriesModelOutput>(Context))
{
TimeSeriesModelOutput forecast = forecastingEngine.Predict();
...
}
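Since the question is what actually drives Fit time, one thing I can do is time Fit directly while varying the SSA parameters. Below is a minimal, self-contained sketch of that idea; the SeriesPoint class, the synthetic sine data and the "forecast" output column name are placeholders for illustration, not my actual TimeSeriesModelInput or column layout:

// Parameter-timing sketch. Assumptions: SeriesPoint, the synthetic sine series
// and the "forecast" output column are illustrative placeholders only.
using System;
using System.Diagnostics;
using System.Linq;
using Microsoft.ML;

public class SeriesPoint
{
    public float col1 { get; set; }
}

public static class SsaFitTiming
{
    public static void Main()
    {
        var context = new MLContext();

        // Synthetic series just to keep the sketch self-contained.
        var data = Enumerable.Range(0, 3600)
            .Select(i => new SeriesPoint { col1 = (float)Math.Sin(i / 50.0) })
            .ToList();
        IDataView trainData = context.Data.LoadFromEnumerable(data);

        // Time Fit for several window sizes to see which parameters drive training time.
        foreach (int windowSize in new[] { 50, 100, 200, 300 })
        {
            var pipeline = context.Forecasting.ForecastBySsa(
                outputColumnName: "forecast",
                inputColumnName: "col1",
                windowSize: windowSize,
                seriesLength: 3600,
                trainSize: 3600,
                horizon: 30);

            var sw = Stopwatch.StartNew();
            pipeline.Fit(trainData);
            sw.Stop();

            Console.WriteLine($"windowSize={windowSize}: Fit took {sw.Elapsed.TotalSeconds:F2} s");
        }
    }
}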
What I have tried so far:

- Made a profiling run to verify that it is indeed the Fit method consuming most of the CPU time: confirmed (a rough way to quantify the utilization is sketched after this list)
- Compared Debug and Release builds: no difference
- CPU swap (i5-12400 to i5-14600KF): no difference
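To make the 60-70% observation concrete, I also compared the process CPU time consumed during Fit with the wall-clock time, which gives a rough "effective cores" number. This is only a sketch under the assumption that nothing else heavy runs in the same process; MeasureFit is a hypothetical helper name, while pipeline and trainData are the objects from the code above:

// Utilization-check sketch. Assumption: the process runs nothing else heavy
// while Fit executes; MeasureFit is a made-up helper name.
using System;
using System.Diagnostics;
using Microsoft.ML;
using Microsoft.ML.Transforms.TimeSeries;

public static class SsaFitUtilization
{
    // Runs Fit once and reports wall-clock time plus a rough "effective cores"
    // figure derived from the process CPU time consumed during the call.
    public static void MeasureFit(SsaForecastingEstimator pipeline, IDataView trainData)
    {
        var process = Process.GetCurrentProcess();
        process.Refresh();
        TimeSpan cpuBefore = process.TotalProcessorTime;
        var wallClock = Stopwatch.StartNew();

        pipeline.Fit(trainData);

        wallClock.Stop();
        process.Refresh();
        TimeSpan cpuAfter = process.TotalProcessorTime;

        // CPU seconds divided by wall-clock seconds ~ number of cores kept busy.
        double effectiveCores = (cpuAfter - cpuBefore).TotalSeconds / wallClock.Elapsed.TotalSeconds;
        Console.WriteLine($"Fit: {wallClock.Elapsed.TotalSeconds:F2} s wall-clock, " +
                          $"~{effectiveCores:F2} of {Environment.ProcessorCount} logical cores busy on average");
    }
}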