This second module focuses on the concept of model scores, including the test score and the train score. These scores are then used to define overfitting and underfitting, as well as the concepts of bias and variance.
We’ll also see how to inspect a model’s performance with respect to its complexity and the number of input samples.
All images by author.
If you didn’t catch it, I strongly recommend reading the first post of this series; it’ll be way easier to follow along:
The first concepts I want to talk about are the train score and the test score. A score is a way to numerically express the performance of a model. To compute it, we use a score function that aggregates the “distance” or “error” between what the model predicted and the ground truth. For example:
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X_train, y_train)
y_predicted = model.predict(X_test)
# sklearn score functions expect the ground truth first, then the predictions
test_score = some_score_function(y_test, y_predicted)  # e.g. sklearn.metrics.r2_score
In sklearn, all supervised models (also called estimators) provide an even quicker way to compute a score, directly from the model:
# the model computes the predicted y-values from X_test,
# then compares them to y_test with its score function
test_score = model.score(X_test, y_test)
train_score = model.score(X_train, y_train)
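To make this concrete, here is a minimal end-to-end sketch on synthetic data (the dataset, the split, and the variable names are illustrative, not part of the original example). It also checks that the model’s score method gives the same number as calling the metric by hand:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# illustrative synthetic regression data
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)

# model.score computes the R² coefficient on its own predictions,
# so it matches r2_score applied to those predictions by hand
assert np.isclose(model.score(X_test, y_test),
                  r2_score(y_test, model.predict(X_test)))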
The actual score function depends on the model and the kind of problem it is designed to solve. For example, a linear regressor (numerical regression) uses the R² coefficient, while a support-vector classifier (classification) uses the accuracy, which is basically the fraction of correct class predictions.
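As a quick check of that claim, here is a small sketch (again on illustrative synthetic data) showing that a classifier’s score method returns exactly the accuracy:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_train, y_train)
# for classifiers, score is the fraction of correctly predicted classes
assert clf.score(X_test, y_test) == accuracy_score(y_test, clf.predict(X_test))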