For this video, the testvideomp4.yaml edukamu component is needed; if videos are used as mp4, a plain URL to the folder does not recognize the video format.
To make predictions we must choose an algorithm. In this case SVM was the most accurate model, so we will use it as our final model. Now we want to measure the accuracy of the model on our validation set.
This gives us an independent final check on the model's accuracy. It is worth keeping a validation set in case we made a slip during training, such as overfitting to the training set or a data leak; either issue would produce an overly optimistic result.
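As a refresher, a hold-out validation set like the one used below can be created with scikit-learn's train_test_split. This is a minimal sketch, assuming the iris data is loaded into arrays X and y (the variable names match those used in the rest of the tutorial):

```python
# Minimal sketch of creating a hold-out validation set with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold back 20% of the rows as a validation set the model never sees
# during training; a fixed random_state makes the split reproducible.
X_train, X_validation, Y_train, Y_validation = train_test_split(
    X, y, test_size=0.20, random_state=1)

print(X_train.shape, X_validation.shape)  # (120, 4) (30, 4)
```

The exact random_state value is an arbitrary choice; any fixed seed gives a reproducible split.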
We will fit the model on the entire training dataset and make predictions on the validation dataset.
# Make predictions on validation dataset
from sklearn.svm import SVC
model = SVC(gamma='auto')
model.fit(X_train, Y_train)
predictions = model.predict(X_validation)
We evaluate the predictions by comparing them to the expected results in the validation set. Then we calculate classification accuracy, as well as a confusion matrix and a classification report.
# Evaluate predictions
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
We can see that the accuracy is 0.966, or about 97%, on the hold-out dataset.
The confusion matrix provides an indication of the errors made.
Finally, the classification report provides a breakdown of each class by precision, recall, f1-score, and support showing excellent results.
0.9666666666666667
[[11  0  0]
 [ 0 12  1]
 [ 0  0  6]]
                 precision    recall  f1-score   support

    Iris-setosa       1.00      1.00      1.00        11
Iris-versicolor       1.00      0.92      0.96        13
 Iris-virginica       0.86      1.00      0.92         6

       accuracy                           0.97        30
      macro avg       0.95      0.97      0.96        30
   weighted avg       0.97      0.97      0.97        30
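To see how the report's numbers follow from the confusion matrix, the per-class precision and recall can be recomputed by hand from the matrix printed above. This sketch only assumes NumPy; rows of the matrix are true classes and columns are predicted classes:

```python
# Recompute the report's metrics directly from the confusion matrix
# (rows = true class, columns = predicted class).
import numpy as np

cm = np.array([[11,  0, 0],
               [ 0, 12, 1],
               [ 0,  0, 6]])

precision = cm.diagonal() / cm.sum(axis=0)  # correct / predicted-as-class
recall = cm.diagonal() / cm.sum(axis=1)     # correct / actually-in-class
accuracy = cm.diagonal().sum() / cm.sum()   # all correct / all samples

print(np.round(precision, 2).tolist())  # [1.0, 1.0, 0.86]
print(np.round(recall, 2).tolist())     # [1.0, 0.92, 1.0]
print(round(accuracy, 4))               # 0.9667
```

For example, Iris-virginica's precision of 0.86 comes from 6 correct predictions out of the 7 samples predicted as virginica (the one misclassified versicolor lowers it), while its recall is 6/6 = 1.00.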
Insert the questionscroll task component here.
And insert the write-an-answer task component here.
You can see the completed notebook here.