Commonly used evaluation metrics and techniques for classification problems:

Confusion Matrix:

- A confusion matrix is a table that shows the true positive, true negative, false positive, and false negative counts.

- Formula : for binary classification the matrix is laid out as

                      Predicted Negative   Predicted Positive
    Actual Negative          TN                   FP
    Actual Positive          FN                   TP

- Interpretation : The matrix shows how the model errs, separating false positives from false negatives; the goal is to minimize both types of error.

- Python Code :

from sklearn.metrics import confusion_matrix
conf_matrix = confusion_matrix(y_true, y_pred)
print("Confusion Matrix:")
print(conf_matrix)
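For binary problems, the four counts can be read straight out of the matrix. A minimal sketch with toy labels (the `y_true`/`y_pred` values here are illustrative, not from the text):

```python
from sklearn.metrics import confusion_matrix

# Toy binary labels (illustrative only)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, ravel() flattens the 2x2 matrix row by row:
# [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TN:", tn, "FP:", fp, "FN:", fn, "TP:", tp)  # TN: 3 FP: 1 FN: 1 TP: 3
```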


Accuracy:

- Accuracy measures the proportion of correct predictions out of all predictions made.

- Formula : Accuracy = (TP + TN) / (TP + TN + FP + FN)

- Interpretation : A high accuracy score generally indicates good performance, but it can be misleading if the classes are imbalanced.

- Python Code :

from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_true, y_pred)
print("Accuracy:", accuracy)
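The formula above can be checked by hand against accuracy_score; a small sketch with illustrative toy labels:

```python
from sklearn.metrics import accuracy_score

# Toy labels (illustrative only)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy = (TP + TN) / total = fraction of predictions that match
correct = sum(t == p for t, p in zip(y_true, y_pred))
manual_accuracy = correct / len(y_true)

print(manual_accuracy)                 # 0.75
print(accuracy_score(y_true, y_pred))  # 0.75
```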

Precision, Recall (Sensitivity), and F1-Score:

-  01. Precision measures the proportion of true positives out of all predicted positives.

   02. Recall (Sensitivity) measures the proportion of true positives out of all actual positives.

   03. F1-score is the harmonic mean of precision and recall.

- Formula : 

  Precision = TP / (TP + FP)
  Recall    = TP / (TP + FN)
  F1-Score  = 2 × (Precision × Recall) / (Precision + Recall)

- Interpretation : 
  Precision is important when minimizing false positives is a priority.
  Recall is crucial when minimizing false negatives is a priority.
  F1-Score is useful when you want a balance between precision and recall.

- Python Code :

from sklearn.metrics import precision_score, recall_score, f1_score
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)

print("Precision:", precision)
print("Recall:", recall)
print("F1-Score:", f1)
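To connect the formulas to the library calls, the same quantities can be derived from raw TP/FP/FN counts; a sketch with illustrative labels:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy labels (illustrative only)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)                          # 3 / 4 = 0.75
recall = tp / (tp + fn)                             # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # 0.75

# The manual values agree with scikit-learn's implementations
assert precision == precision_score(y_true, y_pred)
assert recall == recall_score(y_true, y_pred)
assert abs(f1 - f1_score(y_true, y_pred)) < 1e-12
```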

ROC Curve and AUC:

- 01. Receiver Operating Characteristic (ROC) curve visualizes the trade-off between true positive rate (recall) and false positive rate (1-specificity) for different classification thresholds. 
  02. Area Under the ROC Curve (AUC) quantifies the overall performance of the model.

- Formula : 

  TPR (Recall)          = TP / (TP + FN)
  FPR (1 - Specificity) = FP / (FP + TN)
  AUC = area under the curve of TPR plotted against FPR across all thresholds

- Interpretation : 

  Higher AUC values indicate better discrimination. The closer the AUC is to 1.0, 
  the better the model's ability to distinguish between positive and negative 
  instances. An AUC of 0.5 suggests no discrimination (equivalent to random 
  guessing), while an AUC of 1.0 indicates perfect discrimination.

- Python Code :

from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
auc = roc_auc_score(y_true, y_prob)

# Plot ROC curve
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, label='ROC Curve (AUC = {:.2f})'.format(auc))
plt.plot([0, 1], [0, 1], linestyle='--', label='Random guess')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend()
plt.show()
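The thresholds returned by roc_curve can also be used to pick an operating point. One common heuristic (not from the text) is Youden's J statistic, which chooses the threshold maximizing TPR − FPR; a sketch with illustrative scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy labels and predicted probabilities (illustrative only);
# in practice y_prob would come from e.g. model.predict_proba(X)[:, 1]
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# Youden's J statistic: the threshold where TPR - FPR is largest
best = np.argmax(tpr - fpr)
print("Best threshold:", thresholds[best])  # 0.7
```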

Cross-Validation:
Cross-validation is a technique to assess a model's performance on multiple subsets of the data. It helps evaluate the model's generalization ability and detect overfitting.

- Interpretation :
High, consistent cross-validation scores indicate that the model performs well across different data subsets.
Low or highly variable scores may suggest that the model overfits the training data and does not generalize well to new data.

- Python Code :

from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, X, y, cv=5)  # Replace 'model', 'X', and 'y' with your model and data
print("Cross-Validation Scores:", scores)
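For classification, stratified folds and an explicit scoring metric are often worth specifying. A sketch on synthetic data (make_classification is just a stand-in for your own X and y):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic data as a stand-in for your own X, y
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

model = LogisticRegression(max_iter=1000)

# Stratified folds preserve the class ratio in every split;
# scoring="f1" evaluates with F1 instead of the default accuracy
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")

print("F1 per fold:", scores)
print("Mean F1:", scores.mean())
```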


I am an enthusiastic advocate for the transformative power of data in the fashion realm. Armed with a strong background in data science, I am committed to revolutionizing the industry by unlocking valuable insights, optimizing processes, and fostering a data-centric culture that propels fashion businesses into a successful and forward-thinking future. - Masud Rana, Certified Data Scientist, IABAC

© Data4Fashion 2023-2024
