How to Calculate Accuracy in Python

Accuracy is a common metric used in machine learning and data analysis to evaluate the performance of classification models. It measures the proportion of a model's predictions that are correct out of the total number of predictions made, and it is typically expressed as a percentage.
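In its simplest form, accuracy is just the number of correct predictions divided by the total number of predictions. The following minimal sketch (the label lists are made-up values, purely for illustration) computes it by hand:

# Hypothetical ground truth and predictions, for illustration only
true_labels = [1, 0, 1, 1, 0]
predicted_labels = [1, 0, 0, 1, 0]

# Count the predictions that match the ground truth
correct = sum(t == p for t, p in zip(true_labels, predicted_labels))

# Accuracy = correct predictions / total predictions
accuracy = correct / len(true_labels)
print(f"Accuracy: {accuracy * 100:.2f}%")  # 80.00%

In practice you rarely need to write this by hand, because scikit-learn provides a ready-made function, as shown below.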

Python Code to Calculate Accuracy

from sklearn.metrics import accuracy_score

# Ground truth labels
true_labels = [1, 0, 1, 1, 0, 1, 0, 1]

# Predicted labels from your model
predicted_labels = [1, 0, 1, 1, 1, 1, 0, 0]

# Calculate accuracy
accuracy = accuracy_score(true_labels, predicted_labels)
accuracy_percentage = accuracy * 100

print(f"Accuracy: {accuracy_percentage:.2f}%")

Here’s how the code works:

  1. Import the accuracy_score function from scikit-learn.
  2. Prepare your data, which includes the ground truth labels and predicted labels.
  3. Calculate accuracy using the accuracy_score function from scikit-learn; a small variation is sketched after this list.
  4. Print or use the accuracy value as needed.
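If you want the raw number of correct predictions rather than a fraction, accuracy_score also accepts a normalize argument. Here is a short sketch reusing the same labels as the main snippet:

from sklearn.metrics import accuracy_score

# Same example labels as in the snippet above
true_labels = [1, 0, 1, 1, 0, 1, 0, 1]
predicted_labels = [1, 0, 1, 1, 1, 1, 0, 0]

# With normalize=False, accuracy_score returns the count of correct predictions
correct_count = accuracy_score(true_labels, predicted_labels, normalize=False)
print(f"Correct predictions: {correct_count} out of {len(true_labels)}")  # 6 out of 8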

Result

Six of the eight predictions match the ground truth (the mismatches are at positions 5 and 8), so the calculated accuracy for the given data is 6/8 = 75.00%.

Accuracy is an essential metric for assessing the performance of your machine learning models: it gives you a quick, single-number summary of how well your model's predictions match the actual data.