Accuracy is a common metric used in machine learning and data analysis to evaluate the performance of classification models. It is the proportion of predictions that match the true labels out of all predictions made by the model, and it is typically expressed as a percentage.
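For example, if a model makes 8 predictions and 6 of them match the true labels, its accuracy is 6 / 8 = 0.75, or 75%. Here is a minimal sketch of that definition in plain Python (the helper name count_accuracy is illustrative, not from any library):

# Accuracy from its definition: correct predictions divided by total predictions.
# count_accuracy is an illustrative helper, not a library function.
def count_accuracy(true_labels, predicted_labels):
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

print(count_accuracy([1, 0, 1, 1, 0, 1, 0, 1], [1, 0, 1, 1, 1, 1, 0, 0]))  # 0.75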
Python Code to Calculate Accuracy
import numpy as np
from sklearn.metrics import accuracy_score

# Ground truth labels
true_labels = [1, 0, 1, 1, 0, 1, 0, 1]

# Predicted labels from your model
predicted_labels = [1, 0, 1, 1, 1, 1, 0, 0]

# Calculate accuracy
accuracy = accuracy_score(true_labels, predicted_labels)
accuracy_percentage = accuracy * 100

print(f"Accuracy: {accuracy_percentage:.2f}%")
Here’s how the code works:
- Import the necessary libraries, NumPy and scikit-learn (NumPy is not strictly needed for accuracy_score; an alternative that uses it directly is sketched after this list).
- Prepare your data, which includes the ground truth labels and predicted labels.
- Calculate accuracy using the accuracy_score function from scikit-learn.
- Print or use the accuracy value as needed.
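Because NumPy is imported in the example above, the same result can also be computed without accuracy_score. The sketch below assumes the same true_labels and predicted_labels lists, compares them element-wise, and takes the mean of the boolean matches:

import numpy as np

true_labels = [1, 0, 1, 1, 0, 1, 0, 1]
predicted_labels = [1, 0, 1, 1, 1, 1, 0, 0]

# Element-wise comparison yields a boolean array; its mean is the fraction of correct predictions.
accuracy = np.mean(np.array(true_labels) == np.array(predicted_labels))
print(f"Accuracy: {accuracy * 100:.2f}%")  # Accuracy: 75.00%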
Result
The calculated accuracy for the given data is 75.00%.
Accuracy is an essential metric for assessing the performance of your machine learning models: it gives a single number that summarizes how often your model's predictions match the actual labels.