How to build a Naive Bayes classifier using Python Scikit-learn?


Naive Bayes classification, which is based on Bayes' theorem, is the process of predicting the class of unseen data. Scikit-learn provides several Naive Bayes models, three of which are:

  • Gaussian Naive Bayes
  • Bernoulli Naive Bayes
  • Multinomial Naive Bayes

In this tutorial, we will learn how to build Gaussian Naive Bayes and Bernoulli Naive Bayes classifiers using Python Scikit-learn (Sklearn).
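
These models map directly to classes in scikit-learn's sklearn.naive_bayes module, and all of them follow the same estimator interface. A minimal import sketch (the class names are scikit-learn's own; the comments only describe the kind of features each variant is suited to):

# The Naive Bayes variants live in the sklearn.naive_bayes module
from sklearn.naive_bayes import GaussianNB     # continuous (real-valued) features
from sklearn.naive_bayes import BernoulliNB    # binary (present/absent) features
from sklearn.naive_bayes import MultinomialNB  # count features, e.g. word counts

# All three expose the same interface: fit(X, y) to train and predict(X) to classify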

Gaussian Naive Bayes Classifier

The Gaussian Naive Bayes classifier is based on a continuous distribution characterized by its mean and variance, i.e., the normal (Gaussian) distribution.
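
Concretely, during training the model estimates a mean and a variance for every feature within each class, and it scores new samples with the normal density built from those two numbers. A rough sketch of that per-feature likelihood (the helper name gaussian_likelihood and the sample values are made up for illustration; GaussianNB performs this computation internally):

import numpy as np

def gaussian_likelihood(x, mean, var):
   # Normal (Gaussian) density: fully described by its mean and variance
   return np.exp(-((x - mean) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

# How likely is the value 2.0 for a feature with mean 1.5 and variance 0.25?
print(gaussian_likelihood(2.0, mean=1.5, var=0.25))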

Let's walk through an example of building a Gaussian Naive Bayes classifier with the Scikit-Learn Python ML library.

In this example, we will use the Gaussian Naive Bayes model, which assumes that the data for each label is drawn from a simple normal distribution. The dataset we will use is the Breast Cancer Wisconsin Diagnostic Database.
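
Before building the model, it can help to take a quick look at what load_breast_cancer() returns; a short inspection sketch (the shapes and names in the comments come from scikit-learn's bundled copy of this dataset):

from sklearn.datasets import load_breast_cancer

DataSet = load_breast_cancer()
print(DataSet['data'].shape)         # (569, 30): 569 samples, 30 numeric features
print(DataSet['target_names'])       # ['malignant' 'benign']
print(DataSet['feature_names'][:3])  # a few of the 30 feature names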

Example

# Importing the necessary packages
import sklearn
from sklearn.datasets import load_breast_cancer

# Loading the dataset and organizing the data
DataSet = load_breast_cancer()
labelnames = DataSet['target_names']
labels = DataSet['target']
featurenames = DataSet['feature_names']
features = DataSet['data']

# Organizing the dataset into training and testing sets
# by using the train_test_split() function
from sklearn.model_selection import train_test_split
train, test, train_labels, test_labels = train_test_split(features, labels, test_size=0.30, random_state=300)

# Building the model by using the Gaussian Naive Bayes algorithm
from sklearn.naive_bayes import GaussianNB

# Initializing the model
NBclassifier = GaussianNB()

# Training the model
NBmodel = NBclassifier.fit(train, train_labels)

# Making predictions by using the predict() function
NBpreds = NBclassifier.predict(test)
print("The predictions are:\n", NBpreds[:15])

# Finding the accuracy of our Naive Bayes classifier
from sklearn.metrics import accuracy_score
print("Accuracy of our classifier is:", accuracy_score(test_labels, NBpreds) * 100)

Output

It will produce the following output:

The predictions are:
[0 0 1 1 0 0 0 1 1 1 1 1 0 1 0]
Accuracy of our classifier is: 93.56725146198829
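
Besides the hard class labels, the fitted GaussianNB model also exposes class membership probabilities through predict_proba() and the per-class feature means and variances it estimated (the theta_ and var_ attributes; var_ is named sigma_ in scikit-learn releases older than 1.0). A short sketch that reuses the NBmodel and test variables from the example above:

# Class membership probabilities for the first three test samples
print(NBmodel.predict_proba(test)[:3])

# Per-class feature means and variances estimated during fit()
print(NBmodel.theta_.shape)   # (n_classes, n_features)
print(NBmodel.var_.shape)     # (n_classes, n_features)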

Bernoulli Naive Bayes Classifier

The Bernoulli Naive Bayes classifier is a binary algorithm. It is useful when we need to check whether a feature is present or absent.
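
In scikit-learn, BernoulliNB implements this by binarizing every feature against a threshold, controlled by its binarize parameter (0.0 by default), so each value is reduced to present (1) or absent (0). A small sketch of that idea (the sample array is made up for illustration):

import numpy as np

X = np.array([[0.7, -1.2],
              [-0.3, 2.5]])

# What BernoulliNB(binarize=0.0) effectively works with:
# 1 where a value is greater than the threshold, 0 otherwise
X_binary = (X > 0.0).astype(int)
print(X_binary)   # [[1 0]
                  #  [0 1]]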

Let's walk through an example of building a Bernoulli Naive Bayes classifier with the Scikit-Learn Python ML library.

Example

In the example below, we will use the scikit-learn python library to implement the Bernoulli Naive Bayes algorithm on a dummy dataset.

# Importing libraries
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt

# Creating the classification dataset with two informative features
nb_samples = 300
X, Y = make_classification(n_samples=nb_samples, n_features=2, n_informative=2, n_redundant=0)

# Plotting the dataset
plt.figure(figsize=(7.50, 3.50))
plt.subplots_adjust(bottom=0.05, top=0.9, left=0.05, right=0.95)
plt.subplot(111)
plt.scatter(X[:, 0], X[:, 1], marker="o", c=Y, s=40, edgecolor="k")
plt.show()

Output

We will get a scatter plot of the generated dummy dataset (the two features, with points colored by class).

Example

Now, let's build a Bernoulli Naive Bayes classifier on this dummy dataset:

# Importing libraries
from sklearn.datasets import make_classification
import numpy as np

# Importing the Bernoulli Naive Bayes classifier from sklearn
from sklearn.naive_bayes import BernoulliNB

# Organizing the dataset into training and testing sets
# by using the train_test_split() function
from sklearn.model_selection import train_test_split

# Creating the classification dataset with two informative features
nb_samples = 300
X, Y = make_classification(n_samples=nb_samples, n_features=2, n_informative=2, n_redundant=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.30)

# Initializing the model
B_NaiveBayes = BernoulliNB(binarize=0.0)

# Training the model
B_NaiveBayes.fit(X_train, Y_train)

# Making predictions by using the predict() function
data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Preds = B_NaiveBayes.predict(data)
print(Preds)

Output

It will produce the following output (the exact labels may differ between runs because the dataset is randomly generated):

[0 0 1 1]
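
The example above only classifies four hand-picked points. To evaluate the model the same way as in the Gaussian example, you can score it on the held-out test split with accuracy_score; a sketch reusing the B_NaiveBayes, X_test, and Y_test variables from the code above (the exact number will vary from run to run because the dummy dataset is random):

# Measuring accuracy on the held-out test split
from sklearn.metrics import accuracy_score

Test_preds = B_NaiveBayes.predict(X_test)
print("Accuracy of our classifier is:", accuracy_score(Y_test, Test_preds) * 100)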

Updated on: 04-Oct-2022
