Multilabel classification (multi-label classification with SVM)



2023-08-23 16:28 | Source: compiled from the web

This example simulates a multi-label document classification problem. The dataset is generated randomly according to the following process:

- pick the number of labels: n ~ Poisson(n_labels)
- n times, choose a class c: c ~ Multinomial(theta)
- pick the document length: k ~ Poisson(length)
- k times, choose a word: w ~ Multinomial(theta_c)

In the process above, rejection sampling is used to make sure that n is more than 2 and that the document length is never zero. Likewise, classes that have already been chosen are rejected. Documents assigned to both classes are surrounded by two colored circles in the plot.
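The generative process above can be sketched as follows. This is a toy illustration only: `n_labels`, `length`, `theta`, and `theta_c` are assumed, simplified parameters (here uniform distributions), not the internals of scikit-learn's actual generator, and the rejection rule here simply caps n at the number of classes so that rejecting repeated classes always terminates.

```python
import numpy as np

rng = np.random.RandomState(0)

def sample_document(n_classes=2, n_labels=1, length=50, n_words=20):
    """Toy version of the generative process with rejection sampling."""
    # pick the number of labels: n ~ Poisson(n_labels),
    # rejecting draws that exceed the number of available classes
    n = rng.poisson(n_labels)
    while n > n_classes:
        n = rng.poisson(n_labels)
    # n times, choose a class c ~ Multinomial(theta), rejecting repeats
    theta = np.full(n_classes, 1.0 / n_classes)
    classes = set()
    while len(classes) < n:
        classes.add(int(rng.choice(n_classes, p=theta)))
    # pick the document length: k ~ Poisson(length), rejecting k == 0
    k = rng.poisson(length)
    while k == 0:
        k = rng.poisson(length)
    # k times, choose a word w ~ Multinomial(theta_c)
    theta_c = np.full(n_words, 1.0 / n_words)  # uniform for simplicity
    words = rng.choice(n_words, size=k, p=theta_c)
    return sorted(classes), words

labels, words = sample_document()
print(labels, len(words))
```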

The classification is performed by projecting onto the first two principal components found by PCA and CCA, for visualization purposes, and then using the sklearn.multiclass.OneVsRestClassifier metaclassifier with two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA performs a supervised one.
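As a minimal, self-contained illustration (independent of the full plotting script below), OneVsRestClassifier fits one binary linear SVC per column of the label indicator matrix and exposes them via its `estimators_` attribute:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True, random_state=1)

clf = OneVsRestClassifier(SVC(kernel='linear')).fit(X, Y)

# one binary SVC per class, each with its own hyperplane (coef_, intercept_)
print(len(clf.estimators_))      # 2
print(clf.predict(X[:3]).shape)  # (3, 2): a label indicator matrix
```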

Note: in the plot, "unlabeled samples" does not mean that we do not know the labels (as in semi-supervised learning); it simply means that those samples have no label at all.

 

```python
print(__doc__)

import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import LabelBinarizer
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA


def plot_hyperplane(clf, min_x, max_x, linestyle, label):
    # get the separating hyperplane
    w = clf.coef_[0]
    a = -w[0] / w[1]
    xx = np.linspace(min_x - 5, max_x + 5)  # make sure the line is long enough
    yy = a * xx - (clf.intercept_[0]) / w[1]
    plt.plot(xx, yy, linestyle, label=label)


def plot_subfigure(X, Y, subplot, title, transform):
    if transform == "pca":
        X = PCA(n_components=2).fit_transform(X)
    elif transform == "cca":
        X = CCA(n_components=2).fit(X, Y).transform(X)
    else:
        raise ValueError

    min_x = np.min(X[:, 0])
    max_x = np.max(X[:, 0])

    min_y = np.min(X[:, 1])
    max_y = np.max(X[:, 1])

    classif = OneVsRestClassifier(SVC(kernel='linear'))
    classif.fit(X, Y)

    plt.subplot(2, 2, subplot)
    plt.title(title)

    zero_class = np.where(Y[:, 0])
    one_class = np.where(Y[:, 1])
    plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))
    plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
                facecolors='none', linewidths=2, label='Class 1')
    plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
                facecolors='none', linewidths=2, label='Class 2')

    plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--',
                    'Boundary\nfor class 1')
    plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.',
                    'Boundary\nfor class 2')
    plt.xticks(())
    plt.yticks(())

    plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
    plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
    if subplot == 2:
        plt.xlabel('First principal component')
        plt.ylabel('Second principal component')
        plt.legend(loc="upper left")


plt.figure(figsize=(8, 6))

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True,
                                      random_state=1)

plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)

plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")

plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()
```
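For reference, the boundary that `plot_hyperplane` draws for each class comes from the corresponding binary SVC's `coef_` and `intercept_`. A standalone sketch (on a tiny, purely illustrative 2-D dataset) of recovering the line y = a*x + b from a fitted estimator the same way:

```python
import numpy as np
from sklearn.svm import SVC

# tiny separable 2-D problem, purely illustrative
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 2.]])
y = np.array([0, 0, 1, 1])
clf = SVC(kernel='linear').fit(X, y)

w = clf.coef_[0]
a = -w[0] / w[1]               # slope of the separating line
b = -clf.intercept_[0] / w[1]  # intercept
xx = np.linspace(-1, 2)
yy = a * xx + b                # points on the decision boundary

# every point on that line has a decision function value of ~0
print(np.allclose(clf.decision_function(np.c_[xx, yy]), 0))
```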

 


