Machine Learning in Action with Python


2023-03-24 01:19 | Source: compiled from the web | Views: 265

I have recently been reading Machine Learning in Action and wanted to re-implement some of its algorithms in my own way. The book already explains the KNN algorithm clearly, and KNN itself is fairly simple, so this post just presents my own code directly. Compared with the original, the code here makes a few small changes:

(1) The code is organized into a class, which is closer to real engineering practice.

(2) The data is replaced with a different dataset (here I use a small bank-loan test dataset).

(3) The data is read with pandas and then converted to an ndarray. (Just a change made out of personal interest.)
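To illustrate change (3), the pandas read and ndarray conversion can be sketched like this. Since bankloan.txt is not reproduced in this post, a small inline tab-separated sample with made-up values stands in for the real file:

```python
import io

import numpy as np
import pandas as pd

# Hypothetical stand-in for bankloan.txt: two feature columns and a label.
tsv = "25\t4000\t0\n47\t12000\t1\n31\t6500\t0\n52\t15000\t1\n"
read_data = pd.read_table(io.StringIO(tsv), sep='\t', header=None)

sample_num, feature_num = read_data.shape
# All columns except the last are features; the last column is the label.
data = np.array(read_data.iloc[:, :feature_num - 1])
labels = np.array(read_data.iloc[:, feature_num - 1])

print(data.shape)  # (4, 2)
print(labels)      # [0 1 0 1]
```

The same pattern applies unchanged when reading the real file with `pd.read_table(file, sep='\t', header=None)`.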

Here is the code:

# coding:utf-8
from __future__ import division
import operator

import numpy as np
import pandas as pd


class datingClassifier:
    # training-sample file, one test sample, training data, labels, and k
    def __init__(self, file, test_data, data, labels, k):
        self.file = file
        self.test_data = test_data
        self.data = data
        self.labels = labels
        self.k = k

    # # Split into samples and labels
    # def data2matrix(self):
    #     read_data = pd.read_table(self.file, sep='\t', header=None)
    #     sample_num, feature_num = read_data.shape
    #     data = read_data.iloc[:, :(feature_num - 1)]
    #     labels = read_data.iloc[:, feature_num - 1]
    #     return data, labels

    # Normalize the data
    def data2norm(self):
        data, labels = self.data, self.labels
        min_value = data.min(0)  # column-wise minimum, not row-wise
        max_value = data.max(0)  # column-wise maximum
        sample_num = data.shape[0]
        range_data = max_value - min_value
        # look up np.tile if unfamiliar; it simply repeats the array
        norm_data = (data - np.tile(min_value, (sample_num, 1))) / np.tile(range_data, (sample_num, 1))
        return norm_data, min_value, range_data, labels

    # The classifier
    def knnClassifier(self):
        norm_data, min_value, range_data, labels = self.data2norm()
        data_input = self.test_data
        data_size = norm_data.shape[0]
        norm_input = (data_input - min_value) / range_data
        diff = np.tile(norm_input, (data_size, 1)) - norm_data
        dist = ((diff ** 2).sum(axis=1)) ** 0.5
        sort_label_indices = np.argsort(dist)
        vote_count = {}
        for i in range(self.k):
            vote_label = labels[sort_label_indices[i]]
            vote_count[vote_label] = vote_count.get(vote_label, 0) + 1
        sort_vote_count = sorted(vote_count.items(), key=operator.itemgetter(1), reverse=True)
        return sort_vote_count[0][0]  # outer list is the sorted result; each item is (label, vote count)


if __name__ == '__main__':
    file = 'bankloan.txt'
    ratio = 0.1  # fraction of samples held out for testing
    k = 5
    read_data = pd.read_table(file, sep='\t', header=None, encoding='utf-8')
    print(read_data)
    sample_num, feature_num = read_data.shape
    test_num = int(sample_num * ratio)
    # the first test_num rows are the test set; the rest are training data
    data = np.array(read_data.iloc[test_num:, :(feature_num - 1)])
    labels = np.array(read_data.iloc[test_num:, feature_num - 1])
    test_data = np.array(read_data.iloc[:test_num, :(feature_num - 1)])
    test_label = np.array(read_data.iloc[:test_num, feature_num - 1])
    pre_value = []
    acc_num = 0
    for i in range(test_num):
        clf = datingClassifier(file, test_data[i, :], data, labels, k)
        result = clf.knnClassifier()
        pre_value.append(result)
        print('Predicted: %s, actual: %s\n' % (result, test_label[i]))
        if result == test_label[i]:
            acc_num += 1
    acc_rate = acc_num / test_num
    print('Accuracy: %s\n' % acc_rate)
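To make the classifier's behaviour easy to check without the bank-loan file, here is a minimal, self-contained sketch of the same steps the class performs — min-max normalization, Euclidean distance, and a majority vote among the k nearest neighbours — on four made-up training rows:

```python
import operator

import numpy as np

# Hypothetical toy data: two features (age, income) and a loan-approval label.
data = np.array([[25, 4000], [47, 12000], [31, 6500], [52, 15000]], dtype=float)
labels = np.array(['no', 'yes', 'no', 'yes'])
test = np.array([45, 11000], dtype=float)
k = 3

# Normalize every column to [0, 1] using the training min/range.
min_value = data.min(0)
range_data = data.max(0) - min_value
norm_data = (data - min_value) / range_data
norm_test = (test - min_value) / range_data

# Euclidean distance from the test point to every training row.
dist = (((norm_test - norm_data) ** 2).sum(axis=1)) ** 0.5
nearest = np.argsort(dist)[:k]

# Majority vote among the k nearest labels.
vote_count = {}
for idx in nearest:
    vote_count[labels[idx]] = vote_count.get(labels[idx], 0) + 1
winner = sorted(vote_count.items(), key=operator.itemgetter(1), reverse=True)[0][0]
print(winner)  # prints: yes
```

The test point (45, 11000) is closest to the two 'yes' rows after normalization, so the 3-nearest vote returns 'yes'. Note that normalizing matters here: without it, the income column would dominate the distance entirely.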

The test data for this article is available at the link below:

Link: http://pan.baidu.com/s/1nuK1PJb  Password: 9r2b

Everything else is basically the same as the original, except that I wrote this code without the book open (though I did look things up online), so it serves as a record of my own learning. If anything in the code is unclear, feel free to discuss it with me~




