A Neural Network Example in Python

This article trains a neural network on a simulated dataset. The topics covered include data preprocessing, batch normalization (BN) layers, the network model, gradient backpropagation, gradient checking, monitoring the training process, and random hyperparameter search, walking the reader through a complete machine-learning workflow.

1. Generating the Data

We generate a linearly non-separable dataset: an oscillating signal that grows over time. The code below first defines a plotting helper and a purely random data generator for exercising the pipeline; gen_toy_data, defined afterwards, produces the actual oscillating dataset.

import numpy as np
import matplotlib.pyplot as plt

# Plot the data, colored by class label
def show_data(X, labels):
    plt.scatter(X[:, 0], X[:, 1], c=labels, s=10, cmap=plt.cm.Spectral)
    plt.show()

def gen_random_data(dim, N_class, num_samp_per_class):
    num_examples = num_samp_per_class * N_class
    X = np.random.randn(num_examples, dim)  # data matrix (each row = single example)
    labels = np.random.randint(N_class, size=num_examples)  # class labels
    show_data(X, labels)
    return (X, labels)

dim = 2        # dimensionality
N_class = 4    # number of classes
layer_param = [dim, 10, 20, N_class]
(X, labels) = gen_random_data(dim, N_class, num_samp_per_class=20)

# Generate the toy dataset: an oscillating curve that grows over time
def gen_toy_data(dim, N_class, num_samp_per_class):
    num_examples = num_samp_per_class * N_class
    X = np.zeros((num_examples, dim))               # data matrix
    labels = np.zeros(num_examples, dtype='uint8')  # class labels
    for j in range(N_class):
        ix = range(num_samp_per_class * j, num_samp_per_class * (j + 1))
        x = np.linspace(-np.pi, np.pi, num_samp_per_class) + 5  # x axis
        y = np.sin(x + j * np.pi / (0.5 * N_class))
        y += 0.2 * np.sin(10 * x + j * np.pi / (0.5 * N_class))
        y += 0.25 * x + 10
        y += np.random.randn(num_samp_per_class) * 0.1          # noise
        X[ix] = np.c_[x, y]
        labels[ix] = j
    show_data(X, labels)
    return (X, labels)
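gen_toy_data is defined above but never called in this excerpt; a minimal invocation could look like the following (the per-class sample count of 100 is an assumed value, not from the original):

# Hypothetical usage: regenerate X and labels from the oscillating toy set.
# num_samp_per_class=100 is an assumed value.
(X, labels) = gen_toy_data(dim, N_class, num_samp_per_class=100)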

2. Data Preprocessing

Centering and normalization

def normalize(X):  # (x - mean) / std
    mean = np.mean(X, axis=0)
    X_norm = X - mean
    std = np.std(X_norm, axis=0)
    X_norm /= std + 10**(-5)  # small epsilon avoids division by zero
    return (X_norm, mean, std)

PCA whitening

def PCA_white(X):
    mean = np.mean(X, axis=0)
    X_norm = X - mean
    cov = np.dot(X_norm.T, X_norm) / X_norm.shape[0]  # covariance matrix
    U, S, V = np.linalg.svd(cov)
    X_norm = np.dot(X_norm, U)       # decorrelate
    X_norm /= np.sqrt(S + 10**(-5))  # scale to unit variance
    return (X_norm, mean, U, S)
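As a quick sanity check, not part of the original code: after whitening, the empirical covariance of the data should be close to the identity matrix:

# Hypothetical check: the covariance of whitened data should be ~I.
(X_white, mean, U, S) = PCA_white(X)
cov_white = np.dot(X_white.T, X_white) / X_white.shape[0]
print(np.round(cov_white, 2))  # expect values near the identity matrix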

The dataset is randomly split into training, validation, and test sets in a 2:1:1 ratio.

def split_data(X, labels):
    split1 = 2  # 1/2 of the data for training
    split2 = 4  # 1/4 each for validation and test
    num_examples = X.shape[0]
    shuffle_no = list(range(num_examples))
    np.random.shuffle(shuffle_no)
    X_train = X[shuffle_no[:num_examples//split1]]
    labels_train = labels[shuffle_no[:num_examples//split1]]
    X_val = X[shuffle_no[num_examples//split1:num_examples//split1 + num_examples//split2]]
    labels_val = labels[shuffle_no[num_examples//split1:num_examples//split1 + num_examples//split2]]
    X_test = X[shuffle_no[-(num_examples//split2):]]
    labels_test = labels[shuffle_no[-(num_examples//split2):]]
    return (X_train, labels_train, X_val, labels_val, X_test, labels_test)

Applying the preprocessing

def data_preprocess(X_train, X_val, X_test):
    # Fit the whitening transform on the training set only, then apply
    # the same mean/U/S to the validation and test sets
    (X_train_pca, mean, U, S) = PCA_white(X_train)
    X_val_pca = np.dot(X_val - mean, U)
    X_val_pca /= np.sqrt(S + 10**(-5))
    X_test_pca = np.dot(X_test - mean, U)
    X_test_pca /= np.sqrt(S + 10**(-5))
    return (X_train_pca, X_val_pca, X_test_pca)
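Putting the split and preprocessing steps together, an end-to-end call might look like this sketch (the wiring is my own, assuming X and labels come from the generators in section 1):

# Hypothetical wiring of the split and preprocessing steps
(X_train, labels_train, X_val, labels_val,
 X_test, labels_test) = split_data(X, labels)
(X_train_pca, X_val_pca, X_test_pca) = data_preprocess(X_train, X_val, X_test)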

3. The Network Model

The main hyperparameters are the network depth (the number of hidden layers) and the number of neurons in each layer. A list can store the per-layer neuron counts, including the input and output layers.

Initialization

# layer_param = [dim, 100, 100, N_class]
def initialize_parameters(layer_param):
    weights = []
    biases = []
    vweights = []  # momentum velocities for the weights
    vbiases = []   # momentum velocities for the biases
    for i in range(len(layer_param) - 1):
        in_depth = layer_param[i]
        out_depth = layer_param[i+1]
        std = np.sqrt(2/in_depth) * 0.5  # scaled He initialization for ReLU
        weights.append(std * np.random.randn(in_depth, out_depth))
        biases.append(np.zeros((1, out_depth)))
        vweights.append(np.zeros((in_depth, out_depth)))
        vbiases.append(np.zeros((1, out_depth)))
    return (weights, biases, vweights, vbiases)

Forward pass

def forward(X, layer_param, weights, biases):
    hiddens = [X]
    for i in range(len(layer_param) - 2):
        hiddens.append(np.maximum(0, np.dot(hiddens[i], weights[i]) + biases[i]))  # ReLU
    # The last layer has no nonlinear activation
    scores = np.dot(hiddens[-1], weights[-1]) + biases[-1]
    return (hiddens, scores)

Computing the softmax data loss

def data_loss_softmax(scores, labels):
    num_examples = scores.shape[0]
    exp_scores = np.exp(scores)
    exp_scores_sum = np.sum(exp_scores, axis=1)
    correct_probs = exp_scores[range(num_examples), labels] / exp_scores_sum
    correct_logprobs = -np.log(correct_probs)
    data_loss = np.sum(correct_logprobs) / num_examples
    return data_loss
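One caveat: np.exp can overflow when the scores are large. A common numerically stable variant, shown here as an assumed alternative rather than the article's code, subtracts each row's maximum before exponentiating, which leaves the softmax probabilities unchanged:

# Hypothetical numerically stable variant: softmax is invariant to
# subtracting a constant from each row of scores.
def data_loss_softmax_stable(scores, labels):
    num_examples = scores.shape[0]
    shifted = scores - np.max(scores, axis=1, keepdims=True)
    exp_scores = np.exp(shifted)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    correct_logprobs = -np.log(probs[range(num_examples), labels])
    return np.sum(correct_logprobs) / num_examples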

Computing the L2 regularization loss

def reg_L2_loss(weights, reg):
    reg_loss = 0
    for weight in weights:
        reg_loss += 0.5 * reg * np.sum(weight * weight)
    return reg_loss

Computing the gradient of the score matrix

def dscores_softmax(scores, labels):
    num_examples = scores.shape[0]
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    # Gradient of softmax cross-entropy w.r.t. scores: (p - y) / N
    dscores = probs
    dscores[range(num_examples), labels] -= 1
    dscores /= num_examples
    return dscores
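The introduction mentions gradient checking, which this excerpt does not reach. As a sketch of the idea (my own helper, not from the original), dscores_softmax can be verified against a centered-difference numerical gradient of data_loss_softmax:

# Hypothetical gradient check: perturb each score and compare the
# numerical gradient of data_loss_softmax with dscores_softmax.
def check_dscores(scores, labels, h=1e-5):
    analytic = dscores_softmax(scores.copy(), labels)
    numeric = np.zeros_like(scores)
    for idx in np.ndindex(scores.shape):
        old = scores[idx]
        scores[idx] = old + h
        loss_plus = data_loss_softmax(scores, labels)
        scores[idx] = old - h
        loss_minus = data_loss_softmax(scores, labels)
        scores[idx] = old
        numeric[idx] = (loss_plus - loss_minus) / (2 * h)
    # Relative error should be small (e.g. below 1e-6)
    return np.max(np.abs(numeric - analytic) /
                  np.maximum(1e-8, np.abs(numeric) + np.abs(analytic)))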

Accuracy prediction. This is almost identical to the forward function, except that the hidden activations need not be stored.

def predict(X, labels, layer_param, weights, biases):
    hidden = X
    for i in range(len(layer_param) - 2):
        hidden = np.maximum(0, np.dot(hidden, weights[i]) + biases[i])
    scores = np.dot(hidden, weights[-1]) + biases[-1]
    predicted_class = np.argmax(scores, axis=1)
    right_class = predicted_class == labels
    return np.mean(right_class)

Gradient backpropagation

def gradient_backprop(dscores, hiddens, weights, biases, reg):
    dweights = []
    dbiases = []
    dhidden = dscores
    for i in range(len(hiddens) - 1, -1, -1):
        dweights.append(np.dot(hiddens[i].T, dhidden) + reg * weights[i])
        dbiases.append(np.sum(dhidden, axis=0, keepdims=True))
        dhidden = np.dot(dhidden, weights[i].T)
        # Backprop through ReLU: zero the gradient where the activation was zero
        dhidden[hiddens[i] <= 0] = 0
    # Gradients were accumulated from the last layer back to the first; reverse them
    dweights.reverse()
    dbiases.reverse()
    return (dweights, dbiases)
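The functions above are enough to assemble a training loop. The following sketch is my own wiring, not the article's code: the learning rate, momentum coefficient, regularization strength, and epoch count are all assumed values, and the update rule is classic momentum applied to the velocity buffers returned by initialize_parameters:

# Hypothetical training loop using the functions above.
# lr, mu (momentum), reg, and num_epochs are assumed values.
def train(X, labels, layer_param, lr=0.01, mu=0.9, reg=1e-3, num_epochs=1000):
    (weights, biases, vweights, vbiases) = initialize_parameters(layer_param)
    for epoch in range(num_epochs):
        (hiddens, scores) = forward(X, layer_param, weights, biases)
        loss = data_loss_softmax(scores, labels) + reg_L2_loss(weights, reg)
        dscores = dscores_softmax(scores, labels)
        (dweights, dbiases) = gradient_backprop(dscores, hiddens, weights, biases, reg)
        for i in range(len(weights)):
            # Momentum update
            vweights[i] = mu * vweights[i] - lr * dweights[i]
            vbiases[i] = mu * vbiases[i] - lr * dbiases[i]
            weights[i] += vweights[i]
            biases[i] += vbiases[i]
        if epoch % 100 == 0:
            acc = predict(X, labels, layer_param, weights, biases)
            print(f'epoch {epoch}: loss {loss:.4f}, train acc {acc:.3f}')
    return (weights, biases)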

