Python: basic use of .npy files for custom datasets



2023-09-19 07:21 | Source: web aggregation

Reading and saving .npy files

In practice we often need to apply a series of preprocessing steps to a dataset, such as normalization. Reading the raw files and redoing the normalization and format conversion on every run wastes time, so we can build the dataset once, save it to .npy files, and reload it whenever needed.

import tensorflow as tf
from PIL import Image
import numpy as np
import os

train_path = './mnist_image_label/mnist_train_jpg_60000/'
train_txt = './mnist_image_label/mnist_train_jpg_60000.txt'
x_train_savepath = './mnist_image_label/mnist_x_train.npy'
y_train_savepath = './mnist_image_label/mnist_y_train.npy'

test_path = './mnist_image_label/mnist_test_jpg_10000/'
test_txt = './mnist_image_label/mnist_test_jpg_10000.txt'
x_test_savepath = './mnist_image_label/mnist_x_test.npy'
y_test_savepath = './mnist_image_label/mnist_y_test.npy'

def generateds(path, txt):
    f = open(txt, 'r')                    # open the txt file read-only
    contents = f.readlines()              # read all lines
    f.close()                             # close the txt file
    x, y_ = [], []                        # empty lists for features and labels
    for content in contents:              # process line by line
        value = content.split()           # split on whitespace: value[0] is the image file, value[1] the label
        img_path = path + value[0]        # build the full image path
        img = Image.open(img_path)        # load the image
        img = np.array(img.convert('L'))  # convert to an 8-bit grayscale np.array
        img = img / 255.                  # normalize to [0, 1] (preprocessing)
        x.append(img)                     # append the normalized image to x
        y_.append(value[1])               # append the label to y_
        print('loading : ' + content)     # progress message
    x = np.array(x)                       # convert to np.array
    y_ = np.array(y_)                     # convert to np.array
    y_ = y_.astype(np.int64)              # labels as 64-bit integers
    return x, y_                          # return features x and labels y_

if os.path.exists(x_train_savepath) and os.path.exists(y_train_savepath) and os.path.exists(
        x_test_savepath) and os.path.exists(y_test_savepath):
    print('-------------Load Datasets-----------------')
    x_train_save = np.load(x_train_savepath)
    y_train = np.load(y_train_savepath)
    x_test_save = np.load(x_test_savepath)
    y_test = np.load(y_test_savepath)
    x_train = np.reshape(x_train_save, (len(x_train_save), 28, 28))
    x_test = np.reshape(x_test_save, (len(x_test_save), 28, 28))
else:
    print('-------------Generate Datasets-----------------')
    x_train, y_train = generateds(train_path, train_txt)
    x_test, y_test = generateds(test_path, test_txt)

    print('-------------Save Datasets-----------------')
    x_train_save = np.reshape(x_train, (len(x_train), -1))
    x_test_save = np.reshape(x_test, (len(x_test), -1))
    np.save(x_train_savepath, x_train_save)
    np.save(y_train_savepath, y_train)
    np.save(x_test_savepath, x_test_save)
    np.save(y_test_savepath, y_test)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)
model.summary()
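The save/load branch above hinges on one pattern: flatten each image stack to 2-D before saving, then restore the (N, 28, 28) shape after loading. A minimal sketch of that round trip, using random data in place of the MNIST images and a hypothetical file name:

```python
import numpy as np

# Five random arrays standing in for 28x28 grayscale images.
x = np.random.rand(5, 28, 28)

# Flatten to (5, 784) for saving, just as x_train_save is built above.
x_save = np.reshape(x, (len(x), -1))
np.save("x_demo.npy", x_save)

# Load and restore the original (N, 28, 28) shape.
x_loaded = np.load("x_demo.npy")
x_restored = np.reshape(x_loaded, (len(x_loaded), 28, 28))

# np.save stores the float64 values exactly, so the round trip is lossless.
print(np.array_equal(x, x_restored))
```

Note that the reshape is only a convenience here; np.save can store the 3-D array directly, and np.load would return it with its shape intact.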

A minimal example:

import numpy as np

# .npy is NumPy's dedicated binary format
arr = np.array([[1, 2], [3, 4]])

# save to a .npy file
np.save("../data/arr.npy", arr)
print("save .npy done")

# load the .npy file back (the original snippet discarded the
# return value of np.load; it must be assigned to be used)
arr_loaded = np.load("../data/arr.npy")
print(arr_loaded)
print("load .npy done")
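When features and labels belong together, NumPy's np.savez can bundle several arrays into a single .npz archive instead of managing separate .npy files, as the MNIST script above does. A short sketch (the file name is illustrative):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)   # stand-in features
y = np.array([0, 1])             # stand-in labels

# store both arrays in one .npz archive, keyed by keyword name
np.savez("pair.npz", x=x, y=y)

# np.load on an .npz returns a dict-like NpzFile keyed by those names
data = np.load("pair.npz")
print(data["x"])
print(data["y"])
```

For large arrays, np.savez_compressed works the same way but compresses the archive.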

Reference:

https://blog.csdn.net/qq_33472146/article/details/90768258 


