This article walks through how to read custom data into a TensorFlow classifier project, with code examples. I hope it is a useful reference for anyone who needs it.
Reading Custom Data into a TensorFlow Classifier Project
After typing out the classifier demo from the official TensorFlow site, it ran fine and the results were decent. But sooner or later I need to train on my own data, so I tried to load a custom dataset. The demo only calls fashion_mnist.load_data() and never shows the actual reading process, so after digging through some material I am recording that process here.
First, the modules we will need:
import os

import numpy as np  # needed below for loadtxt and array handling
import keras
import matplotlib.pyplot as plt
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
For an image classifier project, first decide what resolution you will process the images at; this example uses 30 pixels:
IMG_SIZE_X = 30
IMG_SIZE_Y = 30
Next, decide where your images live:
image_path = r'D:\Projects\ImageClassifier\data\set'
path = r".\data"  # raw string, so the backslash is not treated as an escape
# You can also build image_path from a relative path:
# image_path = os.path.join(path, "set")
The structure under the data directory is as follows (each class gets its own subdirectory, named after a label):
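Judging from the loading loop below, each class subdirectory sits under set and matches one line of labels.txt; a sketch of the presumed layout:

data\
├── labels.txt
└── set\
    ├── 动漫\
    ├── 景致\
    ├── 玉人\
    ├── 物语\
    └── 樱花\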
The corresponding labels.txt contains:
动漫
景致
玉人
物语
樱花
Next, load labels.txt:
label_name = "labels.txt"
label_path = os.path.join(path, label_name)
class_names = np.loadtxt(label_path, dtype=str)
For simplicity, this just uses numpy's loadtxt to read the file directly.
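A quick sanity check on what loadtxt returned (a hypothetical snippet; the printed values assume the five labels above):

# class_names is a 1-D numpy array of strings, one entry per label
print(class_names)       # e.g. ['动漫' '景致' '玉人' '物语' '樱花']
print(len(class_names))  # e.g. 5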
Then comes the actual image processing; the explanations are in the inline comments:
re_load = False
re_build = False
# re_load = True
# re_build = True

data_name = "data.npz"
data_path = os.path.join(path, data_name)
model_name = "model.h5"
model_path = os.path.join(path, model_name)
count = 0
max_size = 2000000000  # per-class cap; effectively unlimited (defined in the full listing below)

# Check whether serialized data already exists. re_load is a switch that forces
# reprocessing; it is only for testing and can be removed.
if not os.path.exists(data_path) or re_load:
    labels = []
    images = []
    print('Handle images')
    # labels.txt corresponds one-to-one with the class subdirectories of the image
    # directory: each subdirectory's name is a label in labels.txt, so we can walk
    # class_names and join each entry onto the path before reading.
    for index, name in enumerate(class_names):
        # the joined path of this class's subdirectory
        classpath = os.path.join(image_path, name)
        # make sure it really is a directory
        if not os.path.isdir(classpath):
            continue
        # limit is only used while testing and can be removed
        limit = 0
        for image_name in os.listdir(classpath):
            if limit >= max_size:
                break
            # the joined path of the image about to be processed
            imagepath = os.path.join(classpath, image_name)
            count = count + 1
            limit = limit + 1
            # open the image with PIL
            img = Image.open(imagepath)
            # scale it down to the resolution chosen at the start
            img = img.resize((IMG_SIZE_X, IMG_SIZE_Y))
            # convert to grayscale; color channels would interfere with the result
            # and add computation
            img = img.convert("L")
            # convert to a numpy array
            img = np.array(img)
            # reshape (30, 30) to (1, 30, 30) (`channels_first`); (30, 30, 1)
            # (`channels_last`) would also work, but (1, 30, 30) makes previewing
            # the processed images easier later
            img = np.reshape(img, (1, IMG_SIZE_X, IMG_SIZE_Y))
            # build the labels in the same loop; each entry is the index of the
            # matching element of class_names
            labels.append([index])
            # collect into images, to be converted in one go at the end
            images.append(img)
            # progress output inside the loop, can be removed
            print("{} class: {} {} limit: {} {}"
                  .format(count, index + 1, class_names[index], limit, imagepath))
    # finally convert images and labels into numpy arrays in one pass
    npy_data = np.array(images)
    npy_labels = np.array(labels)
    # the data only needs to be processed once, so serialize the result with
    # numpy's own savez
    np.savez(data_path, x=npy_data, y=npy_labels)
    print("Save images by npz")
else:
    # if serialized data exists, read it directly for speed
    npy_data = np.load(data_path)["x"]
    npy_labels = np.load(data_path)["y"]
    print("Load images by npz")
image_data = npy_data
labels_data = npy_labels
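Because the (1, 30, 30) `channels_first` layout was chosen precisely to make previewing easy, a minimal preview sketch might look like this (the index 0 is arbitrary; it assumes the image_data and labels_data arrays built above):

# show the first processed image with its class name as the title
plt.imshow(image_data[0][0], cmap='gray')
plt.title(class_names[labels_data[0][0]])
plt.show()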
At this point the preprocessing of the raw data is complete; one last step gives us the same kind of result that fashion_mnist.load_data() returns in the demo. The code:
# the final step: split the raw data into training and test sets
train_images, test_images, train_labels, test_labels = \
    train_test_split(image_data, labels_data, test_size=0.2, random_state=6)
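If you want the exact calling convention of the demo, a small wrapper can return the same nested tuples as fashion_mnist.load_data(). This is a convenience sketch, not part of the original code:

def load_data(test_size=0.2, random_state=6):
    # split once, then return ((train_x, train_y), (test_x, test_y)),
    # mirroring the Keras dataset loaders
    tr_x, te_x, tr_y, te_y = train_test_split(
        image_data, labels_data, test_size=test_size, random_state=random_state)
    return (tr_x, tr_y), (te_x, te_y)

(train_images, train_labels), (test_images, test_labels) = load_data()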
Here is the snippet that prints the related info, for reference:
print("_________________________________________________________________") print("%-28s %-s" % ("Name", "Shape")) print("=================================================================") print("%-28s %-s" % ("Image Data", image_data.shape)) print("%-28s %-s" % ("Labels Data", labels_data.shape)) print("=================================================================") print('Split train and test data,p=%') print("_________________________________________________________________") print("%-28s %-s" % ("Name", "Shape")) print("=================================================================") print("%-28s %-s" % ("Train Images", train_images.shape)) print("%-28s %-s" % ("Test Images", test_images.shape)) print("%-28s %-s" % ("Train Labels", train_labels.shape)) print("%-28s %-s" % ("Test Labels", test_labels.shape)) print("=================================================================")
And don't forget to normalize afterwards:
print("Normalize images") train_images = train_images / 255.0 test_images = test_images / 255.0
Finally, the complete code for reading the custom data:
import os

import numpy as np  # needed for loadtxt and the array handling below
import keras
import matplotlib.pyplot as plt
from PIL import Image
from keras.layers import *
from keras.models import *
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Chinese font support
plt.rcParams['font.sans-serif'] = ['SimHei']  # show Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # show minus signs correctly

re_load = False
re_build = False
# re_load = True
# re_build = True

epochs = 50
batch_size = 5
count = 0
max_size = 2000000000

IMG_SIZE_X = 30
IMG_SIZE_Y = 30

np.random.seed(9277)

image_path = r'D:\Projects\ImageClassifier\data\set'
path = r".\data"
data_name = "data.npz"
data_path = os.path.join(path, data_name)
model_name = "model.h5"
model_path = os.path.join(path, model_name)
label_name = "labels.txt"
label_path = os.path.join(path, label_name)

class_names = np.loadtxt(label_path, dtype=str)
print('Load class names')

if not os.path.exists(data_path) or re_load:
    labels = []
    images = []
    print('Handle images')
    for index, name in enumerate(class_names):
        classpath = os.path.join(image_path, name)
        if not os.path.isdir(classpath):
            continue
        limit = 0
        for image_name in os.listdir(classpath):
            if limit >= max_size:
                break
            imagepath = os.path.join(classpath, image_name)
            count = count + 1
            limit = limit + 1
            img = Image.open(imagepath)
            img = img.resize((IMG_SIZE_X, IMG_SIZE_Y))
            img = img.convert("L")
            img = np.array(img)
            img = np.reshape(img, (1, IMG_SIZE_X, IMG_SIZE_Y))
            # img = skimage.io.imread(imagepath, as_grey=True)
            # if img.shape[2] != 3:
            #     print("{} shape is {}".format(image_name, img.shape))
            #     continue
            # data = transform.resize(img, (IMG_SIZE_X, IMG_SIZE_Y))
            labels.append([index])
            images.append(img)
            print("{} class: {} {} limit: {} {}"
                  .format(count, index + 1, class_names[index], limit, imagepath))
    npy_data = np.array(images)
    npy_labels = np.array(labels)
    np.savez(data_path, x=npy_data, y=npy_labels)
    print("Save images by npz")
else:
    npy_data = np.load(data_path)["x"]
    npy_labels = np.load(data_path)["y"]
    print("Load images by npz")

image_data = npy_data
labels_data = npy_labels

print("_________________________________________________________________")
print("%-28s %-s" % ("Name", "Shape"))
print("=================================================================")
print("%-28s %-s" % ("Image Data", image_data.shape))
print("%-28s %-s" % ("Labels Data", labels_data.shape))
print("=================================================================")

train_images, test_images, train_labels, test_labels = \
    train_test_split(image_data, labels_data, test_size=0.2, random_state=6)

print('Split train and test data, test_size=0.2')
print("_________________________________________________________________")
print("%-28s %-s" % ("Name", "Shape"))
print("=================================================================")
print("%-28s %-s" % ("Train Images", train_images.shape))
print("%-28s %-s" % ("Test Images", test_images.shape))
print("%-28s %-s" % ("Train Labels", train_labels.shape))
print("%-28s %-s" % ("Test Labels", test_labels.shape))
print("=================================================================")

# Normalization: scale the values into the 0-1 range before feeding them to the
# neural network. To do so, convert the image data from integers to floats and
# divide by 255.
# Be sure to preprocess the training set and the test set in the same way:
print("Normalize images")
train_images = train_images / 255.0
test_images = test_images / 255.0
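The listing imports keras.layers / keras.models and defines model_path, epochs, and batch_size, but stops before the model itself. Purely as an illustration of where the loaded data goes next, a minimal model in the spirit of the official fashion-MNIST demo might look like the following. This is a hypothetical sketch, not the author's actual model:

# flatten the (1, 30, 30) images, one hidden layer, softmax over the classes
model = Sequential([
    Flatten(input_shape=(1, IMG_SIZE_X, IMG_SIZE_Y)),
    Dense(128, activation='relu'),
    Dense(len(class_names), activation='softmax'),
])
model.compile(optimizer=Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=epochs, batch_size=batch_size)
model.save(model_path)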
That concludes this walkthrough of reading custom data into a TensorFlow classifier project.