Reading and Processing the Pascal VOC2012 Semantic Segmentation Dataset


Contents: Preface · Reading File Paths · Data Preprocessing · Custom Dataset Class · Complete Code

Preface

Pascal VOC2012 is an important dataset for semantic segmentation. This post walks through reading and processing it with PyTorch. Download and extract the dataset into the current folder to get a VOCdevkit/VOC2012 directory, which contains five subfolders:

Annotations, JPEGImages, SegmentationObject, ImageSets, SegmentationClass

Of these, only three are needed here: JPEGImages, SegmentationClass, and ImageSets. The ImageSets/Segmentation directory holds the text files that list the training and validation samples, while JPEGImages and SegmentationClass hold the input images and their labels, respectively. The labels are themselves images, each the same size as the input image it annotates; pixels with the same color in a label belong to the same semantic class.
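For reference, the extracted directory layout (annotated with what each part is for) looks roughly like this:

```
VOCdevkit/VOC2012/
├── Annotations/           # XML annotations for detection (not used here)
├── ImageSets/
│   └── Segmentation/      # train.txt / val.txt listing sample names
├── JPEGImages/            # input images (*.jpg)
├── SegmentationClass/     # class-level label images (*.png)
└── SegmentationObject/    # instance-level labels (not used here)
```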

Reading File Paths

First, define a read_file_list function that reads the file paths of all input images and labels.

```python
def read_file_list(root, is_train=True):
    txt_fname = root + '/ImageSets/Segmentation/' + (
        'train.txt' if is_train else 'val.txt')
    with open(txt_fname, 'r') as f:
        filenames = f.read().split()
    images = [os.path.join(root, 'JPEGImages', i + '.jpg') for i in filenames]
    labels = [os.path.join(root, 'SegmentationClass', i + '.png') for i in filenames]
    return images, labels  # lists of file paths
```

Here root is the root directory of the VOC2012 folder. Pick the first image and its label and display them:

```python
voc_root = r"D:\Workspace\Datasets\data\VOCdevkit\VOC2012"
images, labels = read_file_list(voc_root, True)
img = Image.open(images[0]).convert('RGB')
label = Image.open(labels[0]).convert('RGB')

import matplotlib.pyplot as plt
plt.subplot(121), plt.imshow(img)
plt.subplot(122), plt.imshow(label)
plt.show()
```


Data Preprocessing

To train in batches, every image must be brought to a fixed input size. Semantic segmentation demands pixel-level accuracy, and the labels are themselves images, so directly resizing them would introduce interpolation errors; instead, both the image and its label are randomly cropped to a fixed size:

```python
def voc_rand_crop(image, label, height, width):
    """Randomly crop an image (PIL Image) and its label (PIL Image) with the same window."""
    i, j, h, w = transforms.RandomCrop.get_params(
        image, output_size=(height, width))
    image = transforms.functional.crop(image, i, j, h, w)
    label = transforms.functional.crop(label, i, j, h, w)
    return image, label
```
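Note that transforms.RandomCrop.get_params samples a single crop window (i, j, h, w), which transforms.functional.crop then applies to both the image and the label; applying two independent RandomCrop transforms instead would crop the pair at different positions and misalign them.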

Visualize the result:

```python
img, label = voc_rand_crop(img, label, 200, 300)
plt.subplot(121), plt.imshow(img)
plt.subplot(122), plt.imshow(label)
plt.show()
```

Next, process the label images by converting each into a matrix of class indices. First, list the RGB value of each class that appears in the labels; there are 21 classes in total:

```python
VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
                [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
                [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
                [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
                [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
                [0, 64, 128]]

VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
               'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
               'diningtable', 'dog', 'horse', 'motorbike', 'person',
               'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor']
```

Next, build a lookup table that maps the RGB value of each pixel in a label image one-to-one to its class index:

```python
colormap2label = torch.zeros(256 ** 3, dtype=torch.uint8)
for i, colormap in enumerate(VOC_COLORMAP):
    colormap2label[(colormap[0] * 256 + colormap[1]) * 256 + colormap[2]] = i

def voc_label_indices(colormap):
    """Convert a colormap (PIL Image) to a label-index map (uint8 tensor)."""
    colormap = np.array(colormap).astype('int32')
    idx = ((colormap[:, :, 0] * 256 + colormap[:, :, 1]) * 256
           + colormap[:, :, 2])
    return colormap2label[idx]
```
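A quick sanity check of the mapping (an illustrative snippet using the names defined above):

```python
# 'aeroplane' pixels are painted RGB (128, 0, 0) in the labels,
# which encodes to (128 * 256 + 0) * 256 + 0.
idx = (128 * 256 + 0) * 256 + 0
print(colormap2label[idx].item())               # 1
print(VOC_CLASSES[colormap2label[idx].item()])  # aeroplane
```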

Custom Dataset Class

Finally, define a custom dataset class based on torch.utils.data.Dataset. Its __getitem__ method returns the input image at index idx together with the corresponding label matrix. Since some images in the dataset may be smaller than the output size specified for the random crop, those samples are removed by a custom filter method. In addition, the three RGB channels of each input image are standardized (here with the ImageNet mean and standard deviation).

```python
class VOCSegDataset(torch.utils.data.Dataset):
    def __init__(self, is_train, crop_size, voc_root):
        """crop_size: (h, w)"""
        self.rgb_mean = np.array([0.485, 0.456, 0.406])
        self.rgb_std = np.array([0.229, 0.224, 0.225])
        self.tsf = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=self.rgb_mean, std=self.rgb_std)
        ])
        self.crop_size = crop_size  # (h, w)
        images, labels = read_file_list(root=voc_root, is_train=is_train)
        self.images = self.filter(images)  # list of image paths
        self.labels = self.filter(labels)  # list of label paths
        print('Read ' + str(len(self.images)) + ' valid examples')

    def filter(self, imgs):
        # Drop samples smaller than crop_size.
        return [img for img in imgs if (
            Image.open(img).size[1] >= self.crop_size[0] and
            Image.open(img).size[0] >= self.crop_size[1])]

    def __getitem__(self, idx):
        image = Image.open(self.images[idx]).convert('RGB')
        label = Image.open(self.labels[idx]).convert('RGB')
        image, label = voc_rand_crop(image, label, *self.crop_size)
        image = self.tsf(image)
        label = voc_label_indices(label)
        return image, label  # float32 tensor, uint8 tensor

    def __len__(self):
        return len(self.images)
```
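Note that filter opens every file just to query its size; since PIL loads images lazily, reading the size only touches the file header, so this pass stays cheap even over the full dataset.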

Let's verify:

```python
voc_root = r"D:\Workspace\Datasets\data\VOCdevkit\VOC2012"
crop_size = (320, 480)
voc_train = VOCSegDataset(True, crop_size, voc_root)
img, label = voc_train[0]
print(img.dtype, label.dtype)
```

Output:

```
Read 1114 valid examples
torch.float32 torch.uint8
```

Complete Code

```python
import os
from PIL import Image
import numpy as np
import torch
from torch.utils.data import Dataset
from torchvision import transforms

VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
                [0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
                [64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
                [64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
                [0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
                [0, 64, 128]]

VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
               'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
               'diningtable', 'dog', 'horse', 'motorbike', 'person',
               'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor']

# Lookup table: RGB value encoded as (R * 256 + G) * 256 + B -> class index.
colormap2label = torch.zeros(256 ** 3, dtype=torch.uint8)
for i, colormap in enumerate(VOC_COLORMAP):
    colormap2label[(colormap[0] * 256 + colormap[1]) * 256 + colormap[2]] = i


def voc_label_indices(colormap):
    """Convert a colormap (PIL Image) to a label-index map (uint8 tensor)."""
    colormap = np.array(colormap).astype('int32')
    idx = ((colormap[:, :, 0] * 256 + colormap[:, :, 1]) * 256
           + colormap[:, :, 2])
    return colormap2label[idx]


def read_file_list(root, is_train=True):
    """Read the file paths of all input images and labels."""
    txt_fname = root + '/ImageSets/Segmentation/' + (
        'train.txt' if is_train else 'val.txt')
    with open(txt_fname, 'r') as f:
        filenames = f.read().split()
    images = [os.path.join(root, 'JPEGImages', i + '.jpg') for i in filenames]
    labels = [os.path.join(root, 'SegmentationClass', i + '.png') for i in filenames]
    return images, labels  # lists of file paths


def voc_rand_crop(image, label, height, width):
    """Randomly crop an image (PIL Image) and its label (PIL Image) with the same window."""
    i, j, h, w = transforms.RandomCrop.get_params(
        image, output_size=(height, width))
    image = transforms.functional.crop(image, i, j, h, w)
    label = transforms.functional.crop(label, i, j, h, w)
    return image, label


class VOCSegDataset(Dataset):
    def __init__(self, is_train, crop_size, voc_root):
        """crop_size: (h, w)"""
        self.rgb_mean = np.array([0.485, 0.456, 0.406])
        self.rgb_std = np.array([0.229, 0.224, 0.225])
        self.tsf = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=self.rgb_mean, std=self.rgb_std)
        ])
        self.crop_size = crop_size  # (h, w)
        images, labels = read_file_list(root=voc_root, is_train=is_train)
        self.images = self.filter(images)  # list of image paths
        self.labels = self.filter(labels)  # list of label paths
        print('Read ' + str(len(self.images)) + ' valid examples')

    def filter(self, imgs):
        # Drop samples smaller than crop_size.
        return [img for img in imgs if (
            Image.open(img).size[1] >= self.crop_size[0] and
            Image.open(img).size[0] >= self.crop_size[1])]

    def __getitem__(self, idx):
        image = Image.open(self.images[idx]).convert('RGB')
        label = Image.open(self.labels[idx]).convert('RGB')
        image, label = voc_rand_crop(image, label, *self.crop_size)
        image = self.tsf(image)
        label = voc_label_indices(label)
        return image, label  # float32 tensor, uint8 tensor

    def __len__(self):
        return len(self.images)
```
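Finally, a minimal usage sketch (not part of the original post; the batch size and DataLoader options are illustrative) showing how the dataset plugs into batch training:

```python
# Illustrative: wrap the dataset in a DataLoader for batched training.
batch_size = 64
train_iter = torch.utils.data.DataLoader(
    voc_train, batch_size=batch_size, shuffle=True, drop_last=True)

for X, Y in train_iter:
    print(X.shape)  # torch.Size([64, 3, 320, 480])
    print(Y.shape)  # torch.Size([64, 320, 480])
    # For nn.CrossEntropyLoss, convert the labels with Y.long() first.
    break
```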

Reference: Section 9.9, 语义分割和数据集 (Semantic Segmentation and Datasets), Dive-into-DL-PyTorch.


