[PyTorch] DataLoader usage tutorial

Using DataLoader in CV

Import the required packages:

import os

import torch
from torch.utils.data import DataLoader, Sampler
from torchvision import datasets, transforms

transforms defines how the images are preprocessed:

# image_size is the target input size and is assumed to be defined earlier (e.g. 224)
data_transform = {
    'train': transforms.Compose([
        # crop a random region and resize it slightly larger than the target size
        transforms.RandomResizedCrop(int(image_size * 1.2)),
        transforms.RandomAffine(15),   # random affine transform within +/-15 degrees
        transforms.RandomVerticalFlip(),
        transforms.RandomRotation(10),
        transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
        transforms.RandomGrayscale(),
        # TenCrop returns a tuple of 10 crops (4 corners + center, plus their flips),
        # so ToTensor/Normalize must be applied per crop via Lambda
        transforms.TenCrop(image_size),
        transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(crop) for crop in crops])),
        transforms.Lambda(lambda crops: torch.stack([transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(crop) for crop in crops])),
        # alternatives: transforms.FiveCrop(image_size) with the same Lambda pattern,
        # or drop the multi-crop and end with transforms.ToTensor() + transforms.Normalize(...)
    ]),
    'val': transforms.Compose([
        transforms.Resize(image_size),
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'test': transforms.Compose([
        transforms.Resize(image_size),
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}
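Because the train pipeline ends in TenCrop, applying it to one image yields a stack of 10 crops rather than a single tensor. A quick sanity check (the file name here is made up):

from PIL import Image

img = Image.open('example.jpg').convert('RGB')   # hypothetical image file
out = data_transform['train'](img)
print(out.shape)   # torch.Size([10, 3, image_size, image_size])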

Load the image data with datasets.ImageFolder:

# rootpath contains one subdirectory per split, each holding one folder per class
image_datasets = {name: datasets.ImageFolder(os.path.join(rootpath, name), data_transform[name])
                  for name in ['train', 'val', 'test']}
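ImageFolder infers class labels from subdirectory names, so rootpath is expected to look roughly like this (class and file names are illustrative):

rootpath/
    train/
        cat/001.jpg ...
        dog/002.jpg ...
    val/
        cat/...  dog/...
    test/
        cat/...  dog/...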

Create the DataLoaders:

dataloaders = {name: torch.utils.data.DataLoader(image_datasets[name], batch_size=batch_size, shuffle=True)
               for name in ['train', 'val']}
testDataloader = torch.utils.data.DataLoader(image_datasets['test'], batch_size=1, shuffle=False)

Usage: each iteration reads one batch of batch_size samples.

for index, (inputs, labels) in enumerate(dataloaders['train']):
    ...  # inputs and labels come from ImageFolder's (image, class_index) pairs
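Because the train transform ends in TenCrop, each batch from dataloaders['train'] has shape [batch_size, 10, 3, image_size, image_size]. The usual way to feed this to a model, following the torchvision TenCrop docs (model is any image classifier you have defined):

for index, (inputs, labels) in enumerate(dataloaders['train']):
    bs, ncrops, c, h, w = inputs.size()
    outputs = model(inputs.view(-1, c, h, w))       # fold the 10 crops into the batch dim
    outputs = outputs.view(bs, ncrops, -1).mean(1)  # average the predictions over crops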

Using DataLoader in NLP

Parameters of torch.utils.data.DataLoader (a construction sketch follows the list):

  • dataset (Dataset) – dataset from which to load the data.
  • batch_size (int, optional) – how many samples per batch to load (default: 1).
  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).
  • sampler (Sampler, optional) – defines the strategy to draw samples from the dataset. If specified, shuffle must be False.
  • batch_sampler (Sampler, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
  • num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch.
  • pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.
  • drop_last (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)
  • timeout (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)
  • worker_init_fn (callable, optional) – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
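A sketch pulling several of these parameters together (dataset stands for any torch.utils.data.Dataset; the per-worker seeding function is illustrative):

import numpy as np

def seed_worker(worker_id):
    # give each loading subprocess its own NumPy seed (illustrative)
    np.random.seed(worker_id)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,          # reshuffle at every epoch
    num_workers=4,         # load with 4 subprocesses
    pin_memory=True,       # copy batches into CUDA pinned memory
    drop_last=True,        # drop the final incomplete batch
    worker_init_fn=seed_worker,
)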

The two things you need to build yourself

DataLoader's processing logic: it first fetches individual samples through the Dataset's __getitem__ method, groups them into a batch, and then applies the function specified by collate_fn to that batch, e.g. for padding.

For NLP you mainly need to implement two things. The first is a dataset: it must inherit from torch.utils.data.Dataset and implement two methods, __len__, which returns the size of the whole dataset, and __getitem__, which returns a single item from it.

import linecache

class Dataset(torch.utils.data.Dataset):
    def __init__(self, filepath=None, dataLen=None):
        self.file = filepath      # path to a tab-separated text file, one sample per line
        self.dataLen = dataLen    # number of lines (samples) in the file

    def __getitem__(self, index):
        # linecache lines are 1-indexed; strip the trailing newline before splitting
        A, B, path, hop = linecache.getline(self.file, index + 1).strip().split('\t')
        return A, B, path.split(' '), int(hop)

    def __len__(self):
        return self.dataLen
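This assumes the data file has one sample per line with four tab-separated fields: A, B, a space-separated path, and an integer hop count. An illustrative line (contents made up, <TAB> marks a tab character):

entity_a<TAB>entity_b<TAB>n1 n2 n3<TAB>2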

Because DataLoader takes a batch_size parameter, we can control how samples are assembled by passing collate_fn=myfunction: after batch_size items have been fetched through the Dataset's __getitem__, they are handed to the function specified by collate_fn as a single list.

def myfunction(data):
    # data is a list of batch_size tuples, each one (A, B, path, hop)
    # as returned by Dataset.__getitem__
    A, B, path, hop = zip(*data)
    return A, B, path, hop

for index, item in enumerate(dataloader):   # dataloader built with collate_fn=myfunction, see below
    A, B, path, hop = item

Here zip(*data) transposes the list of per-sample tuples into four field-wise tuples, and the loop body unpacks them again.
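Putting the pieces together, a minimal construction sketch (the file name and sizes are hypothetical):

dataset = Dataset(filepath='pairs.tsv', dataLen=10000)   # hypothetical file and line count
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32,
                                         shuffle=True, collate_fn=myfunction)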

In NLP tasks, padding is usually done inside the function specified by collate_fn: sentences of different lengths within the same batch are padded to the same length.

def myfunction(data):
    src, tgt, original_src, original_tgt = zip(*data)

    # pad sources to the longest sequence in the batch;
    # note that each source is stored reversed (s[end-1::-1])
    src_len = [len(s) for s in src]
    src_pad = torch.zeros(len(src), max(src_len)).long()
    for i, s in enumerate(src):
        end = src_len[i]
        src_pad[i, :end] = torch.LongTensor(s[end-1::-1])

    # pad targets, keeping their original order
    tgt_len = [len(s) for s in tgt]
    tgt_pad = torch.zeros(len(tgt), max(tgt_len)).long()
    for i, s in enumerate(tgt):
        end = tgt_len[i]
        tgt_pad[i, :end] = torch.LongTensor(s)[:end]

    return src_pad, tgt_pad, \
           torch.LongTensor(src_len), torch.LongTensor(tgt_len), \
           original_src, original_tgt
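A toy run to make the padding concrete (the token ids are made up):

batch = [([1, 2, 3], [4, 5],     [1, 2, 3], [4, 5]),
         ([6, 7],    [8, 9, 10], [6, 7],    [8, 9, 10])]
src_pad, tgt_pad, src_len, tgt_len, _, _ = myfunction(batch)
# src_pad: tensor([[3, 2, 1],      <- sources come out reversed and zero-padded
#                  [7, 6, 0]])
# tgt_pad: tensor([[4, 5, 0],
#                  [8, 9, 10]])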
