This article describes how to use PyTorch + Visdom to train a CNN on a self-built image dataset. It is shared here as a reference for anyone who needs it.
Environment
System: Win10
CPU: i7-6700HQ
GPU: GTX 965M
Python: 3.6
PyTorch: 0.3
Data download
The data comes from Sasank Chilamkurthy's tutorial; data: download link.
After downloading, extract it into the project root directory:
The dataset is used for classifying ants and bees, with roughly 120 training images and 75 validation images per class.
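For reference, ImageFolder expects one sub-directory per class, so after extraction the layout should look roughly like this (inferred from the paths and class names used in the code below, not spelled out in the original):

hymenoptera_data/
    train/
        ants/   (jpg images)
        bees/   (jpg images)
    val/
        ants/   (jpg images)
        bees/   (jpg images)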
Data import
The torchvision.datasets.ImageFolder(root, transforms) module can load the images and convert them to tensors.
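The original snippets omit their imports and the Visdom client setup; the following is a minimal sketch of what they appear to assume (a Visdom server must already be running, e.g. started with python -m visdom.server):

import os
import numpy as np
import torch
import torchvision
import visdom
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# connect to the running Visdom server
viz = visdom.Visdom()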
First define the transforms:
data_transforms = {
    'train': transforms.Compose([
        # randomly crop to 224x224 to unify the image size
        transforms.RandomResizedCrop(224),
        # random horizontal flip
        transforms.RandomHorizontalFlip(),
        # ToTensor maps (0, 255) >> (0, 1); Normalize: channel = (channel - mean) / std
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        # resize to unify the image size
        transforms.Resize(256),
        # crop at the center
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
}
Import and load the data:
data_dir = './hymenoptera_data'
# build the datasets with the transforms defined above
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
# load the data
data_loaders = {x: DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True) for x in ['train', 'val']}
data_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
print(data_sizes, class_names)
{'train': 244, 'val': 153} ['ants', 'bees']
Training set: 244 images; validation set: 153 images.
Let's visualize a few of the images. Since Visdom supports tensor input, there is no need to convert to numpy; we can work with tensors directly:
inputs, classes = next(iter(data_loaders['val']))
# stitch a batch of images into one grid image
out = torchvision.utils.make_grid(inputs)
# move the channel dimension to the end so it can be de-normalized per channel
inp = torch.transpose(out, 0, 2)
mean = torch.FloatTensor([0.485, 0.456, 0.406])
std = torch.FloatTensor([0.229, 0.224, 0.225])
# undo the Normalize transform: channel = channel * std + mean
inp = std * inp + mean
inp = torch.transpose(inp, 0, 2)
viz.images(inp)
Creating the CNN
The net is adapted from the one used for CIFAR-10 in the previous article, with the dimensions adjusted:
class CNN(nn.Module):
    def __init__(self, in_dim, n_class):
        super(CNN, self).__init__()
        self.cnn = nn.Sequential(
            nn.BatchNorm2d(in_dim),
            nn.ReLU(True),
            nn.Conv2d(in_dim, 16, 7),     # 224 >> 218
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2),           # 218 >> 109
            nn.ReLU(True),
            nn.Conv2d(16, 32, 5),         # 109 >> 105
            nn.BatchNorm2d(32),
            nn.ReLU(True),
            nn.Conv2d(32, 64, 5),         # 105 >> 101
            nn.BatchNorm2d(64),
            nn.ReLU(True),
            nn.Conv2d(64, 64, 3, 1, 1),   # 101 (padding keeps the size)
            nn.BatchNorm2d(64),
            nn.ReLU(True),
            nn.MaxPool2d(2, 2),           # 101 >> 50
            nn.Conv2d(64, 128, 3, 1, 1),  # 50 (padding keeps the size)
            nn.BatchNorm2d(128),
            nn.ReLU(True),
            nn.MaxPool2d(3),              # 50 >> 16
        )
        self.fc = nn.Sequential(
            nn.Linear(128 * 16 * 16, 120),
            nn.BatchNorm1d(120),
            nn.ReLU(True),
            nn.Linear(120, n_class))

    def forward(self, x):
        out = self.cnn(x)
        out = self.fc(out.view(-1, 128 * 16 * 16))
        return out

# input: 3 RGB channels, output: 2 classes
model = CNN(3, 2)
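As a quick sanity check on the 128*16*16 flatten size used in fc, one can push a dummy batch through the model (a sketch not present in the original article; it assumes the imports listed earlier and PyTorch 0.3's Variable API):

from torch.autograd import Variable

# dummy batch of four 224x224 RGB images
dummy = Variable(torch.randn(4, 3, 224, 224))
print(model(dummy).size())  # expected: torch.Size([4, 2])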
Loss and optimization functions:
# create a Visdom line plot that will later hold the training curves
line = viz.line(Y=np.arange(10))
loss_f = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
# decay the learning rate by a factor of 0.1 every 7 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
Parameters:
batch_size = 4
lr = 0.001
epochs = 10
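The article only shows the training logs, not the loop that produced them. Below is a hedged sketch of one way to reproduce that log format with the objects defined above (PyTorch 0.3 style, using Variable and loss.data[0]; timing and the Visdom curve updates are omitted):

from torch.autograd import Variable

best_acc = 0.0
for epoch in range(1, epochs + 1):
    scheduler.step()  # adjust the learning rate every 7 epochs
    stats = {}
    for phase in ['train', 'val']:
        model.train(phase == 'train')
        running_loss, running_correct = 0.0, 0
        for inputs, labels in data_loaders[phase]:
            inputs, labels = Variable(inputs), Variable(labels)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = loss_f(outputs, labels)
            if phase == 'train':
                loss.backward()
                optimizer.step()
            running_loss += loss.data[0] * inputs.size(0)
            running_correct += (outputs.data.max(1)[1] == labels.data).sum()
        stats[phase + '_loss'] = running_loss / data_sizes[phase]
        stats[phase + '_acc'] = running_correct / data_sizes[phase]
    best_acc = max(best_acc, stats['val_acc'])
    print('[%d/%d] train_loss:%.3f|train_acc:%.3f|test_loss:%.3f|test_acc:%.3f'
          % (epoch, epochs, stats['train_loss'], stats['train_acc'],
             stats['val_loss'], stats['val_acc']))
print('best val acc: %f' % best_acc)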
Run 10 epochs and see:
[9/10] train_loss:0.650|train_acc:0.639|test_loss:0.621|test_acc:0.706
[10/10] train_loss:0.645|train_acc:0.627|test_loss:0.654|test_acc:0.686
training complete in 1m 16s
best val acc: 0.712418
Run 20 epochs and see:
[19/20] train_loss:0.592|train_acc:0.701|test_loss:0.563|test_acc:0.712
[20/20] train_loss:0.564|train_acc:0.721|test_loss:0.571|test_acc:0.706
training complete in 2m 30s
best val acc: 0.745098
The accuracy is rather low: only 74.5%.
Let's use the resnet18 from torchvision.models and run 10 epochs:
# resnet18 without pre-trained weights
model = torchvision.models.resnet18(pretrained=False)
num_ftrs = model.fc.in_features
# replace the final fully-connected layer so it outputs 2 classes
model.fc = nn.Linear(num_ftrs, 2)
[9/10] train_loss:0.621|train_acc:0.652|test_loss:0.588|test_acc:0.667
[10/10] train_loss:0.610|train_acc:0.680|test_loss:0.561|test_acc:0.667
training complete in 1m 24s
best val acc: 0.686275
The result is also mediocre. To get a well-performing model in a short time, we can download a pre-trained state and train on top of it:
# resnet18 with pre-trained weights
model = torchvision.models.resnet18(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
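A detail the article leaves implicit: the optimizer and scheduler are bound to a specific model's parameters, so after replacing the model they should be recreated (a sketch, simply reusing the settings shown earlier):

optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)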
[9/10] train_loss:0.308|train_acc:0.877|test_loss:0.160|test_acc:0.941
[10/10] train_loss:0.267|train_acc:0.885|test_loss:0.148|test_acc:0.954
training complete in 1m 25s
best val acc: 0.954248
With just 10 epochs we directly get about 95% accuracy.
That concludes this walkthrough of using PyTorch + Visdom to train a CNN on a self-built image dataset.