
pytorch.t (PyTorch documentation)

This section follows Xiaotudui's introductory PyTorch video tutorial.
Using and modifying existing models: the PyTorch framework provides many ready-made models. In particular, the torchvision.models package contains many vision (image) models, as shown in the figure below:
Let's take vgg16 as an example of how to use and modify an existing model:
When pretrained is True, the model returned is pretrained on ImageNet; when progress is True, a progress bar is written to the standard error stream while the model downloads.
Create and run the following script:
```python
from torchvision import models

# Create a pretrained model and show download progress
vgg16_pretrained = models.vgg16(pretrained=True, progress=True)
print(vgg16_pretrained)
```

Output (abridged):

```
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to C:\Users\winlsr/.cache\torch\hub\checkpoints\vgg16-397923af.pth
100.0%
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    ...
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    ...
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
```

As the output shows, VGG is the model's class name; features is a Sequential component (Module) contained in VGG; and avgpool is an AdaptiveAvgPool2d layer.
Create and run the following script:
```python
from torchvision import models
from torch import nn

# Create a pretrained model and show download progress
vgg16_pretrained = models.vgg16(pretrained=True, progress=True)
# Append a linear layer to the classifier
vgg16_pretrained.classifier.add_module("add_linear", nn.Linear(1000, 10))
print(vgg16_pretrained)
```

Output (abridged):

```
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    ...
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    ...
    (6): Linear(in_features=4096, out_features=1000, bias=True)
    (add_linear): Linear(in_features=1000, out_features=10, bias=True)
  )
)
```

The last line of the classifier shows that the Linear(1000, 10) layer was successfully added.
A submodule can also be deleted:

```python
from torchvision import models
from torch import nn

# Create a pretrained model and show download progress
vgg16_pretrained = models.vgg16(pretrained=True, progress=True)
# Create an untrained model without showing progress
vgg16 = models.vgg16(pretrained=False, progress=False)
# Delete the features submodule
del vgg16_pretrained.features
print(vgg16_pretrained)
```

Output (abridged):

```
VGG(
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    ...
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
```
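The same editing moves, appending a named layer with add_module and replacing a layer in place by index, can be tried on a tiny stand-in Sequential. The layer sizes below are made up for illustration and are not the real VGG classifier:

```python
import torch
from torch import nn

# A small stand-in classifier (sizes invented for illustration)
classifier = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 1000))

# Replace an existing layer in place by index,
# e.g. to swap in a freshly initialized layer
classifier[2] = nn.Linear(4, 1000)
# Append a named layer at the end, as done with vgg16 above
classifier.add_module("add_linear", nn.Linear(1000, 10))

out = classifier(torch.ones(2, 8))
print(out.shape)  # torch.Size([2, 10])
```

Both operations mutate the Sequential directly, so subsequent prints of the model reflect the change immediately.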
Run the following script:
```python
from _07_cifar10_model.cifar10_model import MyModel
import torch

cifar10_model = MyModel()
# Mode 1: save the whole model (structure and parameters)
torch.save(cifar10_model, "cifar10_model.pth")
# Mode 2: save only the parameters (officially recommended)
torch.save(cifar10_model.state_dict(), "cifar10_model_state_dict.pth")
```

After it runs successfully, cifar10_model.pth and cifar10_model_state_dict.pth are generated in the directory containing the script.
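The practical difference between the two modes can be sketched with a stand-in model, here a bare nn.Linear instead of MyModel, and made-up file names:

```python
import torch
from torch import nn

# Stand-in model used instead of MyModel for a quick check
model = nn.Linear(3, 2)

# Mode 1: the whole model is pickled; loading it back requires the class
# definition to be importable (and, on PyTorch 2.6+, weights_only=False,
# since weights_only now defaults to True there).
torch.save(model, "tiny_model.pth")
restored_whole = torch.load("tiny_model.pth", weights_only=False)

# Mode 2: only the parameters are saved; the architecture must be rebuilt first.
torch.save(model.state_dict(), "tiny_state_dict.pth")
restored = nn.Linear(3, 2)
restored.load_state_dict(torch.load("tiny_state_dict.pth"))

print(torch.equal(model.weight, restored.weight))  # True
```

Mode 2 is recommended precisely because the saved file contains only tensors, making it smaller and less tied to the code that produced it.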
Restore the model saved by mode 1:
```python
import torch

# Mode 1
cifar10_model = torch.load("cifar10_model.pth")
print(cifar10_model)
```

Output (abridged):

```
Sequential(
  (0): Conv2d(3, 32, ...)
  (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (2): Conv2d(32, 32, ...)
  (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (4): Conv2d(32, ...)
  ...
)
```
Restore the parameters saved by mode 2:

```python
import torch
from _07_cifar10_model.cifar10_model import MyModel

# Mode 2 (officially recommended)
cifar10_model = MyModel()
cifar10_model.load_state_dict(torch.load("cifar10_model_state_dict.pth"))
print(cifar10_model)
```
The output again prints the MyModel structure (a Sequential of Conv2d and MaxPool2d layers), confirming that the parameters were loaded into the freshly created model. Next comes the full training of our model. The training code is as follows:
```python
import time
from torch.utils import tensorboard
from torch.utils.data import DataLoader
from _07_cifar10_model.cifar10_model import MyModel
import torchvision
import torch.nn

if __name__ == "__main__":
    start_time = time.time()
    # Train/test datasets
    transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
    train_data = torchvision.datasets.CIFAR10("./dataset", train=True, transform=transform, download=True)
    test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=transform, download=True)
    train_data_len = len(train_data)
    test_data_len = len(test_data)
    print("Length of the training set: {}".format(train_data_len))
    print("Length of the test set: {}".format(test_data_len))
    # DataLoaders
    train_dataloader = DataLoader(dataset=train_data, batch_size=64, shuffle=True, num_workers=16)
    test_dataloader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=16)
    # Create the network
    cifar10_model = MyModel()
    # Create the loss function
    loss_func = torch.nn.CrossEntropyLoss()
    # Create the optimizer; writing the learning rate in scientific
    # notation makes it easy to change
    learning_rate = 1e-2
    optimizer = torch.optim.SGD(cifar10_model.parameters(), lr=learning_rate)
    # Total number of training steps
    total_train_step = 0
    # Number of training epochs
    epoch = 20
    # Create the TensorBoard summary writer
    writer = tensorboard.SummaryWriter("logs")
    for i in range(epoch):
        print("-------- epoch {} --------".format(i))
        # Put the model into training mode; optional for the current model,
        # but a good habit
        cifar10_model.train()
        for data in train_dataloader:
            images, targets = data
            outputs = cifar10_model(images)
            loss = loss_func(outputs, targets)
            # Clear the gradients computed in the previous step
            optimizer.zero_grad()
            # Backpropagate to compute gradients
            loss.backward()
            # Let the optimizer update the parameters (gradient descent)
            optimizer.step()
            total_train_step += 1
            writer.add_scalar("train/loss", loss.item(), total_train_step)
            if total_train_step % 100 == 0:
                print("train step: {}, loss: {}".format(total_train_step, loss.item()))
        total_test_loss = 0.0
        total_accuracy = 0.0
        # After each epoch, measure the model's loss and accuracy on the test set.
        # No gradients are needed here, which speeds up the computation.
        # Put the model into evaluation (test) mode; optional for the current
        # model, but a good habit
        cifar10_model.eval()
        with torch.no_grad():
            for data in test_dataloader:
                images, targets = data
                outputs = cifar10_model(images)
                loss = loss_func(outputs, targets)
                total_test_loss += loss.item()
                accuracy = (outputs.argmax(1) == targets).sum()
                total_accuracy += accuracy
        print("test accuracy: {}".format(total_accuracy / test_data_len))
        writer.add_scalar("test/loss", total_test_loss, i)
        writer.add_scalar("test/accuracy", total_accuracy / test_data_len, i)
        # Save the model after each epoch
        torch.save(cifar10_model.state_dict(), "cifar10_model_state_dict_{}_epoch.pth".format(i))
    writer.close()
    end_time = time.time()
    print("Time elapsed: {}".format(end_time - start_time))
```

Calling the model's train and eval methods in the code above mainly matters for modules such as Dropout and BatchNorm, if the model contains any. The official explanation is as follows:
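What train() and eval() actually toggle can be seen with a lone Dropout module; this is a minimal sketch, not part of the tutorial's model (BatchNorm behaves analogously, switching to its running statistics in eval mode):

```python
import torch
from torch import nn

dropout = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

dropout.train()  # training mode: ~half the activations are zeroed, the rest scaled by 1/(1-p)
dropout.eval()   # evaluation mode: Dropout becomes the identity
y = dropout(x)
print(torch.equal(y, x))  # True: in eval mode the input passes through unchanged
```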
The TensorBoard visualization of the results (launched with `tensorboard --logdir=logs`) is shown below:
Training with a GPU: students who don't have a GPU can use Google Colab, which provides free GPU time and works much like Jupyter Notebook.
Training with a GPU is simple:
Method 1: .cuda. Simply call the .cuda method on the network model, the data (inputs and labels), and the loss function:
```python
import time
from torch.utils import tensorboard
from torch.utils.data import DataLoader
from _07_cifar10_model.cifar10_model import MyModel
import torchvision
import torch.nn

if __name__ == "__main__":
    start_time = time.time()
    # Train/test datasets
    transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
    train_data = torchvision.datasets.CIFAR10("./dataset", train=True, transform=transform, download=True)
    test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=transform, download=True)
    train_data_len = len(train_data)
    test_data_len = len(test_data)
    print("Length of the training set: {}".format(train_data_len))
    print("Length of the test set: {}".format(test_data_len))
    # DataLoaders
    train_dataloader = DataLoader(dataset=train_data, batch_size=64, shuffle=True, num_workers=16)
    test_dataloader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=16)
    # Create the network
    cifar10_model = MyModel()
    if torch.cuda.is_available():
        cifar10_model = cifar10_model.cuda()
    # Create the loss function
    loss_func = torch.nn.CrossEntropyLoss()
    if torch.cuda.is_available():
        loss_func = loss_func.cuda()
    # Create the optimizer; writing the learning rate in scientific
    # notation makes it easy to change
    learning_rate = 1e-2
    optimizer = torch.optim.SGD(cifar10_model.parameters(), lr=learning_rate)
    # Total number of training steps
    total_train_step = 0
    # Number of training epochs
    epoch = 20
    # Create the TensorBoard summary writer
    writer = tensorboard.SummaryWriter("logs")
    for i in range(epoch):
        print("-------- epoch {} --------".format(i))
        # Put the model into training mode; optional for the current model,
        # but a good habit
        cifar10_model.train()
        for data in train_dataloader:
            images, targets = data
            if torch.cuda.is_available():
                images = images.cuda()
                targets = targets.cuda()
            outputs = cifar10_model(images)
            loss = loss_func(outputs, targets)
            # Clear the gradients computed in the previous step
            optimizer.zero_grad()
            # Backpropagate to compute gradients
            loss.backward()
            # Let the optimizer update the parameters (gradient descent)
            optimizer.step()
            total_train_step += 1
            writer.add_scalar("train/loss", loss.item(), total_train_step)
            if total_train_step % 100 == 0:
                print("train step: {}, loss: {}".format(total_train_step, loss.item()))
        total_test_loss = 0.0
        total_accuracy = 0.0
        # After each epoch, measure the model's loss and accuracy on the test set.
        # No gradients are needed here, which speeds up the computation.
        # Put the model into evaluation (test) mode; optional for the current
        # model, but a good habit
        cifar10_model.eval()
        with torch.no_grad():
            for data in test_dataloader:
                images, targets = data
                if torch.cuda.is_available():
                    images = images.cuda()
                    targets = targets.cuda()
                outputs = cifar10_model(images)
                loss = loss_func(outputs, targets)
                total_test_loss += loss.item()
                accuracy = (outputs.argmax(1) == targets).sum()
                total_accuracy += accuracy
        print("test accuracy: {}".format(total_accuracy / test_data_len))
        writer.add_scalar("test/loss", total_test_loss, i)
        writer.add_scalar("test/accuracy", total_accuracy / test_data_len, i)
        # Save the model after each epoch
        torch.save(cifar10_model.state_dict(), "cifar10_model_state_dict_{}_epoch.pth".format(i))
    writer.close()
    end_time = time.time()
    print("Time elapsed: {}".format(end_time - start_time))
```

Method 2: .to(device). The advantage of this method is that you can not only use the GPU but also specify which GPU to use when several are available.
As shown below:
```python
# CPU
cpu_device = torch.device("cpu")
# GPU; with only one graphics card there is no need to specify which one
gpu_device = torch.device("cuda")
# GPU 0
gpu_0_device = torch.device("cuda:0")
```

The complete code is as follows:
```python
import time
from torch.utils import tensorboard
from torch.utils.data import DataLoader
from _07_cifar10_model.cifar10_model import MyModel
import torchvision
import torch.nn

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    start_time = time.time()
    # Train/test datasets
    transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
    train_data = torchvision.datasets.CIFAR10("./dataset", train=True, transform=transform, download=True)
    test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=transform, download=True)
    train_data_len = len(train_data)
    test_data_len = len(test_data)
    print("Length of the training set: {}".format(train_data_len))
    print("Length of the test set: {}".format(test_data_len))
    # DataLoaders
    train_dataloader = DataLoader(dataset=train_data, batch_size=64, shuffle=True, num_workers=16)
    test_dataloader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=16)
    # Create the network
    cifar10_model = MyModel()
    cifar10_model = cifar10_model.to(device)
    # if torch.cuda.is_available():
    #     cifar10_model = cifar10_model.cuda()
    # Create the loss function
    loss_func = torch.nn.CrossEntropyLoss()
    loss_func = loss_func.to(device)
    # if torch.cuda.is_available():
    #     loss_func = loss_func.cuda()
    # Create the optimizer; writing the learning rate in scientific
    # notation makes it easy to change
    learning_rate = 1e-2
    optimizer = torch.optim.SGD(cifar10_model.parameters(), lr=learning_rate)
    # Total number of training steps
    total_train_step = 0
    # Number of training epochs
    epoch = 20
    # Create the TensorBoard summary writer
    writer = tensorboard.SummaryWriter("logs")
    for i in range(epoch):
        print("-------- epoch {} --------".format(i))
        # Put the model into training mode; optional for the current model,
        # but a good habit
        cifar10_model.train()
        for data in train_dataloader:
            images, targets = data
            images = images.to(device)
            targets = targets.to(device)
            # if torch.cuda.is_available():
            #     images = images.cuda()
            #     targets = targets.cuda()
            outputs = cifar10_model(images)
            loss = loss_func(outputs, targets)
            # Clear the gradients computed in the previous step
            optimizer.zero_grad()
            # Backpropagate to compute gradients
            loss.backward()
            # Let the optimizer update the parameters (gradient descent)
            optimizer.step()
            total_train_step += 1
            writer.add_scalar("train/loss", loss.item(), total_train_step)
            if total_train_step % 100 == 0:
                print("train step: {}, loss: {}".format(total_train_step, loss.item()))
        total_test_loss = 0.0
        total_accuracy = 0.0
        # After each epoch, measure the model's loss and accuracy on the test set.
        # No gradients are needed here, which speeds up the computation.
        # Put the model into evaluation (test) mode; optional for the current
        # model, but a good habit
        cifar10_model.eval()
        with torch.no_grad():
            for data in test_dataloader:
                images, targets = data
                images = images.to(device)
                targets = targets.to(device)
                # if torch.cuda.is_available():
                #     images = images.cuda()
                #     targets = targets.cuda()
                outputs = cifar10_model(images)
                loss = loss_func(outputs, targets)
                total_test_loss += loss.item()
                accuracy = (outputs.argmax(1) == targets).sum()
                total_accuracy += accuracy
        print("test accuracy: {}".format(total_accuracy / test_data_len))
        writer.add_scalar("test/loss", total_test_loss, i)
        writer.add_scalar("test/accuracy", total_accuracy / test_data_len, i)
        # Save the model after each epoch
        torch.save(cifar10_model.state_dict(), "cifar10_model_state_dict_{}_epoch.pth".format(i))
    writer.close()
    end_time = time.time()
    print("Time elapsed: {}".format(end_time - start_time))
```

Finally, we restore the model that performed best on the test set and try it on a few images found online to see whether our model classifies them correctly. According to TensorBoard, the best-performing model is the one after 18 epochs of training, reaching roughly 65% accuracy. Prediction goes as follows:
According to the definitions in the CIFAR10 dataset, the target for dog is 5 and the target for airplane is 0:
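The full index-to-class mapping can be written down without loading the dataset; the list below follows the ordering exposed by torchvision's CIFAR10 dataset.classes attribute:

```python
# CIFAR10's ten classes in index order
cifar10_classes = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]

print(cifar10_classes.index("dog"))       # 5
print(cifar10_classes.index("airplane"))  # 0
```

The same list can be used to turn a predicted index back into a readable label, e.g. cifar10_classes[output.argmax(1)].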
The prediction code is as follows:
```python
import torch
from PIL import Image
import torchvision
from _07_cifar10_model.cifar10_model import MyModel

dog_img_path = "dog.png"
airplane_img_path = "airplane.png"
dog_img_pil = Image.open(dog_img_path)
airplane_img_pil = Image.open(airplane_img_path)
# Convert 4-channel RGBA to 3-channel RGB
dog_img_pil = dog_img_pil.convert("RGB")
airplane_img_pil = airplane_img_pil.convert("RGB")

transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize((32, 32)),
    torchvision.transforms.ToTensor()])
dog_img_tensor = transform(dog_img_pil)
airplane_img_tensor = transform(airplane_img_pil)
# print(dog_img_tensor.shape)
dog_img_tensor = torch.reshape(dog_img_tensor, (-1, 3, 32, 32))
airplane_img_tensor = torch.reshape(airplane_img_tensor, (1, 3, 32, 32))

cifar10_model = MyModel()
cifar10_model.load_state_dict(
    torch.load("../_10_train_model/cifar10_model_state_dict_18_epoch.pth"))
cifar10_model.eval()
with torch.no_grad():
    output = cifar10_model(dog_img_tensor)
    print(output.argmax(1))
    output = cifar10_model(airplane_img_tensor)
    print(output.argmax(1))
```

The output is as follows:
```
tensor([7])  # wrong prediction
tensor([0])  # correct prediction
```