Linear Regression in Practice

Defining a linear regression model with PyTorch generally involves the following steps:

1. Design the network architecture
2. Build the loss function (loss) and the optimizer (optimizer)
3. Train (forward pass, backward pass, parameter update)

#author:yuquanle
#date:2018.2.5
#Study of LinearRegression using PyTorch

import torch

# train data (plain tensors; the old Variable wrapper is deprecated since PyTorch 0.4)
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

class Model(torch.nn.Module):
  def __init__(self):
    super(Model, self).__init__()
    self.linear = torch.nn.Linear(1, 1) # One in and one out

  def forward(self, x):
    y_pred = self.linear(x)
    return y_pred

# our model
model = Model()

criterion = torch.nn.MSELoss(reduction='sum') # loss function; reduction='sum' replaces the deprecated size_average=False
optimizer = torch.optim.SGD(model.parameters(), lr=0.01) # Defined optimizer

# Training loop: forward, loss, backward, step
for epoch in range(50):
  # Forward pass
  y_pred = model(x_data)

  # Compute loss
  loss = criterion(y_pred, y_data)
  print(epoch, loss.item()) # .item() extracts the scalar (the old loss.data[0] no longer works)

  # Zero gradients
  optimizer.zero_grad()
  # perform backward pass
  loss.backward()
  # update weights
  optimizer.step()

# After training
hour_var = torch.Tensor([[4.0]])
print("predict (after training)", 4, model(hour_var).item())

With the loop set to only ten iterations, the printed loss is still decreasing, and the prediction for input 4 is not yet accurate.

With the iteration count raised to 50, the function fits the data well.

Running the script again, still with 50 iterations, the prediction for input 4 comes out slightly different. The reason is that torch.nn.Linear(1, 1) only fixes the input and output dimensions; its weight and bias are randomly initialized (by default PyTorch draws them from a uniform distribution scaled by the layer's fan-in), so each run starts the loss descent from a different point, the parameter updates differ, and the final result differs.
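If run-to-run reproducibility matters, the random initialization can be pinned by seeding PyTorch's RNG before constructing the model. A minimal sketch, reusing the Model class above (the seed value 0 is arbitrary):

import torch

torch.manual_seed(0)  # fix the RNG so nn.Linear draws the same initial weights every run
model = Model()       # identical initial parameters, hence identical training results

Since this example trains on the full batch with plain SGD, seeding the initialization is enough to make the whole run deterministic.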

Logistic Regression in Practice

Linear regression solves regression problems. Logistic regression looks very similar, but it solves classification problems (usually binary classification: 0 or 1). It can also handle multi-class problems (implemented with softmax).

Implementing logistic regression in PyTorch follows essentially the same process as linear regression, with the following differences:

The model output is passed through the sigmoid function, which squashes any real number into the interval (0, 1):

sigmoid(x) = 1 / (1 + e^(-x))

In logistic regression, we predict y = 1 when the output is greater than 0.5, and y = 0 otherwise.
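In code this threshold is a single comparison; a minimal sketch, where y_pred stands in for sigmoid outputs:

import torch

y_pred = torch.Tensor([[0.3], [0.8]])  # example sigmoid outputs
labels = (y_pred > 0.5).float()        # 1.0 where the probability exceeds 0.5, else 0.0
print(labels)                          # tensor([[0.], [1.]])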

The loss function is usually the cross-entropy loss:

loss = -(y * log(y_pred) + (1 - y) * log(1 - y_pred))
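As a sanity check, torch.nn.BCELoss should agree with evaluating that formula by hand; a minimal sketch with made-up predictions:

import torch

y_pred = torch.Tensor([[0.9], [0.2]])
y_true = torch.Tensor([[1.0], [0.0]])
manual = -(y_true * torch.log(y_pred) + (1 - y_true) * torch.log(1 - y_pred)).mean()
builtin = torch.nn.BCELoss()(y_pred, y_true)  # default reduction is 'mean'
print(manual.item(), builtin.item())          # the two values match

The full training script follows: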

# date:2018.2.6
# LogisticRegression

import torch

x_data = torch.Tensor([[0.6], [1.0], [3.5], [4.0]])
y_data = torch.Tensor([[0.], [0.], [1.], [1.]])

class Model(torch.nn.Module):
  def __init__(self):
    super(Model, self).__init__()
    self.linear = torch.nn.Linear(1, 1) # One in one out
    self.sigmoid = torch.nn.Sigmoid()

  def forward(self, x):
    y_pred = self.sigmoid(self.linear(x))
    return y_pred

# Our model
model = Model()

# Construct loss function and optimizer
criterion = torch.nn.BCELoss(reduction='mean') # reduction='mean' replaces the deprecated size_average=True
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop
for epoch in range(500):
  # Forward pass
  y_pred = model(x_data)

  # Compute loss
  loss = criterion(y_pred, y_data)
  if epoch % 20 == 0:
    print(epoch, loss.item())

  # Zero gradients
  optimizer.zero_grad()
  # Backward pass
  loss.backward()
  # update weights
  optimizer.step()

# After training
hour_var = torch.Tensor([[0.5]])
print("predict (after training)", 0.5, model(hour_var).item())
hour_var = torch.Tensor([[7.0]])
print("predict (after training)", 7.0, model(hour_var).item())

After training, feeding in the new input 0.5 produces an output below 0.5, so the predicted class is 0; input 7.0 produces an output above 0.5, so the predicted class is 1. When using softmax for multi-class classification, the predicted class is the index of the dimension with the largest value.
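For reference, a minimal multi-class sketch (the three classes and the data here are invented for illustration). torch.nn.CrossEntropyLoss applies softmax internally, so the model outputs raw scores and the predicted class is the index of the largest one:

import torch

model = torch.nn.Linear(2, 3)            # 2 input features, 3 classes (hypothetical)
criterion = torch.nn.CrossEntropyLoss()  # log-softmax + negative log-likelihood in one
x = torch.randn(4, 2)                    # 4 random samples
y = torch.tensor([0, 2, 1, 0])           # integer class labels
loss = criterion(model(x), y)
predicted = model(x).argmax(dim=1)       # index of the largest score is the class
print(loss.item(), predicted)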

Deeper and Wider Networks

The previous examples were shallow networks with a one-dimensional input. When a deeper and wider network is needed, PyTorch handles it just as well. Taking logistic regression as the example: when the input x has many dimensions, the first layer must be wider to accept all the features, and stacking additional linear layers makes the network deeper.

We use the diabetes dataset below (download: https://github.com/hunkim/PyTorchZeroToAll/tree/master/data); each sample has eight input features and a 0/1 label.

#author:yuquanle
#date:2018.2.7
#Deep and Wide


import torch
import numpy as np

xy = np.loadtxt('./data/diabetes.csv', delimiter=',', dtype=np.float32)
x_data = torch.from_numpy(xy[:, 0:-1])   # all columns except the last: the 8 input features
y_data = torch.from_numpy(xy[:, [-1]])   # last column, kept 2-D: the 0/1 label

#print(x_data.data.shape)
#print(y_data.data.shape)

class Model(torch.nn.Module):
  def __init__(self):
    super(Model, self).__init__()
    self.l1 = torch.nn.Linear(8, 6)  # 8 input features -> 6 hidden units
    self.l2 = torch.nn.Linear(6, 4)  # 6 -> 4 hidden units
    self.l3 = torch.nn.Linear(4, 1)  # 4 -> 1 output probability
    self.sigmoid = torch.nn.Sigmoid()

  def forward(self, x):
    x = self.sigmoid(self.l1(x))
    x = self.sigmoid(self.l2(x))
    y_pred = self.sigmoid(self.l3(x))
    return y_pred

# our model
model = Model()

criterion = torch.nn.BCELoss(reduction='mean')
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

hour_var = torch.Tensor([[-0.294118, 0.487437, 0.180328, -0.292929, 0, 0.00149028, -0.53117, -0.0333333]])
print("(Before training)", model(hour_var).item())

# Training loop
for epoch in range(1000):
  y_pred = model(x_data)
  # y_pred and y_data must not be swapped: the cross-entropy loss takes (prediction, target)
  loss = criterion(y_pred, y_data)
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()
  if epoch % 50 == 0:
    print(epoch, loss.item())


# After training
hour_var = torch.Tensor([[-0.294118, 0.487437, 0.180328, -0.292929, 0, 0.00149028, -0.53117, -0.0333333]])
print("predict (after training)", model(hour_var).item())

Result: the loss printed every 50 epochs should fall steadily as training proceeds.
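To gauge the fit beyond the printed loss, one can compute accuracy on the training set; a minimal sketch reusing model, x_data, and y_data from the code above:

with torch.no_grad():                            # no gradients needed for evaluation
    predicted = (model(x_data) > 0.5).float()    # threshold the sigmoid output at 0.5
    accuracy = (predicted == y_data).float().mean()
print("training accuracy:", accuracy.item())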

References:

1. https://github.com/hunkim/PyTorchZeroToAll
2. http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
