Common open-source deep learning frameworks today include Caffe/Caffe2, Torch, TensorFlow, Keras, MXNet, PaddlePaddle, MindSpore, CNTK, and DL4J. This article focuses on the PyTorch framework.

This article uses PyTorch to implement a binary-classification perceptron (Example 3.4 from "Deep Learning 4: Perceptrons — Three Activation Functions and Gradient Descent").

Example: In a two-dimensional plane we are given several linearly separable discrete points, some belonging to class 0 and the rest to class 1. The points are (2.49, 2.86), (0.50, 0.21), (2.73, 2.91), (3.47, 2.34), (1.38, 0.37), (1.03, 0.27), (0.59, 1.73), (2.25, 3.75), (0.15, 1.45), (2.73, 3.42), labeled 1, 0, 1, 1, 0, 0, 0, 1, 0, 1 respectively. (Each sample has 2 feature values.)

import torch
import matplotlib.pyplot as plt
import torch.nn as nn

# Load the data
X1 = [2.49,0.50,2.73,3.47,1.38,1.03,0.59,2.25,0.15,2.73]
X2 = [2.86,0.21,2.91,2.34,0.37,0.27,1.73,3.75,1.45,3.42]
Y = [1,0,1,1,0,0,0,1,0,1]
X1 = torch.Tensor(X1)
X2 = torch.Tensor(X2)
X = torch.stack((X1,X2),dim=1)
Y = torch.Tensor(Y)

# Define the model class
class Perceptron2(nn.Module):  # must inherit from nn.Module
	def __init__(self):
		super(Perceptron2, self).__init__()
		self.w1 = nn.Parameter(torch.Tensor([0.0]))
		self.w2 = nn.Parameter(torch.Tensor([0.0]))
		self.b = nn.Parameter(torch.Tensor([0.0]))
	def f(self, x):
		x1, x2 = x[0], x[1]
		t = self.w1*x1 + self.w2*x2 + self.b
		z = 1.0/(1 + torch.exp(-t))  # sigmoid: note the minus sign on t
		return z
	def forward(self, x):
		pre_y = self.f(x)
		return pre_y
perceptron2 = Perceptron2()
optimizer = torch.optim.Adam(perceptron2.parameters(),lr=0.1)
for ep in range(100):
	for (x,y) in zip(X,Y):
		pre_y = perceptron2(x)
		y = torch.Tensor([y])
		loss = nn.BCELoss()(pre_y,y)
		optimizer.zero_grad()
		loss.backward()
		optimizer.step()
s = 'Learned perceptron: pre_y = sigmoid(%0.2f*x1 + %0.2f*x2 + %0.2f)' % (perceptron2.w1.item(), perceptron2.w2.item(), perceptron2.b.item())
print(s)
for (x, y) in zip(X, Y):
	t = 1 if perceptron2.f(x) > 0.5 else 0
	if t == y.item():
		s = '(point %0.2f, %0.2f is correctly classified)' % (x[0].item(), x[1].item())
	else:
		s = '(point %0.2f, %0.2f is misclassified)' % (x[0].item(), x[1].item())
	print(s)
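The per-sample loop above can also be trained on the whole dataset at once, which is more idiomatic in PyTorch. A minimal sketch under that assumption (same data; the weight matrix `w`, bias `b`, and variable names are illustrative, not from the original text):

```python
import torch
import torch.nn as nn

# Same data as above, arranged as a (10, 2) feature matrix and (10, 1) labels
X1 = [2.49, 0.50, 2.73, 3.47, 1.38, 1.03, 0.59, 2.25, 0.15, 2.73]
X2 = [2.86, 0.21, 2.91, 2.34, 0.37, 0.27, 1.73, 3.75, 1.45, 3.42]
X = torch.stack((torch.tensor(X1), torch.tensor(X2)), dim=1)
Y = torch.tensor([1., 0., 1., 1., 0., 0., 0., 1., 0., 1.]).unsqueeze(1)

w = nn.Parameter(torch.zeros(2, 1))  # both weights in one (2, 1) matrix
b = nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([w, b], lr=0.1)
loss_fn = nn.BCELoss()

for ep in range(100):
    pre_y = torch.sigmoid(X @ w + b)  # forward pass on all 10 samples at once
    loss = loss_fn(pre_y, Y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    pre_y = torch.sigmoid(X @ w + b)
acc = ((pre_y > 0.5).float() == Y).float().mean().item()
print('accuracy: %0.2f' % acc)
```

Full-batch updates compute one gradient per epoch instead of ten, so the loop body runs far fewer Python iterations for the same number of epochs.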


To build a single neuron with a bias term, you can use the following code:

nn.Linear(in_features=m,out_features=1,bias=True)

Here m is the number of features per sample, and bias indicates whether a bias term is included. Note: nn.Linear applies no activation function.
The perceptron built with nn.Linear():
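As a quick check of what nn.Linear creates, the following sketch inspects an untrained layer (the sample values are illustrative):

```python
import torch
import torch.nn as nn

layer = nn.Linear(in_features=2, out_features=1, bias=True)
print(layer.weight.shape)  # torch.Size([1, 2]): one output neuron, two inputs
print(layer.bias.shape)    # torch.Size([1]): one bias per output neuron

x = torch.tensor([[2.49, 2.86]])  # a batch of one sample with 2 features
out = layer(x)                    # computes x @ weight.T + bias, no activation
print(out.shape)                  # torch.Size([1, 1])
```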

import torch
import matplotlib.pyplot as plt
import torch.nn as nn

# Load the data
X1 = [2.49,0.50,2.73,3.47,1.38,1.03,0.59,2.25,0.15,2.73]
X2 = [2.86,0.21,2.91,2.34,0.37,0.27,1.73,3.75,1.45,3.42]
Y = [1,0,1,1,0,0,0,1,0,1]
X1 = torch.Tensor(X1)
X2 = torch.Tensor(X2)
X = torch.stack((X1,X2),dim=1)
Y = torch.Tensor(Y)

class Perceptron2(nn.Module):
	def __init__(self):
		super(Perceptron2, self).__init__()
		# Create the perceptron: 2 weights + 1 bias term
		self.fc = nn.Linear(in_features=2, out_features=1, bias=True)
	def forward(self, x):  # forward-pass logic
		out = self.fc(x)  # run the linear layer
		out = torch.sigmoid(out)  # apply the sigmoid activation
		return out
perceptron2 = Perceptron2()  # create an instance
optimizer = torch.optim.Adam(perceptron2.parameters(), lr=0.1)  # optimizer and learning rate
for ep in range(100):
	for (x,y) in zip(X,Y):
		pre_y = perceptron2(x)
		y = torch.Tensor([y])
		loss = nn.BCELoss()(pre_y, y)  # binary cross-entropy loss provided by nn
		optimizer.zero_grad()  # clear previously accumulated gradients
		loss.backward()  # backpropagation: compute parameter gradients
		optimizer.step()  # update parameters using the gradients
# Read out the perceptron's parameters w1, w2, b
t = list(perceptron2.parameters())
w1 = t[0].data[0, 0].item()
w2 = t[0].data[0, 1].item()
b = t[1].data.item()
s = 'Learned perceptron: pre_y = sigmoid(%0.2f*x1 + %0.2f*x2 + %0.2f)' % (w1, w2, b)
print(s)
perceptron2.eval()  # switch to evaluation mode
for (x, y) in zip(X, Y):
	t = 1 if perceptron2(x) > 0.5 else 0  # this class has no f(); call forward()
	if t == y.item():
		s = '(point %0.2f, %0.2f is correctly classified)' % (x[0].item(), x[1].item())
	else:
		s = '(point %0.2f, %0.2f is misclassified)' % (x[0].item(), x[1].item())
	print(s)
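Both listings import matplotlib.pyplot but never plot. A hedged sketch of how the learned decision boundary could be visualized; the values of w1, w2, b below are placeholders, not the actual learned parameters — substitute the ones read out above:

```python
import torch
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Hypothetical learned parameters; replace with the trained w1, w2, b.
w1, w2, b = 2.0, 2.0, -6.0
X1 = [2.49, 0.50, 2.73, 3.47, 1.38, 1.03, 0.59, 2.25, 0.15, 2.73]
X2 = [2.86, 0.21, 2.91, 2.34, 0.37, 0.27, 1.73, 3.75, 1.45, 3.42]
Y = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1]

# Class 1 as red circles, class 0 as blue crosses
for x1, x2, y in zip(X1, X2, Y):
    plt.scatter(x1, x2, marker='o' if y == 1 else 'x', c='r' if y == 1 else 'b')

# Decision boundary: w1*x1 + w2*x2 + b = 0, i.e. x2 = -(w1*x1 + b) / w2
xs = torch.linspace(0, 4, 50)
ys = -(w1 * xs + b) / w2
plt.plot(xs.numpy(), ys.numpy(), 'g-')
plt.savefig('perceptron_boundary.png')
```

With the placeholder parameters the boundary is the line x1 + x2 = 3, which separates the two clusters in the example data.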

The content above draws on Deep Learning: Theory and Applications (《深度学习理论与应用》) by Meng Zuqiang and Ou Yuanhan.
