import numpy as np

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    # iterate num_iterations times
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

# train for num_iterations iterations with learning rate learning_rate
[b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
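The step_gradient function called here performs one gradient-descent update of b and w. It is defined in the earlier part of this tutorial; for reference, a minimal sketch consistent with the mean-squared-error loss used below might look like this (an assumption, not a verbatim copy of the original):

def step_gradient(b_current, w_current, points, learning_rate):
    # one gradient-descent step on the mean squared error
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # d(loss)/db = -(2/N) * sum(y - (w*x + b))
        b_gradient += -(2 / N) * (y - (w_current * x + b_current))
        # d(loss)/dw = -(2/N) * sum(x * (y - (w*x + b)))
        w_gradient += -(2 / N) * x * (y - (w_current * x + b_current))
    new_b = b_current - learning_rate * b_gradient
    new_w = w_current - learning_rate * w_gradient
    return [new_b, new_w]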
IV. Experimental Results and Prediction
(1) Loss comparison before and after training
The loss is computed as the average loss over all 100 data points.
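Concretely, with N = 100 points the quantity computed below is the mean squared error:

loss = \frac{1}{N} \sum_{i=1}^{N} \bigl( y_i - (w x_i + b) \bigr)^2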
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]  # x-coordinate of the point, equivalent to points[i][0]
        y = points[i, 1]  # y-coordinate of the point, equivalent to points[i][1]
        # accumulate the squared error
        totalError += (y - (w * x + b)) ** 2
    # return the mean loss
    return totalError / float(len(points))
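As an aside, the same mean loss can be written as a single vectorized NumPy expression; this equivalent sketch (not part of the original code) avoids the Python-level loop:

def compute_error_vectorized(b, w, points):
    # points is an (N, 2) array: column 0 holds x, column 1 holds y
    x, y = points[:, 0], points[:, 1]
    return np.mean((y - (w * x + b)) ** 2)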
Compute the mean loss before training and after training, and compare the two:
print("训练开始: b = {0}, w = {1}, error = {2}".format(initial_b, initial_w,compute_error_for_line_given_points(initial_b, initial_w, points)))print("Running...")[b, w]= gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)print("训练 {0} 轮后: b = {1}, w = {2}, error = {3}".format(num_iterations, b, w,compute_error_for_line_given_points(b, w, points)))
Output comparison:
The output shows that after 1000 iterations the mean loss has dropped substantially, so the model is clearly training in the right direction.
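To see this trend in more detail, one could log the loss during training; the following is a sketch of a hypothetical variant of gradient_descent_runner, not part of the original code:

def gradient_descent_runner_logged(points, starting_b, starting_w, learning_rate, num_iterations):
    b, w = starting_b, starting_w
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
        if i % 100 == 0:  # report the mean loss every 100 iterations
            print("iteration {0}: error = {1}".format(
                i, compute_error_for_line_given_points(b, w, points)))
    return [b, w]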
(2) Visualizing the fitted result
Plot the fitted result with matplotlib:
import matplotlib.pyplot as plt

plt.scatter(points[:, 0], points[:, 1])   # scatter plot of the raw data
x = np.arange(0, 100)                     # x range covering the data
y = w * x + b                             # fitted line
plt.xlabel('x')
plt.ylabel('y')
plt.title("y = wx + b")
plt.plot(x, y, color='r', linewidth=2.5)  # draw the fitted line in red
plt.show()
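Since this section also covers prediction, the fitted parameters can be used directly to predict y for a new input; a minimal sketch (x_new is a hypothetical value):

x_new = 55.0             # hypothetical new input
y_pred = w * x_new + b   # prediction from the fitted line y = wx + b
print("prediction for x = {0}: y = {1}".format(x_new, y_pred))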