TensorFlow Linear Regression
Contents
Data Visualization
Gradient Descent
Result Visualization
Data Visualization
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Randomly generate 1000 points scattered around the line y = 0.1x + 0.3
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    vectors_set.append([x1, y1])  # collect the generated samples

x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

# Plot the samples as a red scatter plot
plt.scatter(x_data, y_data, c='r')
plt.show()
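As an aside (not part of the original post), the same samples can be generated without the Python loop; this is a minimal sketch that assumes only NumPy:

# Vectorized equivalent of the sample generation above
x_data = np.random.normal(0.0, 0.55, num_points)
y_data = x_data * 0.1 + 0.3 + np.random.normal(0.0, 0.03, num_points)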

Gradient Descent
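For reference (a note added here, not in the original post), the code below fits the line by minimizing the mean squared error between the predictions and the labels, and each training step applies a plain gradient descent update with learning rate 0.5:

L(W, b) = \frac{1}{N} \sum_{i=1}^{N} \left( W x_i + b - y_i \right)^2

W \leftarrow W - \eta \, \frac{\partial L}{\partial W}, \qquad b \leftarrow b - \eta \, \frac{\partial L}{\partial b}, \qquad \eta = 0.5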
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Randomly generate 1000 points scattered around the line y = 0.1x + 0.3
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    vectors_set.append([x1, y1])  # collect the generated samples

x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

# Create a 1-D weight W, initialized uniformly at random in [-1, 1]
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
# Create a 1-D bias b, initialized to 0
b = tf.Variable(tf.zeros([1]), name='b')
# Compute the predicted value y
y = W * x_data + b
# Use the mean squared error between the prediction y and the labels y_data as the loss
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
# Optimize the parameters with gradient descent (the argument is the learning rate)
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training means minimizing this loss
train = optimizer.minimize(loss, name='train')

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# Print the initial W and b
print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
# Run 20 training steps
for step in range(20):
    sess.run(train)
    # Print W and b after each step
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
'''
W = [ 0.72134733] b = [ 0.] loss = 0.204532
W = [ 0.54246926] b = [ 0.31014919] loss = 0.0552976
W = [ 0.41924465] b = [ 0.30693138] loss = 0.029155
W = [ 0.33045709] b = [ 0.30471471] loss = 0.0155833
W = [ 0.26648441] b = [ 0.30311754] loss = 0.00853772
W = [ 0.22039121] b = [ 0.30196676] loss = 0.00488007
W = [ 0.18718043] b = [ 0.3011376] loss = 0.00298124
W = [ 0.16325161] b = [ 0.30054021] loss = 0.00199547
W = [ 0.14601055] b = [ 0.30010974] loss = 0.00148373
W = [ 0.13358814] b = [ 0.29979959] loss = 0.00121806
W = [ 0.12463761] b = [ 0.29957613] loss = 0.00108014
W = [ 0.11818863] b = [ 0.29941514] loss = 0.00100854
W = [ 0.11354206] b = [ 0.29929912] loss = 0.000971367
W = [ 0.11019413] b = [ 0.29921553] loss = 0.00095207
W = [ 0.10778191] b = [ 0.29915532] loss = 0.000942053
W = [ 0.10604387] b = [ 0.29911193] loss = 0.000936852
W = [ 0.10479159] b = [ 0.29908064] loss = 0.000934153
W = [ 0.1038893] b = [ 0.29905814] loss = 0.000932751
W = [ 0.10323919] b = [ 0.2990419] loss = 0.000932023
W = [ 0.10277078] b = [ 0.29903021] loss = 0.000931646
W = [ 0.10243329] b = [ 0.29902178] loss = 0.00093145
'''
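The listing above uses the TensorFlow 1.x graph/Session API. As a rough sketch (not the original author's code), the same model can be trained eagerly under TensorFlow 2.x with the same hyperparameters; this assumes a TF 2.x installation:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

# Same synthetic data as above, generated in vectorized form
x_data = np.random.normal(0.0, 0.55, 1000).astype(np.float32)
y_data = (x_data * 0.1 + 0.3 + np.random.normal(0.0, 0.03, 1000)).astype(np.float32)

W = tf.Variable(tf.random.uniform([1], -1.0, 1.0), name='W')
b = tf.Variable(tf.zeros([1]), name='b')
optimizer = tf.optimizers.SGD(learning_rate=0.5)

for step in range(20):
    with tf.GradientTape() as tape:
        y = W * x_data + b
        loss = tf.reduce_mean(tf.square(y - y_data))
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    print("W =", W.numpy(), "b =", b.numpy(), "loss =", loss.numpy())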
Result Visualization
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Randomly generate 1000 points scattered around the line y = 0.1x + 0.3
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    vectors_set.append([x1, y1])  # collect the generated samples

x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

# Create a 1-D weight W, initialized uniformly at random in [-1, 1]
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
# Create a 1-D bias b, initialized to 0
b = tf.Variable(tf.zeros([1]), name='b')
# Compute the predicted value y
y = W * x_data + b
# Use the mean squared error between the prediction y and the labels y_data as the loss
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
# Optimize the parameters with gradient descent (the argument is the learning rate)
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training means minimizing this loss
train = optimizer.minimize(loss, name='train')

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# Print the initial W and b
print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
# Run 20 training steps
for step in range(20):
    sess.run(train)
    # Print W and b after each step
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))

# Plot the samples and the fitted line
plt.scatter(x_data, y_data, c='r')
plt.plot(x_data, sess.run(W) * x_data + sess.run(b))
plt.show()
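As an optional sanity check (an addition, not in the original post), the learned parameters can be compared against a closed-form least-squares fit. This sketch assumes it runs immediately after the block above, while the session is still open, and uses NumPy's polyfit:

# Closed-form least-squares line fit (degree-1 polynomial) for comparison
w_ls, b_ls = np.polyfit(x_data, y_data, 1)
print("least squares:    W =", w_ls, "b =", b_ls)
print("gradient descent: W =", sess.run(W)[0], "b =", sess.run(b)[0])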
