"""Simple tutorial for using TensorFlow to compute a linear regression.

Parag K. Mital, Jan. 2016"""
# %% imports
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# %% Let's create some toy data
plt.ion()  # enable interactive mode
n_observations = 100
fig, ax = plt.subplots(1, 1)
xs = np.linspace(-3, 3, n_observations)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)
ax.scatter(xs, ys)
fig.show()
plt.draw()

# %% tf.placeholders for the input and output of the network. Placeholders are
# nodes whose values we feed in when we are ready to run the graph.
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
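# Leaving the shape unspecified lets the same placeholders accept either a
# single scalar observation (as in the training loop below, which feeds one
# (x, y) pair at a time) or the full vector of observations (as when the
# total cost is evaluated).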

# %% We will try to optimize min_(W,b) ||(X*W + b) - Y||^2
# The `Variable()` constructor requires an initial value for the variable,
# which can be a `Tensor` of any type and shape. The initial value defines the
# type and shape of the variable. After construction, the type and shape of
# the variable are fixed. The value can be changed using one of the assign
# methods.
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
Y_pred = tf.add(tf.multiply(X, W), b)
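# W and b have shape [1], so tf.multiply broadcasts over X; feeding the full
# xs vector yields a prediction for every observation.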

# %% The loss function measures the squared distance between our observations
# and predictions and averages over them.
cost = tf.reduce_sum(tf.pow(Y_pred - Y, 2)) / (n_observations - 1)
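# This is essentially the mean squared error; dividing by (n_observations - 1)
# rather than n_observations mirrors the unbiased sample-variance convention
# and only rescales the cost by a constant factor.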

# %% If we wanted to add regularization, we could add other terms to the cost,
# e.g. ridge regression has a parameter controlling the amount of shrinkage
# applied to the norm of the weights. The larger the shrinkage, the more robust
# the fit is to collinearity.
# cost = tf.add(cost, tf.multiply(1e-6, tf.global_norm([W])))
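# A rough sketch of an explicit ridge-style penalty (assuming TF 1.x graph
# mode): tf.nn.l2_loss(W) computes sum(W ** 2) / 2, and 1e-6 is an
# illustrative shrinkage strength, not a tuned value.
# cost = cost + 1e-6 * tf.nn.l2_loss(W)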

# %% Use gradient descent to optimize W, b.
# Each run of the optimizer op performs a single step along the negative gradient.
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
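# Conceptually, each step applies the updates
#   W <- W - learning_rate * d(cost)/dW
#   b <- b - learning_rate * d(cost)/db
# with the gradients computed automatically from the graph.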

# %% We create a session to use the graph
n_epochs = 1000
with tf.Session() as sess:
    # Here we tell TensorFlow that we want to initialize all
    # the variables in the graph so we can use them
    sess.run(tf.global_variables_initializer())

    # Fit all training data
    prev_training_cost = 0.0
    for epoch_i in range(n_epochs):
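        # Feeding one (x, y) pair per run amounts to stochastic gradient
        # descent; feeding all of xs and ys at once would instead take one
        # full-batch gradient step per epoch.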
        for (x, y) in zip(xs, ys):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        training_cost = sess.run(
            cost, feed_dict={X: xs, Y: ys})
        print(training_cost)

        if epoch_i % 20 == 0:
            ax.plot(xs, Y_pred.eval(
                feed_dict={X: xs}, session=sess),
                    'k', alpha=(epoch_i + 1) / n_epochs)
            fig.show()
            plt.draw()

        # Allow the training to quit if we've reached a minimum
        if np.abs(prev_training_cost - training_cost) < 0.000001:
            break
        prev_training_cost = training_cost
fig.show()
plt.waitforbuttonpress()
