In the multiclass case, the training algorithm uses the one-vs-rest (OvR)
scheme if the 'multi_class' option is set to 'ovr', and uses the cross-
entropy loss if the 'multi_class' option is set to 'multinomial'.
(Currently the 'multinomial' option is supported only by the 'lbfgs',
'sag' and 'newton-cg' solvers.)
This class implements regularized logistic regression using the
'liblinear' library, 'newton-cg', 'sag' and 'lbfgs' solvers. It can handle
both dense and sparse input. Use C-ordered arrays or CSR matrices
containing 64-bit floats for optimal performance; any other input format
will be converted (and copied).
The 'newton-cg', 'sag', and 'lbfgs' solvers support only L2 regularization
with primal formulation. The 'liblinear' solver supports both L1 and L2
regularization, with a dual formulation only for the L2 penalty.
Read more in the :ref:`User Guide <logistic_regression>`.
Parameters
----------
penalty : str, 'l1' or 'l2', default: 'l2'
Used to specify the norm used in the penalization. The 'newton-cg',
'sag' and 'lbfgs' solvers support only l2 penalties.
.. versionadded:: 0.19
l1 penalty with SAGA solver (allowing 'multinomial' + L1)
dual : bool, default: False
Dual or primal formulation. Dual formulation is only implemented for
l2 penalty with liblinear solver. Prefer dual=False when
n_samples > n_features.
tol : float, default: 1e-4
Tolerance for stopping criteria.
C : float, default: 1.0
Inverse of regularization strength; must be a positive float.
Like in support vector machines, smaller values specify stronger
regularization.
fit_intercept : bool, default: True
Specifies if a constant (a.k.a. bias or intercept) should be
added to the decision function.
solver : {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, default: 'liblinear'
Algorithm to use in the optimization problem.
- For small datasets, 'liblinear' is a good choice, whereas 'sag' and
'saga' are faster for large ones.
- For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs'
handle multinomial loss; 'liblinear' is limited to one-versus-rest
schemes.
- 'newton-cg', 'lbfgs' and 'sag' only handle L2 penalty, whereas
'liblinear' and 'saga' handle L1 penalty.
Note that fast convergence of 'sag' and 'saga' is only guaranteed on
features with approximately the same scale. You can
preprocess the data with a scaler from sklearn.preprocessing.
.. versionadded:: 0.17
Stochastic Average Gradient descent solver.
.. versionadded:: 0.19
SAGA solver.
multi_class : str, {'ovr', 'multinomial'}, default: 'ovr'
Multiclass option can be either 'ovr' or 'multinomial'. If the option
chosen is 'ovr', then a binary problem is fit for each label. Else
the loss minimised is the multinomial loss fit across
the entire probability distribution. Does not work for liblinear
solver.
.. versionadded:: 0.18
Stochastic Average Gradient descent solver for 'multinomial' case.
Attributes
----------
coef_ : array, shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function.
`coef_` is of shape (1, n_features) when the given problem
is binary.
intercept_ : array, shape (1,) or (n_classes,)
Intercept (a.k.a. bias) added to the decision function.
If `fit_intercept` is set to False, the intercept is set to zero.
`intercept_` is of shape (1,) when the problem is binary.
n_iter_ : array, shape (n_classes,) or (1, )
Actual number of iterations for all classes. If binary or multinomial,
it returns only 1 element. For liblinear solver, only the maximum
number of iteration across all classes is given.
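
To tie the parameters and attributes above together, here is a minimal sketch (my own illustration on toy data, not taken from the scikit-learn documentation) of fitting a multinomial model with the 'lbfgs' solver and the default L2 penalty:

# Minimal sketch of the documented parameters; the toy data below is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.RandomState(0).rand(60, 4)    # 60 samples, 4 features
y = np.repeat([0, 1, 2], 20)                # 3 classes

clf = LogisticRegression(penalty='l2', C=1.0, solver='lbfgs',
                         multi_class='multinomial')
clf.fit(X, y)

print(clf.coef_.shape)       # (n_classes, n_features) -> (3, 4), as described above
print(clf.intercept_.shape)  # (n_classes,) -> (3,)
print(clf.n_iter_)           # iterations used by the solver

As the docstring notes, switching the solver to 'liblinear' here would not work with multi_class='multinomial'; 'liblinear' only supports the one-vs-rest scheme.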

Softmax regression on MNIST

# -*- coding: utf-8 -*-
"""
Created on Thu Sep 7 10:47:18 2017

@author: Administrator
"""

import gzip
import struct
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
import tensorflow as tf

# MNIST data is stored in binary format,
# and we transform them into numpy ndarray objects by the following two utility functions
def read_image(file_name):
    with gzip.open(file_name, 'rb') as f:
        buf = f.read()
        index = 0
        magic, images, rows, columns = struct.unpack_from('>IIII', buf, index)
        index += struct.calcsize('>IIII')
        image_size = '>' + str(images * rows * columns) + 'B'
        ims = struct.unpack_from(image_size, buf, index)
        im_array = np.array(ims).reshape(images, rows, columns)
        return im_array

def read_label(file_name):
    with gzip.open(file_name, 'rb') as f:
        buf = f.read()
        index = 0
        magic, labels = struct.unpack_from('>II', buf, index)
        index += struct.calcsize('>II')
        label_size = '>' + str(labels) + 'B'
        labels = struct.unpack_from(label_size, buf, index)
        label_array = np.array(labels)
        return label_array

print("Start processing MNIST handwritten digits data...")
train_x_data = read_image("MNIST_data/train-images-idx3-ubyte.gz")
train_x_data = train_x_data.reshape(train_x_data.shape[0], -1).astype(np.float32)
train_y_data = read_label("MNIST_data/train-labels-idx1-ubyte.gz")
test_x_data = read_image("MNIST_data/t10k-images-idx3-ubyte.gz")
test_x_data = test_x_data.reshape(test_x_data.shape[0], -1).astype(np.float32)
test_y_data = read_label("MNIST_data/t10k-labels-idx1-ubyte.gz")

train_x_minmax = train_x_data / 255.0
test_x_minmax = test_x_data / 255.0

# Of course you can also use the utility function to read in MNIST provided by tensorflow
# from tensorflow.examples.tutorials.mnist import input_data
# mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
# train_x_minmax = mnist.train.images
# train_y_data = mnist.train.labels
# test_x_minmax = mnist.test.images
# test_y_data = mnist.test.labels

# We evaluate the softmax regression model by sklearn first
eval_sklearn = False
if eval_sklearn:
    print("Start evaluating softmax regression model by sklearn...")
    reg = LogisticRegression(solver="lbfgs", multi_class="multinomial")
    reg.fit(train_x_minmax, train_y_data)
    np.savetxt('coef_softmax_sklearn.txt', reg.coef_, fmt='%.6f')  # Save coefficients to a text file
    test_y_predict = reg.predict(test_x_minmax)
    print("Accuracy of test set: %f" % accuracy_score(test_y_data, test_y_predict))

eval_tensorflow = True
batch_gradient = False
if eval_tensorflow:
    print("Start evaluating softmax regression model by tensorflow...")
    # reformat y into one-hot encoding style
    lb = preprocessing.LabelBinarizer()
    lb.fit(train_y_data)
    train_y_data_trans = lb.transform(train_y_data)
    test_y_data_trans = lb.transform(test_y_data)

    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    V = tf.matmul(x, W) + b
    y = tf.nn.softmax(V)

    y_ = tf.placeholder(tf.float32, [None, 10])

    loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    optimizer = tf.train.GradientDescentOptimizer(0.5)
    train = optimizer.minimize(loss)

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)

    if batch_gradient:
        for step in range(300):
            sess.run(train, feed_dict={x: train_x_minmax, y_: train_y_data_trans})
            if step % 10 == 0:
                print("Batch Gradient Descent processing step %d" % step)
        print("Finally we got the estimated results, take such a long time...")
    else:
        for step in range(1000):
            sample_index = np.random.choice(train_x_minmax.shape[0], 100)
            batch_xs = train_x_minmax[sample_index, :]
            batch_ys = train_y_data_trans[sample_index, :]
            sess.run(train, feed_dict={x: batch_xs, y_: batch_ys})
            if step % 100 == 0:
                print("Stochastic Gradient Descent processing step %d" % step)

    np.savetxt('coef_softmax_tf.txt', np.transpose(sess.run(W)), fmt='%.6f')  # Save coefficients to a text file

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy of test set: %f" % sess.run(accuracy,
                                                feed_dict={x: test_x_minmax, y_: test_y_data_trans}))
  • Note:
  • A Variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. For machine learning applications, one generally has the model parameters be Variables.
  • Judging by test-set accuracy, both implementations land at around 92%, with sklearn doing slightly better. Note that 92% may look decent, but it is actually a rather low accuracy; in the words of the official tutorial, one ought to be embarrassed by it.
  • sklearn takes rather long to fit, because every round of parameter updates computes the loss on the full training set, then the gradient, and only then improves the estimate.
  • TensorFlow with the batch gradient descent estimator is also slow, for the same reason.
  • TensorFlow with the stochastic gradient descent estimator is fast and the final estimate is still quite good: each iteration uses only part of the data set to compute the loss and gradient, which speeds things up but may increase the bias; increasing the number of iterations then lowers the variance, so the overall error ends up not much worse than batch gradient descent. A hedged sklearn sketch of the same idea follows below.
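
As a side note to the last bullet (a sketch of my own, not part of the original post): scikit-learn also ships a stochastic-gradient estimator, SGDClassifier, which updates the weights from individual samples in the same spirit as the TensorFlow SGD loop above. The snippet below reuses the variables train_x_minmax, train_y_data, test_x_minmax and test_y_data from the script; note that the logistic-loss name is 'log' in older scikit-learn versions and 'log_loss' in newer ones.

# Hedged sketch (not from the original post): a stochastic-gradient counterpart
# to the full-batch LogisticRegression fit above.
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

sgd = SGDClassifier(loss='log', max_iter=10, tol=1e-3)  # logistic loss, per-sample SGD updates
sgd.fit(train_x_minmax, train_y_data)
print("SGD accuracy of test set: %f" % accuracy_score(test_y_data, sgd.predict(test_x_minmax)))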

Official demo

  • Downloads the data automatically

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A very simple MNIST classifier.

See extensive documentation at
https://www.tensorflow.org/get_started/mnist/beginners
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf

FLAGS = None


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

  # Create the model
  x = tf.placeholder(tf.float32, [None, 784])
  W = tf.Variable(tf.zeros([784, 10]))
  b = tf.Variable(tf.zeros([10]))
  y = tf.matmul(x, W) + b

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10])

  # The raw formulation of cross-entropy,
  #
  #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
  #                                 reduction_indices=[1]))
  #
  # can be numerically unstable.
  #
  # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
  # outputs of 'y', and then average across the batch.
  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
  train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

  sess = tf.InteractiveSession()
  tf.global_variables_initializer().run()

  # Train
  # In each step of this loop, we grab a random batch of 100 training points
  # and feed them in for the placeholders when running train_step.
  # Training on a small random subset of the data is called stochastic training --
  # here, more precisely, stochastic gradient descent. Ideally we would use all
  # of the data at every training step, since that gives better results, but it
  # is computationally expensive; using a different subset each time keeps the
  # cost low while still learning the overall properties of the data set.
  for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

  # Test trained model
  correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                      y_: mnist.test.labels}))


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--data_dir', type=str,
                      default='/tmp/tensorflow/mnist/input_data',
                      help='Directory for storing input data')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
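
The numerical-stability comment in the demo above is worth a concrete look. Below is a small plain-numpy sketch of my own (not part of the official demo) showing why naively computing log(softmax(logits)) can overflow and produce NaNs for large logits, while the shifted "log-sum-exp" form that tf.nn.softmax_cross_entropy_with_logits relies on internally stays finite:

# Illustrative sketch only: why the raw cross-entropy formulation can blow up.
import numpy as np

logits = np.array([1000.0, 0.0, -1000.0])                 # extreme but legal logits
naive_softmax = np.exp(logits) / np.sum(np.exp(logits))   # exp(1000) overflows
print(naive_softmax)                                       # [nan, 0., 0.] plus overflow warnings

# Stable version: subtract the max logit before exponentiating (log-sum-exp trick).
shifted = logits - np.max(logits)
log_softmax = shifted - np.log(np.sum(np.exp(shifted)))
print(log_softmax)                                         # finite: [0., -1000., -2000.]

This is also why the first script, which applies tf.log directly to the softmax output, is the less robust formulation; the official demo instead hands the raw logits to softmax_cross_entropy_with_logits.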
  • result
Start processing MNIST handwritten digits data...
Start evaluating softmax regression model by tensorflow...
WARNING:tensorflow:From D:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\util\tf_should_use.py:175: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
2017-09-08 16:47:36.504803: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-08 16:47:36.504803: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
Stochastic Gradient Descent processing step 0
Stochastic Gradient Descent processing step 100
Stochastic Gradient Descent processing step 200
Stochastic Gradient Descent processing step 300
Stochastic Gradient Descent processing step 400
Stochastic Gradient Descent processing step 500
Stochastic Gradient Descent processing step 600
Stochastic Gradient Descent processing step 700
Stochastic Gradient Descent processing step 800
Stochastic Gradient Descent processing step 900
Accuracy of test set: 0.915600
