This is a Neural Networks implementation in Python, based on Python 2.7.9, numpy, and matplotlib.

The code comes from the Stanford course notes: http://cs231n.github.io/neural-networks-case-study/

It is essentially a direct port; working through this program helps with understanding Python syntax as well as the principles behind Neural Networks.
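As a roadmap before the code (my summary of what the script computes, not part of the original write-up): the model is a two-layer network with a ReLU hidden layer, trained with softmax cross-entropy loss plus L2 regularization,

\[
H = \max(0,\ XW + b), \qquad S = HW_2 + b_2,
\]
\[
L = -\frac{1}{N_{\text{tot}}}\sum_{i}\log\frac{e^{S_{i,y_i}}}{\sum_{k} e^{S_{i,k}}} \;+\; \frac{\lambda}{2}\big(\lVert W\rVert_F^2 + \lVert W_2\rVert_F^2\big),
\]

where the max is elementwise, \(N_{\text{tot}} = N \cdot K\) is the total number of examples, and \(\lambda\) is the regularization strength reg in the code.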

import numpy as np
import matplotlib.pyplot as plt

# generate a spiral dataset: K classes of N points each, in D dimensions
N = 200  # number of points per class
D = 2    # dimensionality
K = 3    # number of classes
X = np.zeros((N*K, D))            # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8')  # class labels
for j in xrange(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N)  # radius
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2  # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j

# visualize the data
plt.scatter(X[:,0], X[:,1], s=40, c=y, alpha=0.5)
plt.show()

# train a two-layer neural network
# initialize parameters randomly
h = 20  # size of hidden layer
W = 0.01 * np.random.randn(D, h)
b = np.zeros((1, h))
W2 = 0.01 * np.random.randn(h, K)
b2 = np.zeros((1, K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# gradient descent loop (10,000 iterations, as in the CS231n original)
num_examples = X.shape[0]
for i in xrange(10000):

    # evaluate class scores, [N x K]
    hidden_layer = np.maximum(0, np.dot(X, W) + b)  # note, ReLU activation
    scores = np.dot(hidden_layer, W2) + b2

    # compute the class probabilities
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)  # [N x K]

    # compute the loss: average cross-entropy loss and regularization
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs) / num_examples
    reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
    loss = data_loss + reg_loss
    if i % 1000 == 0:
        print "iteration %d: loss %f" % (i, loss)

    # compute the gradient on scores
    dscores = probs
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples

    # backpropagate the gradient to the parameters
    # first backprop into parameters W2 and b2
    dW2 = np.dot(hidden_layer.T, dscores)
    db2 = np.sum(dscores, axis=0, keepdims=True)
    # next backprop into hidden layer
    dhidden = np.dot(dscores, W2.T)
    # backprop the ReLU non-linearity
    dhidden[hidden_layer <= 0] = 0
    # finally into W, b
    dW = np.dot(X.T, dhidden)
    db = np.sum(dhidden, axis=0, keepdims=True)

    # add regularization gradient contribution
    dW2 += reg * W2
    dW += reg * W

    # perform a parameter update
    W += -step_size * dW
    b += -step_size * db
    W2 += -step_size * dW2
    b2 += -step_size * db2

# evaluate training set accuracy
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
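The least obvious step in the backward pass is the pair of lines dscores = probs and dscores[range(num_examples), y] -= 1. This is the standard gradient of softmax cross-entropy (a derivation note added here, not in the original post): for one example with scores \(s\) and probabilities \(p_k = e^{s_k}/\sum_j e^{s_j}\), the loss \(L = -\log p_y\) satisfies

\[
\frac{\partial L}{\partial s_k} = p_k - \mathbb{1}[k = y],
\]

so the gradient on the scores is simply the probability vector with 1 subtracted at the correct class, and the final dscores /= num_examples accounts for averaging the data loss over all examples. Similarly, dhidden[hidden_layer <= 0] = 0 implements the ReLU gradient: the upstream gradient passes through only where the activation was positive.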

Randomly generated data (figure in the original post)

Run results (figure in the original post)
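The CS231n write-up also plots the learned decision regions. Below is a minimal sketch of that visualization, assuming the trained W, b, W2, b2 and the data X, y from the script above are still in scope; the grid_step value and the plotting style are my choices, not from the original post:

# evaluate the trained network on a dense grid and color each point
# by its predicted class
grid_step = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, grid_step),
                     np.arange(y_min, y_max, grid_step))
grid = np.c_[xx.ravel(), yy.ravel()]
# forward pass on the grid: ReLU hidden layer, then class scores
Z = np.dot(np.maximum(0, np.dot(grid, W) + b), W2) + b2
Z = np.argmax(Z, axis=1).reshape(xx.shape)
# decision regions with the training points overlaid
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], s=40, c=y, alpha=0.5)
plt.show()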
