An Introduction to Writing a GAN with Keras


Time: 2017-5-31


Preface

This post mainly follows the tutorial in [1]; the core algorithm comes from Ian J. Goodfellow's original GAN paper and is shown below:

(Figure: the minibatch stochastic gradient descent training algorithm for GANs, from Goodfellow et al.)
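For reference, the procedure in the figure optimizes the two-player minimax objective from the paper, in which the discriminator $D$ and the generator $G$ are trained against each other:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

Each iteration alternates gradient steps on $D$ (with $G$ fixed) with a gradient step on $G$ (with $D$ fixed), which is exactly the alternating structure of the `train` function below.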

Code

```python
%matplotlib inline
import numpy as np
import pandas as pd

from keras.models import Model
from keras.layers import Dense, Activation, Input, Reshape
from keras.layers import Conv1D, Flatten, Dropout
from keras.optimizers import SGD, Adam

from tqdm import tqdm_notebook as tqdm  # progress bar


# Generate random sine curves as the "real" data
def sample_data(n_samples=10000, x_vals=np.arange(0, 5, .1), max_offset=1000, mul_range=[1, 2]):
    vectors = []
    for i in range(n_samples):
        offset = np.random.random() * max_offset
        mul = mul_range[0] + np.random.random() * (mul_range[1] - mul_range[0])
        # A sine curve with random phase and frequency, rescaled into [0, 1]
        vectors.append(np.sin(offset + x_vals * mul) / 2 + .5)
    return np.array(vectors)


# Build the generative model
def get_generative(G_in, dense_dim=200, out_dim=50, lr=1e-3):
    x = Dense(dense_dim)(G_in)
    x = Activation('tanh')(x)
    G_out = Dense(out_dim, activation='tanh')(x)
    G = Model(G_in, G_out)
    opt = SGD(lr=lr)
    G.compile(loss='binary_crossentropy', optimizer=opt)
    return G, G_out


# Build the discriminative model
def get_discriminative(D_in, lr=1e-3, drate=.25, n_channels=50, conv_sz=5, leak=.2):
    x = Reshape((-1, 1))(D_in)
    x = Conv1D(n_channels, conv_sz, activation='relu')(x)
    x = Dropout(drate)(x)
    x = Flatten()(x)
    x = Dense(n_channels)(x)
    D_out = Dense(2, activation='sigmoid')(x)
    D = Model(D_in, D_out)
    dopt = Adam(lr=lr)
    D.compile(loss='binary_crossentropy', optimizer=dopt)
    return D, D_out


# Freeze or unfreeze all of a model's weights
def set_trainability(model, trainable=False):
    model.trainable = trainable
    for layer in model.layers:
        layer.trainable = trainable


# Chain G and D into the combined GAN model; D is frozen inside it
def make_gan(GAN_in, G, D):
    set_trainability(D, False)
    x = G(GAN_in)
    GAN_out = D(x)
    GAN = Model(GAN_in, GAN_out)
    GAN.compile(loss='binary_crossentropy', optimizer=G.optimizer)
    return GAN, GAN_out


# Build a mixed batch of real and generated data to pre-train the discriminator
def sample_data_and_gen(G, noise_dim=10, n_samples=10000):
    XT = sample_data(n_samples=n_samples)
    XN_noise = np.random.uniform(0, 1, size=[n_samples, noise_dim])
    XN = G.predict(XN_noise)
    X = np.concatenate((XT, XN))
    y = np.zeros((2 * n_samples, 2))
    y[:n_samples, 1] = 1   # real samples -> class [0, 1]
    y[n_samples:, 0] = 1   # generated samples -> class [1, 0]
    return X, y


def pretrain(G, D, noise_dim=10, n_samples=10000, batch_size=32):
    X, y = sample_data_and_gen(G, noise_dim=noise_dim, n_samples=n_samples)
    set_trainability(D, True)
    D.fit(X, y, epochs=1, batch_size=batch_size)


# Alternating training steps
def sample_noise(G, noise_dim=10, n_samples=10000):
    X = np.random.uniform(0, 1, size=[n_samples, noise_dim])
    y = np.zeros((n_samples, 2))
    y[:, 1] = 1  # label the noise batch as "real" so G learns to fool D
    return X, y


def train(GAN, G, D, epochs=500, n_samples=10000, noise_dim=10, batch_size=32, verbose=False, v_freq=50):
    d_loss = []
    g_loss = []
    e_range = range(epochs)
    if verbose:
        e_range = tqdm(e_range)
    for epoch in e_range:
        # Train D on a mix of real and generated data
        X, y = sample_data_and_gen(G, n_samples=n_samples, noise_dim=noise_dim)
        set_trainability(D, True)
        d_loss.append(D.train_on_batch(X, y))

        # Train G (through the combined GAN) with D frozen
        X, y = sample_noise(G, n_samples=n_samples, noise_dim=noise_dim)
        set_trainability(D, False)
        g_loss.append(GAN.train_on_batch(X, y))

        if verbose and (epoch + 1) % v_freq == 0:
            print("Epoch #{}: Generative Loss: {}, Discriminative Loss: {}".format(
                epoch + 1, g_loss[-1], d_loss[-1]))
    return d_loss, g_loss
```
```python
# Plot a few real samples
ax = pd.DataFrame(np.transpose(sample_data(5))).plot()
```

(Figure: five randomly sampled sine curves)

```python
G_in = Input(shape=[10])
G, G_out = get_generative(G_in)
G.summary()

D_in = Input(shape=[50])
D, D_out = get_discriminative(D_in)
D.summary()
```
```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_9 (InputLayer)         (None, 10)                0
_________________________________________________________________
dense_13 (Dense)             (None, 200)               2200
_________________________________________________________________
activation_4 (Activation)    (None, 200)               0
_________________________________________________________________
dense_14 (Dense)             (None, 50)                10050
=================================================================
Total params: 12,250
Trainable params: 12,250
Non-trainable params: 0
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_10 (InputLayer)        (None, 50)                0
_________________________________________________________________
reshape_4 (Reshape)          (None, 50, 1)             0
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 46, 50)            300
_________________________________________________________________
dropout_4 (Dropout)          (None, 46, 50)            0
_________________________________________________________________
flatten_4 (Flatten)          (None, 2300)              0
_________________________________________________________________
dense_15 (Dense)             (None, 50)                115050
_________________________________________________________________
dense_16 (Dense)             (None, 2)                 102
=================================================================
Total params: 115,452
Trainable params: 115,452
Non-trainable params: 0
_________________________________________________________________
```

```python
GAN_in = Input([10])
GAN, GAN_out = make_gan(GAN_in, G, D)
GAN.summary()
```
```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_11 (InputLayer)        (None, 10)                0
_________________________________________________________________
model_9 (Model)              (None, 50)                12250
_________________________________________________________________
model_10 (Model)             (None, 2)                 115452
=================================================================
Total params: 127,702
Trainable params: 12,250
Non-trainable params: 115,452
_________________________________________________________________
```
```python
pretrain(G, D)
```

```
Epoch 1/1
20000/20000 [==============================] - 3s - loss: 0.0072
```
```python
d_loss, g_loss = train(GAN, G, D, verbose=True)
```

```
Epoch #50: Generative Loss: 4.41527795791626, Discriminative Loss: 0.6733301877975464
Epoch #100: Generative Loss: 3.8898046016693115, Discriminative Loss: 0.09901376813650131
Epoch #150: Generative Loss: 6.2410054206848145, Discriminative Loss: 0.034074194729328156
Epoch #200: Generative Loss: 5.206066608428955, Discriminative Loss: 0.13078376650810242
Epoch #250: Generative Loss: 3.5144925117492676, Discriminative Loss: 0.07160962373018265
Epoch #300: Generative Loss: 3.705162525177002, Discriminative Loss: 0.05893774330615997
Epoch #350: Generative Loss: 3.511479616165161, Discriminative Loss: 0.09775738418102264
Epoch #400: Generative Loss: 4.141300678253174, Discriminative Loss: 0.03169865906238556
Epoch #450: Generative Loss: 3.500260829925537, Discriminative Loss: 0.05957922339439392
Epoch #500: Generative Loss: 2.9797921180725098, Discriminative Loss: 0.10566817969083786
```
```python
ax = pd.DataFrame(
    {
        'Generative Loss': g_loss,
        'Discriminative Loss': d_loss,
    }
).plot(title='Training loss', logy=True)
ax.set_xlabel("Epochs")
ax.set_ylabel("Loss")
```

(Figure: generative and discriminative loss during training, log scale)
```python
# Plot a couple of generated curves (the second half of the mixed batch)
N_VIEWED_SAMPLES = 2
data_and_gen, _ = sample_data_and_gen(G, n_samples=N_VIEWED_SAMPLES)
pd.DataFrame(np.transpose(data_and_gen[N_VIEWED_SAMPLES:])).plot()
```

(Figure: two generated curves)
```python
# The same generated curves, smoothed with a rolling mean
N_VIEWED_SAMPLES = 2
data_and_gen, _ = sample_data_and_gen(G, n_samples=N_VIEWED_SAMPLES)
pd.DataFrame(np.transpose(data_and_gen[N_VIEWED_SAMPLES:])).rolling(5).mean()[5:].plot()
```

(Figure: the generated curves after rolling-mean smoothing)

References

[1] http://www.rricard.me/machine/learning/generative/adversarial/networks/keras/tensorflow/2017/04/05/gans-part2.html#Imports
