Getting Started: Writing a GAN with Keras
Time: 2017-5-31
Preface
This post mainly follows the tutorial in [1]; the core algorithm comes from Ian J. Goodfellow's paper and is as follows:
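The original post shows the algorithm as an image from the paper, which is not reproduced here. As a stand-in, this is the minimax objective from Goodfellow's paper that the algorithm optimizes:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]
+ \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```

In practice the paper's Algorithm 1 alternates k gradient steps on the discriminator D (on a mix of real and generated samples) with one gradient step on the generator G through a frozen D; the train() function in this post uses k = 1.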

Code
%matplotlib inline
import numpy as np
import pandas as pd
from keras.models import Model
from keras.layers import Dense, Activation, Input, Reshape
from keras.layers import Conv1D, Flatten, Dropout
from keras.optimizers import SGD, Adam
from tqdm import tqdm_notebook as tqdm  # progress bar

# Generate random sine-curve data
def sample_data(n_samples=10000, x_vals=np.arange(0, 5, .1), max_offset=1000, mul_range=[1, 2]):
    vectors = []
    for i in range(n_samples):
        offset = np.random.random() * max_offset
        mul = mul_range[0] + np.random.random() * (mul_range[1] - mul_range[0])
        vectors.append(np.sin(offset + x_vals * mul) / 2 + .5)
    return np.array(vectors)
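As a quick sanity check (a standalone sketch with the sampler repeated so it runs on its own): each sample is a 50-point sine curve, rescaled from [-1, 1] into [0, 1] by the `/ 2 + .5` step.

```python
import numpy as np

def sample_data(n_samples=10000, x_vals=np.arange(0, 5, .1), max_offset=1000, mul_range=[1, 2]):
    vectors = []
    for i in range(n_samples):
        offset = np.random.random() * max_offset
        mul = mul_range[0] + np.random.random() * (mul_range[1] - mul_range[0])
        vectors.append(np.sin(offset + x_vals * mul) / 2 + .5)
    return np.array(vectors)

X = sample_data(n_samples=5)
print(X.shape)  # (5, 50): 5 curves, 50 points each (arange(0, 5, .1) has 50 values)
print(X.min() >= 0.0, X.max() <= 1.0)  # True True
```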
# Build the generator
def get_generative(G_in, dense_dim=200, out_dim=50, lr=1e-3):
    x = Dense(dense_dim)(G_in)
    x = Activation('tanh')(x)
    G_out = Dense(out_dim, activation='tanh')(x)
    G = Model(G_in, G_out)
    opt = SGD(lr=lr)
    G.compile(loss='binary_crossentropy', optimizer=opt)
    return G, G_out
# Build the discriminator
def get_discriminative(D_in, lr=1e-3, drate=.25, n_channels=50, conv_sz=5, leak=.2):
    x = Reshape((-1, 1))(D_in)
    x = Conv1D(n_channels, conv_sz, activation='relu')(x)
    x = Dropout(drate)(x)
    x = Flatten()(x)
    x = Dense(n_channels)(x)
    D_out = Dense(2, activation='sigmoid')(x)
    D = Model(D_in, D_out)
    dopt = Adam(lr=lr)
    D.compile(loss='binary_crossentropy', optimizer=dopt)
    return D, D_out
def set_trainability(model, trainable=False):
    model.trainable = trainable
    for layer in model.layers:
        layer.trainable = trainable

def make_gan(GAN_in, G, D):
    set_trainability(D, False)  # freeze D inside the stacked model, so GAN updates only train G
    x = G(GAN_in)
    GAN_out = D(x)
    GAN = Model(GAN_in, GAN_out)
    GAN.compile(loss='binary_crossentropy', optimizer=G.optimizer)
    return GAN, GAN_out
# Pre-train the discriminator on real and generated data
def sample_data_and_gen(G, noise_dim=10, n_samples=10000):
    XT = sample_data(n_samples=n_samples)
    XN_noise = np.random.uniform(0, 1, size=[n_samples, noise_dim])
    XN = G.predict(XN_noise)
    X = np.concatenate((XT, XN))
    y = np.zeros((2 * n_samples, 2))
    y[:n_samples, 1] = 1   # real samples
    y[n_samples:, 0] = 1   # generated samples
    return X, y

def pretrain(G, D, noise_dim=10, n_samples=10000, batch_size=32):
    X, y = sample_data_and_gen(G, noise_dim=noise_dim, n_samples=n_samples)
    set_trainability(D, True)
    D.fit(X, y, epochs=1, batch_size=batch_size)
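The label layout in sample_data_and_gen is one-hot over two classes: column 1 marks real samples, column 0 marks generated ones. A minimal numpy sketch of the same labeling (with a small n_samples for illustration):

```python
import numpy as np

n_samples = 4
y = np.zeros((2 * n_samples, 2))
y[:n_samples, 1] = 1   # rows 0-3: real samples, one-hot class [0, 1]
y[n_samples:, 0] = 1   # rows 4-7: generated samples, one-hot class [1, 0]
print(y.sum(axis=1))   # every row sums to 1: exactly one class per sample
```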
# Alternating training
def sample_noise(G, noise_dim=10, n_samples=10000):
    X = np.random.uniform(0, 1, size=[n_samples, noise_dim])
    y = np.zeros((n_samples, 2))
    y[:, 1] = 1  # label the noise "real", so the frozen-D GAN loss rewards G for fooling D
    return X, y
def train(GAN, G, D, epochs=500, n_samples=10000, noise_dim=10, batch_size=32, verbose=False, v_freq=50):
    d_loss = []
    g_loss = []
    e_range = range(epochs)
    if verbose:
        e_range = tqdm(e_range)
    for epoch in e_range:
        X, y = sample_data_and_gen(G, n_samples=n_samples, noise_dim=noise_dim)  # train D
        set_trainability(D, True)
        d_loss.append(D.train_on_batch(X, y))
        X, y = sample_noise(G, n_samples=n_samples, noise_dim=noise_dim)  # train G
        set_trainability(D, False)
        g_loss.append(GAN.train_on_batch(X, y))
        if verbose and (epoch + 1) % v_freq == 0:
            print("Epoch #{}: Generative Loss: {}, Discriminative Loss: {}".format(epoch + 1, g_loss[-1], d_loss[-1]))
    return d_loss, g_loss
ax = pd.DataFrame(np.transpose(sample_data(5))).plot()
G_in = Input(shape=[10])
G, G_out = get_generative(G_in)
G.summary()
D_in = Input(shape=[50])
D, D_out = get_discriminative(D_in)
D.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) (None, 10) 0
_________________________________________________________________
dense_13 (Dense) (None, 200) 2200
_________________________________________________________________
activation_4 (Activation) (None, 200) 0
_________________________________________________________________
dense_14 (Dense) (None, 50) 10050
=================================================================
Total params: 12,250
Trainable params: 12,250
Non-trainable params: 0
_________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) (None, 50) 0
_________________________________________________________________
reshape_4 (Reshape) (None, 50, 1) 0
_________________________________________________________________
conv1d_4 (Conv1D) (None, 46, 50) 300
_________________________________________________________________
dropout_4 (Dropout) (None, 46, 50) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 2300) 0
_________________________________________________________________
dense_15 (Dense) (None, 50) 115050
_________________________________________________________________
dense_16 (Dense) (None, 2) 102
=================================================================
Total params: 115,452
Trainable params: 115,452
Non-trainable params: 0
_________________________________________________________________

GAN_in = Input([10])
GAN, GAN_out = make_gan(GAN_in, G, D)
GAN.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_11 (InputLayer) (None, 10) 0
_________________________________________________________________
model_9 (Model) (None, 50) 12250
_________________________________________________________________
model_10 (Model) (None, 2) 115452
=================================================================
Total params: 127,702
Trainable params: 12,250
Non-trainable params: 115,452
_________________________________________________________________
pretrain(G, D)
Epoch 1/1
20000/20000 [==============================] - 3s - loss: 0.0072
d_loss, g_loss = train(GAN, G, D, verbose=True)
Epoch #50: Generative Loss: 4.41527795791626, Discriminative Loss: 0.6733301877975464
Epoch #100: Generative Loss: 3.8898046016693115, Discriminative Loss: 0.09901376813650131
Epoch #150: Generative Loss: 6.2410054206848145, Discriminative Loss: 0.034074194729328156
Epoch #200: Generative Loss: 5.206066608428955, Discriminative Loss: 0.13078376650810242
Epoch #250: Generative Loss: 3.5144925117492676, Discriminative Loss: 0.07160962373018265
Epoch #300: Generative Loss: 3.705162525177002, Discriminative Loss: 0.05893774330615997
Epoch #350: Generative Loss: 3.511479616165161, Discriminative Loss: 0.09775738418102264
Epoch #400: Generative Loss: 4.141300678253174, Discriminative Loss: 0.03169865906238556
Epoch #450: Generative Loss: 3.500260829925537, Discriminative Loss: 0.05957922339439392
Epoch #500: Generative Loss: 2.9797921180725098, Discriminative Loss: 0.10566817969083786
ax = pd.DataFrame(
    {
        'Generative Loss': g_loss,
        'Discriminative Loss': d_loss,
    }
).plot(title='Training loss', logy=True)
ax.set_xlabel("Epochs")
ax.set_ylabel("Loss")

N_VIEWED_SAMPLES = 2
data_and_gen, _ = sample_data_and_gen(G, n_samples=N_VIEWED_SAMPLES)
pd.DataFrame(np.transpose(data_and_gen[N_VIEWED_SAMPLES:])).plot()

N_VIEWED_SAMPLES = 2
data_and_gen, _ = sample_data_and_gen(G, n_samples=N_VIEWED_SAMPLES)
pd.DataFrame(np.transpose(data_and_gen[N_VIEWED_SAMPLES:])).rolling(5).mean()[5:].plot()
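The rolling(5).mean() call above only smooths the generated curves for display. A numpy equivalent of that 5-point moving average (a sketch, not part of the training code — it matches pandas' result without the leading NaN rows):

```python
import numpy as np

def moving_average(x, window=5):
    # Same values as pandas Series.rolling(window).mean(), minus the first window-1 NaNs
    return np.convolve(x, np.ones(window) / window, mode='valid')

x = np.arange(10, dtype=float)
print(moving_average(x))  # [2. 3. 4. 5. 6. 7.]
```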

Reference