tflearn: saving a model and retraining
From: https://stackoverflow.com/questions/41616292/how-to-load-and-retrain-tflean-model
First, create a graph, train the model inside it, and save it:
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_1d, global_max_pool
from tflearn.layers.merge_ops import merge
from tflearn.layers.estimator import regression

# Build the network inside its own graph, train it, then save it.
graph1 = tf.Graph()
with graph1.as_default():
    network = input_data(shape=[None, MAX_DOCUMENT_LENGTH])
    network = tflearn.embedding(network, input_dim=n_words, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    model = tflearn.DNN(network, tensorboard_verbose=0)
    clf, acc, roc_auc, fpr, tpr = classify_DNN(data, clas, model)  # user-defined training helper
    clf.save(model_path)
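classify_DNN is the original asker's own helper and is never defined in the post. A minimal sketch of what such a helper might look like, assuming data holds padded sequences and clas holds one-hot labels (all names here are hypothetical, not from the source):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

def classify_DNN(data, clas, model):
    # Hypothetical stand-in: split, train, evaluate, and return the same tuple
    # (classifier, accuracy, ROC AUC, fpr, tpr) that the original post unpacks.
    trainX, testX, trainY, testY = train_test_split(data, clas, test_size=0.2)
    model.fit(trainX, trainY, n_epoch=10, validation_set=(testX, testY),
              show_metric=True, batch_size=32)
    probs = np.asarray(model.predict(testX))          # class probabilities
    fpr, tpr, _ = roc_curve(np.asarray(testY)[:, 1], probs[:, 1])
    roc_auc = auc(fpr, tpr)
    acc = model.evaluate(testX, testY)[0]
    return model, acc, roc_auc, fpr, tpr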
Then, to reload the model and retrain it, or use it for prediction:
MODEL = None
with tf.Graph().as_default():
    ## Building the deep neural network (same architecture as the saved model)
    network = input_data(shape=[None, MAX_DOCUMENT_LENGTH])
    network = tflearn.embedding(network, input_dim=n_words, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    new_model = tflearn.DNN(network, tensorboard_verbose=3)
    new_model.load(model_path)
    MODEL = new_model
Use MODEL for prediction or retraining. The first line (MODEL = None) and the with block are the important parts, for anyone who might need this.
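For example (a minimal sketch, not from the original post; X_new, X_retrain and Y_retrain stand for the caller's own preprocessed data, padded to MAX_DOCUMENT_LENGTH):

import numpy as np

# Prediction: the restored model returns class probabilities per sample.
probs = np.asarray(MODEL.predict(X_new))   # shape [n_samples, 2]
pred_classes = np.argmax(probs, axis=1)

# Retraining: continue fitting from the restored weights, then save again.
MODEL.fit(X_retrain, Y_retrain, n_epoch=5, show_metric=True)
MODEL.save(model_path)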
Official example:
""" An example showing how to save/restore models and retrieve weights. """ from __future__ import absolute_import, division, print_function import tflearn import tflearn.datasets.mnist as mnist # MNIST Data
X, Y, testX, testY = mnist.load_data(one_hot=True) # Model
input_layer = tflearn.input_data(shape=[None, 784], name='input')
dense1 = tflearn.fully_connected(input_layer, 128, name='dense1')
dense2 = tflearn.fully_connected(dense1, 256, name='dense2')
softmax = tflearn.fully_connected(dense2, 10, activation='softmax')
regression = tflearn.regression(softmax, optimizer='adam',
learning_rate=0.001,
loss='categorical_crossentropy') # Define classifier, with model checkpoint (autosave)
model = tflearn.DNN(regression, checkpoint_path='model.tfl.ckpt') # Train model, with model checkpoint every epoch and every 200 training steps.
model.fit(X, Y, n_epoch=1,
validation_set=(testX, testY),
show_metric=True,
snapshot_epoch=True, # Snapshot (save & evaluate) model every epoch.
snapshot_step=500, # Snapshot (save & evalaute) model every 500 steps.
run_id='model_and_weights') # ---------------------
# Save and load a model
# --------------------- # Manually save model
model.save("model.tfl") # Load a model
model.load("model.tfl") # Or Load a model from auto-generated checkpoint
# >> model.load("model.tfl.ckpt-500") # Resume training
model.fit(X, Y, n_epoch=1,
validation_set=(testX, testY),
show_metric=True,
snapshot_epoch=True,
run_id='model_and_weights') # ------------------
# Retrieving weights
# ------------------ # Retrieve a layer weights, by layer name:
dense1_vars = tflearn.variables.get_layer_variables_by_name('dense1')
# Get a variable's value, using model `get_weights` method:
print("Dense1 layer weights:")
print(model.get_weights(dense1_vars[0]))
# Or using generic tflearn function:
print("Dense1 layer biases:")
with model.session.as_default():
print(tflearn.variables.get_value(dense1_vars[1])) # It is also possible to retrieve a layer weights through its attributes `W`
# and `b` (if available).
# Get variable's value, using model `get_weights` method:
print("Dense2 layer weights:")
print(model.get_weights(dense2.W))
# Or using generic tflearn function:
print("Dense2 layer biases:")
with model.session.as_default():
print(tflearn.variables.get_value(dense2.b))