TensorFlow runtime error: "The kernel appears to have died. It will restart automatically."
The sample code below used to run fine inside a pod with no problems.
After moving it to Jupyter and giving it two more GPUs, it would sometimes crash with that error.
So, following advice found online, I added an explicit restriction on which GPUs to use as a temporary fix, and that made the problem go away for now.
See the CUDA_VISIBLE_DEVICES line in the code below.
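To confirm that the restriction actually takes effect, a quick check like the following can be run first in a fresh kernel. This is a minimal sketch assuming TF 1.x, where device_lib lives under tensorflow.python.client; the '0,1' value is just the same setting used in the script below.

# Sketch: verify which GPUs TensorFlow can see after setting CUDA_VISIBLE_DEVICES.
# Run in a fresh kernel so the variable is set before the devices are enumerated.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

from tensorflow.python.client import device_lib

for d in device_lib.list_local_devices():
    if d.device_type == 'GPU':
        print(d.name, d.physical_device_desc)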
import timeit
import os
import tensorflow as tf
import numpy as np
from tensorflow.keras.datasets.cifar10 import load_data
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # restrict which GPUs TensorFlow may use (the workaround mentioned above); must run before the Session is created
def model():
    x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
    y = tf.placeholder(tf.float32, shape=[None, 10])
    rate = tf.placeholder(tf.float32)
    # convolutional layer 1
    conv_1 = tf.layers.conv2d(x, 32, [3, 3], padding='SAME', activation=tf.nn.relu)
    max_pool_1 = tf.layers.max_pooling2d(conv_1, [2, 2], strides=2, padding='SAME')
    drop_1 = tf.layers.dropout(max_pool_1, rate=rate)
    # convolutional layer 2
    conv_2 = tf.layers.conv2d(drop_1, 64, [3, 3], padding="SAME", activation=tf.nn.relu)
    max_pool_2 = tf.layers.max_pooling2d(conv_2, [2, 2], strides=2, padding="SAME")
    drop_2 = tf.layers.dropout(max_pool_2, rate=rate)
    # convolutional layer 3
    conv_3 = tf.layers.conv2d(drop_2, 128, [3, 3], padding="SAME", activation=tf.nn.relu)
    max_pool_3 = tf.layers.max_pooling2d(conv_3, [2, 2], strides=2, padding="SAME")
    drop_3 = tf.layers.dropout(max_pool_3, rate=rate)
    # fully connected layer 1
    flat = tf.reshape(drop_3, shape=[-1, 4 * 4 * 128])
    fc_1 = tf.layers.dense(flat, 80, activation=tf.nn.relu)
    drop_4 = tf.layers.dropout(fc_1, rate=rate)
    # fully connected layer 2 (the output layer)
    fc_2 = tf.layers.dense(drop_4, 10)
    output = tf.nn.relu(fc_2)
    # accuracy
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(output, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # loss
    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=y))
    # optimizer
    optimizer = tf.train.AdamOptimizer(1e-4, beta1=0.9, beta2=0.999, epsilon=1e-8).minimize(loss)
    return x, y, rate, accuracy, loss, optimizer
def one_hot_encoder(y):
    ret = np.zeros(len(y) * 10)
    ret = ret.reshape([-1, 10])
    for i in range(len(y)):
        ret[i][y[i]] = 1
    return ret
def train(x_train, y_train, sess, x, y, rate, optimizer, accuracy, loss):
    batch_size = 128
    y_train_cls = one_hot_encoder(y_train)
    start = end = 0
    for i in range(int(len(x_train) / batch_size)):
        if (i + 1) % 100 == 1:
            start = timeit.default_timer()
        batch_x = x_train[i * batch_size:(i + 1) * batch_size]
        batch_y = y_train_cls[i * batch_size:(i + 1) * batch_size]
        _, batch_loss, batch_accuracy = sess.run([optimizer, loss, accuracy],
                                                 feed_dict={x: batch_x, y: batch_y, rate: 0.4})
        if (i + 1) % 100 == 0:
            end = timeit.default_timer()
            print("Time:", end - start, "s the loss is ", batch_loss, " and the accuracy is ", batch_accuracy * 100, "%")
def test(x_test, y_test, sess, x, y, rate, accuracy, loss):
    batch_size = 64
    y_test_cls = one_hot_encoder(y_test)
    global_loss = 0
    global_accuracy = 0
    for t in range(int(len(x_test) / batch_size)):
        batch_x = x_test[t * batch_size:(t + 1) * batch_size]
        batch_y = y_test_cls[t * batch_size:(t + 1) * batch_size]
        batch_loss, batch_accuracy = sess.run([loss, accuracy],
                                              feed_dict={x: batch_x, y: batch_y, rate: 1})
        global_loss += batch_loss
        global_accuracy += batch_accuracy
    global_loss = global_loss / (len(x_test) / batch_size)
    global_accuracy = global_accuracy / (len(x_test) / batch_size)
    print("In Test Time, loss is ", global_loss, ' and the accuracy is ', global_accuracy)
EPOCH = 100
(x_train, y_train), (x_test, y_test) = load_data()
print("There are ", len(x_train), " training images and ", len(x_test), " test images")
x, y, rate, accuracy, loss, optimizer = model()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(EPOCH):
    print("Train on epoch ", i, " start")
    train(x_train, y_train, sess, x, y, rate, optimizer, accuracy, loss)
    test(x_test, y_test, sess, x, y, rate, accuracy, loss)
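Restricting the visible GPUs was enough in my case. That said, a "kernel died" error in Jupyter with GPUs is often a symptom of TF 1.x grabbing nearly all memory on every visible GPU by default. If the problem comes back, another commonly suggested workaround, which I did not need here and offer only as an untested sketch, is to create the session with on-demand GPU memory allocation:

# Sketch (untested here): let TF 1.x allocate GPU memory on demand
# instead of reserving every visible GPU's memory up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Optionally cap the fraction of each GPU's memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())

This would simply replace the plain tf.Session() call in the script above; everything else stays the same.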