By default, TensorFlow maps nearly all of the GPU memory of every GPU visible to the process (subject to CUDA_VISIBLE_DEVICES). It does this to use the relatively precious GPU memory on the devices more efficiently by reducing memory fragmentation.
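
For instance, if the process should only see (and therefore only map memory on) a subset of the GPUs, the usual approach is to set CUDA_VISIBLE_DEVICES before TensorFlow initializes its GPU runtime. A minimal sketch (the device index "0" is only an example):

import os
# Must be set before TensorFlow touches the GPU (i.e. before the first
# Session or GPU op runs); valid indices depend on your machine.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf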

In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as is needed by the process. TensorFlow provides two Config options on the Session to control this.

The first is the allow_growth option, which attempts to allocate only as much GPU memory as is actually needed at runtime: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, the GPU memory region reserved by the TensorFlow process is extended. Note that memory is never released, since releasing it can lead to even worse memory fragmentation. To turn this option on, set it in the ConfigProto:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
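
Note that tf.Session and ConfigProto belong to the TensorFlow 1.x API. As a rough sketch of the equivalent behaviour in TensorFlow 2.x, where there is no Session, memory growth is enabled per physical GPU before any GPU work starts:

import tensorflow as tf

# TF 2.x sketch: enable on-demand memory growth for every visible GPU.
# This must run before any GPU has been initialized.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)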

The second method is the per_process_gpu_memory_fraction option, which determines the fraction of each visible GPU's total memory that the process is allowed to allocate. For example, you can tell TensorFlow to allocate only 40% of the total memory of each GPU:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)

This is useful if you want to truly bound the amount of GPU memory available to the TensorFlow process.
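
If you are on TensorFlow 2.x, where per_process_gpu_memory_fraction is not available, a comparable hard bound can be expressed as an absolute per-GPU limit rather than a fraction. A sketch, assuming TensorFlow 2.4 or later (the 4096 MB figure is an arbitrary example):

import tensorflow as tf

# TF 2.x sketch: cap TensorFlow at roughly 4 GB on the first visible GPU.
# The limit is in megabytes; this must run before the GPU is initialized.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])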
