Tested with:

Keras: 2.2.4
Python: 3.6.9
TensorFlow: 1.12.0

==================

Problem:

Using the code from https://github.com/matterport/Mask_RCNN, setting GPU_COUNT > 1 raises this error:

Traceback (most recent call last):
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 313, in __setattr__
is_graph_network = self._is_graph_network
File "parallel_model.py", line 46, in __getattribute__
return super(ParallelModel, self).__getattribute__(attrname)
AttributeError: 'ParallelModel' object has no attribute '_is_graph_network' During handling of the above exception, another exception occurred: Traceback (most recent call last):
File "parallel_model.py", line 159, in <module>
model = ParallelModel(model, GPU_COUNT)
File "parallel_model.py", line 35, in __init__
self.inner_model = keras_model
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 316, in __setattr__
'It looks like you are subclassing `Model` and you '
RuntimeError: It looks like you are subclassing `Model` and you forgot to call `super(YourClass, self).__init__()`. Always start with this line.

Solution 1:

Change the constructor in mrcnn/parallel_model.py as follows:

class ParallelModel(KM.Model):
    def __init__(self, keras_model, gpu_count):
        """Class constructor.
        keras_model: The Keras model to parallelize
        gpu_count: Number of GPUs. Must be > 1
        """
        # First call initializes the Model internals so that the
        # attribute assignments below don't trip Keras' __setattr__ guard.
        super(ParallelModel, self).__init__()
        self.inner_model = keras_model
        self.gpu_count = gpu_count
        merged_outputs = self.make_parallel()
        # Second call rebuilds this instance as a graph network from the
        # replicated inputs/outputs.
        super(ParallelModel, self).__init__(inputs=self.inner_model.inputs,
                                            outputs=merged_outputs)
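Why does the double super().__init__() call work? Keras' Model overrides __setattr__ and reads internal state that only exists after __init__ has run, which is why assigning self.inner_model before calling super raises the error above. Here is a minimal pure-Python sketch of that mechanism; the GraphNetwork class and its _is_graph_network attribute are simplified stand-ins for keras.engine.network.Network internals, not the real API:

```python
class GraphNetwork:
    """Mimics keras.Model: __setattr__ reads state set only in __init__."""

    def __init__(self, inputs=None, outputs=None):
        # Bypass the guard while setting up internal state.
        object.__setattr__(self, "_is_graph_network", inputs is not None)
        self.inputs = inputs
        self.outputs = outputs

    def __setattr__(self, name, value):
        try:
            # Raises AttributeError if __init__ has not run yet -- this is
            # the "During handling of the above exception..." chain in the
            # traceback above.
            self._is_graph_network
        except AttributeError:
            raise RuntimeError(
                "It looks like you are subclassing `Model` and you "
                "forgot to call `super().__init__()`."
            )
        object.__setattr__(self, name, value)


class BrokenParallel(GraphNetwork):
    def __init__(self, inner):
        self.inner_model = inner  # fails: guard state not initialized yet


class FixedParallel(GraphNetwork):
    def __init__(self, inner):
        super().__init__()         # 1st call: initialize internal state
        self.inner_model = inner   # now attribute assignment is safe
        merged = ("merged", inner)
        # 2nd call: rebuild as a graph network from inputs/outputs
        super().__init__(inputs=inner, outputs=merged)
```

Instantiating BrokenParallel raises the same RuntimeError as above, while FixedParallel constructs normally: the first super call makes attribute assignment legal, and the second re-initializes the instance as a functional (graph) model.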

If you then get an error saying that __init__() requires the two arguments inputs and outputs, upgrade Keras to 2.2.4:

conda install keras=2.2.4

If you get this error:

No node-device colocations were active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation.
Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70>

No node-device colocations were active during op 'anchors/Variable' creation.
No device assignments were active during op 'anchors/Variable' creation.

Traceback (most recent call last):
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _run_fn
self._extend_graph()
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot colocate nodes {{colocation_node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0}} and {{colocation_node anchors/Variable}}: Cannot merge devices with incompatible ids: '/device:GPU:0' and '/device:GPU:1'
[[{{node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0}} = Identity[T=DT_FLOAT, _class=["loc:@anchors/Variable"], _device="/device:GPU:1"](tower_1/mask_rcnn/anchors/Variable/cond/Merge)]] During handling of the above exception, another exception occurred: Traceback (most recent call last):
File "train_mul.py", line 448, in <module>
"mrcnn_bbox", "mrcnn_mask"])
File "M:\new\mrcnn\model.py", line 2132, in load_weights
saving.load_weights_from_hdf5_group_by_name(f, layers)
File "D:\Anaconda33\lib\site-packages\keras\engine\saving.py", line 1022, in load_weights_from_hdf5_group_by_name
K.batch_set_value(weight_value_tuples)
File "D:\Anaconda33\lib\site-packages\keras\backend\tensorflow_backend.py", line 2440, in batch_set_value
get_session().run(assign_ops, feed_dict=feed_dict)
File "D:\Anaconda33\lib\site-packages\keras\backend\tensorflow_backend.py", line 197, in get_session
[tf.is_variable_initialized(v) for v in candidate_vars])
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot colocate nodes node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) having device Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> and node anchors/Variable (defined at M:\new\mrcnn\model.py:1936) having device No device assignments were active during op 'anchors/Variable' creation. : Cannot merge devices with incompatible ids: '/device:GPU:0' and '/device:GPU:1'
[[node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) = Identity[T=DT_FLOAT, _class=["loc:@anchors/Variable"], _device="/device:GPU:1"](tower_1/mask_rcnn/anchors/Variable/cond/Merge)]] No node-device colocations were active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation.
Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> No node-device colocations were active during op 'anchors/Variable' creation.
No device assignments were active during op 'anchors/Variable' creation. Caused by op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0', defined at:
File "train_mul.py", line 417, in <module>
model_dir=MODEL_DIR)
File "M:\new\mrcnn\model.py", line 1839, in __init__
self.keras_model = self.build(mode=mode, config=config)
File "M:\new\mrcnn\model.py", line 2064, in build
model = ParallelModel(model, config.GPU_COUNT)
File "M:\new\mrcnn\parallel_model.py", line 36, in __init__
merged_outputs = self.make_parallel()
File "M:\new\mrcnn\parallel_model.py", line 80, in make_parallel
outputs = self.inner_model(inputs)
File "D:\Anaconda33\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 570, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 724, in run_internal_graph
output_tensors = to_list(layer.call(computed_tensor, **kwargs))
File "D:\Anaconda33\lib\site-packages\keras\layers\core.py", line 682, in call
return self.function(inputs, **arguments)
File "M:\new\mrcnn\model.py", line 1936, in <lambda>
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 1329, in __init__
constraint=constraint)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 1480, in _init_from_args
self._initial_value),
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 2177, in _try_guard_against_uninitialized_dependencies
return self._safe_initial_value_from_tensor(initial_value, op_cache={})
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 2195, in _safe_initial_value_from_tensor
new_op = self._safe_initial_value_from_op(op, op_cache)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 2241, in _safe_initial_value_from_op
name=new_op_name, attrs=op.node_def.attr)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): Cannot colocate nodes node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) having device Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> and node anchors/Variable (defined at M:\new\mrcnn\model.py:1936) having device No device assignments were active during op 'anchors/Variable' creation. : Cannot merge devices with incompatible ids: '/device:GPU:0' and '/device:GPU:1'
[[node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) = Identity[T=DT_FLOAT, _class=["loc:@anchors/Variable"], _device="/device:GPU:1"](tower_1/mask_rcnn/anchors/Variable/cond/Merge)]] No node-device colocations were active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation.
Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> No node-device colocations were active during op 'anchors/Variable' creation.
No device assignments were active during op 'anchors/Variable' creation.

Add these lines before building the model, so TensorFlow can fall back to a compatible device when an op cannot be placed as requested:

import tensorflow as tf
import keras.backend.tensorflow_backend as KTF

config = tf.ConfigProto()
config.allow_soft_placement = True
session = tf.Session(config=config)
KTF.set_session(session)

Solution 2 (not recommended):

Downgrade Keras to 2.1.3:

conda install keras=2.1.3

(This worked for some people, but not for me.)

Reference:

https://github.com/matterport/Mask_RCNN/issues/921

https://github.com/tensorflow/tensorflow/issues/2285
