I had barely ever used Linux and know no shell programming, so I scraped together train.txt and val.txt with a mix of shell on Linux and UltraEdit on Windows (sparing you the litany of errors along the way). Following examples/imagenet/readme, I then managed to train reference_caffenet on my own data using the ImageNet recipe, and my little laptop's fan started whirring again.

After running overnight, the little laptop gave up and errored out (syncedmem.hpp:25 Check failed: *ptr host allocation of size 191102976 failed). My guess is still that it ran out of memory; the same configuration runs fine on a desktop, which had reached around iteration 800 by morning:

I1101 05:51:24.763746 2942 solver.cpp:243] Iteration 698, loss = 0.246704
I1101 05:51:24.763829 2942 solver.cpp:259] Train net output #0: loss = 0.246704 (* 1 = 0.246704 loss)
I1101 05:51:24.763837 2942 sgd_solver.cpp:138] Iteration 698, lr = 0.01
I1101 05:52:20.169235 2942 solver.cpp:243] Iteration 699, loss = 0.214295
I1101 05:52:20.169353 2942 solver.cpp:259] Train net output #0: loss = 0.214295 (* 1 = 0.214295 loss)
I1101 05:52:20.169360 2942 sgd_solver.cpp:138] Iteration 699, lr = 0.01
I1101 05:53:15.372921 2942 solver.cpp:243] Iteration 700, loss = 0.247836
I1101 05:53:15.373028 2942 solver.cpp:259] Train net output #0: loss = 0.247836 (* 1 = 0.247836 loss)
I1101 05:53:15.373049 2942 sgd_solver.cpp:138] Iteration 700, lr = 0.01
[... iterations 701-822 omitted; the loss oscillates between roughly 0.12 and 0.28 with no clear downward trend ...]
I1101 07:46:46.580322 2942 solver.cpp:243] Iteration 823, loss = 0.194158
I1101 07:46:46.580425 2942 solver.cpp:259] Train net output #0: loss = 0.194158 (* 1 = 0.194158 loss)
I1101 07:46:46.580443 2942 sgd_solver.cpp:138] Iteration 823, lr = 0.01
I1101 07:47:41.901865 2942 solver.cpp:243] Iteration 824, loss = 0.201745
I1101 07:47:41.901943 2942 solver.cpp:259] Train net output #0: loss = 0.201745 (* 1 = 0.201745 loss)
I1101 07:47:41.901962 2942 sgd_solver.cpp:138] Iteration 824, lr = 0.01
I1101 07:48:38.301646 2942 solver.cpp:243] Iteration 825, loss = 0.193692
I1101 07:48:38.301780 2942 solver.cpp:259] Train net output #0: loss = 0.193692 (* 1 = 0.193692 loss)
I1101 07:48:38.301789 2942 sgd_solver.cpp:138] Iteration 825, lr = 0.01
^C

The loss looks like it is oscillating (I don't really understand this yet), so I stopped training with Ctrl+C, which left a caffenet_train_iter_827.caffemodel in the models/bvlc_reference_caffenet directory. I loaded it in Python and tried it on my own data; it does produce classifications, although many of them are wrong.

The classification script was written a while ago, following a demo found online:

 import os
 import numpy as np
 import matplotlib.pyplot as plt
 import caffe

 caffe_root = './'  # path to the Caffe root directory (adjust as needed)

 if os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet_stamp/caffenet_train_iter_827.caffemodel'):
     print 'CaffeNet found.'
 else:
     print 'Downloading pre-trained CaffeNet model...'

 model_def = caffe_root + 'models/bvlc_reference_caffenet_stamp/deploy.prototxt'
 model_weights = caffe_root + 'models/bvlc_reference_caffenet_stamp/caffenet_train_iter_827.caffemodel'
 net = caffe.Net(model_def,      # defines the structure of the model
                 model_weights,  # contains the trained weights
                 caffe.TEST)     # use test mode (e.g., don't perform dropout)

 mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
 mu = mu.mean(1).mean(1)  # average over pixels to obtain the mean (BGR) pixel values
 print 'mean-subtracted values:', zip('BGR', mu)

 # create a transformer for the input called 'data'
 transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
 transformer.set_transpose('data', (2, 0, 1))     # move image channels to outermost dimension
 transformer.set_mean('data', mu)                 # subtract the dataset-mean value in each channel
 transformer.set_raw_scale('data', 255)           # rescale from [0, 1] to [0, 255]
 transformer.set_channel_swap('data', (2, 1, 0))  # swap channels from RGB to BGR

 net.blobs['data'].reshape(50,        # batch size
                           3,         # 3-channel (BGR) images
                           227, 227)  # image size is 227x227

 image = caffe.io.load_image(caffe_root + 'examples/images/YP1000016.jpg')
 transformed_image = transformer.preprocess('data', image)
 plt.imshow(image)

 # perform classification
 net.blobs['data'].data[...] = transformed_image
 output = net.forward()
 output_prob = output['prob'][0]  # the probability vector for the first image in the batch
 print 'predicted class is:', output_prob.argmax()

For making train.txt I looked at a number of references; this blog post was the most helpful, since it also provides the data and the shell script: http://blog.csdn.net/gaohuazhao/article/details/69568267

There is also the option of generating train.txt with Python.

Following that example, I prepared my own training data:

find ./ -name "*.jpg" > train.txt writes the path of every .jpg under the current directory into train.txt. I did not know how to append the directory name (i.e. the label) after each path, so in the end I copied the file to Windows and did that part in UltraEdit...

find ./ -name "*.jpg" > 1.txt

Following create_list.sh:

paste -d' ' train.txt 1.txt >> 2.txt joins train.txt and 1.txt line by line and appends the result to 2.txt.
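As an alternative to the manual UltraEdit step, train.txt can also be generated with Python, as mentioned above. A minimal sketch, assuming the data layout here (one sub-directory per class, directory name as class name) and a class-to-integer numbering of my own choosing:

```python
import os

def make_train_txt(data_root, out_path):
    """Walk data_root (one sub-directory per class) and write
    "relative/path label" lines, assigning integer labels to the
    class directories in sorted order."""
    classes = sorted(d for d in os.listdir(data_root)
                     if os.path.isdir(os.path.join(data_root, d)))
    label = {name: i for i, name in enumerate(classes)}
    with open(out_path, 'w') as f:
        for name in classes:
            class_dir = os.path.join(data_root, name)
            for fname in sorted(os.listdir(class_dir)):
                if fname.lower().endswith('.jpg'):
                    f.write('%s/%s %d\n' % (name, fname, label[name]))
```

The resulting file has the "path label" format that convert_imageset expects; the function name and the sorted-order label assignment are my own, not from the original post.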

Since all the supplied data is training data, organized into one directory per label, and I did not yet know how to split off a random validation set, I followed an answer on Zhihu and decided to just train first without testing. So I simply picked one image to serve as the val set (if val.txt is empty, training aborts with an error saying the val length is invalid).
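The random train/val split can itself be done in a few lines of Python. A sketch, where the function name, the 10% hold-out ratio, and the fixed seed are all my own choices rather than anything from the original post:

```python
import random

def split_train_val(lines, val_fraction=0.1, seed=0):
    """Randomly shuffle the "path label" lines and hold out
    val_fraction of them for validation."""
    lines = list(lines)
    random.Random(seed).shuffle(lines)  # fixed seed => reproducible split
    n_val = max(1, int(len(lines) * val_fraction))  # val must not be empty
    return lines[n_val:], lines[:n_val]  # (train lines, val lines)
```

Writing the two returned lists back out as train.txt and val.txt gives a proper held-out validation set instead of a single hand-picked image.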

