Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

How to run training without the --weights option for "ncappzoo/apps/dogsvscats"?

idata
Employee

Hi all,

 

During my test run of https://movidius.github.io/blog/deploying-custom-caffe-models/ , the model did not converge well. The article says I need to rerun the training session without the --weights option, but I don't know where or how to drop it.

 

root@ubuntu:~/workspace/ncappzoo/apps/dogsvscats# ls
AUTHORS.txt bvlc_googlenet create-labels.py create-lmdb.sh data Makefile README.md

root@ubuntu:~/workspace/ncappzoo/apps/dogsvscats# $CAFFE_PATH/build/tools/caffe train --solver bvlc_googlenet/org/solver.prototxt --weights $CAFFE_PATH/models/bvlc_googlenet/bvlc_googlenet.caffemodel 2>&1 | tee bvlc_googlenet/org/train.log

 

I0326 01:05:36.579694 6679 caffe.cpp:197] Use CPU.
I0326 01:05:36.582623 6679 solver.cpp:45] Initializing solver from parameters:
test_iter: 1000
test_interval: 1000
base_lr: 0.01
display: 40
max_iter: 40000
lr_policy: "step"
gamma: 0.96
momentum: 0.9
weight_decay: 0.0002
….

 

I0326 01:06:30.886548 6679 solver.cpp:239] Iteration 0 (-4.2039e-45 iter/s, 51.677s/40 iters), loss = 1.72577
I0326 01:06:30.887087 6679 solver.cpp:258] Train net output #0: loss1/loss1 = 1.6285 (* 0.3 = 0.48855 loss)
I0326 01:06:30.887099 6679 solver.cpp:258] Train net output #1: loss2/loss2 = 2.38326 (* 0.3 = 0.714978 loss)
I0326 01:06:30.887109 6679 solver.cpp:258] Train net output #2: loss3/loss3 = 0.522238 (* 1 = 0.522238 loss)
I0326 01:06:30.887133 6679 sgd_solver.cpp:112] Iteration 0, lr = 0.01
I0326 01:35:40.075528 6679 solver.cpp:239] Iteration 40 (0.0228678 iter/s, 1749.19s/40 iters), loss = 2.21898
I0326 01:35:40.082163 6679 solver.cpp:258] Train net output #0: loss1/loss1 = 13.4625 (* 0.3 = 4.03876 loss)
I0326 01:35:40.082223 6679 solver.cpp:258] Train net output #1: loss2/loss2 = 1.03025 (* 0.3 = 0.309074 loss)
I0326 01:35:40.082263 6679 solver.cpp:258] Train net output #2: loss3/loss3 = 0.691688 (* 1 = 0.691688 loss)
I0326 01:35:40.082301 6679 sgd_solver.cpp:112] Iteration 40, lr = 0.01

 

I0326 01:47:02.979389 6679 solver.cpp:239] Iteration 80 (0.058574 iter/s, 682.897s/40 iters), loss = -nan
I0326 01:47:02.979801 6679 solver.cpp:258] Train net output #0: loss1/loss1 = -nan (* 0.3 = -nan loss)
I0326 01:47:02.979809 6679 solver.cpp:258] Train net output #1: loss2/loss2 = -nan (* 0.3 = -nan loss)
I0326 01:47:02.979813 6679 solver.cpp:258] Train net output #2: loss3/loss3 = -nan (* 1 = -nan loss)
I0326 01:47:02.979820 6679 sgd_solver.cpp:112] Iteration 80, lr = 0.01
I0326 01:55:57.463356 6679 solver.cpp:239] Iteration 120 (0.0748387 iter/s, 534.483s/40 iters), loss = -nan
I0326 01:55:57.463701 6679 solver.cpp:258] Train net output #0: loss1/loss1 = -nan (* 0.3 = -nan loss)
I0326 01:55:57.463774 6679 solver.cpp:258] Train net output #1: loss2/loss2 = -nan (* 0.3 = -nan loss)
I0326 01:55:57.463795 6679 solver.cpp:258] Train net output #2: loss3/loss3 = -nan (* 1 = -nan loss)
I0326 01:55:57.463802 6679 sgd_solver.cpp:112] Iteration 120, lr = 0.01
I0326 02:07:28.233012 6679 solver.cpp:239] Iteration 160 (0.0579065 iter/s, 690.769s/40 iters), loss = -nan
I0326 02:07:28.239135 6679 solver.cpp:258] Train net output #0: loss1/loss1 = -nan (* 0.3 = -nan loss)
I0326 02:07:28.239162 6679 solver.cpp:258] Train net output #1: loss2/loss2 = -nan (* 0.3 = -nan loss)
I0326 02:07:28.239181 6679 solver.cpp:258] Train net output #2: loss3/loss3 = -nan (* 1 = -nan loss)
I0326 02:07:28.239190 6679 sgd_solver.cpp:112] Iteration 160, lr = 0.01
I0326 02:16:59.161689 6679 solver.cpp:239] Iteration 200 (0.0700621 iter/s, 570.922s/40 iters), loss = -nan
I0326 02:16:59.196723 6679 solver.cpp:258] Train net output #0: loss1/loss1 = -nan (* 0.3 = -nan loss)
I0326 02:16:59.196772 6679 solver.cpp:258] Train net output #1: loss2/loss2 = -nan (* 0.3 = -nan loss)
I0326 02:16:59.196811 6679 solver.cpp:258] Train net output #2: loss3/loss3 = -nan (* 1 = -nan loss)
I0326 02:16:59.196854 6679 sgd_solver.cpp:112] Iteration 200, lr = 0.01
I0326 02:28:47.755465 6679 solver.cpp:239] Iteration 240 (0.0564527 iter/s, 708.558s/40 iters), loss = -nan
I0326 02:28:47.755969 6679 solver.cpp:258] Train net output #0: loss1/loss1 = -nan (* 0.3 = -nan loss)
I0326 02:28:47.756013 6679 solver.cpp:258] Train net output #1: loss2/loss2 = -nan (* 0.3 = -nan loss)
I0326 02:28:47.756052 6679 solver.cpp:258] Train net output #2: loss3/loss3 = -nan (* 1 = -nan loss)
I0326 02:28:47.756093 6679 sgd_solver.cpp:112] Iteration 240, lr = 0.01
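
(For reference: once the loss reports -nan the run has diverged and will not recover. The solver settings printed at the top of the log come from the file passed via --solver, not from --weights, so they can be checked directly, for example:

cat bvlc_googlenet/org/solver.prototxt
grep -E "base_lr|lr_policy|gamma" bvlc_googlenet/org/solver.prototxt

The path is the one used in the training command above; the grep is just a quick way to pull out the learning-rate fields.)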

 

BRs,

 

@ideallyworld
idata
Employee

Solved by rerunning without "--weights $CAFFE_PATH/models/bvlc_googlenet"; closing this problem.

idata
Employee

Hi @ideallyworld

 

To train without the --weights option, simply drop it from the command:

 

$CAFFE_PATH/build/tools/caffe train --solver bvlc_googlenet/org/solver.prototxt 2>&1 | tee bvlc_googlenet/org/train.log
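
If you want to confirm that this run is actually converging, the loss values all end up in the train.log written by tee, so (just a suggestion, not required) you can check the most recent per-iteration losses with something like:

grep "loss = " bvlc_googlenet/org/train.log | tail -n 10

or follow them live from another terminal while training runs:

tail -f bvlc_googlenet/org/train.log | grep --line-buffered "loss = "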

 

Best Regards,

 

Sahira