Hi,
I converted a caffemodel into IR with this command: ./ModelOptimizer -p FP16 -w $1 -d $2 -i -b 1 -f 1. Then I ran the sample with this model using the command line "./multi_output_sample -i armstrong_128.bmp -m my_model_fp16.xml -ni 100 -d GPU", but I get an error:
[ INFO ] Start inference (1 iterations)
terminate called after throwing an instance of 'std::logic_error'
what(): memory data type alignment do not match
Aborted (core dumped)
Looking at the sample code:

    for (int iter = 0; iter < FLAGS_ni; ++iter) {
        auto t0 = Time::now();
        status = enginePtr->Infer(inputBlobs, outputBlobs, &resp);
        auto t1 = Time::now();
        fsec fs = t1 - t0;
        ms d = std::chrono::duration_cast<ms>(fs);
        total += d.count();
        if (status != InferenceEngine::OK) {
            throw std::logic_error(resp.msg);
        }
    }
    /** Show performance results **/
    std::cout << std::endl << "Average running time of one iteration: "
              << total / static_cast<double>(FLAGS_ni) << " ms" << std::endl;

The error message comes from the failing Infer() call (the sample throws std::logic_error with resp.msg when the returned status is not OK). How can I solve this problem?
Hello,
Does it work if you run this model on CPU?
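For example, you could re-run the same command with -d CPU instead of -d GPU (this assumes the same sample binary, image, and IR files from your post):

    ./multi_output_sample -i armstrong_128.bmp -m my_model_fp16.xml -ni 100 -d CPU

Note that, depending on your toolkit version, the CPU plugin may not accept an FP16 IR; in that case you would regenerate the IR with -p FP32 in your ModelOptimizer command and point the sample at the resulting .xml file.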
We can help you further if you share your model with us. Would that be possible?
Best wishes,
Anna