I would like to have dynamic batch input for my TensorFlow model.
When I convert the TensorFlow model with the Model Optimizer, I get an error when using
--input_shape [-1,24,94,3] .
So input_shape is set to batch size 1.
Then, inside the program, I tried to enable dynamic batching as
rec_exec_net = rec_ie.load_network(rec_net, args.device, {"DYN_BATCH_ENABLED": "YES"})
But when I run the program, I get the following error
ValueError: could not broadcast input array from shape (2,3,24,94) into shape (1,3,24,94)
at line
request_wrap.execute("async", {rec_input_blob: pimages})
How can I have dynamic batching using OpenVINO in Python?
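For context, here is a minimal sketch (not from this thread, and untested) of how dynamic batching is usually wired up with the legacy openvino.inference_engine Python API, assuming the network is reshaped to a maximum batch size before loading and that InferRequest.set_batch is available in the OpenVINO release used; MAX_BATCH and the model paths are placeholders.

import numpy as np
from openvino.inference_engine import IECore

MAX_BATCH = 8  # hypothetical upper bound on the batch size

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR paths
input_blob = next(iter(net.input_info))

# Reshape the network to the maximum batch before loading, and enable dynamic batching.
net.batch_size = MAX_BATCH
exec_net = ie.load_network(network=net, device_name="CPU",
                           config={"DYN_BATCH_ENABLED": "YES"})

# Pad the real data up to MAX_BATCH so it matches the allocated input blob,
# then tell the request how many items are actually valid for this inference.
images = np.random.rand(2, 3, 24, 94).astype(np.float32)   # e.g. a batch of 2
padded = np.zeros((MAX_BATCH, 3, 24, 94), dtype=np.float32)
padded[:images.shape[0]] = images

request = exec_net.requests[0]
request.set_batch(images.shape[0])   # only the first 2 items are processed
request.infer({input_blob: padded})

The broadcast error above occurs because the executable network was allocated for batch 1, so a (2,3,24,94) array cannot be copied into its input blob; padding to the allocated maximum batch and calling set_batch is one way around that, assuming the Python API supports it.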
Yes, I know that one. That is a C sample, and I am looking for a Python sample.
Hi nnain1,
Thanks for reaching out. Unfortunately, we do not have anything yet for dynamic batching in Python; it is only available for the C configuration at the moment.
Regards,
Aznie
Hi nnain1,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie
